Rules for a Flat World: A Q&A with Gillian K. Hadfield

 
Schwartz Reisman Institute Director Gillian K. Hadfield will discuss her book, Rules for a Flat World, as part of Rotman’s “Big Ideas” series on November 25, 2020. In this Q&A, she offers insights into how we might understand, govern, and build technology responsive to human values.

Gillian K. Hadfield has been thinking about law for a long time.

Her 2017 book, Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy, aimed to understand and re-envision the global legal system for an increasingly complex world.

Now, a paperback edition of the book is updated with a new prologue about artificial intelligence (AI)—its risks, benefits, evolution, and regulation. Hadfield tackles this most pressing contemporary technology with insights into how we might understand, govern, and build AI that is responsive to human values.

Hadfield will discuss her book as part of the Rotman School of Management’s “Big Ideas” series on November 25, 2020. Below, she offers some timely insights into what problems we face today, how we might begin to solve them, and what is at stake if we don’t.

Schwartz Reisman Institute: In your book, you describe how contemporary law is too expensive, complex, and inflexible for a rapidly changing global context. This is especially true in our current digital environment, where regulation struggles to keep up with technological advances. Could you tell us more about that?

Gillian K. Hadfield: Think about, for example, the terms and conditions that you have to accept when downloading an app. Who reads all of them? And if someone does, do they fully understand the implications for their data and privacy? Probably not.

Digital privacy and governance models are a primary area of our research at Schwartz Reisman, including important work by legal scholar Lisa Austin and computer security expert David Lie—both of whom are research leads here at SRI, and U of T professors. There’s no question that existing legal models for the protection of privacy are simply inadequate for today’s technological context.

There’s an incredibly complex data ecosystem at work today, often involving third parties who have no direct relationship to the people from whom data is being derived. Data flows are varied, unpredictable, and not always traceable. And the definition of “data” itself is changing: it’s no longer just text or numbers collected in formal processes. Data can be images and other non-quantitative information, and it can be collected and analyzed in ways that our legislation just couldn’t imagine when it was drafted decades ago.

So, we have to ask questions like: What does meaningful consent look like? Should our laws really be focusing on whether a consumer can understand things like terms and conditions or privacy policies? Or should we be building regulatory environments that deliver trust and respect in a different way? I think we need to overhaul the very concepts we’re working with.

For example, the idea of “personal information” is no longer as simple as it once was. What happens when personal information also serves a public good—such as in fighting COVID-19? What if it’s collected from a public space—such as a “smart city”?

We’re trying to shoehorn new approaches into old categories. Only once we conceive of new categories and definitions for familiar terms like “data,” “privacy,” and “consent” can we actually begin to create adequate governance models for the AI era.

SRI: In your book, you say that 90 per cent of the law is about how to cooperate, not how to punish. Why is this important to understand? And how does it translate to AI and the digital era?

GKH: A big misconception about Hammurabi’s code—one of humanity’s earliest recorded examples of formal, written law—is that it’s all about “an eye for an eye, tooth for tooth.” But most of that code is actually everyday, commonplace stuff like: “If you flood my field, how should you adequately compensate me?” not, “How should you be punished?”

This shows that law is really an extension of our normative systems. It’s one of the ways we ensure that we are all working together and living together peacefully. Human values are based on cooperation and interdependence. So, it’s critical to understand that this is the starting point for law. Law is a shared platform for getting things done.

In the new paperback edition of my book, what I really want to ask is: how should we talk about law in ways that help humanity cope with the tremendous challenges of an AI-enabled and digital-enabled economy? In fact, one of the reasons I introduce the term “legal infrastructure” in the book is because I want to highlight the fact that there’s a structure, a foundation, underneath human relationships. It’s what we build everything else upon. Whatever our goals are, we need a strong, sturdy infrastructure to support them.

Since my book was published in 2017, it has become evident that AI poses some of the biggest challenges that humanity now faces. The impact of digitization is that it, again, “flattens” almost everything. For example, digital platforms are where developers, advertisers, users, and manufacturers of devices meet and transact. We need these kinds of actors, and others like business leaders, UX professionals, social science researchers, product managers, technologists, and risk and compliance experts, to meet on the “platform” of law as well.

And we need to ensure that platform is solid and will support cooperation and advancement.

SRI: The paperback edition of your book contains a new prologue on AI. Could you give us a glimpse into some of the questions you grapple with—and how you see them playing out in the coming years?

GKH: A very powerful new type of AI is called “machine learning.” This is when we build a machine-based system not just to complete a particular task, but actually to figure out how to complete the task without receiving specific instructions. One of our faculty affiliates, computer scientist Chris Maddison, has a great analogy about muffins to describe machine learning.

The danger here is that we can’t be sure that the machine won’t figure out how to do things that we didn’t intend for it to do.
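
To make the contrast concrete, here is a minimal illustrative sketch in Python. It is not from the book or from Maddison’s analogy; the scenario, feature names, and numbers are all invented. Instead of handing the system an explicit rule, we give it labelled examples and let it infer a rule of its own, and that inferred rule may behave in unintended ways on inputs unlike anything it was trained on.

```python
# Hypothetical toy example (invented data): learning a rule from examples
# instead of writing the rule by hand. Requires scikit-learn.
from sklearn.tree import DecisionTreeClassifier

# Each example: [rainfall_mm, upstream_dam_release] -> did the field flood? (1 = yes, 0 = no)
X = [[120, 5], [200, 8], [180, 7], [30, 6], [15, 0], [10, 2]]
y = [1, 1, 1, 0, 0, 0]

# We never state the rule "flood when rainfall is high";
# the model works it out from the labelled examples.
model = DecisionTreeClassifier(random_state=0).fit(X, y)

# On inputs resembling the training data, the learned rule matches our intent.
print(model.predict([[150, 6]]))  # likely [1]: heavy rain, predicts a flood

# On inputs unlike anything it has seen (no rain, massive dam release),
# the model falls back on whatever rule it actually learned, which may
# not be the rule we had in mind.
print(model.predict([[0, 50]]))   # likely [0], even though a flood seems plausible
```

That gap, between the rule we meant and the rule the system actually found, is exactly the danger described above.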

To address this problem, one of the things my research has turned to in the last few years is what I call the “science of human normativity.” That is to say: how might we effectively channel machine or robot “behaviour” the way we channel our own human behaviour—the way we determine and enforce what is and isn’t OK to do in our societies? We can’t just create rules for the machines because, as noted, they can “learn” new things. They may find ways to circumvent our rules.

What’s really pivotal here is that developing AI that is responsible, fair, and beneficial to us isn’t only a computer scientist’s or an engineer’s job. It’s the job of social scientists and humanists as well.

This is part of a big topic of discussion these days called “the alignment problem.”

Alignment refers to the ideal that an AI’s actions should align with what humans would want. How can we make sure the machines we build do what we intend them to do and achieve our desired outcomes?

Sometimes, an AI will find ways to go around the restrictions or safety mechanisms that humans place on it. Does this mean it’s malicious? Deliberately trying to evade detection? Not quite—AIs are not conscious and don’t have feelings or intentions. But sometimes, the sophisticated AI tools we’re building these days will do what their human designers asked—but didn’t really want—them to do.
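
As a small, hypothetical illustration of that gap (the scenario, names, and numbers below are invented, not drawn from the book), consider a recommender system that is told to maximize watch time when what its designers actually care about is how valuable users find the content:

```python
# Hypothetical toy example of a misspecified objective: the system optimizes
# exactly what it was asked to optimize, not what its designers really wanted.

candidate_videos = [
    {"title": "in-depth documentary",  "watch_minutes": 20, "user_value": 9},
    {"title": "clickbait compilation", "watch_minutes": 45, "user_value": 2},
    {"title": "short tutorial",        "watch_minutes": 10, "user_value": 8},
]

def proxy_reward(video):
    # The objective the designers wrote down: minutes of watch time.
    return video["watch_minutes"]

# The system faithfully maximizes the stated objective...
chosen = max(candidate_videos, key=proxy_reward)

# ...and recommends the clickbait: what it was asked for, not what was wanted.
print(chosen["title"], "| user_value:", chosen["user_value"])
```

Nothing here is malicious; the stated objective was simply an imperfect proxy for what the designers valued, which is the gap the alignment problem describes.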

It’s an incredibly complex problem that is rapidly becoming relevant to all aspects of our lives—from entertainment recommendation algorithms to healthcare settings to government decision-making processes in sectors like immigration, policing, and financial regulation.

AI is more challenging than anything we’ve previously faced in our technologically advanced world. So, of course we need rules, regulation, oversight, governance, and accountability. Unfortunately, our legal systems are broken. They simply can’t handle the levels of technological and conceptual complexity we’re currently dealing with—and that’s why I wrote this book in the first place.

We humans have not only rules, but preferences, exceptions, and cultural values that differ and evolve from community to community and from era to era. So, we really have to dig deep and do some fundamental cross-disciplinary thinking and research to figure out how to build complex, dynamic systems of rules that will nudge and align AI with human values over the longer term.

We shouldn’t be talking about “good” AI or “bad” AI; we should be building AI and attendant governance mechanisms that are as responsive and as adaptive to our values as we humans ourselves are.  

