Good Intentions and Goodhart’s Law

Anat Deracine
May 14, 2018

To paraphrase Socrates, all I know is that my opinions are my own, and even that is debatable.

I waited to post this, and I’m glad I did, because there are now so many other journalists and experts pointing out the serious ethical issues around consent and power dynamics in the Google Duplex demo.

I am glad not to be alone in feeling queasy about a conversational assistant that can impersonate a human, that takes away the discomfort and inconvenience of having to interact with another human who is a non-native English speaker, and that can be the gentle robot demanding children say pretty please. Don’t get me wrong, I clapped like everyone else at the sheer technical excellence of a machine passing the Turing Test effortlessly, and I can see so much value in the feature itself. To help people with hearing loss avoid social isolation from being unable to answer the phone, for instance, something I’m acutely familiar with. To help people communicate in a language they don’t speak. So many amazing things that could be done.

The polite digital assistant has tremendous potential to manage communities at scale, where moderators are scarce and struggling. An automated defuser of tensions would be a great thing to have on 4chan, or on spiraling Reddit threads. Let’s go further. An automated, welcoming, gender-blind reviewer for code commits to open source may not be far away, which could go a long way toward reducing the gender bias that is known to exist when contributors are identifiable as women. And I’d probably feel a lot more comfortable practicing new languages with a conversational assistant than with a taxi driver when I need to get to my flight on time.

But as my soul-sister Zeynep Tufekci points out, the issue isn’t about the demo. A robot that avoids deceptive slang and admits to being a robot only solves a part of the problem.

Sadly, the news cycle around this issue has died down, because the vast majority of people (technologists and Luddites alike) are unable to articulate exactly what the problem is here. And if we can’t make this clear (and possibly even if we can), the feature will launch, possibly stripped of some of its um’s and ahh’s, and nobody inside or outside Google will be able to prevent it.

For the rest of this essay, I’m going to talk about a hypothetical executive named Sridhar Pillai, who has nothing to do with Google, and a hypothetical senior engineer by the name of Jake Dent.

Part #1: Good Intentions

SP and JD are fundamentally optimists. Their fatal flaw, or hamartia in the Greek tragedy sense of the word, is their commitment to be good, do good and only see good in others. They are not products of deep and daily trauma, and their success has involved a sudden catapulting to the top. When people raise issues that threaten their rosy view of the world, these leaders are uncomfortable with “the negativity,” or they dismiss it as an aberration. “By and large, people are good,” they might say. “We have a few bad apples, but there’s no systemic issue.”

They say this because to both SP and JD, a “systemic” issue is one that involves problems that are widespread and people who are rotten at the core, like a building built for cheap with faulty wiring leading to a terrible fire. They don’t get that a systemic issue is not just a case of fundamental incompetence or malice. A systemic issue arises when you fail to protect your system against attack. “For the most part, people are good,” may be true, at least at first. But if you have no plan to deal effectively with the few who are not, sooner or later, evil sneaks under the gate.

Both SP and JD are not just better than the random assortment of terrible tech leaders infecting Silicon Valley, they are deeply, fundamentally good people who want to do the right thing. They are servant leaders, humble and easygoing, the kind of leaders who introduce themselves to you so you’ll open the door for them, or who make you a coffee if they’re making one for themselves.

Since SP and JD are hypothetical, they have never really had technology used against them, nor have they ever desired to use it that way against others. They would be horrified by the satires of Google Duplex where people outsource conversations with their parents or delegate breakups to a digital assistant. They would never understand or believe people could do such things.

But SP and JD aren’t going to read those satires. They aren’t going to hear Zeynep’s voice. Because hypothetical leaders like SP and JD will have a hypothetical communications team whose job it is to read the news for them and let them know, “Yeah, there’s some controversy, but we knew it would happen and it’s managed. We made a statement.” SP and JD will likely also have hypothetical (non-digital) executive assistants who triage their email for them, and Jane Admin knows that SP doesn’t have time for ragey rants from random people before the next board meeting. Even on the off chance that SP and JD read Twitter, they will have a team of lawyers telling them that under no circumstances should they respond. And even if there’s someone close enough to SP and JD who is trusted to give constructive feedback, such a person likely has a priority list a mile long, and there just might be other things on the agenda.

Technology that is built by greedy and unethical people is, in some ways, simpler to dismantle. The fault lines are clear. You can, with some investigation, find out where they cut corners, who they bullied, what lies they told to get the job done faster or cheaper. You can burn down the house if you need to.

Technology built by fundamentally good people is a harder problem, because you can’t justify burning down the house. Such technology institutionalizes the rapid execution of good intentions, but consistently fails at preventing malicious use. For example, a smart pacemaker, connected to the cloud, provides for software upgrades that don’t require surgery, better data analytics and more responsive care. But your pacemaker can also testify against you in court. Who thought that would happen?

With fundamentally good people, railing against their lack of moral compass doesn’t work well. Activists lose the moral high ground the minute they treat ignorance as equivalent to malice. And pointing out someone else’s lack of moral intuition can backfire if there’s even the slightest chink in your own moral authority. But there may be an alternative, one that scales to a leadership team instead of placing the burdens of humanity’s future on the shoulders of a single good person with the power to change the course of history.

Part #2: Goodhart’s Law

This article is the best layperson’s explanation of Goodhart’s Law, which states, roughly, that when a measure becomes a target, it ceases to be a good measure: when you aim your efforts at a proxy metric for the thing you actually want to optimize, over time the proxy metric and the real goal start to diverge, and the proxy becomes useless for achieving that goal.
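
To make the divergence concrete, here is a minimal, hypothetical sketch in Python (mine, not the article’s): an optimizer that always ships whichever change maximizes a click proxy, even when the winning change quietly hurts the quality the proxy was supposed to stand in for. The names and numbers (genuine_improvement, clickbait_change, the distributions) are invented purely for illustration.

```python
# A toy sketch of Goodhart's Law, not anything from the article or a real system.
# "quality" is what we actually care about; "clicks" is the proxy we can measure.
import random

random.seed(0)

def genuine_improvement():
    # A change that actually helps users; clicks follow quality, loosely.
    q = random.uniform(0.5, 1.0)
    c = q + random.gauss(0, 0.2)
    return q, c

def clickbait_change():
    # A change that quietly hurts users but spikes the proxy.
    q = random.uniform(-1.0, -0.2)
    c = random.uniform(1.0, 2.0)
    return q, c

quality = clicks = 0.0
for _ in range(100):
    candidates = [genuine_improvement() for _ in range(5)] + \
                 [clickbait_change() for _ in range(5)]
    q, c = max(candidates, key=lambda change: change[1])  # ship whatever maximizes the proxy
    quality += q
    clicks += c

print(f"proxy metric (clicks):  {clicks:+.1f}")   # climbs steadily
print(f"actual goal (quality): {quality:+.1f}")   # drifts downward
```

Nobody in this sketch is malicious; the divergence comes entirely from selecting on the proxy.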

Let’s say that you’re developing a cool new app that classifies people’s gender based on appearance. I have no idea why anyone would do this; it’s a terrible and dangerous idea, except of course this is happening already, with all the usual lack of understanding of the gender spectrum. Let us say that you want to optimize this system to be able to classify anyone. If your success metric is the rate of successful classifications, and you can’t get further funding unless that rate is above 95%, you’re not going to test your classifier on any society where more than 5% of people are transgender, third-gender, or non-binary. It’s not in your interest to prioritize that work until you get the next round of funding. The sketch below makes the arithmetic explicit.
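
A back-of-the-envelope version of that incentive, with made-up numbers: the same classifier clears or misses the 95% bar depending purely on which population it is evaluated against.

```python
# Hypothetical numbers for illustration only.
def reported_success_rate(binary_accuracy, nonbinary_accuracy, nonbinary_share):
    """Overall success rate when a share of people don't fit the model's two labels."""
    return (binary_accuracy * (1 - nonbinary_share)
            + nonbinary_accuracy * nonbinary_share)

# The same model: 98% on people it can label, ~0% on everyone else.
print(reported_success_rate(0.98, 0.0, 0.01))  # 0.9702 -> clears the 95% bar
print(reported_success_rate(0.98, 0.0, 0.06))  # 0.9212 -> misses it
# The cheapest way to hit the metric is to choose the first test population,
# not to fix the model.
```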

If your success metric is the number of daily active users, then you succeed as that number goes up. If that means building a chat community for people to discuss a particular gender classification and argue “I don’t think she’s a woman, there’s got to be a bug with how we classify Japanese jawbones,” sure, you’ll do that without thinking about it, because it’s in your interest to do so.

Tech companies are deeply, deeply metrics-driven, and they very often have the wrong metrics. The Board (if the company is publicly held) looks at these metrics and expects them to get better. Changing the metrics that get looked at isn’t just the most effective way to drive culture change; it’s often the only way.

So how might we change the numbers that get looked at by our hypothetical leaders SP and JD and by the Board to which they are beholden?

Part #3: Beyond Good and Evil

I have spent too much of my life studying (and teaching) ethics to ever make a moral claim. The reality of our situation is that a business justification beats a moral argument, almost every time. There are factors that go into that beyond anyone’s control, for instance that busy leaders (even hypothetical ones) don’t respond well to anger. They are trained to de-escalate and delegate, and so their response to any raised issue is fundamentally one of “How fast can I calm this person down and empower them to go fix their own problems?”

Expecting that these leaders build enough moral intuition to avoid the daily dose of crises is unrealistic. No single human being is going to be able to see every blind spot, to imagine all the ways technology can be abused for evil. Moreover, it is unrealistic to expect anyone to see something when their job, their well-being, their happiness or even their sense of self and humanity are all dependent on their not seeing it.

When working with such hypothetical leaders, what we might do instead is present a virtuous cycle. Better proxy metrics (for user happiness, for trust and long-term brand value) that are intuitive enough to make sense to the Board, the media, and the world. A path to reduce the friction in optimizing those metrics. Incentive structures that make the old metrics harder to achieve. And a change management plan that lets the press follow along, so that leaders are focused on making the change rather than on crafting “We take this very seriously” statements on a weekly basis.

None of this work will be as fundamentally satisfying as having a leader with the moral intuition and fiber to stand up in front of a crowd and take a stand. But we have been talking about hypothetical leaders. Maybe no real leader, in this day and age, dares to do that. We all have our hands a little dirty.
