
Cassie Kozyrkov: Skilled decision-makers key to mastering AI

Speaking at Unleash, Google Cloud's chief decision engineer stressed the importance of spotting machine learning opportunities rather than fixating on technical jargon

We must demystify AI and machine learning to truly harness their potential and ensure their reliable and responsible use, according to Cassie Kozyrkov, chief decision engineer at Google Cloud.

Delivering a keynote on day one of the Unleash Conference and Expo in London, Kozyrkov explained that persuading people to engage with AI and robots over the past 50 years has necessarily meant glamourising the technology, as before that we got along “seemingly fine” without it. But the reality is much less sensational and far more “prosaic”, even “dull”, she explained.

She entreated her audience to understand machine learning as nothing more complex than “thing labelling with examples rather than instructions”, and as “about automating the inevitable”. “If you have enough examples you can get the task learned… that’s the beauty of AI,” she said.
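For readers who want to see the “examples rather than instructions” idea concretely, here is a minimal sketch (not something Kozyrkov showed; the library, the toy spam data and the labels are illustrative assumptions). No rules for what counts as spam are written down; the model infers the labelling from labelled examples alone.

```python
# "Thing labelling with examples rather than instructions": we never write
# rules for spotting spam, we only hand the model labelled examples and let
# it learn the pattern. scikit-learn and the toy data are purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labelled examples: the "textbook" the machine learns from.
examples = [
    "win a free prize now",                    # spam
    "limited offer, click here",               # spam
    "lunch at noon tomorrow?",                 # not spam
    "minutes from today's meeting attached",   # not spam
]
labels = ["spam", "spam", "not spam", "not spam"]

# No hand-written instructions: the pipeline learns the labelling from examples.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(examples, labels)

print(model.predict(["free prize, click now"]))  # expected: ['spam']
```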

The problem we have today, explained Kozyrkov, is that firms fixate on the technical aspects of using data, rather than stepping back and focusing on what we want machine learning to do. Kozyrkov drew the analogy of cooking with a microwave: “You can use a microwave without knowing how to build it or indeed how to build a new and better one,” she highlighted.

“But businesses are still hiring people who’ve spent their whole lives trying to build microwaves… we feel the need to hire that specific worker to do that entire end-to-end thing,” she said, adding that “if you need to build a microwave with a teleport function then [that’s] fine”, but otherwise this makes no sense.

Instead, organisations must refocus on hiring skilled decision-makers who can meticulously plan how the technology should be used. “That person needs to say: how good is good enough? How are we going to measure it?” advised Kozyrkov, explaining that this will look very different depending on context: “You’re going to approach this differently if you’re going to make a new type of sushi rather than a new type of pizza. And you need to know that in advance.”
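As a hedged illustration of what “how good is good enough?” might mean in practice, the sketch below fixes a metric and an acceptance bar before any results are seen, then holds the system to it. The metric (accuracy) and the 0.95 threshold are hypothetical choices, not figures from the talk.

```python
# Decide the metric and the "good enough" bar in advance, then test against it.
# The 0.95 threshold and the toy labels below are hypothetical.
from sklearn.metrics import accuracy_score

REQUIRED_ACCURACY = 0.95  # agreed before looking at any model output

def is_good_enough(y_true, y_pred) -> bool:
    """Return True only if the system clears the pre-agreed bar."""
    return accuracy_score(y_true, y_pred) >= REQUIRED_ACCURACY

# A launch decision driven by the pre-agreed criterion, not by gut feel.
y_true = ["spam", "not spam", "spam", "not spam"]
y_pred = ["spam", "not spam", "spam", "spam"]
print(is_good_enough(y_true, y_pred))  # False: 0.75 < 0.95, so don't ship
```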

People often assume all AI and machine learning has to be “better than human”, said Kozyrkov. But actually that’ll depend hugely on the task, she said, explaining that you can have a “perfectly good system that doesn’t need to be better than human”.

She added that this phrase, and fear around humans being superseded in general, is actually fairly nebulous. “All of the tools we choose to use are better than human at something,” she said, giving the examples of a PowerPoint clicker, a calculator and a bucket. “AI is just a tool like any other,” she said, adding the warning that “like any other tool the way you use it is your responsibility”.

AI and machine learning decision-makers are critical for testing whether the technology is performing as it should, explained Kozyrkov, advising that when it comes to data processing the rule is simple: “Trust nothing, test everything”. She gave the example of how you test whether a student has truly learned calculus. You don’t accept their word on it or look in their brain with a scalpel, but rather set a series of exam questions, she said.
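Her exam analogy maps directly onto how models are routinely evaluated: grade them only on held-out questions they never studied. The sketch below is illustrative; the dataset, model and split are assumptions rather than anything referenced in the keynote.

```python
# "Trust nothing, test everything": never grade the student on the problems it
# studied. Hold out unseen examples and judge performance only on those.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Split the "textbook" from the "exam": the test set is never used in training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# Score only on the held-out exam questions.
print(f"exam score: {model.score(X_test, y_test):.2f}")
```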

It’s crucial, however, to consider the diversity of those setting, and assessing, these “exam questions”, warned Kozyrkov. “We have to talk about AI as separate from humans but also as just another tool, another lever, but with a longer echo,” she said, adding: “When levers become really really long we have to ask ourselves: how good are the skills of the person on the other end of it?”

Kozyrkov urged her audience to see datasets as no more sophisticated or complicated than a textbook: “Why do we pronounce data with a capital ‘D’? It’s just like a textbook, authored by humans. They will take on the biases of the author… So someone good better take responsibility for opening the textbook and making sure that’s what you wanted the student [or machine] to learn.

“We have to have a diversity of textbook checkers so we don’t magnify biases”.

She added: “It’s never the lamp or the genie that’s dangerous; it’s an unskilled wisher. It’s not even the wisher who intends evil [that’s dangerous], but the one who doesn’t even think about what they’re doing.”

Regarding the perennial, divisive debate around whether technology will replace or create jobs, Kozyrkov sounded a hopeful note: “Everyone gets promoted with these powerful tools; we don’t need to be computers anymore”. She explained that the technology will create plenty of low-skilled jobs too, as “we still need someone to sit there labelling things ‘cat, not cat’…” and to “move data around”.

“That leads to the creation of lots of jobs, but yes it does change industries and the face of the labour market,” she added. “But it’s not a qualitative shift; it’s the same thing, just more of it.”

Kozyrkov concluded by urging her audience: “Let’s make sure we’re ready for that promotion, that we as decision-makers are skilled [in this]. Let’s make sure we know how to use it safely, reliably and responsibly.”