
Brave new world: implications for AI

by Christopher Bishop on July 25, 2017

I was invited to attend the “AI Now | 2017 Symposium” at the MIT Media Lab on July 10th. Experts from a range of disciplines shared perspectives and insights on various topics related to the “constellation of AI” – to quote speaker Genevieve Bell, anthropologist, university professor, and Senior Fellow at Intel.

FYI – David and I will be attending numerous events, conferences, and seminars in the coming months, and we are committed to sharing our insights on trends and thought leadership in the AI space. Please bookmark our site – aispeakers.global – and sign up for our newsletter to stay up to date on the latest in AI!

Joi Ito’s opening remarks at the AI Now event were quite thought-provoking. He compared the effect of AI on business and culture today to handing jet packs to remote villagers: not only do they not know what the jet packs are or how to use them – they didn’t even know they were coming!

Kate Crawford and Meredith Whittaker, co-founders of AI Now, hosted a terrific event broken into three focus areas: “Bias Traps in AI”, “Governance Gaps under Trump” and “Rights and Liberties in an Automated World.”

Meredith Whittaker shared exciting news, announcing that AI Now is opening a research center in New York to focus on bias, labor, and basic rights and liberties. Participants will include academics and researchers, with the ACLU as a strategic partner focused on leveraging AI to advance civil rights.

As many have noted before, the spectacular growth of AI is due to the convergence of three factors: the emergence of Big Data, rapidly accelerating computing power, and the rise of deep learning algorithms to make sense of it all.

One key reality check: we are already dealing with AI in various settings, whether we know it or not. It is already embedded in numerous back-end systems, including systems that help judges determine who gets released from jail and when.

Bias traps in AI

The discussion started with a shout-out to Joseph Weizenbaum, who in 1966, while a professor at MIT, created a comparatively simple natural language processing program called ELIZA, named after the ingenue in George Bernard Shaw’s Pygmalion.

It was pointed out that translation programs are a clear indicator of where bias occurs in AI. Ask for the word “nurse” used in a sentence and the example is typically “She is a nurse”; enter the word “doctor” and the proposed sentence reads “He is a doctor.” Gender bias is already built into the system.

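This pattern is easy to reproduce directly from the statistics of text. Below is a minimal sketch, assuming a locally downloaded copy of commonly used pre-trained word2vec vectors (the file name is a placeholder for wherever the model lives); it checks whether profession words sit closer to “she” or “he” in embedding space, the same distributional skew that surfaces in translation output.

```python
# A minimal probe for gender bias in word embeddings (sketch).
# Assumes a pre-trained word2vec model file; the path is a placeholder.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True  # hypothetical local copy
)

for word in ["nurse", "doctor", "engineer", "teacher"]:
    she = vectors.similarity("she", word)
    he = vectors.similarity("he", word)
    lean = "she" if she > he else "he"
    print(f"{word:10s} she={she:.3f}  he={he:.3f}  -> leans '{lean}'")
```

On widely cited pre-trained vectors, “nurse” famously leans toward “she” and “doctor” toward “he” – the bias lives in the training data, not in the algorithm.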

Cathy O’Neil, author of “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy“, led a provocative thought experiment on bias and AI. She asked us to think about the patterns that lead to success and how they might be replicated – for better or worse. Her scenario was a hiring algorithm at Fox News: build it on the last 21 years of job applications, identify the patterns around who has been successful there, and assume those criteria will predict success in the future. Logical criteria might include staying at the company for at least four years and being promoted a couple of times.

Train this algorithm and apply it to a current pool of job applicants, and women would almost certainly be filtered out – because all these algorithms do is look at patterns. Trusting them to be unbiased and failing to scrutinize the process is where a real problem with machine learning lies. We tend not to question these kinds of transfers because it is just the way things are done – but this approach perpetuates and propagates historical biases, as the sketch below illustrates.

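To make the thought experiment concrete, here is a deliberately oversimplified sketch – synthetic data, invented feature names, no connection to any real employer – of how a model trained on biased historical outcomes reproduces that bias when screening new applicants.

```python
# Sketch: a hiring model trained on biased history reproduces the bias.
# All data here is synthetic and the feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Historical applicants: gender flag and years of experience.
is_male = rng.integers(0, 2, n)
experience = rng.normal(5, 2, n)

# Past "success" label (stayed 4+ years, promoted): skewed toward men,
# independent of actual merit in this toy setup.
success = (0.8 * is_male + 0.1 * experience + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([is_male, experience])
model = LogisticRegression().fit(X, success)

# Score a balanced pool of new applicants with identical experience.
pool_male = np.column_stack([np.ones(1000), np.full(1000, 5.0)])
pool_female = np.column_stack([np.zeros(1000), np.full(1000, 5.0)])

print("hire rate, men:  ", model.predict(pool_male).mean())
print("hire rate, women:", model.predict(pool_female).mean())
# The model "learns" that being male predicts success, and filters women out.
```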

Fairness and AI

The implications for fairness and impartiality are, of course, tremendous and far-reaching. Organizations have to be held accountable for the tools they use; they cannot simply say their AI is a black box. Accountability requires different kinds of professional skills and review processes. One implication may be that we develop algorithms to help us police the other algorithms in order to produce the results we seek.

“We need to get away from the ‘accuracy metric’,” said Arvind Narayanan, Assistant Professor of Computer Science at Princeton. “The fact that a group of algorithms is performing well could, in fact, mean that they are doing a good job of reproducing existing biases. We need a more multi-dimensional way to evaluate how well our algorithms are doing.”

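Narayanan’s point can be made operational: report error rates per group alongside the headline accuracy number. A minimal sketch with placeholder data:

```python
# Sketch: evaluating a classifier on more than overall accuracy.
# y_true, y_pred, and group are placeholder arrays for illustration.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1, 0, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

print("overall accuracy:", (y_true == y_pred).mean())

for g in np.unique(group):
    m = group == g
    t, p = y_true[m], y_pred[m]
    fpr = ((p == 1) & (t == 0)).sum() / max((t == 0).sum(), 1)
    fnr = ((p == 0) & (t == 1)).sum() / max((t == 1).sum(), 1)
    print(f"group {g}: accuracy={(t == p).mean():.2f}  FPR={fpr:.2f}  FNR={fnr:.2f}")
```

Two groups can share the same overall accuracy while one absorbs most of the false positives – exactly the failure mode a single accuracy number hides.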

Governance gaps

Vanita Gupta, President & CEO of The Leadership Conference on Civil and Human Rights, lamented the current lack of consistent policies and protocols for managing public- and private-sector deployment of AI. She contends that whoever goes out first with a viable and scalable framework will win. It is still not clear who the right regulator is to determine whether AI is positioned correctly in, say, financial services or healthcare, to cite two examples; one reason is the general lack of a deep enough understanding to provide viable guidance. And there are, of course, even broader implications, such as who owns the issue of the labor displacement being driven by AI.

Part of the solution is increasing government regulators’ awareness of AI and improving their understanding of its associated technologies so they can make informed decisions about what is appropriate.

Rights and Liberties in an Automated World

There is a need for clearer guidance around how and when a company is using AI. Which decisions need to be revealed – all of them, or just the important ones…whatever those might be? People need to weigh the potential for “information asymmetry” and know what they are giving up when exchanging their data for goods and services. They also need to understand that, increasingly, there will be choices that humans make and choices that machines make – and we need to manage both realms.

Blaise Agüera y Arcas described Google’s approach to localized AI: managing data locally on an individual device. Their Smart Select functionality doesn’t send data directly to Google; instead, the device retains the information, such as corrections to a text or email. At certain intervals, that data is compressed, encrypted, and sent to the cloud. This hybrid model delivers the benefits of learning at large scale without compromising individual privacy, and it lowers latency by bringing the AI solution closer to the user.

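This is essentially the federated learning pattern. The toy sketch below shows the core loop under heavy simplification – a linear model, no compression or encryption, and invented function names – in which only small weight updates, never raw data, leave each “device.”

```python
# Toy sketch of the federated pattern described above: each device updates a
# model on its local data, and only the (small) updates leave the device.
# Compression and encryption are elided; names here are invented.
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=10):
    """Run a few gradient steps for a linear model on one device's data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):  # five devices, each with its own private data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(0, 0.1, 50)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(20):
    # Each device trains locally; only weight vectors are "sent to the cloud".
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(updates, axis=0)  # server averages the updates

print("learned weights:", global_w)  # approaches [2, -1] without pooling raw data
```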

AI as a design challenge

Genevieve Bell, an anthropologist, university professor, and Senior Fellow at Intel, described the “constellation of AI” as including human, cultural, and social practices. She framed AI writ large as a design challenge: what are the designs that work for everybody and for groups – minimal ones that are universal, or adaptive ones that can be specific?

Sendhil Mullainathan, a Harvard economics professor and recipient of a MacArthur Foundation “genius grant”, is excited about the potential of AI to hold a mirror up to ourselves and our biases. He described a recent meeting with senior HR executives where he presented his findings on AI-driven hiring processes: his research determined that African-sounding names were rejected at a far higher rate in the job application process than European-sounding ones. Again – a pre-existing bias being propagated in the new model.

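Disparities like this can be surfaced with a simple audit: compare selection rates across name groups and test whether the gap exceeds chance. A sketch using made-up counts purely for illustration:

```python
# Sketch: auditing callback/selection rates across name groups.
# The counts below are invented for illustration only.
from math import sqrt
from scipy.stats import norm

callbacks_a, applications_a = 60, 1000   # e.g., one name group
callbacks_b, applications_b = 95, 1000   # e.g., another name group

rate_a = callbacks_a / applications_a
rate_b = callbacks_b / applications_b

# Two-proportion z-test: is the gap larger than chance?
pooled = (callbacks_a + callbacks_b) / (applications_a + applications_b)
se = sqrt(pooled * (1 - pooled) * (1 / applications_a + 1 / applications_b))
z = (rate_b - rate_a) / se
p_value = 2 * norm.sf(abs(z))

print(f"callback rates: {rate_a:.1%} vs {rate_b:.1%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
```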

In closing, Blaise offered an interesting analogy. He said, “People dig holes of different sizes with shovels, but no one is worrying about how big the holes are that people are making with these tools.”

The same holds true for AI.

My key takeaways

  • AI is not singular – any discussion requires relational questions.
  • AI gives us an instrument we can use to measure ourselves.
  • We need to define our ability to opt out of AI interaction where possible – who has agency – users or owners? There will need to be nuanced solutions.
  • The big cognitive trap is that this is, in fact, a complex issue and cannot be addressed simply.

Please book “the ai guys” now for a high-level, entertaining, and understandable look at the single most transformative technology of the next 20 years.

@theaiguys | davidhoule.com | improvisingcareers.com