Global Summit 2018

We can hardly wait: in just one month it will be here! The second edition of the "AI for Good Global Summit" takes place in Geneva. From 15 to 17 May, representatives of the ITU, the XPRIZE Foundation, the Association for Computing Machinery (ACM), and over 30 sister agencies of the United Nations will come together. All with one goal: a dialogue on "beneficial AI", that is, conversations about how we can use artificial intelligence for good causes.

Just like last year, the programme is an enormously inspiring maze.

One of the returning keynote speakers is professor Stuart Russell. We at AI for Good NL had the chance to look ahead to the Summit with him.

Question 1
Will we see more examples of how artificial intelligence is actually helping us with the SDGs? Or is it still mostly a promise?

Stuart Russell is a professor (and former chair) of Electrical Engineering and Computer Sciences at the University of California, Berkeley.

Answer by professor Russell
The goal this year is to develop concrete projects that will be beneficial, connecting people with real needs, UN agencies, technical experts, data and computation resources, and funding sources. Identifying such projects takes time, so we are doing some homework ahead of the meeting.

Question 2
What are your thoughts on educating the public about machine reasoning? Do you feel there is a sense of urgency now? Or will we have to wait for a digital Fukushima to really start talking about this?

Answer by professor Russell
There is a distinction between how an AI system works in general and how it reached its decision in a particular case (most specifically, one's own case). The analogy to explaining nuclear reactors applies only at the general level, because nuclear reactors have no particular cases. The general public probably doesn't care how AI works in the general case.

Usually, person X only cares when the AI system makes a decision about person X, or about X’s family member. At that point it’s perfectly reasonable to ask for an explanation if the decision appears to make no sense. GDPR includes some such provisions (although I have not read the exact wording). But this is not a digital Fukushima; this is one person, say, losing their driving license.

A digital Fukushima does not have much to do with the right to an individual explanation. A catastrophe can result from systems that can have systemic effects on a global scale. The nature of explanation in this case is at a different level: how the general AI system capability functions and how it interacts in general with human and planetary systems.

Look here for more information about the Summit and how it aims to bring the SDGs closer.

Hanneke

Hanneke van Ewijk is a master's student at VU University Amsterdam. Her research focuses on "chatbots and brands". For Aigency, Hanneke writes the event reports.