AI with a conscience?

Three interesting articles on why A.I. shouldn’t be a black box, whether the stakes are scientific, ethical or moral.

1. Technology Review writes about how algorithmic systems have a way of making mistakes or leading to undesired consequences, and offers five principles to help technologists deal with that. Despite the potential for efficiency gains, algorithms fed by big data can also amplify structural discrimination, produce errors that deny services to individuals, or even seduce an electorate into a false sense of security. There is growing awareness that the public should be wary of the societal risks posed by over-reliance on these systems and should work to hold them accountable.

Read about these five: Responsibility, Explainability, Accuracy, Auditability & Fairness.

Choose your role (series)

2. Phys.org stresses the importance of a sixth principle: Reliability. They spoke with Zoubin Ghahramani, Professor of Information Engineering in Cambridge’s Department of Engineering.

“Machines can now achieve near-human abilities at many cognitive tasks even if confronted with a situation they have never seen before, or an incomplete set of data,” says Ghahramani. “But what is going on inside the ‘black box’? If the processes by which decisions were being made were more transparent, then trust would be less of an issue.”

His team builds the algorithms that lie at the heart of these technologies (the “invisible bit” as he refers to it). Trust and transparency are important themes in their work: “We really view the whole mathematics of machine learning as sitting inside a framework of understanding uncertainty. Before you see data – whether you are a baby learning a language or a scientist analysing some data – you start with a lot of uncertainty and then as you have more and more data you have more and more certainty.”
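To make that idea concrete, here is a minimal sketch (not from the article, using an assumed Beta-Bernoulli coin-flip example) of Bayesian updating: the model starts with maximum uncertainty and its posterior uncertainty shrinks as more data arrive.

```python
# Minimal illustration (assumed example, not Ghahramani's code) of uncertainty
# shrinking with data: a Beta-Bernoulli model estimating the bias of a coin.
import random

random.seed(0)
true_bias = 0.7          # the unknown quantity the model is trying to learn
alpha, beta = 1.0, 1.0   # uniform Beta prior: maximum uncertainty before any data

for n in [0, 10, 100, 1000]:
    # draw n coin flips and update the Beta posterior with the observed counts
    flips = [random.random() < true_bias for _ in range(n)]
    heads = sum(flips)
    a = alpha + heads
    b = beta + (n - heads)
    mean = a / (a + b)                              # posterior mean estimate
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))    # posterior variance
    print(f"n={n:5d}  estimate={mean:.3f}  uncertainty (std)={var ** 0.5:.3f}")
```

Running the sketch shows the estimate converging toward the true bias while the reported uncertainty steadily narrows – the same pattern Ghahramani describes, from the baby learning a language to the scientist analysing data.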

It's just an algorithm!

3. Be mindful of the Dark side, Luke
If you want to know why trust and transparency are crucial, check out this article by our friends at TechCrunch. They illustrate their story with sinister examples like Facefind and algorithms that copy your handwriting and can even impersonate you.

Category: algorithms, ethics
