Posts in Intuitive AI


The TED talk of the month comes from designer, futurist and innovator Maurice Conti. At TEDxPortland he showed how robots and humans will work side-by-side to accomplish things neither could do alone.

About this speaker
Maurice Conti is the Director of Applied Research & Innovation at Autodesk. He also leads Autodesk’s Applied Research Lab, which he built from the ground up. His team’s research focuses on advanced robotics, applied machine learning, the Internet of Things and climate change/sea level rise.

The perfect selfie, assisted by AI


Everyone has experienced the feeling of unexpectedly seeing themselves on their phone's screen because they opened the camera app with the selfie camera enabled. Not pretty. Well, today I learned that this is due to an effect called perspective distortion.

According to Wikipedia, when shooting a portrait photo and fitting the same area inside the frame:

The wide-angle will be used from closer, making the nose larger compared to the rest of the photo, and the telephoto will be used from farther, making the nose smaller compared to the rest of the photo.

Photographers have known this for ages. That’s why professional portraits are usually shot from a distance, using a telephoto lens to fit the subject’s face in the frame. But we civilians mostly capture our faces with a selfie camera, which uses a wide-angle lens. Maybe if we had four-metre-long selfie sticks we could do something about it, but that doesn’t seem very practical to me.
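The effect is easy to quantify with a simple pinhole model: apparent size scales with one over distance, and the nose sits a few centimetres closer to the camera than the rest of the face. A minimal sketch — the 5 cm nose offset and the distances are illustrative assumptions, not measured values:

```python
# Pinhole-model sketch: how much larger the nose appears relative to
# the plane of the face, as a function of camera distance.
# The 5 cm nose offset is an illustrative assumption.
def nose_exaggeration(camera_dist_m, nose_offset_m=0.05):
    """Apparent-size ratio of the nose vs. the face plane."""
    return camera_dist_m / (camera_dist_m - nose_offset_m)

print(round(nose_exaggeration(0.5), 3))  # arm's-length selfie -> 1.111 (~11% too big)
print(round(nose_exaggeration(2.0), 3))  # telephoto portrait distance -> 1.026 (~3%)
```

At arm's length the nose is rendered roughly 11% larger than it would be from two metres away, which is exactly the distortion a longer camera distance removes.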

The technology

Researchers from Princeton and Adobe have developed an algorithm that can adjust the apparent camera distance in post-production [1]. They achieve this by estimating a 3D model of the face, which includes camera position and orientation, and fitting it to the 2D image. If you then manually change one variable in this model, the algorithm calculates the expected changes in the remaining properties, and the result is projected onto a 2D image corresponding to the new camera position.

The representation of the face is obtained by automatically locating 66 facial features around the eyes, nose and chin. For this step, the researchers employ existing technology by Saragih et al. (2009) [2]. Because the detector they use doesn’t find key points on the ears and top of the head, these points have to be added manually. They are necessary to incorporate the ears and hair into the model; without them, warping would produce an uncanny result in which the perspective of the face changes but that of the hair and ears stays the same. Read More
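The reprojection step can be illustrated with a toy pinhole model: place 3D landmarks, project them from the original camera distance and from a new one, and the difference between the two sets of 2D positions is the warp that would be applied to the image. This is only a sketch of the idea, assuming a simple pinhole camera and made-up landmark coordinates, not the authors’ actual model:

```python
import numpy as np

def project(pts, cam_dist, focal=1.0):
    # Pinhole camera on the +Z axis looking at the origin:
    # 2D position = focal * (X, Y) / depth.
    z = cam_dist - pts[:, 2]
    return focal * pts[:, :2] / z[:, None]

# Hypothetical 3D landmarks (metres, face-centred). The nose sits
# ~5 cm in front of the plane through the ears.
landmarks = np.array([
    [ 0.015, -0.02, 0.05],  # right nostril
    [-0.015, -0.02, 0.05],  # left nostril
    [ 0.030,  0.03, 0.01],  # right eye corner
    [-0.030,  0.03, 0.01],  # left eye corner
    [ 0.080,  0.00, 0.00],  # right ear
    [-0.080,  0.00, 0.00],  # left ear
])

src = project(landmarks, cam_dist=0.5)             # selfie distance
dst = project(landmarks, cam_dist=2.0, focal=4.0)  # farther camera, focal scaled 4x to keep framing

# Per-landmark displacement that would drive the image warp:
# points in the ear plane stay put, the nostrils move inward.
warp = dst - src
```

Moving the virtual camera back while scaling the focal length keeps the overall framing, but the nose landmarks shift towards the centre of the face — that displacement field is what makes the warped selfie look like a telephoto portrait.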

Fraiday 4: AI & Creativity


Fraiday is the monthly meet-up for professionals who are AI-curious. We have a simple format, involving beers, a theme, lively debate and a group picture.

The location of Fraiday #4 was spectacular in many ways. Our friends at Osudio were kind enough to lend us their office, including the rooftop terrace. And as you can see in this exclusive behind-the-scenes footage, Jim went through quite some trouble to take the group picture. Now there’s someone who is not afraid to lose his job!

The theme this month was “AI & Creativity” and we invited Erik van der Pluijm to introduce the subject. Erik is the creative director at 30x and co-author of the best-selling book “Design a Better Business”. Erik did a great job of asking questions (How creative can a machine get? What is the future of value?) and answering some of them. Read More

SXSW: AI highlights


This year there was a lot of AI to discover at the SXSW festival in Texas: emotionally intelligent machines, AI and food, and much more. We talked with Michiel Berger, and in this blog post we will share some of this year’s AI highlights, discussing the possibilities of AI with some remarkable examples!

First, it is worth noting how capable AI already is. The SXSW conference offered plenty of examples that illustrate this, among them emotionally intelligent machines, image recognition from the sky and mayday voice forensics. We will discuss these three examples in more depth to show how well AI already works.

Emotionally intelligent machines

How do machines understand why someone is mad? Or when someone is making a joke? And how should machines respond? These are interesting questions to think about. At SXSW there was an example showing how much can already be read from spoken language. It is possible to detect quite precisely how someone is feeling and what their emotions are. Machines can detect whether people agree or disagree, understand or don’t understand. This is done through word and face recognition, facial expressions, micro-expressions, voice recognition and biometric measurements. An important question is how the system should respond. How do you expect a machine to respond to you? There are a few options:

  • Option 1: The system doesn’t detect a feeling but only facts. For example: the machine says ‘your heartbeat is high’.
  • Option 2: The system recognizes feelings. For example: the machine says ‘I can tell you are angry’.
  • Option 3: The system responds like a human, it interprets feelings and gives advice. For example: the machine says ‘I can tell you are sad, maybe you should take a walk?’.

It may be that people feel they first have to get to know the machine, and expectations of its responses can change over time. Maybe people will get used to machines detecting and responding to emotions. Brands can use this to connect with consumers in a new way. Important to keep in mind here is: ‘Design only what you can understand, don’t manipulate emotions you don’t understand’.
Read More

A robot walks into a bar


The TED talk of the month comes from Heather Knight, the Assistant Director of Robotics at Humanity+. At TEDWomen 2010 she introduced us to a joke-telling robot, Marilyn Monrobot.

About the speaker
Heather Knight is conducting her doctoral research at the intersection of robotics and entertainment at Carnegie Mellon’s Robotics Institute. Her installations have been featured at the Smithsonian-Cooper Hewitt Design Museum, LACMA, SIGGRAPH, PopTech and the Fortezza da Basso in Florence, Italy.

Fraiday 3: The future of Work


Fraiday is the monthly meet-up for professionals who are AI-curious. We have a simple format, involving beers, lively debate around a theme and a group picture.


This evening featured a fireside chat with Maarten Lens-FitzGerald, co-founder of Layar and currently on a mission to augment the workplace. Maarten looked back on his adventure with Layar and drew some nice parallels with the current hype around AI. He also explained how the future of organizations depends on how they put their teams together.

We hope to see you at our next event!

Can we build AI without losing control?


The TED talk of the week comes from neuroscientist and philosopher Sam Harris. We’re going to build superhuman machines, he says, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

About the speaker
Sam Harris is the author of five New York Times bestsellers. His books cover a wide range of topics — neuroscience, moral philosophy, religion, spirituality, violence, human reasoning — but generally focus on how a growing understanding of ourselves and the world is changing our sense of how we should live. His work has been published in more than 20 languages and has been discussed in the New York Times, Time, Scientific American, Nature, Newsweek, Rolling Stone and many other journals. He has written for the New York Times, the Los Angeles Times, The Economist, The Times (London), the Boston Globe, The Atlantic, The Annals of Neurology and elsewhere.

We <3 Robots?


The debate on robots and society has predominantly focused on their presumed talent to destroy rather than create value. Here are two articles to balance the debate: two positive views on robots entering the workforce.

1. Amazon and its hybrid work-force
Let’s look at how the world’s largest online retailer is using robots in its fulfillment centers. By doing so Amazon has been able to drive down shipping costs and pass those savings on to customers. Cheaper shipping made more people use Amazon, and the company hired more workers to meet this increased demand. Tasks involving fine motor skills, judgment or unpredictability are handled by people. They stock warehouse shelves with items that come off delivery trucks.

In 2016 the company grew its robot workforce by 50 percent, from 30,000 to 45,000. Far from laying off 15,000 people, though, Amazon increased human employment by around 50 percent in the same period of time. Even better, the company’s Q4 2016 earnings report included the announcement that it plans to create more than 100,000 new full-time, full-benefit jobs in the US over the next 18 months. Read More

Fraiday #2: Race against the Machines


Fraiday is the monthly meet-up for professionals who are AI-curious. We have a simple format, involving beers, lively debate around a theme and a group picture.

Last January our theme was “Artificial Intelligence is no match for Natural Stupidity”. We shared a lot of funny anecdotes about how algorithms are often considered silver bullets and how, although we’ve seen major progress in the last five years, real A.I. is still far, far away.

For the February edition the theme was “Race against the Machines”. All attendees were divided into groups to discuss what it is to be human in a world that is rapidly becoming more and more machine-driven. What are things that a machine will never replace? And what kinds of tasks should (or could only) be done by algorithms?

Musk vs Page
The teams were then given the ultimate assignment: argue why AI will eventually save us (Team Page, named after Larry Page, who has been quoted saying that AI will give us better lives and more free time) or destroy us (Team Musk, named after Elon Musk and Stephen Hawking, who warn about the downside of giving power to machines).

Our lovely assistant Alexa flipped a coin and started the timer for the first round:

  • Team Musk warned us that AI will create an automated form of humans; this AI will learn from humans, and because humans are imperfect, it will magnify those imperfections. Moreover, we cannot control AI because we cannot understand it, and we cannot go back if something goes wrong. Another point was that AI carries no responsibility; it cannot go to jail, for example. AI becomes anti-human, and the bad comes before the good. Can we really survive AI?
  • In their response, Team Page promised us that AI will bring a modern paradise. Think of the dog principle: we once kept dogs as hunters, but now they are social animals. There would be a good system: healthcare for every person on the globe, global stability, financial systems without problems or funny transactions, and abundance for all. Paradise!

Some quotes from the second round:

  • In the second round of the debate, Team Musk came back with counter-arguments such as: AI is a black box; we can’t learn from AI; we don’t understand it; the software runs on its own. They also mentioned that they didn’t want to walk on a leash like AI’s dog. You always have to make mistakes to learn, and that is dangerous. Last one-liner: stay on top of the food chain.
  • Team Page responded that AI reprograms itself and that humans are going to learn from AI: to exist together in this beautiful world, to be better humans. There are also laws for AI. We are in a bad place right now, so we can only go up! Last one-liner: AI is evolution into enlightenment.

And the winner is
In the end the jury had a really hard time deciding who the winners were. In their wisdom they concluded that the debate itself was the grand prize. As we technologists are often accused of techno-optimism, it can be very insightful to put ourselves in the shoes of the non-believers. In the end, we have to work it out together.

See you next month?

The jobs of the future


Every other Tuesday our team sits together to watch a TED Talk. Today we listened to professor David Autor. In this video, recorded at TEDxCambridge in September 2016, he addresses the question of why there are still so many jobs and comes up with a surprising, hopeful answer.

About this speaker
David Autor is one of the leading labor economists in the world and a member of the American Academy of Arts and Sciences. He is Professor of Economics and associate department head of the Massachusetts Institute of Technology Department of Economics. His best-known research formally models and empirically analyzes how computerization substitutes for and complements human labor; asks how the rapid rise of import competition from China has reshaped U.S. manufacturing, upending the conventional economic wisdom that free trade is a free lunch; explores how the economic pressures of globalization are reshaping U.S. electoral politics; and conducts large-scale randomized experiments that test whether generous financial aid grants improve the odds of college completion and long-run economic security of students from low-income families.