Research

AI Hub at Science Park

The University of Amsterdam (UvA) has announced a world-class hub in the field of Artificial Intelligence at Amsterdam Science Park.

The new co-creation space will be the premier place for education, research and entrepreneurship in the field of artificial intelligence. The City of Amsterdam is providing €4 million to support the initiative.

> Read full AI Hub at Science Park post


The perfect selfie, assisted by AI

Everyone has experienced the feeling of unexpectedly seeing themselves appear on the screen of their phone because they opened the camera app with the selfie camera enabled. Not pretty. Well, today I learned that this is due to an effect called perspective distortion.

According to Wikipedia, when shooting a portrait photo and fitting the same area inside the frame:

> The wide-angle will be used from closer, making the nose larger compared to the rest of the photo, and the telephoto will be used from farther, making the nose smaller compared to the rest of the photo.

Photographers have known this for ages. That’s why professional portraits are usually shot from a distance, using a telephoto lens to fit the subject’s face in the frame. But we civilians mostly capture our faces with a selfie camera, which uses a wide-angle lens. Maybe if we had 4-metre-long selfie sticks, we could do something about it. But that doesn’t seem very practical to me.
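To get a feel for how big the effect is, here is a back-of-the-envelope sketch in Python. The distances are made-up assumptions, purely for illustration; the only real ingredient is that apparent size scales with one over the distance to the camera.

```python
# Apparent size under a pinhole camera scales as 1/distance, so features
# closer to the lens are magnified more. All distances below are made-up
# assumptions for illustration.

def nose_to_ear_magnification(cam_to_nose_m: float, cam_to_ears_m: float) -> float:
    """How much larger the nose appears relative to the ears."""
    return cam_to_ears_m / cam_to_nose_m

# Selfie: camera ~0.5 m away, nose roughly 10 cm closer than the ears.
print(nose_to_ear_magnification(0.40, 0.50))  # 1.25 -> nose looks ~25% too big

# Telephoto portrait: camera ~2.5 m away, same 10 cm of facial depth.
print(nose_to_ear_magnification(2.40, 2.50))  # ~1.04 -> distortion almost gone
```

Just stepping back from half a metre to a few metres shrinks the exaggeration of the nose from roughly 25% to roughly 4%, which is exactly why portrait photographers back up and zoom in.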

The technology

Researchers from Princeton and Adobe have developed an algorithm that can adjust the apparent camera distance in post-production [1]. They achieve this by estimating a 3D model of the face, including camera position and orientation, and fitting it to the 2D image. If you then manually change one variable in this model, the algorithm calculates the expected changes in the remaining properties, and the result is projected onto a 2D image corresponding to the new camera position.
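As a rough illustration of the geometric core of that idea (my own simplified sketch, not the authors’ implementation), the snippet below projects a few fitted 3D face points through a pinhole camera, then pulls the virtual camera back while scaling the focal length so the face keeps roughly the same size in the frame:

```python
import numpy as np

# Hypothetical sketch of the reprojection step, not the paper's actual code.
# Assumes a pinhole camera on the z-axis looking at a fitted 3D face model.

def project(points_3d: np.ndarray, focal: float, camera_z: float) -> np.ndarray:
    """Project 3D points of shape (N, 3) onto the image plane."""
    depth = points_3d[:, 2] - camera_z            # camera-to-point distance
    return focal * points_3d[:, :2] / depth[:, None]

# Three fitted face points in metres (made-up values): nose tip and two ears.
face = np.array([[ 0.00, 0.0, 0.40],   # nose tip, closest to the camera
                 [ 0.07, 0.0, 0.50],   # left ear, 10 cm deeper
                 [-0.07, 0.0, 0.50]])  # right ear

selfie = project(face, focal=1.0, camera_z=0.0)

# Pull the camera back 2 m and scale the focal length by the same factor
# (measured at the nose) so the face stays the same size in the frame.
portrait = project(face, focal=2.40 / 0.40, camera_z=-2.0)

print(selfie)    # ears at x = ±0.140: face compressed around a large nose
print(portrait)  # ears at x = ±0.168: proportions relax toward the telephoto look
```

Changing a single variable (the camera distance) and re-rendering is the easy part; the real work in the paper is estimating the 3D model well enough that the re-rendered image looks natural.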

The representation of the face is obtained by automatically locating 66 facial features around the eyes, nose and chin. For this step, the researchers employ existing technology by Saragih et al. (2009) [2]. Because the detector they use doesn’t find key points on the ears and top of the head, these points have to be added manually. They are needed to incorporate the ears and hair into the model; without them, warping would produce an uncanny result in which the perspective of the face changes but that of the hair and ears stays the same.
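If you want to play with landmark detection yourself, dlib ships an off-the-shelf pretrained 68-point predictor that covers much the same ground. To be clear, this is a stand-in for the 66-point tracker from Saragih et al. used in the paper, not the paper’s own code, and the file names below are assumptions:

```python
# Sketch using dlib's pretrained 68-point landmark predictor as a stand-in
# for the 66-point tracker the paper uses. The .dat model file has to be
# downloaded from dlib.net first; the file names here are assumptions.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("selfie.jpg")
for face in detector(img):
    shape = predictor(img, face)
    points = [(p.x, p.y) for p in shape.parts()]
    print(len(points), "landmarks around the eyes, nose, mouth and jawline")
    # Just like the detector in the paper, this produces no points on the
    # ears or the top of the head -- those still have to be added by hand.
```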

> Read full The perfect selfie, assisted by AI post


A new pair of eyes

Researchers are working hard on the ability of computers to mimic the human senses: in their own way, to see, smell, touch, taste and hear. In this article we highlight two examples of algorithms that seem to be beating us at our own game.

Your eyes can be deceiving. Sometimes, even for humans, it is hard to distinguish a muffin from a Chihuahua. Most of us can recognize an object after seeing it once or twice. But the algorithms that power computer vision and voice recognition need thousands of examples to become familiar with each new image or word.

Researchers at Google DeepMind now have a way around this. They made a few clever tweaks to a deep-learning algorithm that allow it to recognize objects in images, and other things, from a single example, something known as “one-shot learning.”
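Stripped to its bones, this kind of one-shot learner classifies a new example by comparing its embedding to a handful of labelled “support” examples, one per class, with no retraining. Here is a minimal generic sketch, not DeepMind’s exact model; the embed() function is a placeholder assumption for what would be a trained deep network:

```python
import numpy as np

# Bare-bones sketch of the matching idea behind one-shot learning. The
# embed() function is a placeholder assumption; in the real system it is
# a deep network trained so that similar inputs land close together.

def embed(x: np.ndarray) -> np.ndarray:
    """Placeholder embedding: flatten and L2-normalise."""
    v = x.flatten().astype(float)
    return v / np.linalg.norm(v)

def one_shot_classify(query, support_examples, support_labels):
    """Label the query with the class of its most similar support example."""
    q = embed(query)
    similarities = [q @ embed(s) for s in support_examples]  # cosine similarity
    return support_labels[int(np.argmax(similarities))]

# One labelled example per class is enough; adding a brand-new class means
# adding one entry to the support set, with no retraining at all.
support = [np.random.rand(8, 8), np.random.rand(8, 8)]
labels = ["muffin", "chihuahua"]
print(one_shot_classify(np.random.rand(8, 8), support, labels))
```

With a toy embedding like this the predictions are of course meaningless; the clever tweaks in the research are about training the embedding end to end so that this nearest-support-example rule actually works.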

> Read full A new pair of eyes post


Amazing discoveries

Computers that create their own secret language. The end of black-and-white photographs. And an algorithm that knows what will come next in your photo. In this article we highlight three new examples of skills that computers have and humans not so much.

1. Esperanto was once invented to help people from different cultures talk to each other using an easy-to-learn, politically neutral language. Recently, researchers at Google witnessed a similar concept emerge when machines, working together, tried to talk in a new language of their own.

> Read full Amazing discoveries post