
#FRAIDAY 29 SEPT: AI FOR GOOD


Are you interested in Artificial Intelligence and creating a better world? Come to the next Fraiday on the 29th of September!

How can we use AI for the greater good?

During the next Fraiday we’ll discuss how Artificial Intelligence can be used for humanitarian causes. Expect many students, startups and NGOs, plus a surprise speaker. Warning: you might be accused of being overly “optimistic” after you attend this meetup.

  • Date: 29th Sept 2017
  • Time: 6 pm
  • Location: Venture Studio, Science Park, Amsterdam

Interested? Sign up here!

 

Zeynep Tufekci


The TED talk of the month discusses big questions about our societies and our lives as both algorithms and digital connectivity spread: complex algorithms that watch, judge and nudge us.

About the speaker
Zeynep Tufekci is a contributing opinion writer at the New York Times and an associate professor at the School of Information and Library Science at the University of North Carolina, Chapel Hill. Her book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, was published in 2017 by Yale University Press.

Live from Taiwan!


Computex is the largest computer fair in Asia, and not surprisingly AI was an important theme this year! Our COO Eiso Vaandrager represented Aigency in this bustling environment. He occupied a popular booth in the AI section and took the stage on day 2.

“We spoke with a lot of different interesting parties and presented the AI concept of Aigency. This resulted in conversations with many interesting Taiwanese companies where cooperation is possible. All the big players were there, like IBM Watson, Google, Intel and NVIDIA. They were all there to see what is happening in the AI area,” says Eiso.

Taiwan is a huge producer of electronics and a leader in AI, which made it an interesting place to present Aigency. Another Dutch company, Travis the Translator, was also present. Travis is a bot that speaks 80 different languages.

Moreover, Eiso says he was “definitely impressed by the whole event, and once again overwhelmed by how fast all this is going. Even if you follow the AI updates, you are still behind”.

An interesting insight was that the gap between the big companies that are really going for it and those staying behind is getting bigger and bigger. That is an opportunity for us: we can help the companies that are lagging, because they have no clue where this is going and will wake up in a new world.

Besides that, it is remarkable how much more willingness there is in Asia to experiment with and implement AI than here in Europe.

In Europe there is a lot of discussion, and we should be careful with that, because in Asia the sense of urgency is understood better. Here in Europe we also have to experiment; that is how you learn and become able to implement the things that work. In the Netherlands, a conversation about AI is most of the time a shallow one: “we have data”. In Asia, the conversations go far deeper, about data and privacy for example. They are one step ahead of us.

A last quote from Eiso: “It was a really nice experience in Taiwan, especially since they are totally electronics-crazy over there!”

 

Maurice Conti


The TED talk of the month comes from designer, futurist and innovator Maurice Conti. At TEDxPortland he showed how robots and humans will work side-by-side to accomplish things neither could do alone.

About this speaker
Maurice Conti is the Director of Applied Research & Innovation at Autodesk. He also leads Autodesk’s Applied Research Lab, which he built from the ground up. His team’s research focuses on advanced robotics, applied machine learning, the Internet of Things and climate change/sea level rise.

The perfect selfie, assisted by AI


Everyone has experienced the feeling of unexpectedly seeing themselves appear on their phone’s screen because they opened the camera app with the selfie camera enabled. Not pretty. Well, today I learned that this is due to an effect called perspective distortion.

According to Wikipedia, when shooting a portrait photo and fitting the same area inside the frame:

The wide-angle will be used from closer, making the nose larger compared to the rest of the photo, and the telephoto will be used from farther, making the nose smaller compared to the rest of the photo.

Photographers have known this for ages. That’s why professional portraits are usually shot from a distance, using a telephoto lens to fit the subject’s face in the frame. But we civilians mostly capture our faces with a selfie camera, which uses a wide-angle lens. Maybe if we had 4 m long selfie sticks we could do something about it, but that doesn’t seem very practical to me.
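To get a feel for the numbers: a pinhole camera magnifies everything by focal length divided by distance, so a nose that sits a few centimetres closer to the lens than the eyes is enlarged relative to the rest of the face. A quick back-of-the-envelope calculation (the 2.5 cm nose offset and the camera distances are illustrative guesses, not measurements from the article):

```python
# Relative magnification of the nose versus the eyes under a pinhole model.
# Projected size scales with 1/Z (distance to the camera), so this ratio
# says how much larger the nose appears than the rest of the face.
NOSE_OFFSET = 0.025  # metres the nose sits closer to the camera than the eyes

for camera_distance in (0.5, 1.0, 4.0):  # selfie arm, arm + stick, long stick
    ratio = camera_distance / (camera_distance - NOSE_OFFSET)
    print(f"camera at {camera_distance:.1f} m: nose magnified {ratio:.3f}x")

# camera at 0.5 m: nose magnified 1.053x
# camera at 1.0 m: nose magnified 1.026x
# camera at 4.0 m: nose magnified 1.006x
```

At half a metre, a typical selfie distance, the nose comes out roughly 5% oversized; from four metres away the distortion all but disappears, which is exactly why that impractical 4 m selfie stick would work.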

The technology

Researchers from Princeton and Adobe have developed an algorithm that can adjust the camera distance in post-production [1]. They achieve this by estimating a 3D model of the face, including camera position and orientation, and fitting it to the 2D image. If you then manually change one variable in this model, the algorithm calculates the expected changes in the remaining properties, and the result is projected onto a 2D image corresponding to the new camera position.
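The following toy model is not the paper’s algorithm (which fits a full 3D face model to the photo), but it shows the geometric idea behind the reprojection step: move the virtual camera back and scale the focal length up so the framing stays the same, and only the perspective changes. All coordinates below are made up for illustration:

```python
import numpy as np

def project(points_3d: np.ndarray, camera_distance: float, focal: float) -> np.ndarray:
    """Pinhole projection of 3D points (z = depth relative to the face
    centre, in metres) for a camera looking down the z-axis."""
    z = camera_distance + points_3d[:, 2]          # depth of each point
    return focal * points_3d[:, :2] / z[:, None]   # perspective divide

# Toy "face": eyes at the reference depth, nose tip 2.5 cm closer to the camera.
face = np.array([[-0.03,  0.00,  0.000],   # left eye
                 [ 0.03,  0.00,  0.000],   # right eye
                 [ 0.00, -0.04, -0.025]])  # nose tip

near = project(face, camera_distance=0.5, focal=1.0)
# Camera 4x farther away, focal length 4x longer: the eyes project to the
# same image positions, but the nose shrinks toward its true proportion.
far = project(face, camera_distance=2.0, focal=4.0)

print("nose y, close camera:", near[2, 1])  # about -0.084
print("nose y, far camera:  ", far[2, 1])   # about -0.081
```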

The representation of the face is obtained by automatically locating 66 facial features around the eyes, nose and chin. For this step, the researchers employ existing technology by Saragih et al. (2009) [2]. Because the detector they use doesn’t find key points on the ears and the top of the head, these points have to be added manually. They are necessary to incorporate the ears and hair into the model; without them, warping would produce an uncanny result where the perspective of the face changes but that of the hair and ears stays the same.
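For readers who want to try this first step themselves: the paper relies on Saragih et al.’s 66-point tracker, but a freely available stand-in is dlib’s 68-point landmark predictor. A minimal sketch, assuming you have downloaded dlib’s pretrained model file and have a selfie.jpg to test on:

```python
import dlib

# Face detector plus the pretrained 68-point landmark model (download the
# .dat file from the dlib model zoo first; the path here is an assumption).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("selfie.jpg")  # hypothetical input image
for face in detector(img):
    shape = predictor(img, face)
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print(f"found {len(points)} landmarks around the eyes, nose, mouth and jaw")
    # As in the paper, there are no key points on the ears or the top of the
    # head, so those would have to be added by hand before fitting a 3D model.
```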

Fraiday 4: AI & Creativity


Fraiday is the monthly meet-up for professionals who are AI-curious. We have a simple format involving beers, a theme, lively debate and a group picture.

The location of Fraiday #4 was spectacular in many ways. Our friends at Osudio were kind enough to lend us their office, including the rooftop terrace. And as you can see in this exclusive behind-the-scenes footage, Jim went through quite some trouble to take the group picture. Now there’s someone who is not afraid to lose his job!

The theme this month was “AI & Creativity” and we invited Erik van der Pluijm to introduce this subject. Erik is the creative director at 30x and the co-author of the best-selling book “Design a Better Business”. Erik did a great job of asking questions (How creative can a machine get? What is the future of value?) and answering some of them.

SXSW: AI highlights


This year there was a lot of AI to discover at the SXSW festival in Texas: emotionally intelligent machines, AI and food, and much more. We talked with Michiel Berger, and in this blog post we will give you some of this year’s AI highlights, discuss the possibilities of AI and give some remarkable examples!

First, it is important to mention how capable AI already is; it is amazing what we can already do with it. At the SXSW conference there were many examples that showed this very well, such as emotionally intelligent machines, image recognition from the sky and mayday voice forensics. These three examples are discussed in more depth to show how well AI already works.

Emotionally intelligent machines

How do machines understand why someone is mad, or when someone is making a joke? And how will machines respond? This is a very interesting thing to think about. At SXSW an example showed that a lot can already be analyzed from spoken language: it is possible to detect quite precisely how someone is feeling and what their emotions are. Machines can detect whether people agree or disagree, understand or don’t understand. This is done through word and face recognition, facial expressions, micro-expressions, voice recognition and biometric measurements. An important question is how the system will respond. How do you expect a machine to respond to you? There are a few different options (a minimal sketch in code follows the list):

  • Option 1: The system doesn’t detect a feeling, only facts. For example, the machine says ‘your heartbeat is high’.
  • Option 2: The system recognizes feelings. For example, the machine says ‘I can tell you are angry’.
  • Option 3: The system responds like a human: it interprets feelings and gives advice. For example, the machine says ‘I can tell you are sad, maybe you should take a walk?’.
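Purely as an illustration of those three escalating levels, here is a minimal Python sketch. The emotion label, the heart-rate input and the replies are invented assumptions for this post, not an API from any of the SXSW demos:

```python
from enum import Enum

class ResponseLevel(Enum):
    FACTS_ONLY = 1  # option 1: report measurements, no interpretation
    RECOGNIZE = 2   # option 2: name the detected feeling
    ADVISE = 3      # option 3: interpret the feeling and give advice

def respond(emotion: str, heart_rate: int, level: ResponseLevel) -> str:
    """Return a reply for a detected emotional state at the chosen level."""
    if level is ResponseLevel.FACTS_ONLY:
        return f"Your heartbeat is high: {heart_rate} bpm."
    if level is ResponseLevel.RECOGNIZE:
        return f"I can tell you are {emotion}."
    return f"I can tell you are {emotion}; maybe you should take a walk?"

print(respond("sad", 95, ResponseLevel.ADVISE))
```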

It is possible that we as people feel we first have to get to know the machine, and our expectations of its responses may change over time. Maybe people will get used to machines detecting and responding to emotions. Brands can use this to make a connection with consumers in a new way. Important to keep in mind here: ‘Design only what you can understand, don’t manipulate emotions you don’t understand’.

Heather Knight


The TED talk of the month comes from Heather Knight, the Assistant Director of Robotics at Humanity+. At TEDWomen 2010 she introduced us to a joke-telling Marilyn Monrobot.

About the speaker
Heather Knight is conducting her doctoral research at the intersection of robotics and entertainment at Carnegie Mellon’s Robotics Institute. Her installations have been featured at the Smithsonian-Cooper Hewitt Design Museum, LACMA, SIGGRAPH, PopTech and the Fortezza da Basso in Florence, Italy.

Fraiday 3: The future of Work


Fraiday is the monthly meet-up for professionals who are AI-curious. We have a simple format, involving beers, lively debate around a theme and a group picture.


This evening featured a fireside chat with Maarten Lens-FitzGerald, former co-founder of Layar and currently on a mission to augment the workplace. Maarten looked back on his adventure with Layar and drew some nice parallels with the current hype around AI. He also explained how the future of organizations depends on how they put together their teams.

We hope to see you at our next event!

Sam Harris


The TED talk of the week comes from neuroscientist and philosopher Sam Harris. We’re going to build superhuman machines, he says, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

About the speaker
Sam Harris is the author of five New York Times bestsellers. His books cover a wide range of topics — neuroscience, moral philosophy, religion, spirituality, violence, human reasoning — but generally focus on how a growing understanding of ourselves and the world is changing our sense of how we should live. His work has been published in more than 20 languages and has been discussed in the New York Times, Time, Scientific American, Nature, Newsweek, Rolling Stone and many other journals. He has written for the New York Times, the Los Angeles Times, The Economist, The Times (London), the Boston Globe, The Atlantic, The Annals of Neurology and elsewhere.