SXSW: AI highlights


This year there was plenty of AI to discover at the SXSW festival in Texas: emotionally intelligent machines, AI and food, and much more. We talked with Michiel Berger, and in this blog post we share some of this year's AI highlights, discuss the possibilities of AI, and give some remarkable examples!

First, it is worth noting how capable AI already is; it is amazing what we can already do with it. The SXSW conference offered plenty of examples that show this well, among them emotionally intelligent machines, image recognition from the sky, and mayday voice forensics. We discuss these three examples in more depth below to show how well AI already works.

Emotionally intelligent machines

How does a machine understand that someone is angry? Or that someone is making a joke? And how should a machine respond? This is a very interesting thing to think about. At SXSW, one example showed how much can already be analyzed from spoken language: it is possible to detect quite precisely how someone is feeling and what their emotions are. Machines can detect whether people agree or disagree, understand or don't understand. This is done through word and face recognition, facial expressions, micro-expressions, voice recognition, and biometric measurements. An important question is how the system should respond. How do you expect a machine to respond to you? There are several options:

  • Option 1: The system detects no feelings, only facts. For example, the machine says: 'Your heartbeat is high.'
  • Option 2: The system recognizes feelings. For example, the machine says: 'I can tell you are angry.'
  • Option 3: The system responds like a human; it interprets feelings and gives advice. For example, the machine says: 'I can tell you are sad, maybe you should take a walk?'
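As a thought experiment, the three options can be read as escalating response strategies. The sketch below is purely illustrative: the `respond` function, its rule-based advice table, and the inputs are our own invention, not a description of any system shown at SXSW.

```python
# Hypothetical sketch: three escalating ways a machine could respond
# to a detected emotional state (all names and rules are illustrative).

def respond(emotion: str, heart_rate: int, level: int) -> str:
    """Return a response at one of three levels of emotional awareness."""
    if level == 1:
        # Option 1: report only facts, no interpretation.
        return f"Your heart rate is {heart_rate} bpm."
    if level == 2:
        # Option 2: name the recognized feeling.
        return f"I can tell you are {emotion}."
    # Option 3: interpret the feeling and offer advice, like a human would.
    advice = {"sad": "maybe you should take a walk?",
              "angry": "maybe take a few deep breaths?"}
    return f"I can tell you are {emotion}, {advice.get(emotion, 'is there anything I can do?')}"

print(respond("sad", 96, 1))   # facts only
print(respond("sad", 96, 2))   # recognized feeling
print(respond("sad", 96, 3))   # interpretation plus advice
```

Which level is appropriate is exactly the open design question the session raised: the same detected state can be surfaced as a fact, a label, or advice.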

It may be that we, as people, first need to get to know the machine; our expectations of its responses can change over time. Perhaps people will simply get used to machines detecting and responding to emotions. Brands can use this to connect with consumers in a new way. Important to keep in mind here: 'Design only what you can understand, don't manipulate emotions you don't understand.'

Furthermore, it is exciting to see how companies are already using AI in their daily operations:

Image recognition from the sky
For example, companies take aerial pictures of a city's port and use machine learning to infer a country's economic trends from them, such as counting how many container ships are in the port. The insight here: you could simply ask the other country about its economic situation, but you can also check for yourself and know for sure.
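Once an object detector has labeled what it sees in each aerial image, turning that into a trend signal is mostly counting and comparing. The sketch below assumes detector output as a list of labeled detections; the detector itself, the labels, and the sample data are all invented for illustration.

```python
# Hypothetical sketch: aggregate object-detector output from aerial
# port images into a simple activity signal. The detections below are
# made-up stand-ins for a real detector's output.

def count_ships(detections: list[dict]) -> int:
    """Count detections labeled as container ships."""
    return sum(1 for d in detections if d["label"] == "container_ship")

jan = [{"label": "container_ship"}, {"label": "crane"},
       {"label": "container_ship"}]
jun = [{"label": "container_ship"}] * 5 + [{"label": "crane"}]

trend = count_ships(jun) - count_ships(jan)
print(f"Ships in January: {count_ships(jan)}, in June: {count_ships(jun)}")
print("Port activity is", "up" if trend > 0 else "down or flat")
```

The hard part in practice is of course the detection model and the imagery pipeline, not this aggregation step.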

Mayday voice forensics
Another really cool example is the Coast Guard using AI to check for fake mayday calls. With such calls, you have to be sure whether someone is really in danger or whether it is a hoax. It is surprising how much the system can tell about a person based only on voice recordings. In one real case, they found that the caller was white, raised in America (probably the north east), about 175 cm tall, about 75 kg, and about 40 years old. Moreover, they found that he was not in any kind of trouble, not on a boat, and sitting on a metal chair on a concrete floor! This example shows how AI can help us in specific situations such as emergency calls.

Trial & error!

These examples show that AI is already well developed and can be used for many purposes. But it is essential to experiment with AI: nobody knows exactly how it works, so we will just have to try and find out. Only then can we answer questions such as: how will consumers respond to machines? What do consumers expect? This is important, for example, in conversational web cases.

For chat interfaces, the biggest challenge is setting the right expectations, and that takes a lot of experimenting. A guided approach seems very useful here: offer the consumer buttons with different options. The consumer can't ask just anything, but you can still learn from what they choose.
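A guided approach like this can be prototyped very simply: instead of free text, the bot presents a fixed set of buttons and logs every pick, so you learn what consumers want without an open-ended interface. A minimal sketch, in which the option texts and the logging scheme are invented for illustration:

```python
# Minimal sketch of a guided chat step: the consumer chooses from
# buttons rather than typing freely, and every choice is logged so we
# can learn what consumers actually want to ask.

choice_log: list[str] = []

def guided_step(prompt: str, options: list[str], choice: int) -> str:
    """Present fixed options, record the pick, and return the result."""
    picked = options[choice]
    choice_log.append(picked)          # learn from the consumer
    return f"{prompt} -> you chose: {picked}"

options = ["Track my order", "Change my address", "Talk to a human"]
print(guided_step("How can I help you?", options, 2))
print("Most requested so far:", max(set(choice_log), key=choice_log.count))
```

The design trade-off is exactly the one in the text: buttons constrain the consumer, but they make expectations explicit and give you clean data to learn from.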

Besides that, you have to make sure people keep freedom in their choices. If they rely entirely on technology, problems can arise: think of people blindly following their navigation's 'go right' and driving into a lake. Finding the right expectations requires a lot of experimenting.

AI is not possible without humans

To a large extent, humans are still needed; without them, AI is not possible. Humans have to learn what works and what doesn't, and adjust accordingly. Humans also have to learn how to use AI systems. Besides that, people aren't perfect, so should AI be perfect? People may give wrong answers or lack the knowledge themselves. We have already learned a lot about our consumers because we know them from the web; this way we can predict how they will behave and what the response should be. We have to keep learning this to improve AI.

Another example of why we still need humans is a support call with a real person at the company. This is also an opportunity to improve your service. If everything is automated with chatbots, there is no human-to-human contact anymore, and that would be a loss. It would be interesting to rebuild customer support from scratch, thinking from the start about what you want to automate and what not. You can design a chat setup where an employee is supported by AI in the best possible way: while an employee is chatting with a customer, AI can immediately surface the right customer details and helpful information.
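The idea of AI supporting a human agent can be sketched as a lookup triggered while the conversation runs: the employee keeps the human contact, and the system attaches relevant details alongside the chat. This is a hypothetical sketch; the customer records and the keyword-based matching stand in for a real CRM lookup.

```python
# Hypothetical sketch: while an employee chats with a customer, the
# system surfaces relevant customer details automatically. The records
# and keyword matching are invented placeholders for a real CRM lookup.

customers = {
    "anna": {"last_order": "#1042", "status": "shipped"},
    "bob": {"last_order": "#0991", "status": "delayed"},
}

def assist(agent_view: dict, customer: str, message: str) -> dict:
    """Attach relevant details to the agent's view based on the chat."""
    record = customers.get(customer, {})
    if "order" in message.lower() and record:
        agent_view["suggested_info"] = record   # shown next to the chat
    return agent_view

view = assist({}, "bob", "Where is my order?")
print(view)   # the agent instantly sees Bob's order status
```

The human still writes every reply; the system only shortens the search for context, which is the division of labor the paragraph above argues for.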



Hanneke van Ewijk is a master's student at VU University Amsterdam. Her research focuses on "chatbots and brands". Hanneke writes the event reports for Aigency.



