Researchers are working hard to give computers the ability to mimic the human senses—in their own way, to see, smell, touch, taste and hear. In this article we highlight two examples of algorithms that seem to be beating us at our own game.
Your eyes can deceive you. Sometimes, even for humans, it is hard to distinguish a muffin from a Chihuahua. Most of us can recognize an object after seeing it once or twice. But the algorithms that power computer vision and voice recognition need thousands of examples to become familiar with each new image or word.
Researchers at Google DeepMind now have a way around this. They made a few clever tweaks to a deep-learning algorithm that allows it to recognize objects in images and other things from a single example—something known as “one-shot learning.”
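The article does not detail DeepMind's tweaks, but the general idea behind one-shot classification can be sketched: instead of training a classifier on thousands of labeled examples per class, compare a new example's feature embedding against a single stored example ("shot") per class and pick the nearest one. This is a minimal illustrative sketch, not DeepMind's actual method; the embeddings here are toy vectors standing in for the output of a learned network.

```python
import numpy as np

def one_shot_classify(query_emb, support_embs, labels):
    """Label a query by cosine similarity to one stored embedding per class."""
    q = query_emb / np.linalg.norm(query_emb)
    s = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    sims = s @ q  # cosine similarity against each class's single example
    return labels[int(np.argmax(sims))]

# Toy setup: one labeled embedding per class (in practice these would
# come from a trained neural network, not be hand-written like this).
support = np.array([[1.0, 0.0],   # "muffin" example
                    [0.0, 1.0]])  # "chihuahua" example
labels = ["muffin", "chihuahua"]

query = np.array([0.9, 0.1])  # a new, unseen image's embedding
print(one_shot_classify(query, support, labels))  # → muffin
```

The quality of such a system rests almost entirely on the embedding function: if similar objects land close together in embedding space, a single example per class can be enough.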
> Read the full “A new pair of eyes” post