Over the past four years, readers have doubtless noticed quantum leaps in the quality of a wide range of everyday technologies.
The speech recognition functions on our smartphones work much better than they used to. We are increasingly interacting with our computers by talking to them, whether it’s with Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana or the many voice-responsive features of Google.
Chinese search giant Baidu reports customers have tripled their use of speech interfaces in the past 18 months.
Image recognition has advanced. Google, Microsoft, Facebook and Baidu all have features that allow searches and automatic organizing of collections of photos with no identifying tags. You can ask to be shown, say, all the ones that have dogs in them, or snow, or even something fairly abstract like hugs.
Medical startups claim they’ll soon be able to use computers to read X-rays, MRIs, and CT scans more rapidly and accurately than radiologists, enabling, for example, earlier and less invasive cancer diagnoses.
All these breakthroughs have been made possible by a family of artificial intelligence (AI) techniques popularly known as deep learning. Many scientists still prefer to call them by their original academic designation: deep neural networks.
Instead of writing code, programmers today feed the computer a learning algorithm, then expose it to terabytes of data—hundreds of thousands of images or years’ worth of speech samples—to train it. The computer figures out for itself how to recognize the desired objects, words, or sentences.
In short, such computers can now teach themselves. “You essentially have software writing software,” says Jen-Hsun Huang, CEO of graphics processing leader Nvidia, which began placing a massive bet on deep learning about five years ago.
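The idea of training a model on examples rather than writing explicit rules can be sketched in miniature. The toy below (a hypothetical illustration, not any company's actual system) trains a single artificial neuron by gradient descent to classify points from labeled data alone; real deep networks apply the same principle with millions of parameters and terabytes of data.

```python
import math

# Labeled training data: points are class 1 if their coordinates
# sum to more than 1.0, class 0 otherwise. No one tells the neuron
# this rule; it must infer it from the examples.
data = [((x1, x2), 1.0 if x1 + x2 > 1.0 else 0.0)
        for x1 in (0.0, 0.25, 0.5, 0.75, 1.0)
        for x2 in (0.0, 0.25, 0.5, 0.75, 1.0)]

w1, w2, b = 0.0, 0.0, 0.0   # parameters the computer "figures out"
lr = 1.0                    # learning rate

def predict(x1, x2):
    """Sigmoid neuron: probability that (x1, x2) is class 1."""
    z = w1 * x1 + w2 * x2 + b
    return 1.0 / (1.0 + math.exp(-z))

# Training loop: nudge the weights to reduce the prediction error
# on each example (gradient descent on the logistic loss).
for epoch in range(5000):
    for (x1, x2), label in data:
        err = predict(x1, x2) - label
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b  -= lr * err

# The neuron has learned the decision rule from data alone.
print(predict(1.0, 1.0) > 0.5)   # class 1: coordinates sum above 1
print(predict(0.0, 0.0) > 0.5)   # class 0: coordinates sum below 1
```

The same pattern, scaled up to many stacked layers of such neurons and vastly more data, is what "deep" learning refers to.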
Source at Forbes.