Machines that appear intelligent are nothing new. Type a few instructions into a batch file and you can tell your computer to do just about anything with the programs you're running. Add a webcam and facial recognition software, and your computer seems to recognize your face. But none of this is the result of the computer actually "thinking." At best, today's average home computer can imitate thought. Still, teams around the world are developing ways to reproduce human thought in machines, combining the best of both worlds to create a new form of learning that mimics the intuitive way we perceive the world around us.
Although many of us are wary of the implications of artificial intelligence, it is widely regarded as the pinnacle of machine evolution. How far have we come in our quest to create machines that approach human intuition and abstract thought? Let's look at what the Google Brain team is up to, and how artificial neural networks could soon change the way technology interacts with us on a daily basis.
Google Brain is a project focused on large-scale deep learning. It harnesses a colossal number of machines, 16,000 CPU cores in Google's data centers, all working in unison to create a system that can effectively "learn" and "understand" things. The image above is actually a "drawing" the network produced. It didn't "copy" the design from anywhere; it constructed it abstractly, much as a painter would.
One of the project's most notable achievements is the network's ability to detect cats. A modern computer can easily play a cat video for your entertainment, but it has no understanding of what it is showing you. No one expects their computer to know what a cat is. Yet computers display these fuzzy little creatures millions of times a day around the world, completely unaware of their existence. The computer you're reading this on is probably little more than a glorified interactive television. Google managed to build a system that could detect a cat in a still image without any prior instruction on what a cat is, simply by training on millions of unlabeled frames taken from YouTube videos. This is an unprecedented achievement, one that brings us a step closer to machines that genuinely understand what they see.
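To get a feel for how a network can discover a concept without ever being told about it, here is a minimal sketch of the underlying idea: an autoencoder, a network trained only to reconstruct its input through a bottleneck, which forces it to invent its own features. This is emphatically not Google's system; it is a toy one-hidden-layer network in plain NumPy, trained on synthetic 8x8 "images" containing bars instead of YouTube frames, and every size and name in it is illustrative.

```python
# Toy unsupervised feature learning: a tiny autoencoder in plain NumPy.
# The network is never told what a "bar" is; it is only asked to
# reconstruct its input through a 16-unit bottleneck.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_patch():
    """Unlabeled data: an 8x8 noisy patch with one bright bar."""
    img = rng.normal(0.0, 0.05, size=(8, 8))
    pos = rng.integers(0, 8)
    if rng.random() < 0.5:
        img[pos, :] += 1.0   # horizontal bar
    else:
        img[:, pos] += 1.0   # vertical bar
    return img.reshape(-1)

X = np.stack([make_patch() for _ in range(2000)])   # (2000, 64)

# Autoencoder weights: 64 inputs -> 16 hidden features -> 64 outputs.
n_in, n_hidden = 64, 16
W1 = rng.normal(0, 0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, size=(n_hidden, n_in)); b2 = np.zeros(n_in)

lr = 0.1
for epoch in range(200):
    # Forward pass: encode, then decode.
    H = sigmoid(X @ W1 + b1)          # hidden features
    Y = H @ W2 + b2                   # linear reconstruction
    err = Y - X
    # Backward pass: mean squared error, chain rule by hand.
    dY = 2 * err / len(X)
    dW2 = H.T @ dY
    db2 = dY.sum(axis=0)
    dH = (dY @ W2.T) * H * (1 - H)    # sigmoid derivative
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g                   # plain gradient descent step
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  reconstruction MSE = {np.mean(err**2):.4f}")
```

After training, individual hidden units tend to fire for bars at particular positions: features the network discovered entirely on its own. Google's network did the same thing at a vastly larger scale, and one of the features it discovered happened to look like a cat.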
Imagine having a robot that can not only drive you to work, but also act as a doctor when you're injured. The simple fact that a computer can pick out a cat among other objects has major implications. You might have to wait a while (16,000 CPU cores are hard to squeeze into a small space right now), but a machine that can distinguish a wound from the skin around it, and identify what kind of wound it is, is a machine whose "medical module" could help suture you up. Once you take some time to think about it, artificial neural networks could lead to technological feats we didn't think we'd see in our lifetimes. Maybe one day, not too far off, we'll ride bikes and play football alongside robots, all thanks to their ability to adapt and learn the way we do.
What do you think? Is it overly optimistic to believe we can go from “cat finder” to “robot doctor” within our lifetimes? Let us know in the comments below!