Google’s much-vaunted DeepMind AI has taken the next step. In a joint experiment, researchers at Google’s DeepMind unit in London and the University of California used reinforcement learning to teach the system the physical properties of objects by letting it interact with them in different virtual environments. Significantly, DeepMind has already made its mark in games, creating a single program that taught itself to play and win at 49 completely different Atari titles, using only raw pixels as input.
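The core idea of reinforcement learning is that an agent improves by trial and error, updating its estimate of how good each action is after every reward. The following is a minimal sketch of that loop using tabular Q-learning on a toy one-dimensional "environment"; DeepMind's actual systems use deep neural networks over raw pixels, and every number and name here is illustrative only.

```python
import random

# Toy stand-in for a virtual environment: a corridor of 5 cells.
# The agent starts at cell 0 and is rewarded for reaching cell 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left, move right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-value per (state, action)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: mostly take the best-known action, sometimes explore.
            a = rng.randrange(2) if rng.random() < epsilon else max((0, 1), key=lambda i: q[s][i])
            nxt, r, done = step(s, ACTIONS[a])
            # Q-learning update: nudge Q(s, a) toward reward + discounted best future value.
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = train()
# After training, the learned policy should prefer "right" in every non-goal cell.
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(GOAL)]
print(policy)
```

The same update rule, scaled up with a deep network as the value estimator, is what let a single program master dozens of Atari games.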
Virtual reality is being used by scientists all over the world in mental health care, especially in exposure therapy for phobias and anxieties. For example, a VR headset can put an introvert in front of a virtual crowd in a controlled environment, helping them overcome a fear of public speaking. The approach improves on face-to-face therapy by addressing its two main drawbacks: cost and accessibility.
A deep learning technique that treats the entire sentence as a single unit of translation has been incorporated into Google Translate. It relies on a recurrent neural network built from layered nodes, with two eight-layer networks acting as the encoder and the decoder. The encoder transforms the input into a list of vectors representing the possible meanings of each word, while the decoder translates one word at a time. The new method has already reduced translation errors by 60 percent compared with the previous, phrase-based algorithm.
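The encoder/decoder split described above can be sketched in a few lines. The toy below runs a recurrent layer over the source sentence to produce one vector per word, then has a second recurrent layer emit target words one at a time. The weights are random, so it illustrates only the data flow, not a trained translator; the vocabulary size, dimensions, and start token are all assumptions for this sketch (Google's system uses two eight-layer networks trained on millions of sentence pairs).

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 12, 8  # toy sizes, for illustration only

# Randomly initialized weights: a real system learns these from data.
E = rng.normal(0, 0.1, (VOCAB, HIDDEN))   # word embeddings
W_enc = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
U_enc = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_dec = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
U_dec = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_out = rng.normal(0, 0.1, (HIDDEN, VOCAB))

def encode(token_ids):
    """Run the source sentence through a recurrent layer, keeping one vector per word."""
    h = np.zeros(HIDDEN)
    states = []
    for t in token_ids:
        h = np.tanh(E[t] @ W_enc + h @ U_enc)
        states.append(h)
    return states

def decode(states, max_len=5):
    """Emit one target word at a time, starting from the encoder's final state."""
    h = states[-1]
    out = []
    tok = 0  # assumed start-of-sentence token
    for _ in range(max_len):
        h = np.tanh(E[tok] @ W_dec + h @ U_dec)
        tok = int(np.argmax(h @ W_out))  # greedy choice of next word
        out.append(tok)
    return out

source = [3, 7, 1]  # token ids for a toy source sentence
translation = decode(encode(source))
print(translation)  # a sequence of target-vocabulary token ids
```

Because the whole sentence is encoded before any output word is chosen, the decoder can use context from anywhere in the input, which is what the phrase-based predecessor could not do.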
Researchers at IIT Bombay have developed a method to help computers detect sarcasm in sentences by analyzing the similarity of words and how they relate to each other, using Word2Vec vectors trained on a Google News corpus with a vocabulary of about 3 million words. Based on how frequently words appear next to each other, each word is represented as a vector in a high-dimensional space. Similar words end up with similar vectors, and sentences that contrast similar and dissimilar concepts are more likely to be sarcastic.
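The similarity measure at the heart of this approach is cosine similarity between word vectors. The sketch below uses tiny hand-made 3-D vectors in place of the real 300-dimensional Word2Vec embeddings, and a simple max/min-pair heuristic as a stand-in for the paper's features; both are assumptions for illustration.

```python
import math

# Toy "embeddings": in the real system these come from Word2Vec vectors
# trained on Google News; these 3-D vectors are hand-made for illustration.
vec = {
    "love":    [0.9, 0.8, 0.1],
    "adore":   [0.85, 0.75, 0.15],
    "monday":  [-0.7, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for parallel vectors, negative for opposed ones."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def incongruity(words):
    """Most similar and least similar word-pair scores in a sentence.
    A sentence containing both a very similar pair and a very dissimilar
    pair is, on this heuristic, a stronger sarcasm candidate."""
    sims = [cosine(vec[a], vec[b]) for i, a in enumerate(words) for b in words[i + 1:]]
    return max(sims), min(sims)

hi, lo = incongruity(["love", "adore", "monday"])
print(round(hi, 2), round(lo, 2))  # high for love/adore, negative for love/monday
```

The wide gap between the two scores is the kind of contrast signal the method exploits: "I love being ignored" pairs a strongly positive word with a strongly negative concept.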
The International Space Station (ISS), a shining beacon of scientific hope, continues to surprise. In an experiment aboard the ISS, a small drone has learned to judge distances using a single camera eye. The Synchronized Position Hold, Engage, Reorient Experimental Satellites (SPHERES) drone navigated the space station, recording stereo-vision information about its surroundings with two camera eyes and learning the distances to nearby obstacles. The drone could then explore autonomously with one eye, even when the stereo-vision camera was switched off.
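Stereo vision turns the shift of a feature between two camera images into a distance with one line of geometry: depth Z = f × B / d, where f is the focal length, B the baseline between the cameras, and d the disparity. The numbers below are illustrative, not from the SPHERES experiment.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a feature seen by two cameras a fixed baseline apart.

    focal_px:     focal length, in pixels
    baseline_m:   distance between the two cameras, in meters
    disparity_px: how far the feature shifts between the two images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (feature visible in both cameras)")
    return focal_px * baseline_m / disparity_px

# A feature shifted 20 px between the two camera eyes, f = 600 px, B = 0.1 m:
print(depth_from_disparity(600, 0.1, 20))  # about 3 meters away
```

Ground truth like this, gathered while both eyes were on, is what the drone could learn from, so that a single camera later sufficed.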
A method for retrieving visual information from scattered light has been created by researchers at the Massachusetts Institute of Technology (MIT). It could lead to medical imaging devices that use the visible spectrum; current systems rely on magnetic fields and pulses of radio-wave energy, X-rays, ultrasound and other more expensive methods. The technique could also give computer vision systems a way to function in poor visibility. Flights held up by fog could certainly use this.
Researchers from UMC Utrecht in the Netherlands have used a brain implant to help a patient with amyotrophic lateral sclerosis (ALS) operate a speech computer with her mind. The researchers placed electrodes in the patient’s brain, enabling her to control the computer wirelessly. The patient operates the speech computer by imagining moving her fingers; this changes the brain signal under the electrodes, which is then converted into a mouse click. A screen shows the patient the alphabet and a few additional functions, each of which lights up in turn. By triggering a click at the right moment, the patient selects the highlighted letter; in this way she composes words letter by letter, which the speech computer then speaks aloud.
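The selection scheme is a classic scanning speller: items light up one by one, and a single click, however it is produced, selects whichever item is highlighted at that moment. The simulation below is a sketch of that protocol only; the item set and the pre-scripted click times are assumptions, and the real system's timing and layout may differ.

```python
import string

# Items the scanner cycles through: the alphabet plus a couple of extras.
ITEMS = list(string.ascii_uppercase) + ["SPACE", "DELETE"]

def spell(click_steps):
    """Simulate the scanner: highlight ITEMS cyclically, one per time step,
    and select whichever item is highlighted when a click arrives."""
    text, clicks = [], set(click_steps)
    for step in range(max(click_steps) + 1):
        if step in clicks:
            item = ITEMS[step % len(ITEMS)]
            text.append(" " if item == "SPACE" else item)
    return "".join(text)

# Clicks timed while 'H' (index 7) and, one full cycle later, 'I' (index 8)
# are highlighted:
print(spell([7, len(ITEMS) + 8]))
```

The cost of this simplicity is speed: with 28 items, an average selection waits through half a cycle, which is why such systems produce only a few characters per minute.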
Researchers in Japan are developing an artificial intelligence system that can design book covers without human assistance, using a machine-vision algorithm that deduces a book’s genre from its cover. Brian Kenji Iwana and Seiichi Uchida at Japan’s Kyushu University trained a deep neural network by first downloading 137,788 unique book covers from Amazon.com along with their genres. They used 80 percent of the dataset to train the four-layer, 512-neuron network to identify the genre from the cover image, a further 10 percent to validate the model, and tested the algorithm on the remaining 10 percent. Iwana and Uchida say the network listed the correct genre among its top three choices more than 40 percent of the time and identified the precise genre more than 20 percent of the time.
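The evaluation protocol described above, an 80/10/10 train/validation/test split scored by top-1 and top-3 accuracy, can be sketched as below. The genre count, file names, and the random "model" standing in for the trained network are all assumptions for illustration.

```python
import random

GENRES = list(range(30))  # assumed number of genre categories

def split(items, rng):
    """Shuffle, then cut into 80% train / 10% validation / 10% test."""
    items = items[:]
    rng.shuffle(items)
    n = len(items)
    return items[: int(0.8 * n)], items[int(0.8 * n): int(0.9 * n)], items[int(0.9 * n):]

def topk_accuracy(examples, predict, k):
    """Fraction of examples whose true genre appears in the model's top-k list."""
    hits = sum(1 for cover, genre in examples if genre in predict(cover)[:k])
    return hits / len(examples)

rng = random.Random(0)
dataset = [(f"cover_{i}.jpg", rng.choice(GENRES)) for i in range(1000)]
train, val, test = split(dataset, rng)

def random_model(cover):
    """Stand-in for the trained network: a random ranking of all genres."""
    return rng.sample(GENRES, len(GENRES))

print(len(train), len(val), len(test))       # 800 100 100
print(topk_accuracy(test, random_model, 3))  # chance level is 3/30 = 0.1 in expectation
```

Against this chance baseline of roughly 10 percent for top-3, the reported 40-plus percent shows the network really is reading genre cues from cover art.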