
Robot Can Be Programmed Through Conversation

The idea of having a helper robot at home that can fold your laundry or bring you breakfast in bed is appealing, but to get to that point, the robot must be properly trained. To make that orientation process easier, Ashutosh Saxena of Cornell University is developing robots that can be trained through verbal communication, without requiring any knowledge of computer programming.

Saxena and his team will be presenting their work at the 2014 Robotics: Science and Systems Conference at UC Berkeley on July 12. The robots are being trained to understand a wide range of functions for a variety of objects, such as different pans being used for different purposes and the fact that food can be heated on the stove or in the microwave. Ultimately, the team hopes to create robots that are flexible enough to perform the same actions, such as cooking, even when placed in unfamiliar kitchens with the equipment and ingredients in different places. In recent testing, a robot was asked to prepare ramen noodles and a dessert made of coffee and ice cream. The robot correctly identified and carried out commands 64% of the time while inferring some missing information, about three to four times better than in previous trials. While that is fairly impressive, the researchers are constantly striving to improve.

Giving directions can be tricky, because someone who is very familiar with a process may forget to specify minor yet critical details. For instance, when explaining how to boil pasta, you might say to fill a pan with water, put it on the stove, and add the noodles once the water reaches a rolling boil. However, you may have forgotten to specify how full the pan needs to be (you wouldn't want the water all the way up to the edge), that the stove needs to be turned on, or how high the heat should be. Those missing tidbits can be filled in later with experience, but a naïve robot (or even a child learning to cook) might sit there waiting for the water to miraculously begin boiling on its own. Saxena has developed software that converts spoken English into computer commands the robot can understand; the robot uses a 3D camera to search its environment for the necessary objects. The robot reacts to commands based on prior experience, though it can become confused if commands are worded differently. The researchers are addressing this by teaching the robots that different commands can call for the same action: the robots watch video simulations of activities being performed while narrated by several different human speakers, allowing them to learn by seeing. If you would like to help train these robots, the researchers are crowdsourcing instructions for a variety of tasks on the Tell Me Dave site. This will expose the robots to a variety of instruction styles and expedite learning.
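The two ideas in this paragraph can be sketched in a few lines of code. The sketch below is not the Tell Me Dave system itself; the phrasing table, prerequisite table, and function names are all hypothetical illustrations of the same principles: many differently worded commands map to one canonical action, and prerequisite steps the instructor forgot (turning on the stove, filling the pan) are inferred automatically.

```python
# Hypothetical phrasing table: many surface forms, one canonical action.
# A real system would use learned language models, not a lookup table.
PHRASINGS = {
    "boil the water": "boil_water",
    "bring the water to a boil": "boil_water",
    "heat the water until it boils": "boil_water",
    "add the noodles": "add_noodles",
    "put the noodles in": "add_noodles",
}

# Hypothetical prerequisite knowledge: steps that must precede an action.
PREREQUISITES = {
    "boil_water": ["fill_pan_with_water", "place_pan_on_stove", "turn_on_stove"],
    "add_noodles": ["boil_water"],
}

def plan(instructions):
    """Expand spoken-style instructions into an ordered plan,
    recursively inserting any prerequisite steps the speaker left out."""
    steps = []

    def ensure(action):
        if action in steps:
            return
        for prereq in PREREQUISITES.get(action, []):
            ensure(prereq)
        steps.append(action)

    for phrase in instructions:
        action = PHRASINGS.get(phrase.lower())
        if action is None:
            raise ValueError(f"unrecognized command: {phrase!r}")
        ensure(action)
    return steps

# Even though the speaker never mentioned the pan or the stove,
# the plan includes those steps before boiling.
print(plan(["Bring the water to a boil", "add the noodles"]))
# → ['fill_pan_with_water', 'place_pan_on_stove', 'turn_on_stove',
#    'boil_water', 'add_noodles']
```

Note that "Bring the water to a boil" and "heat the water until it boils" yield identical plans, which is the behavior the researchers are teaching the robots through multiple human narrators.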
