J. Bohg, M. Ciocarlie, J. Civera, and L. E. Kavraki, “Big Data in Robotics,” Big Data, vol. 4, no. 4, pp. 195–196, Dec. 2016.
Machines are making rapid headway in the field of perception: they are beginning to see, hear, read, and touch in ways not previously possible. The ability to learn from and deal with direct inputs from the real world greatly enriches the capabilities of robots. While machine learning approaches have been explored extensively in robotics in the past, they have often abstracted away or simplified either the perceptual input or the space of possible actions. Not anymore. The availability of big data and associated new learning techniques opens up the opportunity to remove many of the assumptions and simplifications commonly made in the mapping between perception and action, and vice versa. While these assumptions may have enabled computational elegance and efficiency, they may also have been too restrictive to allow generalization to the complexity of the real world. From autonomous cars to mobile manipulators, new big data methods have the potential to allow robots to understand and operate in significantly more complex environments than was possible even in the recent past. This should lead to a qualitative leap in the performance and deployability of robots across a wide array of practical applications and real-world settings.
DOI: http://dx.doi.org/10.1089/big.2016.29013.rob