Object manipulation using an innovative glove yields large databases of detailed pressure maps. Such data could lead to advances in robotic sensing and in our understanding of the role of touch in manipulation.

The study and replication of human sensory abilities, such as visual, auditory and tactile (touch-based) perception, depend on the availability of suitable data. Generally, the larger and richer the data set, the more closely models can mimic these functions. Advances in artificial visual and speech systems rely on powerful models, known as deep-learning models, and have been fuelled by the ubiquity of databases of digital images and spoken audio (see, for example, go.nature.com/2w7nc0q). By contrast, progress in the development of tactile sensors — devices that convert a stimulus of physical contact into a measurable signal — has been limited, mainly because of the difficulty of integrating electronics into flexible materials1. In a paper in Nature, Sundaram et al.2 report their use of a low-cost tactile glove that addresses this issue.

The authors’ glove consists of a hand-shaped sensing sleeve that is attached to the palm side of a knitted glove (Fig. 1). The sleeve contains a force-sensitive film on which is sewn a network of 64 electrically conducting threads: 32 along one direction of the glove and 32 along the perpendicular direction. The threads overlap at 548 points on the hand-shaped sleeve (fewer than the 1,024 crossings that a full 32 × 32 grid would provide), and each of these points acts as a pressure sensor, because the electrical resistance of the film between the threads decreases when the point is pressed. The output of the glove can be processed as a 32 × 32 array of greyscale pixels, in which the shade of each pixel indicates the applied pressure, from low (black) to high (white). These pressure maps are recorded at about seven frames per second.
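To make this readout scheme concrete, here is a minimal Python sketch of how a row-and-column design lets 64 wires address every crossing point: one row thread is driven at a time while all 32 column threads are read, and the measured conductances are normalized into a greyscale map. The read_columns front end and the conductance range are assumptions for illustration, not the authors’ implementation; real measurements are replaced here by simulated readings.

```python
import numpy as np

# Illustrative sketch (not the authors' firmware) of scanning a 32 x 32
# resistive crossbar with only 64 wires: drive one row thread at a time,
# read all 32 column threads, and normalize the measured conductance into
# a greyscale pressure value.

N_ROWS, N_COLS = 32, 32
G_MIN, G_MAX = 0.0, 1.0          # assumed conductance range (arbitrary units)
rng = np.random.default_rng(0)

def read_columns(row: int) -> np.ndarray:
    """Hypothetical analogue front end: conductance on each column thread
    while row `row` is driven (conductance rises under pressure).
    Simulated readings stand in for real measurements."""
    return rng.uniform(G_MIN, G_MAX, N_COLS)

def scan_frame() -> np.ndarray:
    """Scan all rows once, returning a 32 x 32 greyscale map in [0, 1]:
    0 (black) for low pressure, 1 (white) for high pressure."""
    frame = np.stack([read_columns(r) for r in range(N_ROWS)])
    return np.clip((frame - G_MIN) / (G_MAX - G_MIN), 0.0, 1.0)

pressure_map = scan_frame()       # one frame; about seven such frames per second
```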

Video | A glove that uses neural networks to identify individual objects, estimate weights and explore tactile patterns.

Figure 1 | A low-cost glove for artificial touch. Sundaram et al.2 describe a glove that consists of a hand-shaped sensing sleeve (black) attached to a knitted glove (yellow). The sleeve contains a force-sensitive film on which a network of electrically conducting threads (silver) is sewn. The points at which these threads overlap form pressure sensors. The authors show that pressure maps collected by these sensors during object manipulation enable machine-learning models to learn to identify individual objects, estimate the weights of objects and distinguish between different hand poses. Credit: Subramanian Sundaram

In Sundaram and colleagues’ study, the glove was worn to record several videos of pressure maps during 3–5-minute sessions of single-hand manipulation of 26 everyday objects. This procedure resulted in a database of detailed pressure maps that, to my knowledge, is one of the largest data sets of this kind. The authors found that the glove was flexible, robust and sensitive to small pressure changes, despite its fabrication cost of only about US$10.

To demonstrate that the glove captures different interactions of the hand with each object, Sundaram et al. used the recorded data to carry out automatic object identification. They showed how a state-of-the-art deep-learning model, which was originally designed for large-scale image classification, could learn from the gathered pressure maps to re-identify the 26 objects during blind manipulation. The large number of maps and their spatial resolution proved essential for successful object identification.
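As a rough illustration of this set-up, the following Python sketch adapts a standard image-classification network to single-channel 32 × 32 pressure maps and 26 object classes. Torchvision’s ResNet-18 stands in for the paper’s model, and the hyperparameters and random stand-in data are placeholders rather than the authors’ choices.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Hypothetical sketch of the classification set-up: an image-classification
# network adapted to single-channel 32 x 32 pressure maps and 26 object
# classes. Architecture and hyperparameters are placeholders.

model = resnet18(num_classes=26)
# Pressure maps have one channel, not three RGB channels
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(maps: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of pressure maps, shape (B, 1, 32, 32)."""
    optimizer.zero_grad()
    loss = loss_fn(model(maps), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random stand-in data
loss = train_step(torch.randn(8, 1, 32, 32), torch.randint(0, 26, (8,)))
```

A single frame is used here for simplicity; combining several frames from a manipulation session, as the authors did, gives the model richer input, which is one reason the size of the data set mattered.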

Next, the authors used the glove to pick objects up, and showed that a similar deep-learning model could estimate the weights of unknown objects. The glove was also worn during different hand poses, and the signals read by the sensors were detailed enough to distinguish between the poses. Finally, Sundaram and colleagues analysed the cooperation between different hand regions during object grasping by examining correlations between the sensor signals.
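As a sketch of how such a correlation analysis can be run on the recorded data, the Python snippet below averages the pressure signal within each hand region over time and correlates the resulting series. The rectangular region masks and simulated frames are purely illustrative; the study defines regions anatomically (for example, individual phalanges and palm areas).

```python
import numpy as np

# Illustrative correlation analysis: given a recording of pressure maps
# with shape (T, 32, 32), average the signal within each hand region and
# correlate the resulting time series. Region masks are hypothetical.

def region_correlations(frames: np.ndarray, masks: list) -> np.ndarray:
    """Return the (R, R) correlation matrix between R hand regions.

    frames: (T, 32, 32) pressure maps over T time steps.
    masks:  list of R boolean (32, 32) arrays, one per region.
    """
    # Mean pressure per region at each time step -> (R, T)
    series = np.stack([frames[:, m].mean(axis=1) for m in masks])
    return np.corrcoef(series)

# Toy example: two made-up rectangular regions on simulated data
rng = np.random.default_rng(0)
frames = rng.random((100, 32, 32))
thumb = np.zeros((32, 32), bool); thumb[20:, :8] = True
index = np.zeros((32, 32), bool); index[:10, 8:14] = True
print(region_correlations(frames, [thumb, index]))
```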

In addition to providing experimental evidence of well-studied principles that underlie human grasping, this data-driven exploration could improve our understanding of the function of touch during object manipulation. Deep-learning models have greatly advanced our knowledge of the neural mechanisms that underlie visual object recognition3. In this respect, a similar approach could be applied to the interpretation of tactile-information processing in the brain.

Sundaram and colleagues simultaneously produced pressure maps and corresponding photographs of the hand during object manipulation, generating a large amount of synchronized visual and tactile information. Data sets of multiple forms of sensory perception are uncommon4, and represent a fundamental step towards the development of multisensory integration systems and an understanding of how the brain develops a coherent perception of the environment.

Such a flexible sensing device might have various applications, for example in medical diagnostics, personal health care and sport. But it could also influence the development of active (externally powered) prosthetic and robotic hands. Tactile feedback has a crucial role in controlling hand movements and exerted forces, and the lack of this information makes it challenging for both humans and robots to achieve a stable grasp4,5. Moreover, the sense of touch directly enables tactile exploration aimed at object recognition and localization. It is also known that providing active prostheses with tactile feedback could help to alleviate phantom-limb pain (the perception of pain from a missing limb), increase the sense of ownership over the prosthesis and reduce the cognitive effort involved in controlling the device, by enabling more natural operation6.

Tactile sensors can be incorporated into a glove that envelops an artificial limb, or fixed directly onto mechanical parts5,7. In this respect, the technology of Sundaram and colleagues’ device could be adapted to various shapes for integration into robotic or prosthetic arms. Currently, the main limitations stem from the dense sensor coverage that the glove requires. One is the extensive wiring, although the authors’ row-and-column design keeps the number of wires reasonably low (64 threads address all 548 sensing points). Another is the rate at which pressure maps are recorded, which might need to be higher depending on the application (for example, if the tactile feedback were used to control a robotic hand in real time). Nevertheless, I think that the glove in its present form, or improved versions of it, offers exciting prospects for robotics applications.

Machine-learning models have proved effective at mimicking the human ability to learn to perform actions from experience, through a process called reinforcement learning. In the past few years, researchers have used instrumented gloves to record hand-pose data during object manipulation, and have fed this recorded experience into a model that learns from these data to generate successful manipulations8. This approach to transferring experience from humans to robots could benefit from the use of Sundaram and colleagues’ data-acquisition glove.
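One simple way to transfer such recorded experience is behavioural cloning, in which a policy network is trained by ordinary supervised learning to reproduce a demonstrator’s actions from the sensed state. The sketch below is a minimal, hypothetical instance of this idea, not the specific method of reference 8; the state and action dimensions, the architecture and the data are all placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical behavioural-cloning sketch: a policy maps sensed states
# (e.g. a flattened 32 x 32 pressure map plus joint angles) to recorded
# manipulation actions. All dimensions and data are illustrative.

STATE_DIM, ACTION_DIM = 1024 + 15, 20   # assumed: pressure map + 15 joint angles

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def clone_step(states: torch.Tensor, actions: torch.Tensor) -> float:
    """One supervised step: regress demonstrated actions from states."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(policy(states), actions)
    loss.backward()
    optimizer.step()
    return loss.item()

# Stand-in demonstration batch
loss = clone_step(torch.randn(32, STATE_DIM), torch.randn(32, ACTION_DIM))
```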

Finally, the current study paves the way for computer-vision models to be reused for tactile-signal processing, bringing decades of computer-vision research to bear on touch. This approach offers many benefits, such as sidestepping the model-selection problems that slowed progress in deep learning in its early stages. Sundaram and colleagues’ glove could therefore lead to rapid advances in tactile sensing. I am confident that the low cost of the glove will facilitate the replication and sharing of the fabrication methodology and of the data-acquisition set-up. That would foster the use of large, standardized data sets in tactile-sensing research; the lack of such data sets is currently a major limitation compared with computer vision4.

Nature 569, 638-639 (2019)

doi: 10.1038/d41586-019-01593-w
