
A machine learning approach could help robots assemble mobile phones and other small parts on the production line – ScienceDaily



In the basement of MIT's Building 3, a robot is carefully contemplating its next move. It gently pokes at a tower of blocks, looking for the best block to pull out without toppling the tower, in a solitary, slow-moving, but surprisingly agile game of Jenga.

The robot, developed by MIT engineers, is equipped with a soft-pronged gripper, a force-sensing wrist cuff, and an external camera, all of which it uses to see and feel the tower and its individual blocks.

As the robot carefully pushes against a block, a computer takes in visual and tactile feedback from the camera and wrist cuff, and compares these measurements with the robot's previous moves. It also considers the outcomes of those moves, in particular whether a block in a certain configuration, pushed with a certain amount of force, was successfully extracted or not. In real time, the robot then "learns" whether to keep pushing or to move on to a new block, so that the tower does not fall.
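To make that push-sense-decide loop concrete, here is a toy sketch of it. The sensor readings are faked with random numbers and every function name is a hypothetical placeholder; nothing below is the team's actual code.

```python
import random

def read_feedback():
    """Stand-in for camera and wrist-cuff measurements (hypothetical)."""
    return {"displacement": random.random(), "resistance": random.random()}

def keep_pushing(feedback, history, max_resistance=0.5):
    """Crude stand-in for comparing current readings with earlier moves and their outcomes."""
    similar = [success for f, success in history
               if abs(f["resistance"] - feedback["resistance"]) < 0.1]
    past_looks_good = not similar or sum(similar) / len(similar) > 0.5
    return feedback["resistance"] < max_resistance and past_looks_good

history = []  # (feedback, success) pairs from previous pushes
for block in range(10):  # probe ten candidate blocks
    feedback = read_feedback()
    action = "keep pushing" if keep_pushing(feedback, history) else "move to a new block"
    history.append((feedback, feedback["resistance"] < 0.5))
    print(f"block {block}: {action}")
```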

The Jenga-playing robot is described in the journal Science Robotics. Alberto Rodriguez, the Walter Henry Gale Career Development Assistant Professor in the Department of Mechanical Engineering at MIT, says the robot demonstrates something that has been difficult to achieve in previous systems: the ability to quickly learn the best way to carry out a task, not just from visual cues, as is commonly studied today, but also from tactile, physical interactions.

"Unlike purely cognitive tasks or games, such as chess or Go, playing Jeng also requires learning physical skills such as probing, pushing, dragging, placing and matching, which requires interactive perception and manipulation where you have to go and touch the tower. to find out how and when to move the blocks, "Rodriguez says." It's very difficult to simulate, so the robot must learn in the real world by working with the real Jenga tower. The main challenge is to learn from a relatively small number of experiments using common sense about objects and physics. "

He says the tactile learning system the researchers developed could be used in applications beyond Jenga, especially in tasks that require careful physical interaction, including separating recyclable objects from landfill trash and assembling consumer products.

"The mobile phone assembly line in almost every single step of forcing, or the threaded screw feeling comes from strength and touch, rather than vision," Rodriguez points out. "Learning models for such activities are the key to this kind of technology."

The paper's lead author is MIT graduate student Nima Fazeli. The team also includes Miquel Oller, Jiajun Wu, Zheng Wu, and Joshua Tenenbaum, professor of brain and cognitive sciences at MIT.

Push and pull

Jenga, Swahili for "build," is played with 54 rectangular blocks stacked in 18 layers of three blocks each, with the blocks in each layer oriented perpendicular to the blocks below. The aim of the game is to carefully extract a block and place it at the top of the tower, creating a new level, without toppling the entire structure.
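The layout described above can be captured in a few lines; this is only an illustrative data structure, not code from the study.

```python
def build_tower(layers=18, blocks_per_layer=3):
    """Represent a Jenga tower: 18 layers of 3 blocks, alternating orientation."""
    tower = []
    for level in range(layers):
        axis = "x" if level % 2 == 0 else "y"  # each layer is perpendicular to the one below
        tower.append([{"level": level, "slot": slot, "axis": axis}
                      for slot in range(blocks_per_layer)])
    return tower

tower = build_tower()
assert sum(len(layer) for layer in tower) == 54  # 18 layers of 3 blocks
```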

To program a robot to play Jenga, traditional machine learning schemes might require capturing everything that could possibly happen between a block, the robot, and the tower: a computationally expensive task that would require data from thousands, if not tens of thousands, of block-extraction attempts.

Instead, Rodriguez and his colleagues looked for a more data-efficient way for the robot to learn to play Jenga, inspired by human cognition and the way we ourselves might approach the game.

The team customized an industry-standard ABB IRB 120 robotic arm, then set up a Jenga tower within the robot's reach and began a training period in which the robot first chose a random block and a spot on that block to push against. It then exerted a small amount of force in an attempt to push the block out of the tower.

For each attempt, the computer recorded the associated visual and force measurements and noted whether the attempt was successful.

Instead of carrying out tens of thousands of such attempts (which would mean rebuilding the tower almost as many times), the robot trained on only about 300, with attempts of similar measurements and outcomes grouped into clusters representing certain block behaviors. For instance, one cluster of data might represent attempts on a block that was hard to move, versus one that was easier to move, or one that toppled the tower when moved. For each data cluster, the robot developed a simple model to predict a block's behavior given its current visual and tactile measurements.
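The clustering idea can be sketched roughly as follows, using scikit-learn on synthetic data. The feature set, the number of clusters, and the per-cluster model are all placeholders here; the article does not specify the team's exact choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for ~300 attempts: [push force, block displacement, tower sway]
X = rng.normal(size=(300, 3))
y = (X[:, 1] > 0.2).astype(int)  # toy label: 1 = block slid out, 0 = stuck or tower moved

# Group attempts with similar measurements into a handful of clusters of block behavior.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Fit one simple predictive model per cluster instead of one monolithic model.
models = {}
for c in np.unique(labels):
    idx = labels == c
    if len(np.unique(y[idx])) < 2:
        models[c] = int(y[idx][0])  # degenerate cluster: remember its single outcome
    else:
        models[c] = LogisticRegression().fit(X[idx], y[idx])
```

A new attempt would then be assigned to its nearest cluster and scored by that cluster's model, rather than by one large model of everything that could happen.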

Fazeli says this clustering approach dramatically increases the efficiency with which the robot can learn to play the game, and it is inspired by the natural way people group similar behaviors: "The robot builds clusters and then learns models for each of these clusters, instead of learning a model that captures absolutely everything that could happen."

Stacking up

The researchers tested their approach against other state-of-the-art machine learning algorithms in computer simulations of the game using the MuJoCo simulator. The lessons learned in the simulator informed the researchers about how the robot would learn in the real world.
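For reference, this is what stepping a minimal MuJoCo scene looks like with the present-day open-source Python bindings. The article does not describe the team's simulation models or training code, so the block dimensions and setup below are only illustrative.

```python
import mujoco

XML = """
<mujoco>
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <body pos="0 0 0.2">
      <freejoint/>
      <!-- roughly Jenga-block half-extents, in meters -->
      <geom type="box" size="0.0375 0.0125 0.0075" mass="0.015"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)
for _ in range(500):  # about one simulated second at the default timestep
    mujoco.mj_step(model, data)
print("block height after settling:", data.qpos[2])
```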

"We offer these algorithms the same information our system receives to see how they learn to play Jeng at a similar level," says Oller. "Compared to our approach, these algorithms need to check multiple towers to find out about the game."

Curious about how their machine learning approach stacks up against actual human players, the team ran a few informal trials with several volunteers.

"We saw how many blocks a person could pull before the tower fell, and the difference was not so much," says Oller.

But there is still a way to go if the researchers want their robot to compete with a human player. In addition to physical interaction, Jenga requires strategy, such as pulling out just the right block so that it becomes difficult for an opponent to extract the next block without toppling the tower.

For now, the team is less interested in developing a robotic Jenga champion and more focused on applying the robot's new skills to other application areas.

"There are many tasks that we do with our hands, if the feeling of doing the" right way "comes in the sphere of force and tactile," said Rodriguez. "In the case of such tasks, a similar approach could show us."

This research was supported, in part, by the National Science Foundation through the National Robotics Initiative.

Video: https://www.youtube.com/watch?v=o1j_amoldMs

