
Increasing robot productivity

09 May 2017

Control Engineering Europe looks at how advanced sensing solutions can help increase robot productivity.

Vision-guided robotics (VGR) is fast becoming an enabling technology for the automation of many different processes across a range of industries. Object recognition technology makes it possible to identify different items based on their three-dimensional geometry. Whether the process involves loose, mixed, irregular parts, sacks or bags, equipping a robot with an area sensor provides an efficient solution that can be quickly adapted to handle different products.

FANUC 3D Area Sensors, for example, enable robots to recognise and pick up randomly positioned objects. Capable of locating parts three dimensionally, the sensor adds flexibility and reliability to operations traditionally completed by humans or which otherwise require sophisticated and expensive dedicated machinery.

Typical applications for 3D area sensors include de-palletising materials (including mixed boxes, sacks, bags and food packaging); bin picking (loose random parts, irregular items, and irregularly-shaped sacks or packaging); and sorting, placing and loading picked items into machines. Automating these processes can help increase productivity and reduce costs in many material handling applications. Even setups involving dirty, dusty or rusty products and/or difficult lighting conditions can benefit from such efficiencies.

How it works
FANUC's 3D Area Sensor uses structured light projection to create 3D maps of its surroundings. Rather than locating part features by matching them across multiple cameras, each camera locates features created by the structured light projected onto the object. Typically, this is performed over a large area, with many 3D points detected for each snap. The result is a point cloud: a tight array of X-Y-Z locations across the 3D scene being imaged.
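The point cloud described above is essentially a list of X-Y-Z samples. A minimal sketch in Python (the grid size and depth values are illustrative stand-ins for a sensor snap, not FANUC specifics):

```python
import numpy as np

# Simulate one snap over a 100 x 80 grid of measurement points.
# Real sensor data would come from the structured-light hardware;
# these depths are randomly generated for illustration only.
rng = np.random.default_rng(0)
xs, ys = np.meshgrid(np.linspace(0, 0.5, 100), np.linspace(0, 0.4, 80))
zs = 0.9 + 0.05 * rng.random(xs.shape)   # depths in metres from the sensor

# One X-Y-Z row per detected point: the "tight array" across the scene.
point_cloud = np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1)
print(point_cloud.shape)   # (8000, 3)
```

Each row can then be fed to part-matching, which searches the cloud for the taught 3D geometry of the parts.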

Using these maps, the system looks for parts. The part manager then evaluates the candidates and decides which part to pick. Taking reaching distance and collision avoidance into account, it chooses the fastest picking option. If the part manager decides a pick has been unsuccessful, or the part queue contains no part to pick, another image is taken and the process starts again using the new results.
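The selection step can be sketched as a simple loop. The candidate fields, cost values and feasibility checks below are hypothetical placeholders, not FANUC's actual part manager interface:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    part_id: int
    reach_ok: bool        # within the robot's reaching distance
    collision_free: bool  # gripper path clear of bin walls and other parts
    cycle_time: float     # estimated seconds for this pick

def choose_pick(candidates):
    """Return the fastest feasible pick, or None to trigger a new image."""
    feasible = [c for c in candidates
                if c.reach_ok and c.collision_free]
    if not feasible:
        return None                     # empty queue: re-image and retry
    return min(feasible, key=lambda c: c.cycle_time)

picks = [Candidate(1, True, False, 1.2),
         Candidate(2, True, True, 1.8),
         Candidate(3, True, True, 1.5)]
best = choose_pick(picks)
print(best.part_id)   # 3 -- fastest pick that is reachable and collision-free
```

Returning `None` here corresponds to the re-imaging path in the text: when nothing feasible remains, a new snap is taken and the cycle restarts.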

3D vision systems that generate point clouds are very useful for VGR applications because multiple parts can be located simultaneously. Multi-tasking background processing - part detection takes place while the robot is moving and does not interrupt the workflow - means that shorter cycle times can be achieved.
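The background-processing idea can be illustrated with a producer-consumer sketch: detection runs in a worker thread and feeds a queue while the main loop (standing in for robot motion) consumes results. The timings and part names are invented for the example:

```python
import queue
import threading
import time

part_queue = queue.Queue()

def detect_parts(n_snaps):
    """Worker thread: imaging and part location run in the background."""
    for snap in range(n_snaps):
        time.sleep(0.01)              # stand-in for imaging + locating
        part_queue.put(f"part-{snap}")

worker = threading.Thread(target=detect_parts, args=(3,))
worker.start()

# Main loop: the robot keeps picking as detection results arrive,
# so detection never interrupts the workflow.
picked = []
while len(picked) < 3:
    picked.append(part_queue.get())
worker.join()
print(picked)   # ['part-0', 'part-1', 'part-2']
```

Because the detection latency overlaps with robot motion, the effective cycle time is set by the slower of the two rather than their sum.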

Mastering new paths
Teaching the robot new paths has been designed to be a simple task. The sensor can be programmed on the shop floor via FANUC's familiar iRVision graphical interface on the iPendant Touch, allowing a bin picking application to be set up in a matter of minutes.

One robot can service up to four 3D area sensors. In bin picking applications, an area sensor can be top-mounted on an auxiliary axis-powered rail, allowing the robot to directly control the movement of the sensor. Mounting the sensor this way gives the robot two bins to work from. As soon as the robot recognises that one bin is empty, it automatically switches to the next one, saving the downtime that would otherwise be needed for an operator to change the bin manually.
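The two-bin switching policy described above can be sketched in a few lines. Both the data structure (a mapping from bin index to remaining part count) and the policy are illustrative only:

```python
def next_bin(bins, current):
    """Switch to another bin as soon as the current one is empty.

    `bins` maps a bin index to its remaining part count; in practice the
    "remaining" signal would come from the area sensor finding no parts.
    """
    if bins[current] > 0:
        return current                 # keep working the current bin
    for idx, remaining in bins.items():
        if remaining > 0:
            return idx                 # switch while an operator refills
    return None                        # all bins empty: wait for a refill

bins = {0: 0, 1: 12}
print(next_bin(bins, current=0))   # 1 -- bin 0 is empty, move to bin 1
```

While the robot works the second bin, an operator can exchange the first without stopping production, which is where the downtime saving comes from.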

