To create a vision process, perform the following steps:
Step 1: Create a new vision process
The file for vision processing in iRVision is called a "vision process". Create and use a vision process that matches your application.
Various templates for vision processes are available for each application.
The following are two examples of the many types of vision processes available in iRVision.
- 2-D Single-View Vision Process: The workpiece is located and the robot motion is offset in two dimensions with one camera. This is used when a relatively small workpiece is placed on a table, etc.
- 2-D Multi-View Vision Process: The workpiece is located and the robot motion is offset in two dimensions with multiple cameras (normally two cameras). This is used when a relatively large workpiece is placed on a pallet, etc.
Step 2: Select the camera to use
Select the camera data to use with this vision process.
Step 3: Configure the vision processing tools
Configure the blocks in the vision process, such as 'Snap Tool 1' and 'GPM Locator Tool 1', in order.
For this example, configure the snap conditions, such as exposure time, in 'Snap Tool 1', and then teach the shape of the workpiece to be found in 'GPM Locator Tool 1'.
Individual blocks in a vision process, such as 'Snap Tool 1' and 'GPM Locator Tool 1', are called "command tools". iRVision provides various command tools in addition to the 'Snap Tool' and the 'GPM Locator Tool', and combining them allows a vision process to perform more complex processing.
The command tools are executed in order from top to bottom.
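This top-to-bottom execution model can be pictured as an ordered pipeline of tools that share one result context. The sketch below is a hypothetical Python illustration only: iRVision is configured through its own setup screens, and the class names, parameters, and placeholder values here are assumptions, not the actual iRVision API.

```python
class SnapTool:
    """Acquires an image using the configured snap conditions (illustrative)."""
    def __init__(self, exposure_ms):
        self.exposure_ms = exposure_ms

    def run(self, context):
        # In a real system this step would trigger the camera.
        context["image"] = f"image captured at {self.exposure_ms} ms exposure"
        return context


class GPMLocatorTool:
    """Finds the taught workpiece shape in the snapped image (illustrative)."""
    def __init__(self, model_name):
        self.model_name = model_name

    def run(self, context):
        # Placeholder result: a real locator returns the found position and a score.
        context["found"] = {"model": self.model_name, "x": 120.5, "y": 88.2}
        return context


def run_vision_process(tools):
    """Execute the command tools in order, from top to bottom."""
    context = {}
    for tool in tools:
        context = tool.run(context)
    return context


# A minimal two-tool process, mirroring the example in the text.
process = [SnapTool(exposure_ms=20), GPMLocatorTool(model_name="workpiece_A")]
result = run_vision_process(process)
```

The key point the sketch mirrors is that each tool consumes the output of the tools above it, which is why tool order matters in the vision process tree.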
Step 4: Set the reference position
Specify the workpiece's reference position in the vision process.
Place the workpiece in exactly the same position it occupied when the robot positions were taught, then set the reference position in the vision process.
iRVision calculates the vision offset relative to this reference position: the offset represents how far the workpiece has moved from where it sat when the robot positions were taught.
The robot will use the vision offset data to adjust the robot positions.
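The offset idea described above can be sketched as a 2-D rigid transform: the vision offset is the rotation-plus-translation that carries the reference pose onto the found pose, and the same transform is applied to the taught robot points. This is an illustrative Python example under simplified assumptions (a pure 2-D case, poses as x/y/rotation triples); the function names are hypothetical and the actual iRVision computation works in calibrated camera and user frames.

```python
import math


def rigid_offset_2d(reference, found):
    """Compute the rigid 2-D transform (rotation dr plus translation tx, ty)
    that maps the reference pose onto the found pose.
    Poses are (x, y, r) with r in degrees."""
    dr = found[2] - reference[2]
    c, s = math.cos(math.radians(dr)), math.sin(math.radians(dr))
    # Translation chosen so that rotating the reference point by dr
    # and then translating lands exactly on the found point.
    tx = found[0] - (c * reference[0] - s * reference[1])
    ty = found[1] - (s * reference[0] + c * reference[1])
    return (tx, ty, dr)


def apply_offset(taught, offset):
    """Apply the vision offset to a taught robot point (x, y, r in degrees)."""
    tx, ty, dr = offset
    c, s = math.cos(math.radians(dr)), math.sin(math.radians(dr))
    x = c * taught[0] - s * taught[1] + tx
    y = s * taught[0] + c * taught[1] + ty
    return (x, y, taught[2] + dr)


reference_pose = (100.0, 50.0, 0.0)  # where the workpiece sat during teaching
found_pose = (112.0, 47.5, 5.0)      # where the vision process located it at run time

offset = rigid_offset_2d(reference_pose, found_pose)
# A point taught at the reference pose is carried exactly to the found pose;
# every other taught point is shifted and rotated by the same transform.
adjusted = apply_offset(reference_pose, offset)
```

Because the offset is expressed relative to the reference position, a workpiece found exactly at the reference position yields a zero offset and the robot runs its taught positions unchanged.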