In this tutorial, we will discuss how to use a segmentation camera sensor in Ignition Gazebo.
You'll need to have Ignition Gazebo installed in order to follow along. We recommend installing all Ignition libraries, using version Fortress or newer (the segmentation camera is not available in Ignition versions prior to Fortress). If you need to install Ignition, pick the version you'd like to use and then follow the installation instructions.
Setting up the segmentation camera
Here's an example of how to attach a segmentation camera sensor to a model in an SDF file.
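The snippet below is a minimal sketch; the model name, pose, topic name, and parameter values are illustrative placeholders:

```xml
<model name="segmentation_model">
  <pose>4 0 1.0 0 0 0</pose>
  <link name="link">
    <sensor name="segmentation_camera" type="segmentation">
      <topic>segmentation</topic>
      <camera>
        <segmentation_type>semantic</segmentation_type>
        <horizontal_fov>1.57</horizontal_fov>
        <image>
          <width>800</width>
          <height>600</height>
        </image>
        <clip>
          <near>0.1</near>
          <far>100</far>
        </clip>
      </camera>
      <always_on>1</always_on>
      <update_rate>30</update_rate>
      <visualize>true</visualize>
    </sensor>
  </link>
</model>
```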
Let’s take a closer look at the portion of the code above that focuses on the segmentation camera sensor:
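```xml
<sensor name="segmentation_camera" type="segmentation">
  <topic>segmentation</topic>
  <camera>
    <segmentation_type>semantic</segmentation_type>
    <horizontal_fov>1.57</horizontal_fov>
    <image>
      <width>800</width>
      <height>600</height>
    </image>
    <clip>
      <near>0.1</near>
      <far>100</far>
    </clip>
  </camera>
  <always_on>1</always_on>
  <update_rate>30</update_rate>
  <visualize>true</visualize>
</sensor>
```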
As we can see, we define a sensor with the following SDF elements:
<camera>: The camera, which has the following child elements:
<segmentation_type>: The type of segmentation performed by the camera. Use semantic for semantic segmentation. For panoptic (instance) segmentation, use instance. The default value for <segmentation_type> is semantic.
<horizontal_fov>: The horizontal field of view, in radians.
<image>: The image size, in pixels.
<clip>: The near and far clip planes. Objects are only rendered if they're within these planes.
<always_on>: Whether the sensor will always be updated (indicated by 1) or not (indicated by 0). This is currently unused by Ignition Gazebo.
<update_rate>: The sensor's update rate, in Hz.
<visualize>: Whether the sensor should be visualized in the GUI (indicated by true) or not (indicated by false). This is currently unused by Ignition Gazebo.
<topic>: The name of the topic which will be used to publish the sensor data.
Label map & Colored map
The segmentation sensor creates 2 maps (or images):
label map: For semantic segmentation, each pixel contains the object's label. For panoptic segmentation, each pixel contains the object's label and instance count.
colored map: A colored version of the label map. In semantic segmentation, all items of the same label will have the same color. In panoptic segmentation, each pixel contains a unique color for each instance in the scene (so, for panoptic segmentation, items of the same label will not have the same color).
Assigning a label to a model
Only models with labels (annotated classes) will be visible to the segmentation camera sensor. Unlabeled models are treated as background.
To assign a label to a model, we use the label plugin in the SDF file.
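Here's a sketch of a labeled model, with the Label system plugin attached to the model's visual (the geometry, pose, and label value are illustrative):

```xml
<model name="box">
  <pose>0 -1 0.5 0 0 0</pose>
  <link name="box_link">
    <visual name="box_visual">
      <geometry>
        <box>
          <size>1 1 1</size>
        </box>
      </geometry>
      <plugin filename="ignition-gazebo-label-system" name="ignition::gazebo::systems::Label">
        <label>10</label>
      </plugin>
    </visual>
  </link>
</model>
```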
Let's zoom in on the label plugin:
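```xml
<plugin filename="ignition-gazebo-label-system" name="ignition::gazebo::systems::Label">
  <label>10</label>
</plugin>
```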
We assign the label of the model by adding the plugin to the model's <visual> tag. So, in this case, this model has a label of 10.
You can also attach this plugin to the model's <model> tag.
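For example (a sketch; the link contents are elided and the label value is arbitrary):

```xml
<model name="sphere">
  <pose>-1 -2 0.5 0 0 0</pose>
  <link name="sphere_link">
    <!-- ... -->
  </link>
  <plugin filename="ignition-gazebo-label-system" name="ignition::gazebo::systems::Label">
    <label>20</label>
  </plugin>
</model>
```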
If you're including a model from a place like Ignition Fuel, you can add the label plugin as a child of the <include> tag.
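Something like this (the Fuel URI and label value are illustrative):

```xml
<include>
  <pose>-1 0 3 0 0 1.57</pose>
  <uri>https://fuel.ignitionrobotics.org/1.0/OpenRobotics/models/Construction Cone</uri>
  <plugin filename="ignition-gazebo-label-system" name="ignition::gazebo::systems::Label">
    <label>30</label>
  </plugin>
</include>
```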
Running an example:
Now that we've discussed how a segmentation camera and models with labels can be specified, let's run an example world that uses the segmentation camera. Run the following command:
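```bash
ign gazebo segmentation_camera.sdf
```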
You should see something similar to this:
There are 2 segmentation cameras in the SDF world: a semantic segmentation camera, and an instance/panoptic segmentation camera.
For the instance/panoptic segmentation camera, colored map data is published to the panoptic/colored_map topic, and label map data is published to the panoptic/labels_map topic.
For the semantic segmentation camera, colored map data is published to the semantic/colored_map topic, and label map data is published to the semantic/labels_map topic.
Segmentation Dataset Generation
To save the output of the sensor as segmentation dataset samples, we add the <save> tag to the <camera> tag and specify the path to save the dataset in:
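For instance, a panoptic camera saving to a relative path might look like this sketch (the other camera elements are elided):

```xml
<camera>
  <segmentation_type>instance</segmentation_type>
  <!-- ... -->
  <save enabled="true">
    <path>segmentation_data/instance_camera</path>
  </save>
</camera>
```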
In the example world we just ran (segmentation_camera.sdf), you'll notice that the panoptic camera is saving data to segmentation_data/instance_camera, while the semantic camera is saving data to segmentation_data/semantic_camera (these are relative paths).
Up to this point, we have left simulation paused. Go ahead and start simulation by pressing the play button at the bottom-left part of the GUI. You'll see the camera drop, capturing updated segmentation images along the way:
Once the camera has reached the ground plane, you can go ahead and close Ignition Gazebo. We will now discuss how to visualize the segmentation data that was just generated by Ignition Gazebo.
Visualize the segmentation dataset via Python
Put the following code in a Python script; we'll call it segmentation_visualizer.py here.
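The script below is a minimal sketch: it assumes the dataset directory contains images, colored_maps, and labels_maps subfolders with matching file names (adjust these to match what the <save> tag actually generated on your system), and it uses OpenCV to display the windows.

```python
import argparse
import os

import cv2


def on_mouse(event, u, v, flags, labels_map):
    """Print the label and 16-bit instance count of a clicked pixel."""
    if event == cv2.EVENT_LBUTTONDOWN:
        pixel = labels_map[v, u]
        # The channel layout is an assumption; verify it by clicking on
        # an object whose label you know.
        label = int(pixel[2])
        instance_count = int(pixel[1]) * 256 + int(pixel[0])
        print(f"label: {label}  instance count: {instance_count}")


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--path", required=True,
                        help="dataset root, e.g. segmentation_data/instance_camera")
    args = parser.parse_args()

    image_dir = os.path.join(args.path, "images")
    colored_dir = os.path.join(args.path, "colored_maps")
    labels_dir = os.path.join(args.path, "labels_maps")

    for name in sorted(os.listdir(image_dir)):
        image = cv2.imread(os.path.join(image_dir, name))
        colored_map = cv2.imread(os.path.join(colored_dir, name))
        labels_map = cv2.imread(os.path.join(labels_dir, name))
        if image is None or colored_map is None or labels_map is None:
            continue

        # Blend the camera image with the colored map for easy inspection.
        colored_image = cv2.addWeighted(image, 0.5, colored_map, 0.5, 0.0)

        cv2.imshow("image", image)
        cv2.imshow("colored_map", colored_map)
        cv2.imshow("labels_map", labels_map)
        cv2.imshow("colored_image", colored_image)
        cv2.setMouseCallback("labels_map", on_mouse, labels_map)

        # Any key advances to the next sample; Esc quits.
        if cv2.waitKey(0) == 27:
            break

    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```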
Make sure you have all of the dependencies that are needed in order to run this Python script:
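For the sketch above, that amounts to:

```bash
pip install opencv-python numpy
```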
Run the script, setting the --path argument to what you specified for the <save> tag in the SDF file. Since we set the panoptic save path to segmentation_data/instance_camera, we'd run the following command to view the panoptic segmentation data:
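```bash
python3 segmentation_visualizer.py --path segmentation_data/instance_camera
```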
You will see 4 windows: image, colored_map, labels_map, and colored_image (which is a combination of the image and colored_map).
For panoptic/instance segmentation, you can parse the labels_map by clicking on any pixel in the labels_map window to see the label and instance count of that pixel.
Processing the segmentation sensor via ign-transport
It's possible to process the segmentation data in real time via ign-transport. You will need to know which topics to subscribe to in order to receive this information.
Consider the following SDF snippet from the segmentation camera:
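Only the topic-related parts are shown here; the sensor name is illustrative:

```xml
<sensor name="segmentation_camera" type="segmentation">
  <topic>segmentation</topic>
  <camera>
    <!-- ... -->
  </camera>
</sensor>
```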
In this scenario, the sensor will publish the label map data to segmentation/labels_map and the colored map data to segmentation/colored_map. We can write some C++ code that subscribes to these topics:
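Here's a minimal sketch of a subscriber. The label map uses 3 bytes per pixel, with the label in one channel and a 16-bit instance count in the other two; treat the exact byte order shown in the comments as an assumption to verify:

```cpp
#include <cstdint>
#include <iostream>

#include <ignition/msgs.hh>
#include <ignition/transport.hh>

// Called whenever a new label map arrives.
void OnNewLabelMap(const ignition::msgs::Image &_msg)
{
  const auto width = _msg.width();
  const auto height = _msg.height();
  const std::string &buffer = _msg.data();

  for (uint32_t i = 0; i < height; ++i)
  {
    for (uint32_t j = 0; j < width; ++j)
    {
      // 3 bytes per pixel.
      const auto index = (i * width + j) * 3;

      // The label ID of the pixel (assumed to be in the last channel).
      const auto label = static_cast<uint8_t>(buffer[index + 2]);

      // For panoptic segmentation, the remaining two bytes are assumed to
      // encode a 16-bit instance count (ignored for semantic segmentation).
      const uint16_t instanceCount =
          static_cast<uint8_t>(buffer[index + 1]) * 256 +
          static_cast<uint8_t>(buffer[index]);

      // ... use label / instanceCount here ...
    }
  }
}

// Called whenever a new colored map arrives.
void OnNewColoredMap(const ignition::msgs::Image &_msg)
{
  std::cout << "Colored map received: " << _msg.width() << "x"
            << _msg.height() << std::endl;
}

int main(int argc, char **argv)
{
  ignition::transport::Node node;

  if (!node.Subscribe("segmentation/labels_map", OnNewLabelMap) ||
      !node.Subscribe("segmentation/colored_map", OnNewColoredMap))
  {
    std::cerr << "Error subscribing to the segmentation topics" << std::endl;
    return -1;
  }

  // Block until the node is shut down (e.g. with Ctrl-C).
  ignition::transport::waitForShutdown();
  return 0;
}
```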
If you'd like to gain a better understanding of how the subscriber code works, you can go through the ign-transport tutorials.