PCL/OpenNI tutorial 4: 3D object recognition (descriptors)
 
There are many 3D descriptors implemented in PCL. Each one has its own method for computing unique values for a point. Some use the difference between the angles of the normals of the point and its neighbors, for example; others use the distances between the points. Because of this, some are inherently better suited for certain purposes than others. A given descriptor may be scale invariant, while another may handle occlusions and partial views of objects better. Which one you choose depends on what you want to do.

After calculating the necessary values, an additional step is performed to reduce the descriptor size: the result is binned into a [https://en.wikipedia.org/wiki/Histogram histogram]. To do this, the value range of each variable that makes up the descriptor is divided into ''n'' subdivisions, and the number of occurrences in each one is counted. Imagine a descriptor that computes a single variable, ranging from 1 to 100, for which we choose to create 10 bins: the first bin gathers all occurrences between 1 and 10, the second from 11 to 20, and so on. We look at the value of the variable for the first point-neighbor pair; it is 27, so we increment the value of the third bin by 1. We keep doing this until we get a final histogram for that keypoint. The bin size must be carefully chosen depending on how descriptive the variable is (the variables do not have to share the same number of bins, and the bins do not have to be of the same size; if, for example, most values fell in the 50-100 range, it would be sensible to use more, smaller bins in that range).
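To make the binning step concrete, here is a minimal, self-contained sketch (plain C++, not part of PCL's API) that bins the values of a single variable in the 1-100 range into 10 equally sized bins, as in the example above:

<syntaxhighlight lang=CPP>#include <array>
#include <iostream>
#include <vector>

int
main()
{
// Example values of one descriptor variable (range 1-100), one per point-neighbor pair.
std::vector<float> values = { 27.0f, 14.0f, 95.0f, 28.5f, 63.0f };

// 10 equally sized bins: [1-10], [11-20], ..., [91-100].
std::array<int, 10> histogram = { 0 };
for (float v : values)
{
int bin = static_cast<int>((v - 1.0f) / 10.0f);
if (bin < 0) bin = 0;
if (bin > 9) bin = 9;
histogram[bin]++;
}

// The value 27 falls in the third bin (index 2), as described in the text.
for (int i = 0; i < 10; i++)
std::cout << "Bin " << (i + 1) << ": " << histogram[i] << std::endl;
}</syntaxhighlight>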
  
 
Descriptors can be classified into two main categories: global and local. The process for computing and using each one (the recognition pipeline) is different, so each will be explained in its own section of this article.
 
<div style="background-color: #F8F8F8; border-style: dotted;">
** [http://www.pointclouds.org/assets/icra2013/pcl_features_icra13.pdf How does a good feature look like?]
* '''Publication''':
** [https://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf Point Cloud Library: Three-Dimensional Object Recognition and 6 DoF Pose Estimation] (Aitor Aldoma et al., 2012)
</div>
 
! scope="col"| Name
 
! scope="col"| Name
 
! scope="col"| Type
 
! scope="col"| Type
! scope="col"| Size
+
! scope="col"| Size<sup>†</sup>
 +
! scope="col"| Custom PointType<sup>††</sup>
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#PFH|PFH]] (Point Feature Histogram)
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#PFH|PFH]] (Point Feature Histogram)
| style="color: red;" | Local
+
| style="color: sienna;" | Local
 
| 125
 
| 125
 +
| style="color: green;" | Yes
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#FPFH|FPFH]] (Fast Point Feature Histogram)
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#FPFH|FPFH]] (Fast Point Feature Histogram)
| style="color: red;" | Local
+
| style="color: sienna;" | Local
 
| 33
 
| 33
 +
| style="color: green;" | Yes
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RSD|RSD]] (Radius-Based Surface Descriptor)
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RSD|RSD]] (Radius-Based Surface Descriptor)
| style="color: red;" | Local
+
| style="color: sienna;" | Local
 
| 289
 
| 289
 +
| style="color: green;" | Yes
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#3DSC|3DSC]] (3D Shape Context)
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#3DSC|3DSC]] (3D Shape Context)
| style="color: red;" | Local
+
| style="color: sienna;" | Local
 
| 1980
 
| 1980
 +
| style="color: green;" | Yes
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#USC|USC]] (Unique Shape Context)
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#USC|USC]] (Unique Shape Context)
| style="color: red;" | Local
+
| style="color: sienna;" | Local
| 1980
+
| 1960
 +
| style="color: green;" | Yes
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#SHOT|SHOT]] (Signatures of Histograms of Orientations)
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#SHOT|SHOT]] (Signatures of Histograms of Orientations)
| style="color: red;" | Local
+
| style="color: sienna;" | Local
 
| 352
 
| 352
 +
| style="color: green;" | Yes
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#Spin_image|Spin image]]
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#Spin_image|Spin image]]
| style="color: red;" | Local
+
| style="color: sienna;" | Local
| 153
+
| 153*
 +
| style="color: red;" | No
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RIFT|RIFT]] (Rotation-Invariant Feature Transform)
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RIFT|RIFT]] (Rotation-Invariant Feature Transform)
| style="color: red;" | Local
+
| style="color: sienna;" | Local
| 32
+
| 32*
 +
| style="color: red;" | No
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#NARF|NARF]] (Normal Aligned Radial Feature)
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#NARF|NARF]] (Normal Aligned Radial Feature)
| style="color: red;" | Local
+
| style="color: sienna;" | Local
 
| 36
 
| 36
 +
| style="color: green;" | Yes
 +
|- style="vertical-align:middle;"
 +
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RoPS|RoPS]] (Rotational Projection Statistics)
 +
| style="color: sienna;" | Local
 +
| 135*
 +
| style="color: red;" | No
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#VFH|VFH]] (Viewpoint Feature Histogram)
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#VFH|VFH]] (Viewpoint Feature Histogram)
| style="color: green;" | Global
+
| style="color: darkcyan;" | Global
 
| 308
 
| 308
 +
| style="color: green;" | Yes
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#CVFH|CVFH]] (Clustered Viewpoint Feature Histogram)
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#CVFH|CVFH]] (Clustered Viewpoint Feature Histogram)
| style="color: green;" | Global
+
| style="color: darkcyan;" | Global
 
| 308
 
| 308
 +
| style="color: green;" | Yes
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#OUR-CVFH|OUR-CVFH]] (Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram)
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#OUR-CVFH|OUR-CVFH]] (Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram)
| style="color: green;" | Global
+
| style="color: darkcyan;" | Global
 
| 308
 
| 308
 +
| style="color: green;" | Yes
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#ESF|ESF]] (Ensemble of Shape Functions)
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#ESF|ESF]] (Ensemble of Shape Functions)
| style="color: green;" | Global
+
| style="color: darkcyan;" | Global
 
| 640
 
| 640
 +
| style="color: green;" | Yes
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#GFPFH|GFPFH]] (Global Fast Point Feature Histogram)
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#GFPFH|GFPFH]] (Global Fast Point Feature Histogram)
| style="color: green;" | Global
+
| style="color: darkcyan;" | Global
 
| 16
 
| 16
 +
| style="color: green;" | Yes
 
|- style="vertical-align:middle;"
 
|- style="vertical-align:middle;"
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#GRSD|GRSD]] (Global Radius-Based Surface Descriptor)
 
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#GRSD|GRSD]] (Global Radius-Based Surface Descriptor)
| style="color: green;" | Global
+
| style="color: darkcyan;" | Global
| Unknown
+
| 21
 +
| style="color: green;" | Yes
 
|}
 
|}
  
 +
 +
† Values marked with an asterisk (*) indicate that the descriptor's size depends on some parameter(s), and the one given is for the default values.
 +
 +
†† Descriptors without their own custom PointType use the generic <span style="color:#FF1493">"pcl::Histogram<>"</span> type. See [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Saving_and_loading|Saving and loading]].
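For descriptors without a custom PointType, the cloud that stores them is declared with the generic histogram type, using the size from the table as the template argument. A small sketch (the typedef names follow the ones used later in this tutorial):

<syntaxhighlight lang=CPP>#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

// Generic histogram points, sized as in the table above.
typedef pcl::Histogram<153> SpinImage; // Spin image with the default parameters.
typedef pcl::Histogram<32> RIFT32;     // RIFT with 4 distance bins and 8 gradient bins.

int
main()
{
// Clouds of such descriptors are declared like any other point cloud.
pcl::PointCloud<SpinImage>::Ptr spinImages(new pcl::PointCloud<SpinImage>());
pcl::PointCloud<RIFT32>::Ptr riftDescriptors(new pcl::PointCloud<RIFT32>());
}</syntaxhighlight>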
  
 
Optionally, you can download a document with a more detailed version of the table. Page format is A4, landscape:
 
=Local descriptors=

Local descriptors are computed for individual points that we give as input. They have no notion of what an object is; they just describe the local geometry around that point. Usually, it is your task to choose which points a descriptor will be computed for: the ''keypoints''. Most of the time, you can get away with simply performing a downsampling and using all of the remaining points, but keypoint detectors are also available, like the one used for [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Finding_keypoints|NARF]], or [[PCL/OpenNI_tutorial_5:_3D_object_recognition_%28pipeline%29#ISS|ISS]].
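As a reference, the downsampling mentioned above is usually done with a voxel grid filter; a minimal sketch (the 1 cm leaf size is only an example) would be:

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/filters/voxel_grid.h>

int
main(int argc, char** argv)
{
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::PointXYZ>::Ptr keypoints(new pcl::PointCloud<pcl::PointXYZ>);

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
{
return -1;
}

// Downsample the cloud and use every remaining point as a keypoint.
pcl::VoxelGrid<pcl::PointXYZ> filter;
filter.setInputCloud(cloud);
// Leaf size of 1cm (every output point is the centroid of the points inside a 1cm voxel).
filter.setLeafSize(0.01f, 0.01f, 0.01f);
filter.filter(*keypoints);
}</syntaxhighlight>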
  
 
Local descriptors are used for object recognition and [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Registration | registration]]. Now we will see which ones are implemented in PCL.
 
PFH stands for Point Feature Histogram. It is one of the most important descriptors offered by PCL and the basis of others such as FPFH. The PFH tries to capture information about the geometry surrounding the point by analyzing the difference between the directions of the normals in the vicinity (and because of this, an imprecise normal estimation may produce low-quality descriptors).

First, the algorithm pairs all points in the vicinity (not just the chosen keypoint with its neighbors, but also the neighbors with themselves). Then, for each pair, a [https://en.wikipedia.org/wiki/Darboux_frame fixed coordinate frame] is computed from their normals. With this frame, the difference between the normals can be encoded with 3 angular variables. These variables, together with the Euclidean distance between the points, are saved, and then binned into a histogram when all pairs have been computed. The final descriptor is the concatenation of the histograms of each variable (4 in total).
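The tutorial's full PFH snippet is not reproduced in this excerpt, so here is a minimal sketch of how PFH estimation is typically set up with PCL, using <span style="color:#FF1493">"pcl::PFHEstimation"</span> and the 125-bin <span style="color:#FF1493">"PFHSignature125"</span> type from the table above (the radius values are only illustrative):

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/pfh.h>

int
main(int argc, char** argv)
{
// Object for storing the point cloud.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
// Object for storing the normals.
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
// Object for storing the PFH descriptors for each point.
pcl::PointCloud<pcl::PFHSignature125>::Ptr descriptors(new pcl::PointCloud<pcl::PFHSignature125>());

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
{
return -1;
}

// Estimate the normals.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(cloud);
normalEstimation.setRadiusSearch(0.03);
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod(kdtree);
normalEstimation.compute(*normals);

// PFH estimation object.
pcl::PFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::PFHSignature125> pfh;
pfh.setInputCloud(cloud);
pfh.setInputNormals(normals);
pfh.setSearchMethod(kdtree);
// Search radius, to look for neighbors. Note: the value given here has to be
// larger than the radius used to estimate the normals.
pfh.setRadiusSearch(0.05);

pfh.compute(*descriptors);
}</syntaxhighlight>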
 
===FPFH===

PFH gives accurate results, but it has a drawback: it is too computationally expensive to run in real time. For a cloud of ''n'' keypoints with ''k'' neighbors considered, it has a [https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations complexity] of ''O(nk<sup>2</sup>)''. Because of this, a derived descriptor was created, named FPFH (Fast Point Feature Histogram).

The FPFH considers only the direct connections between the current keypoint and its neighbors, removing additional links between neighbors. This takes the complexity down to ''O(nk)''. Because of this, the resulting histogram is referred to as SPFH (Simplified Point Feature Histogram). The reference frame and the angular variables are computed in the same way as for PFH. To obtain the final FPFH of a keypoint, its SPFH is then merged with a distance-weighted sum of the SPFHs of its neighbors.
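The tutorial's own FPFH snippet is not reproduced in this excerpt; a minimal sketch of FPFH estimation, using <span style="color:#FF1493">"pcl::FPFHEstimation"</span> and the 33-bin <span style="color:#FF1493">"FPFHSignature33"</span> type (again, the radii are only illustrative), looks like this:

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>

int
main(int argc, char** argv)
{
// Object for storing the point cloud.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
// Object for storing the normals.
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
// Object for storing the FPFH descriptors for each point.
pcl::PointCloud<pcl::FPFHSignature33>::Ptr descriptors(new pcl::PointCloud<pcl::FPFHSignature33>());

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
{
return -1;
}

// Estimate the normals.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(cloud);
normalEstimation.setRadiusSearch(0.03);
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod(kdtree);
normalEstimation.compute(*normals);

// FPFH estimation object.
pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
fpfh.setInputCloud(cloud);
fpfh.setInputNormals(normals);
fpfh.setSearchMethod(kdtree);
// Search radius, to look for neighbors. Note: the value given here has to be
// larger than the radius used to estimate the normals.
fpfh.setRadiusSearch(0.05);

fpfh.compute(*descriptors);
}</syntaxhighlight>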
An additional implementation of the FPFH estimation that takes advantage of multithreaded optimizations (with the [https://en.wikipedia.org/wiki/OpenMP OpenMP API]) is available in the class <span style="color:#FF1493">"FPFHEstimationOMP"</span>. Its interface is identical to the standard unoptimized implementation. Using it will result in a big performance boost on multi-core systems, meaning faster computation times. Remember to include the header <span style="color:#FF1493">"fpfh_omp.h"</span> instead.
  
 
<div style="background-color: #F8F8F8; border-style: dotted;">
 
<div style="background-color: #F8F8F8; border-style: dotted;">
Line 289: Line 314:
  
  
Currently, the RSD estimator class is not available in PCL because it lacked proper testing and documentation (for more details, see the [http://dev.pointclouds.org/issues/625 issue] or this [http://www.pcl-users.org/File-rsd-h-missing-from-install-directory-td4019925.html post] on the mailing list). When it is restored, I will provide a code snippet as always (I could do it now, but you would not be able to compile it).
+
This is the code for compiling the RSD descriptor:
 
 
<div style="background-color: #F8F8F8; border-style: dotted;">
 
* '''Input''': Points, Normals, Search method, Radius, Maximum Radius
 
* '''Output''': RSD descriptors
 
* '''Publication''':
 
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.391.7398&rep=rep1&type=pdf General 3D Modelling of Novel Objects from a Single View] (Zoltan-Csaba Marton et al., 2010)
 
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_r_s_d_estimation.html pcl::RSDEstimation]
 
</div>
 
 
 
 
 
==3DSC==
 
 
 
The 3D Shape Context is a descriptor that extends its existing 2D counterpart to the third dimension. It works by creating a support structure (a sphere, to be precise) centered at the point we are computing the descriptor for, with the given search radius. The "north pole" of that sphere (the notion of "up") is pointed to match the normal at that point. Then, the sphere is divided in 3D regions or bins. In the first 2 coordinates (azimuth and elevation) the divisions are equally spaced, but in the third (the radial dimension), divisions are logarithmically spaced so they are smaller towards the center. A minimum radius can be specified to prevent very small bins, that would be too sensitive to small changes in the surface.
 
 
 
 
 
[[Image:3DSC_support_structure.png|thumb|center|200px|Support structure to compute the 3DSC for a point (image from [http://www.cs.jhu.edu/~misha/Papers/Frome04.pdf original paper]).]]
 
 
 
 
 
For each bin, a weighted count is accumulated for every neighboring point that lies within. The weight depends on the volume of the bin and the local point density (number of points around the current neighbor). This gives the descriptor some degree of resolution invariance.
 
 
 
We have mentioned that the sphere is given the direction of the normal. This still leaves one degree of freedom (only two axes have been locked, the azimuth remains free). Because of this, the descriptor so far does not cope with rotation. To overcome this (so the same point in two different clouds has the same value), the support sphere is rotated around the normal ''N'' times (a number of degrees that corresponds with the divisions in the azimuth) and the process is repeated for each, giving a total of ''N'' descriptors for that point.
 
 
 
You can compute the 3DSC descriptor the following way:
 
  
 
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
 
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
 
#include <pcl/features/normal_3d.h>
 
#include <pcl/features/normal_3d.h>
#include <pcl/features/3dsc.h>
+
#include <pcl/features/rsd.h>
  
 
int
main(int argc, char** argv)
{
// Object for storing the point cloud.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
// Object for storing the normals.
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
// Object for storing the RSD descriptors for each point.
pcl::PointCloud<pcl::PrincipalRadiiRSD>::Ptr descriptors(new pcl::PointCloud<pcl::PrincipalRadiiRSD>());

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
{
return -1;
}

// Note: you would usually perform downsampling now. It has been omitted here
// for simplicity, but be aware that computation can take a long time.

// Estimate the normals.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(cloud);
normalEstimation.setRadiusSearch(0.03);
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod(kdtree);
normalEstimation.compute(*normals);

// RSD estimation object.
pcl::RSDEstimation<pcl::PointXYZ, pcl::Normal, pcl::PrincipalRadiiRSD> rsd;
rsd.setInputCloud(cloud);
rsd.setInputNormals(normals);
rsd.setSearchMethod(kdtree);
// Search radius, to look for neighbors. Note: the value given here has to be
// larger than the radius used to estimate the normals.
rsd.setRadiusSearch(0.05);
// Plane radius. Any radius larger than this is considered infinite (a plane).
rsd.setPlaneRadius(0.1);
// Do we want to save the full distance-angle histograms?
rsd.setSaveHistograms(false);

rsd.compute(*descriptors);
}</syntaxhighlight>

<span style="color:#606060">'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).'''''</span>

Optionally, you can use the <span style="color:#FF1493">"setSaveHistograms()"</span> function to enable the saving of the full distance-angle histograms, and then use <span style="color:#FF1493">"getHistograms()"</span> to retrieve them.
  
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points, Normals, Search method, Radius, Maximum Radius, [Subdivisions], [Save full histograms]
* '''Output''': RSD descriptors
* '''Publications''':
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.391.7398&rep=rep1&type=pdf General 3D Modelling of Novel Objects from a Single View] (Zoltan-Csaba Marton et al., 2010)
** [https://ias.cs.tum.edu/_media/spezial/bib/marton11ijrr.pdf Combined 2D-3D Categorization and Classification for Multimodal Perception Systems] (Zoltan-Csaba Marton et al., 2011)
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_r_s_d_estimation.html pcl::RSDEstimation]
* [http://robotica.unileon.es/~victorm/PCL_RSD.tar.gz Download]
</div>


==3DSC==

The 3D Shape Context is a descriptor that extends its existing 2D counterpart to the third dimension. It works by creating a support structure (a sphere, to be precise) centered at the point we are computing the descriptor for, with the given search radius. The "north pole" of that sphere (the notion of "up") is pointed to match the normal at that point. Then, the sphere is divided into 3D regions or bins. In the first two coordinates (azimuth and elevation) the divisions are equally spaced, but in the third (the radial dimension), divisions are logarithmically spaced so they are smaller towards the center. A minimum radius can be specified to prevent very small bins, which would be too sensitive to small changes in the surface.

[[Image:3DSC_support_structure.png|thumb|center|200px|Support structure to compute the 3DSC for a point (image from the [http://www.cs.jhu.edu/~misha/Papers/Frome04.pdf original paper]).]]

For each bin, a weighted count is accumulated for every neighboring point that lies within. The weight depends on the volume of the bin and the local point density (number of points around the current neighbor). This gives the descriptor some degree of resolution invariance.

We have mentioned that the sphere is aligned with the normal. This still leaves one degree of freedom (only two axes have been locked; the azimuth remains free), so the descriptor so far does not cope with rotation. To overcome this (so the same point in two different clouds has the same value), the support sphere is rotated around the normal ''N'' times (where ''N'' corresponds with the number of azimuth divisions) and the process is repeated for each rotation, giving a total of ''N'' descriptors for that point.

You can compute the 3DSC descriptor the following way:

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/3dsc.h>

int
main(int argc, char** argv)
{
 
// Object for storing the point cloud.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
// Object for storing the normals.
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
// Object for storing the 3DSC descriptors for each point.
pcl::PointCloud<pcl::ShapeContext1980>::Ptr descriptors(new pcl::PointCloud<pcl::ShapeContext1980>());

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
{
return -1;
}

// Note: you would usually perform downsampling now. It has been omitted here
// for simplicity, but be aware that computation can take a long time.

// Estimate the normals.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(cloud);
normalEstimation.setRadiusSearch(0.03);
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod(kdtree);
normalEstimation.compute(*normals);

// 3DSC estimation object.
pcl::ShapeContext3DEstimation<pcl::PointXYZ, pcl::Normal, pcl::ShapeContext1980> sc3d;
sc3d.setInputCloud(cloud);
sc3d.setInputNormals(normals);
sc3d.setSearchMethod(kdtree);
// Search radius, to look for neighbors. It will also be the radius of the support sphere.
sc3d.setRadiusSearch(0.05);
// The minimal radius value for the search sphere, to avoid being too sensitive
// in bins close to the center of the sphere.
sc3d.setMinimalRadius(0.05 / 10.0);
// Radius used to compute the local point density for the neighbors
// (the density is the number of points within that radius).
sc3d.setPointDensityRadius(0.05 / 5.0);

sc3d.compute(*descriptors);
}</syntaxhighlight>
 
  
 
<div style="background-color: #F8F8F8; border-style: dotted;">
 
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points, Radius, Minimal radius, Point density radius, Local radius
+
* '''Input''': Points, Normals, Search method, Radius, Minimal radius, Point density radius
* '''Output''': USC descriptors
+
* '''Output''': 3DSC descriptors
* '''Publications''':
+
* '''Publication''':
** [http://vision.deis.unibo.it/fede/papers/eccv10.pdf Unique Signatures of Histograms for Local Surface Description] (Federico Tombari et al., 2010)
+
** [http://www.cs.jhu.edu/~misha/Papers/Frome04.pdf Recognizing Objects in Range Data Using Regional Point Descriptors] (Andrea Frome et al., 2004)
** [http://vision.deis.unibo.it/fede/papers/3dor10.pdf Unique Shape Context for 3D Data Description] (Federico Tombari et al., 2010)
+
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_shape_context3_d_estimation.html pcl::ShapeContext3DEstimation]
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_unique_shape_context.html pcl::UniqueShapeContext]
+
* [http://robotica.unileon.es/~victorm/PCL_3DSC.tar.gz Download]
* [http://robotica.unileon.es/~victorm/PCL_USC.tar.gz Download]
 
 
</div>
 
</div>
  
  
==SHOT==
+
===USC===
  
SHOT stands for Signature of Histograms of Orientations. Like 3DSC, it encodes information about the topology (surface) withing a spherical support structure. This sphere is divided in 32 bins or volumes, with 8 divisions along the azimuth, 2 along the elevation, and 2 along the radius. For every volume, a one-dimensional local histogram is computed. The variable chosen is the angle between the normal of the keypoint and the current point within that volume (to be precise, the cosine, which was found to be better suitable).
+
The Unique Shape Context descriptor extends the 3DSC by defining a local reference frame, in order to provide an unique orientation for each point. This not only improves the accuracy of the descriptor, it also reduces its size, as computing multiple descriptors to account for orientation is no longer necessary.
  
 +
You can check the second publication listed below to learn more about how the LRF is computed.
  
[[Image:SHOT_support_structure.png|thumb|center|200px|Support structure to compute SHOT. Only 4 azimuth divisions are shown for clarity (image from [http://vision.deis.unibo.it/fede/papers/eccv10.pdf original paper]).]]
+
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
 
+
#include <pcl/features/usc.h>
 
 
When all local histograms have been computed, they are stitched together in a final descriptor. Like the USC descriptor, SHOT makes use of a local reference frame, making it rotation invariant. It is also robust to noise and clutter.
 
 
 
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
 
#include <pcl/features/normal_3d.h>
 
#include <pcl/features/shot.h>
 
  
 
int
 
int
Line 447: Line 464:
 
// Object for storing the point cloud.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
// Object for storing the USC descriptors for each point.
pcl::PointCloud<pcl::UniqueShapeContext1960>::Ptr descriptors(new pcl::PointCloud<pcl::UniqueShapeContext1960>());

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
{
return -1;
}

// Note: you would usually perform downsampling now. It has been omitted here
// for simplicity, but be aware that computation can take a long time.

// USC estimation object.
pcl::UniqueShapeContext<pcl::PointXYZ, pcl::UniqueShapeContext1960, pcl::ReferenceFrame> usc;
usc.setInputCloud(cloud);
// Search radius, to look for neighbors. It will also be the radius of the support sphere.
usc.setRadiusSearch(0.05);
// The minimal radius value for the search sphere, to avoid being too sensitive
// in bins close to the center of the sphere.
usc.setMinimalRadius(0.05 / 10.0);
// Radius used to compute the local point density for the neighbors
// (the density is the number of points within that radius).
usc.setPointDensityRadius(0.05 / 5.0);
// Set the radius to compute the Local Reference Frame.
usc.setLocalRadius(0.05);

usc.compute(*descriptors);
}</syntaxhighlight>

<span style="color:#606060">'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk). For 1.7 and below, change UniqueShapeContext1960 to ShapeContext1980, and edit CMakeLists.txt.'''''</span>
  
 
<div style="background-color: #F8F8F8; border-style: dotted;">
 
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points, Normals, Radius
+
* '''Input''': Points, Radius, Minimal radius, Point density radius, Local radius
* '''Output''': SHOT descriptors
+
* '''Output''': USC descriptors
* '''Publications''':
+
* '''Publication''':
** [http://vision.deis.unibo.it/fede/papers/eccv10.pdf Unique Signatures of Histograms for Local Surface Description] (Federico Tombari et al., 2010)
+
** [http://vision.deis.unibo.it/fede/papers/3dor10.pdf Unique Shape Context for 3D Data Description] (Federico Tombari et al., 2010)
** [http://www.vision.deis.unibo.it/fede/papers/icip11.pdf A Combined Texture-Shaped Descriptor for Enhanced 3D Feature Matching] (Federico Tombari et al., 2011)
+
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_unique_shape_context.html pcl::UniqueShapeContext]
* '''API''':
+
* [http://robotica.unileon.es/~victorm/PCL_USC.tar.gz Download]
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_estimation.html pcl::SHOTEstimation],
 
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_color_estimation.html pcl::SHOTColorEstimation]
 
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_estimation_o_m_p.html pcl::SHOTEstimationOMP]
 
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_color_estimation_o_m_p.html pcl::SHOTColorEstimationOMP]
 
* [http://robotica.unileon.es/~victorm/PCL_SHOT.tar.gz Download]
 
 
</div>
 
</div>
  
  
==Spin image==
+
==SHOT==
  
The Spin Image (SI) is the oldest descriptor we are going to see here. It has been around since 1997, but it still sees some use for certain applications. It was originally designed to describe surfaces made by vertices, edges and polygons, but it has been since adapted for point clouds. The descriptor is unlike all others in that the output resembles an image that can be compared with another with the usual means.
+
SHOT stands for Signature of Histograms of Orientations. Like 3DSC, it encodes information about the topology (surface) withing a spherical support structure. This sphere is divided in 32 bins or volumes, with 8 divisions along the azimuth, 2 along the elevation, and 2 along the radius. For every volume, a one-dimensional local histogram is computed. The variable chosen is the angle between the normal of the keypoint and the current point within that volume (to be precise, the cosine, which was found to be better suitable).
  
The support structure used is a cylinder, centered at the point, with a given radius and height, and aligned with the normal. This cylinder is divided radially and vertically into volumes. For each one, the number of neighbors lying inside is added up, eventually producing a descriptor. Weighting and interpolation are used to improve the result. The final descriptor can be seen as a grayscale image where dark areas correspond to volumes with higher point density.
 
  
 +
[[Image:SHOT_support_structure.png|thumb|center|200px|Support structure to compute SHOT. Only 4 azimuth divisions are shown for clarity (image from [http://vision.deis.unibo.it/fede/papers/eccv10.pdf original paper]).]]
  
[[Image:Spin images.png|thumb|center|400px|Spin images computed for 3 points of a model (image from [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.4190&rep=rep1&type=pdf original thesis]).]]
 
  
 +
When all local histograms have been computed, they are stitched together in a final descriptor. Like the USC descriptor, SHOT makes use of a local reference frame, making it rotation invariant. It is also robust to noise and clutter.
  
 
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/shot.h>

int
main(int argc, char** argv)
{
// Object for storing the point cloud.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
// Object for storing the normals.
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
// Object for storing the SHOT descriptors for each point.
pcl::PointCloud<pcl::SHOT352>::Ptr descriptors(new pcl::PointCloud<pcl::SHOT352>());

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
{
return -1;
}

// Note: you would usually perform downsampling now. It has been omitted here
// for simplicity, but be aware that computation can take a long time.

// Estimate the normals.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(cloud);
normalEstimation.setRadiusSearch(0.03);
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod(kdtree);
normalEstimation.compute(*normals);

// SHOT estimation object.
pcl::SHOTEstimation<pcl::PointXYZ, pcl::Normal, pcl::SHOT352> shot;
shot.setInputCloud(cloud);
shot.setInputNormals(normals);
// The radius that defines which of the keypoint's neighbors are described.
// If too large, there may be clutter, and if too small, not enough points may be found.
shot.setRadiusSearch(0.02);

shot.compute(*descriptors);
}</syntaxhighlight>

Like with FPFH, a multithreading-optimized variant is available with <span style="color:#FF1493">"SHOTEstimationOMP"</span>, which makes use of OpenMP. You need to include the header <span style="color:#FF1493">"shot_omp.h"</span> instead. Also, another variant that uses the texture for matching is available, <span style="color:#FF1493">"SHOTColorEstimation"</span>, with an OpenMP-optimized version too (see the second publication for more details). It outputs a <span style="color:#FF1493">"SHOT1344"</span> descriptor.
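As an illustration (a sketch, not the tutorial's own snippet, and it assumes the input cloud carries RGB data), the color variant only changes the input point type, the output descriptor type and the header:

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/shot_omp.h>

int
main(int argc, char** argv)
{
// Object for storing the point cloud with color information.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
// Object for storing the normals.
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
// Object for storing the color SHOT descriptors for each point.
pcl::PointCloud<pcl::SHOT1344>::Ptr descriptors(new pcl::PointCloud<pcl::SHOT1344>());

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZRGB>(argv[1], *cloud) != 0)
{
return -1;
}

// Estimate the normals.
pcl::NormalEstimation<pcl::PointXYZRGB, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(cloud);
normalEstimation.setRadiusSearch(0.03);
pcl::search::KdTree<pcl::PointXYZRGB>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZRGB>);
normalEstimation.setSearchMethod(kdtree);
normalEstimation.compute(*normals);

// Color SHOT estimation object (OpenMP-optimized variant).
pcl::SHOTColorEstimationOMP<pcl::PointXYZRGB, pcl::Normal, pcl::SHOT1344> shot;
shot.setInputCloud(cloud);
shot.setInputNormals(normals);
shot.setRadiusSearch(0.02);

shot.compute(*descriptors);
}</syntaxhighlight>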
  
 
<div style="background-color: #F8F8F8; border-style: dotted;">
 
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points, Normals, Radius, Image resolution
+
* '''Input''': Points, Normals, Radius
* '''Output''': Spin images
+
* '''Output''': SHOT descriptors
 
* '''Publications''':
 
* '''Publications''':
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.4190&rep=rep1&type=pdf Spin-Images: A Representation for 3-D Surface Matching] (Andrew Edie Johnson, 1997)
+
** [http://vision.deis.unibo.it/fede/papers/eccv10.pdf Unique Signatures of Histograms for Local Surface Description] (Federico Tombari et al., 2010)
** [http://www.cs.jhu.edu/~misha/Papers/Johnson99.pdf Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes] (Andrew Edie Johnson and Martial Hebert, 1999)
+
** [http://www.vision.deis.unibo.it/fede/papers/icip11.pdf A Combined Texture-Shaped Descriptor for Enhanced 3D Feature Matching] (Federico Tombari et al., 2011)
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_spin_image_estimation.html pcl::SpinImageEstimation]
+
* '''API''':
* [http://robotica.unileon.es/~victorm/PCL_spin_image.tar.gz Download]
+
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_estimation.html pcl::SHOTEstimation],
 +
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_color_estimation.html pcl::SHOTColorEstimation]
 +
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_estimation_o_m_p.html pcl::SHOTEstimationOMP]
 +
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_color_estimation_o_m_p.html pcl::SHOTColorEstimationOMP]
 +
* [http://robotica.unileon.es/~victorm/PCL_SHOT.tar.gz Download]
 
</div>
 
</div>
  
  
==RIFT==
+
==Spin image==
  
The Rotation-Invariant Feature Transform, like the spin image, takes some concepts from 2D features, in this case from the Scale-Invariant Feature Transform ([http://en.wikipedia.org/wiki/Scale-invariant_feature_transform SIFT]). It is the only descriptor seen here that requires intensity information in order to compute it (it can be obtained from the RGB color values). This means, of course, that you will not be able to use RIFT with standard XYZ clouds, you also need the texture.
+
The Spin Image (SI) is the oldest descriptor we are going to see here. It has been around since 1997, but it still sees some use for certain applications. It was originally designed to describe surfaces made by vertices, edges and polygons, but it has been since adapted for point clouds. The descriptor is unlike all others in that the output resembles an image that can be compared with another with the usual means.
  
In the first step, a circular patch (with the given radius) is fitted on the surface the point lies on. This patch is divided into concentric rings, according to the chosen distance bin size. Then, an histogram is populated with all the point's neighbors lying inside a sphere centered at that point and with the mentioned radius. The distance and the orientation of the intensity gradient at each point are considered. To make it rotation invariant, the angle between the gradient orientation and the vector pointing outward from the center of the patch is measured.
+
The support structure used is a cylinder, centered at the point, with a given radius and height, and aligned with the normal. This cylinder is divided radially and vertically into volumes. For each one, the number of neighbors lying inside is added up, eventually producing a descriptor. Weighting and interpolation are used to improve the result. The final descriptor can be seen as a grayscale image where dark areas correspond to volumes with higher point density.
  
  
[[Image:RIFT.png|thumb|center|600px|RIFT feature values at 3 different locations in the descriptor (image from [http://www.cs.illinois.edu/~slazebni/publications/pami05.pdf original paper]).]]
+
[[Image:Spin images.png|thumb|center|400px|Spin images computed for 3 points of a model (image from [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.4190&rep=rep1&type=pdf original thesis]).]]
 
 
  
The authors' original implementation uses 4 rings and 8 histogram orientations, which produce a descriptor of size 32. RIFT is not robust to texture flipping, though this was never considered a big issue.
 
  
 
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
 
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/point_types_conversion.h>
 
 
#include <pcl/features/normal_3d.h>
 
#include <pcl/features/normal_3d.h>
#include <pcl/features/intensity_gradient.h>
+
#include <pcl/features/spin_image.h>
#include <pcl/features/rift.h>
 
  
 
// A handy typedef.
 
// A handy typedef.
typedef pcl::Histogram<32> RIFT32;
+
typedef pcl::Histogram<153> SpinImage;
  
 
int
 
int
 
main(int argc, char** argv)
 
main(int argc, char** argv)
 
{
 
{
// Object for storing the point cloud with color information.
+
// Object for storing the point cloud.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudColor(new pcl::PointCloud<pcl::PointXYZRGB>);
+
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
// Object for storing the point cloud with intensity value.
 
pcl::PointCloud<pcl::PointXYZI>::Ptr cloudIntensity(new pcl::PointCloud<pcl::PointXYZI>);
 
// Object for storing the intensity gradients.
 
pcl::PointCloud<pcl::IntensityGradient>::Ptr gradients(new pcl::PointCloud<pcl::IntensityGradient>);
 
 
// Object for storing the normals.
 
// Object for storing the normals.
 
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
 
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
// Object for storing the RIFT descriptor for each point.
+
// Object for storing the spin image for each point.
pcl::PointCloud<RIFT32>::Ptr descriptors(new pcl::PointCloud<RIFT32>());
+
pcl::PointCloud<SpinImage>::Ptr descriptors(new pcl::PointCloud<SpinImage>());
  
 
// Read a PCD file from disk.
 
// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZRGB>(argv[1], *cloudColor) != 0)
+
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
 
{
 
{
 
return -1;
 
return -1;
Line 610: Line 609:
 
// Note: you would usually perform downsampling now. It has been omitted here
 
// Note: you would usually perform downsampling now. It has been omitted here
 
// for simplicity, but be aware that computation can take a long time.
 
// for simplicity, but be aware that computation can take a long time.
 
// Convert the RGB to intensity.
 
pcl::PointCloudXYZRGBtoXYZI(*cloudColor, *cloudIntensity);
 
  
 
// Estimate the normals.
 
// Estimate the normals.
pcl::NormalEstimation<pcl::PointXYZI, pcl::Normal> normalEstimation;
+
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(cloudIntensity);
+
normalEstimation.setInputCloud(cloud);
 
normalEstimation.setRadiusSearch(0.03);
 
normalEstimation.setRadiusSearch(0.03);
pcl::search::KdTree<pcl::PointXYZI>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZI>);
+
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
 
normalEstimation.setSearchMethod(kdtree);
 
normalEstimation.setSearchMethod(kdtree);
 
normalEstimation.compute(*normals);
 
normalEstimation.compute(*normals);
  
// Compute the intensity gradients.
+
// Spin image estimation object.
pcl::IntensityGradientEstimation < pcl::PointXYZI, pcl::Normal, pcl::IntensityGradient,
+
pcl::SpinImageEstimation<pcl::PointXYZ, pcl::Normal, SpinImage> si;
pcl::common::IntensityFieldAccessor<pcl::PointXYZI> > ge;
+
si.setInputCloud(cloud);
ge.setInputCloud(cloudIntensity);
+
si.setInputNormals(normals);
ge.setInputNormals(normals);
+
// Radius of the support cylinder.
ge.setRadiusSearch(0.03);
+
si.setRadiusSearch(0.02);
ge.compute(*gradients);
+
// Set the resolution of the spin image (the number of bins along one dimension).
 +
// Note: you must change the output histogram size to reflect this.
 +
si.setImageWidth(8);
  
// RIFT estimation object.
+
si.compute(*descriptors);
pcl::RIFTEstimation<pcl::PointXYZI, pcl::IntensityGradient, RIFT32> rift;
+
}</syntaxhighlight>
rift.setInputCloud(cloudIntensity);
 
rift.setSearchMethod(kdtree);
 
// Set the intensity gradients to use.
 
rift.setInputGradient(gradients);
 
// Radius, to get all neighbors within.
 
rift.setRadiusSearch(0.02);
 
// Set the number of bins to use in the distance dimension.
 
rift.setNrDistanceBins(4);
 
// Set the number of bins to use in the gradient orientation dimension.
 
rift.setNrGradientBins(8);
 
// Note: you must change the output histogram size to reflect the previous values.
 
  
rift.compute(*descriptors);
+
The Spin Image estimation object provides more methods for tuning the estimation, so checking the API is recommended.
}</syntaxhighlight>
 
  
 
<div style="background-color: #F8F8F8; border-style: dotted;">
 
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points, Search method, Intensity gradients, Radius, Distance bins, Gradient bins
+
* '''Input''': Points, Normals, Radius, Image resolution
* '''Output''': RIFT descriptors
+
* '''Output''': Spin images
* '''Publication''':
+
* '''Publications''':
** [http://www.cs.illinois.edu/~slazebni/publications/pami05.pdf A Sparse Texture Representation Using Local Affine Regions] (Svetlana Lazebnik et al., 2005)
+
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.4190&rep=rep1&type=pdf Spin-Images: A Representation for 3-D Surface Matching] (Andrew Edie Johnson, 1997)
* '''API''':
+
** [http://www.cs.jhu.edu/~misha/Papers/Johnson99.pdf Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes] (Andrew Edie Johnson and Martial Hebert, 1999)
** [http://docs.pointclouds.org/trunk/classpcl_1_1_r_i_f_t_estimation.html pcl::RIFTEstimation]
+
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_spin_image_estimation.html pcl::SpinImageEstimation]
** [http://docs.pointclouds.org/trunk/classpcl_1_1_intensity_gradient_estimation.html pcl::IntensityGradientEstimation]
+
* [http://robotica.unileon.es/~victorm/PCL_spin_image.tar.gz Download]
** [http://docs.pointclouds.org/trunk/structpcl_1_1_intensity_gradient.html pcl::IntensityGradient]
 
* [http://robotica.unileon.es/~victorm/PCL_RIFT.tar.gz Download]
 
 
</div>
 
</div>
  
  
==NARF==
+
==RIFT==
  
The Normal Aligned Radial Feature is the only descriptor here that does not take a point cloud as input. Instead, it works with range images. A range image is a common RGB image in which the distance to the point that corresponds to a certain pixel is encoded as a color value in the [http://en.wikipedia.org/wiki/Visible_spectrum visible light spectrum]: the points that are closer to the camera would be violet, while the points near the maximum sensor range would be red.
+
The Rotation-Invariant Feature Transform, like the spin image, takes some concepts from 2D features, in this case from the Scale-Invariant Feature Transform ([https://en.wikipedia.org/wiki/Scale-invariant_feature_transform SIFT]). It is the only descriptor seen here that requires intensity information in order to compute it (it can be obtained from the RGB color values). This means, of course, that you will not be able to use RIFT with standard XYZ clouds, you also need the texture.
  
NARF also requires us to find suitable keypoints to compute the descriptor for. NARF keypoints are located near an object's corners, and this also requires to find the borders (transitions from foreground to background), which are trivial to find with a range image. Because of this lengthy pipeline, I will describe the whole process in different sections.
+
In the first step, a circular patch (with the given radius) is fitted on the surface the point lies on. This patch is divided into concentric rings, according to the chosen distance bin size. Then, an histogram is populated with all the point's neighbors lying inside a sphere centered at that point and with the mentioned radius. The distance and the orientation of the intensity gradient at each point are considered. To make it rotation invariant, the angle between the gradient orientation and the vector pointing outward from the center of the patch is measured.
  
===Obtaining a range image===
 
  
Because we always work with point clouds, I will now explain how you can convert one into a range image, in order to use it for the NARF descriptor. PCL provides a couple of handy classes to perform the conversion, given that you fill the camera data correctly.
+
[[Image:RIFT.png|thumb|center|600px|RIFT feature values at 3 different locations in the descriptor (image from [http://www.cs.illinois.edu/~slazebni/publications/pami05.pdf original paper]).]]
  
A range image can be created in two ways. First, we can use spherical projection, which would give us an image similar to the ones produced by a LIDAR sensor. Second, we can use planar projection, which is better suitable for camera-like sensors as the Kinect or the Xtion, and will not have the characteristic distortion of the first one.
 
  
====Spherical projection====
+
The authors' original implementation uses 4 rings and 8 histogram orientations, which produce a descriptor of size 32. RIFT is not robust to texture flipping, though this was never considered a big issue.
 
 
The following code will take a point cloud and create a range image from it, using spherical projection:
 
  
 
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
 
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/range_image/range_image.h>
+
#include <pcl/point_types_conversion.h>
#include <pcl/visualization/range_image_visualizer.h>
+
#include <pcl/features/normal_3d.h>
 +
#include <pcl/features/intensity_gradient.h>
 +
#include <pcl/features/rift.h>
 +
 
 +
// A handy typedef.
 +
typedef pcl::Histogram<32> RIFT32;
  
 
int
 
int
 
main(int argc, char** argv)
 
main(int argc, char** argv)
 
{
 
{
// Object for storing the point cloud.
+
// Object for storing the point cloud with color information.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
+
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudColor(new pcl::PointCloud<pcl::PointXYZRGB>);
 +
// Object for storing the point cloud with intensity value.
 +
pcl::PointCloud<pcl::PointXYZI>::Ptr cloudIntensity(new pcl::PointCloud<pcl::PointXYZI>);
 +
// Object for storing the intensity gradients.
 +
pcl::PointCloud<pcl::IntensityGradient>::Ptr gradients(new pcl::PointCloud<pcl::IntensityGradient>);
 +
// Object for storing the normals.
 +
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
 +
// Object for storing the RIFT descriptor for each point.
 +
pcl::PointCloud<RIFT32>::Ptr descriptors(new pcl::PointCloud<RIFT32>());
  
 
// Read a PCD file from disk.
 
// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
+
if (pcl::io::loadPCDFile<pcl::PointXYZRGB>(argv[1], *cloudColor) != 0)
 
{
 
{
 
return -1;
 
return -1;
 
}
 
}
  
// Parameters needed by the range image object:
+
// Note: you would usually perform downsampling now. It has been omitted here
 +
// for simplicity, but be aware that computation can take a long time.
  
// Angular resolution is the angular distance between pixels.
+
// Convert the RGB to intensity.
// Kinect: 57° horizontal FOV, 43° vertical FOV, 640x480 (chosen here).
+
pcl::PointCloudXYZRGBtoXYZI(*cloudColor, *cloudIntensity);
// Xtion: 58° horizontal FOV, 45° vertical FOV, 640x480.
 
float angularResolutionX = (float)(57.0f / 640.0f * (M_PI / 180.0f));
 
float angularResolutionY = (float)(43.0f / 480.0f * (M_PI / 180.0f));
 
// Maximum horizontal and vertical angles. For example, for a full panoramic scan,
 
// the first would be 360º. Choosing values that adjust to the real sensor will
 
// decrease the time it takes, but don't worry. If the values are bigger than
 
// the real ones, the image will be automatically cropped to discard empty zones.
 
float maxAngleX = (float)(60.0f * (M_PI / 180.0f));
 
float maxAngleY = (float)(50.0f * (M_PI / 180.0f));
 
// Sensor pose. Thankfully, the cloud includes the data.
 
Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud->sensor_origin_[0],
 
cloud->sensor_origin_[1],
 
cloud->sensor_origin_[2])) *
 
Eigen::Affine3f(cloud->sensor_orientation_);
 
// Noise level. If greater than 0, values of neighboring points will be averaged.
 
// This would set the search radius (e.g., 0.03 == 3cm).
 
float noiseLevel = 0.0f;
 
// Minimum range. If set, any point closer to the sensor than this will be ignored.
 
float minimumRange = 0.0f;
 
// Border size. If greater than 0, a border of "unobserved" points will be left
 
// in the image when it is cropped.
 
int borderSize = 1;
 
  
// Range image object.
+
// Estimate the normals.
pcl::RangeImage rangeImage;
+
pcl::NormalEstimation<pcl::PointXYZI, pcl::Normal> normalEstimation;
rangeImage.createFromPointCloud(*cloud, angularResolutionX, angularResolutionY,
+
normalEstimation.setInputCloud(cloudIntensity);
maxAngleX, maxAngleY, sensorPose, pcl::RangeImage::CAMERA_FRAME,
+
normalEstimation.setRadiusSearch(0.03);
noiseLevel, minimumRange, borderSize);
+
pcl::search::KdTree<pcl::PointXYZI>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZI>);
 +
normalEstimation.setSearchMethod(kdtree);
 +
normalEstimation.compute(*normals);
  
// Visualize the image.
+
// Compute the intensity gradients.
pcl::visualization::RangeImageVisualizer viewer("Range image");
+
pcl::IntensityGradientEstimation < pcl::PointXYZI, pcl::Normal, pcl::IntensityGradient,
viewer.showRangeImage(rangeImage);
+
pcl::common::IntensityFieldAccessor<pcl::PointXYZI> > ge;
while (!viewer.wasStopped())
+
ge.setInputCloud(cloudIntensity);
{
+
ge.setInputNormals(normals);
viewer.spinOnce();
+
ge.setRadiusSearch(0.03);
// Sleep 100ms to go easy on the CPU.
+
ge.compute(*gradients);
pcl_sleep(0.1);
 
}
 
}</syntaxhighlight>
 
  
Here you can see an example of the output range image:
+
// RIFT estimation object.
 
+
pcl::RIFTEstimation<pcl::PointXYZI, pcl::IntensityGradient, RIFT32> rift;
 
+
rift.setInputCloud(cloudIntensity);
[[Image:Range_image_spherical.png|thumb|center|400px|Range image of a point cloud, using spherical projection.]]
+
rift.setSearchMethod(kdtree);
 +
// Set the intensity gradients to use.
 +
rift.setInputGradient(gradients);
 +
// Radius, to get all neighbors within.
 +
rift.setRadiusSearch(0.02);
 +
// Set the number of bins to use in the distance dimension.
 +
rift.setNrDistanceBins(4);
 +
// Set the number of bins to use in the gradient orientation dimension.
 +
rift.setNrGradientBins(8);
 +
// Note: you must change the output histogram size to reflect the previous values.
  
 +
rift.compute(*descriptors);
 +
}</syntaxhighlight>
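
If you want to inspect the result, each output point is a plain <span style="color:#FF1493">"pcl::Histogram<32>"</span> containing nothing but a float array. The following fragment is only a sketch (it assumes the <span style="color:#FF1493">"descriptors"</span> cloud from the example above, and would go at the end of its main() function); it prints the 32 bin values of the first point:

<syntaxhighlight lang=CPP>// Sketch: print the 32 bin values of the first RIFT descriptor. This assumes the
// "descriptors" cloud computed in the previous example (and #include <iostream>).
if (!descriptors->empty())
{
    for (int i = 0; i < 32; ++i)
        std::cout << descriptors->points[0].histogram[i] << " ";
    std::cout << std::endl;
}</syntaxhighlight>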
  
 
<div style="background-color: #F8F8F8; border-style: dotted;">
 
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points, Angular resolution, Maximum angle, Sensor pose, Coordinate frame, Noise level, Maximum range, Border size
+
* '''Input''': Points, Search method, Intensity gradients, Radius, Distance bins, Gradient bins
* '''Output''': Range image (spherical projection)
+
* '''Output''': RIFT descriptors
* '''Tutorials''':
+
* '''Publication''':
** [http://pointclouds.org/documentation/tutorials/range_image_creation.php How to create a range image from a point cloud]
+
** [http://www.cs.illinois.edu/~slazebni/publications/pami05.pdf A Sparse Texture Representation Using Local Affine Regions] (Svetlana Lazebnik et al., 2005)
** [http://pointclouds.org/documentation/tutorials/range_image_visualization.php How to visualize a range image]
+
* '''API''':
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image.html pcl::RangeImage]
+
** [http://docs.pointclouds.org/trunk/classpcl_1_1_r_i_f_t_estimation.html pcl::RIFTEstimation]
* [http://robotica.unileon.es/~victorm/PCL_range_image_spherical.tar.gz Download]
+
** [http://docs.pointclouds.org/trunk/classpcl_1_1_intensity_gradient_estimation.html pcl::IntensityGradientEstimation]
 +
** [http://docs.pointclouds.org/trunk/structpcl_1_1_intensity_gradient.html pcl::IntensityGradient]
 +
* [http://robotica.unileon.es/~victorm/PCL_RIFT.tar.gz Download]
 
</div>
 
</div>
  
  
====Planar projection====
+
==NARF==
 +
 
 +
The Normal Aligned Radial Feature is the only descriptor here that does not take a point cloud as input. Instead, it works with range images. A range image is a common RGB image in which the distance to the point that corresponds to a certain pixel is encoded as a color value in the [https://en.wikipedia.org/wiki/Visible_spectrum visible light spectrum]: the points that are closer to the camera would be violet, while the points near the maximum sensor range would be red.
 +
 
 +
NARF also requires us to find suitable keypoints to compute the descriptor for. NARF keypoints are located near an object's corners, and this also requires to find the borders (transitions from foreground to background), which are trivial to find with a range image. Because of this lengthy pipeline, I will describe the whole process in different sections.
 +
 
 +
===Obtaining a range image===
 +
 
 +
Because we always work with point clouds, I will now explain how you can convert one into a range image, in order to use it for the NARF descriptor. PCL provides a couple of handy classes to perform the conversion, given that you fill the camera data correctly.
 +
 
 +
A range image can be created in two ways. First, we can use spherical projection, which would give us an image similar to the ones produced by a LIDAR sensor. Second, we can use planar projection, which is better suitable for camera-like sensors as the Kinect or the Xtion, and will not have the characteristic distortion of the first one.
 +
 
 +
====Spherical projection====
  
As mentioned, planar projection will give better results with clouds taken from a depth camera:
+
The following code will take a point cloud and create a range image from it, using spherical projection:
  
 
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
 
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/range_image/range_image_planar.h>
+
#include <pcl/range_image/range_image.h>
 
#include <pcl/visualization/range_image_visualizer.h>
 
#include <pcl/visualization/range_image_visualizer.h>
  
Line 773: Line 769:
 
}
 
}
  
// Parameters needed by the planar range image object:
+
// Parameters needed by the range image object:
  
// Image size. Both Kinect and Xtion work at 640x480.
+
// Angular resolution is the angular distance between pixels.
int imageSizeX = 640;
+
// Kinect: 57° horizontal FOV, 43° vertical FOV, 640x480 (chosen here).
int imageSizeY = 480;
+
// Xtion: 58° horizontal FOV, 45° vertical FOV, 640x480.
// Center of projection. here, we choose the middle of the image.
+
float angularResolutionX = (float)(57.0f / 640.0f * (M_PI / 180.0f));
float centerX = 640.0f / 2.0f;
+
float angularResolutionY = (float)(43.0f / 480.0f * (M_PI / 180.0f));
float centerY = 480.0f / 2.0f;
+
// Maximum horizontal and vertical angles. For example, for a full panoramic scan,
// Focal length. The value seen here has been taken from the original depth images.
+
// the first would be 360º. Choosing values that adjust to the real sensor will
// It is safe to use the same value vertically and horizontally.
+
// decrease the time it takes, but don't worry. If the values are bigger than
float focalLengthX = 525.0f, focalLengthY = focalLengthX;
+
// the real ones, the image will be automatically cropped to discard empty zones.
 +
float maxAngleX = (float)(60.0f * (M_PI / 180.0f));
 +
float maxAngleY = (float)(50.0f * (M_PI / 180.0f));
 
// Sensor pose. Thankfully, the cloud includes the data.
 
// Sensor pose. Thankfully, the cloud includes the data.
 
Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud->sensor_origin_[0],
 
Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud->sensor_origin_[0],
Line 794: Line 792:
 
// Minimum range. If set, any point closer to the sensor than this will be ignored.
 
// Minimum range. If set, any point closer to the sensor than this will be ignored.
 
float minimumRange = 0.0f;
 
float minimumRange = 0.0f;
 +
// Border size. If greater than 0, a border of "unobserved" points will be left
 +
// in the image when it is cropped.
 +
int borderSize = 1;
  
// Planar range image object.
+
// Range image object.
pcl::RangeImagePlanar rangeImagePlanar;
+
pcl::RangeImage rangeImage;
rangeImagePlanar.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,
+
rangeImage.createFromPointCloud(*cloud, angularResolutionX, angularResolutionY,
centerX, centerY, focalLengthX, focalLengthX,
+
maxAngleX, maxAngleY, sensorPose, pcl::RangeImage::CAMERA_FRAME,
sensorPose, pcl::RangeImage::CAMERA_FRAME,
+
noiseLevel, minimumRange, borderSize);
noiseLevel, minimumRange);
 
  
 
// Visualize the image.
 
// Visualize the image.
pcl::visualization::RangeImageVisualizer viewer("Planar range image");
+
pcl::visualization::RangeImageVisualizer viewer("Range image");
viewer.showRangeImage(rangeImagePlanar);
+
viewer.showRangeImage(rangeImage);
 
while (!viewer.wasStopped())
 
while (!viewer.wasStopped())
 
{
 
{
Line 813: Line 813:
 
}</syntaxhighlight>
 
}</syntaxhighlight>
  
 +
Here you can see an example of the output range image:
  
[[Image:Range_image_planar.png|thumb|center|400px|Range image of a point cloud, using planar projection.]]
+
 
 +
[[Image:Range_image_spherical.png|thumb|center|400px|Range image of a point cloud, using spherical projection.]]
  
  
 
<div style="background-color: #F8F8F8; border-style: dotted;">
 
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points, Image size, Projection center, Focal length, Sensor pose, Coordinate frame, Noise level, Maximum range
+
* '''Input''': Points, Angular resolution, Maximum angle, Sensor pose, Coordinate frame, Noise level, Maximum range, Border size
* '''Output''': Range image (planar projection)
+
* '''Output''': Range image (spherical projection)
 
* '''Tutorials''':
 
* '''Tutorials''':
 
** [http://pointclouds.org/documentation/tutorials/range_image_creation.php How to create a range image from a point cloud]
 
** [http://pointclouds.org/documentation/tutorials/range_image_creation.php How to create a range image from a point cloud]
 
** [http://pointclouds.org/documentation/tutorials/range_image_visualization.php How to visualize a range image]
 
** [http://pointclouds.org/documentation/tutorials/range_image_visualization.php How to visualize a range image]
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image_planar.html pcl::RangeImagePlanar]
+
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image.html pcl::RangeImage]
* [http://robotica.unileon.es/~victorm/PCL_range_image_planar.tar.gz Download]
+
* [http://robotica.unileon.es/~victorm/PCL_range_image_spherical.tar.gz Download]
 
</div>
 
</div>
  
  
If you prefer to do the conversion in real time while you inspect the cloud, PCL ships with an [https://github.com/PointCloudLibrary/pcl/tree/master/doc/tutorials/content/sources/openni_range_image_visualization example] that fetches an <span style="color:#FF1493">"openni_wrapper::DepthImage"</span> from an OpenNI device and creates the range image from it. You can adapt the code of the [[PCL/OpenNI_tutorial_1:_Installing_and_testing#Testing | example]] example from tutorial 1 to save it to disk with the function [http://docs.pointclouds.org/trunk/group__io.html#ga7291a029cdcde32ca3639d07dc6491b9 pcl::io::saveRangeImagePlanarFilePNG()].
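
As a quick illustration, saving a planar range image to disk could look like the following sketch (it assumes the <span style="color:#FF1493">"rangeImagePlanar"</span> object from the planar projection example above, plus the PNG I/O header; the file name is arbitrary):

<syntaxhighlight lang=CPP>#include <pcl/io/png_io.h>

// Sketch: save the planar range image from the previous example to disk as a PNG.
// "rangeImagePlanar" is assumed to have been created as shown above.
pcl::io::saveRangeImagePlanarFilePNG("rangeImage.png", rangeImagePlanar);</syntaxhighlight>
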
===Extracting borders===

NARF keypoints are located near the edges of objects in the range image, so in order to find them, we first have to extract the borders. A border is defined as an abrupt change from foreground to background. In a range image, this can be easily seen because there is a "jump" in the depth value of two adjacent pixels.

[[Image:Range_image_border_detection.png|thumb|center|400px|Border detection on a range image (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).]]

There are three types of borders. ''Object borders'' consist of the pixels (or points) located on the very edge of an object (the outermost points still belonging to the object). ''Shadow borders'' are points in the background on the edge of occlusions (empty areas in the background due to the objects in front covering them). Notice that, when the cloud is seen from the sensor's viewpoint, object and shadow points will seem adjacent. Finally, ''veil points'' are points interpolated between the previous two which appear in scans taken with LIDAR sensors, so we do not have to worry about them here.

<center><gallery widths=300px>
File:Range_image_border_type.png | Types of borders (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).
File:Range_image_borders_example.png | Example of object and shadow borders on a cloud.
</gallery></center>

The algorithm basically compares a point's depth with the values of its neighbors and, if a big difference is found, we know it is due to a border. Points closer to the sensor will be marked as object borders, and the other ones as shadow borders.

PCL provides a class for extracting the borders of a range image:

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/range_image/range_image_planar.h>
#include <pcl/features/range_image_border_extractor.h>
#include <pcl/visualization/range_image_visualizer.h>

int
main(int argc, char** argv)
{
    // Object for storing the point cloud.
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    // Object for storing the borders.
    pcl::PointCloud<pcl::BorderDescription>::Ptr borders(new pcl::PointCloud<pcl::BorderDescription>);

    // Read a PCD file from disk.
    if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
    {
        return -1;
    }

    // Convert the cloud to a range image.
    int imageSizeX = 640, imageSizeY = 480;
    float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);
    float focalLengthX = 525.0f, focalLengthY = focalLengthX;
    Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud->sensor_origin_[0],
                                 cloud->sensor_origin_[1],
                                 cloud->sensor_origin_[2])) *
                                 Eigen::Affine3f(cloud->sensor_orientation_);
    float noiseLevel = 0.0f, minimumRange = 0.0f;
    pcl::RangeImagePlanar rangeImage;
    rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,
               centerX, centerY, focalLengthX, focalLengthY,
               sensorPose, pcl::RangeImage::CAMERA_FRAME,
               noiseLevel, minimumRange);

    // Border extractor object.
    pcl::RangeImageBorderExtractor borderExtractor(&rangeImage);

    borderExtractor.compute(*borders);

    // Visualize the borders.
    pcl::visualization::RangeImageVisualizer* viewer = NULL;
    viewer = pcl::visualization::RangeImageVisualizer::getRangeImageBordersWidget(rangeImage,
             -std::numeric_limits<float>::infinity(),
             std::numeric_limits<float>::infinity(),
             false, *borders, "Borders");

    while (!viewer->wasStopped())
    {
        viewer->spinOnce();
        // Sleep 100ms to go easy on the CPU.
        pcl_sleep(0.1);
    }
}</syntaxhighlight>

[[Image:Range_image_borders.png|thumb|center|400px|Borders found on the range image.]]

You can use the extractor's <span style="color:#FF1493">"getParameters()"</span> function to get a [http://docs.pointclouds.org/trunk/structpcl_1_1_range_image_border_extractor_1_1_parameters.html pcl::RangeImageBorderExtractor::Parameters] struct with the settings that will be used.
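
If you need to process the borders instead of just visualizing them, each element of the output cloud is a <span style="color:#FF1493">"pcl::BorderDescription"</span> whose traits bitset tells you what kind of border (if any) that pixel belongs to. The following fragment is only a sketch, assuming the <span style="color:#FF1493">"borders"</span> cloud computed in the example above:

<syntaxhighlight lang=CPP>// Sketch: count the object and shadow border points found in the range image.
// "borders" is assumed to be the cloud computed in the previous example
// (and #include <iostream> is needed for the output).
int objectBorders = 0, shadowBorders = 0;
for (size_t i = 0; i < borders->points.size(); ++i)
{
    if (borders->points[i].traits[pcl::BORDER_TRAIT__OBSTACLE_BORDER])
        objectBorders++;
    if (borders->points[i].traits[pcl::BORDER_TRAIT__SHADOW_BORDER])
        shadowBorders++;
}
std::cout << "Object border points: " << objectBorders
          << ", shadow border points: " << shadowBorders << std::endl;</syntaxhighlight>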
 
  
 
<div style="background-color: #F8F8F8; border-style: dotted;">
 
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Range image
+
* '''Input''': Points, Image size, Projection center, Focal length, Sensor pose, Coordinate frame, Noise level, Maximum range
* '''Output''': Borders
+
* '''Output''': Range image (planar projection)
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/range_image_border_extraction.php How to extract borders from range images]
+
* '''Tutorials''':
* '''Publication''':
+
** [http://pointclouds.org/documentation/tutorials/range_image_creation.php How to create a range image from a point cloud]
** [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)
+
** [http://pointclouds.org/documentation/tutorials/range_image_visualization.php How to visualize a range image]
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image_border_extractor.html pcl::RangeImageBorderExtractor]
+
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image_planar.html pcl::RangeImagePlanar]
* [http://robotica.unileon.es/~victorm/PCL_range_image_borders.tar.gz Download]
+
* [http://robotica.unileon.es/~victorm/PCL_range_image_planar.tar.gz Download]
 
</div>
 
</div>
  
  
===Finding keypoints===
+
If you prefer to do the conversion in real time while you inspect the cloud, PCL ships with an [https://github.com/PointCloudLibrary/pcl/tree/master/doc/tutorials/content/sources/openni_range_image_visualization example] that fetches an <span style="color:#FF1493">"openni_wrapper::DepthImage"</span> from an OpenNI device and creates the range image from it. You can adapt the code of the [[PCL/OpenNI_tutorial_1:_Installing_and_testing#Testing | example]] example from tutorial 1 to save it to disk with the function [http://docs.pointclouds.org/trunk/group__io.html#ga7291a029cdcde32ca3639d07dc6491b9 pcl::io::saveRangeImagePlanarFilePNG()].
 +
 
 +
===Extracting borders===
 +
 
 +
NARF keypoints are located near the edges of objects in the range image, so in order to find them, we first have to extract the borders. A border is defined as an abrupt change from foreground to background. In a range image, this can be easily seen because there is a "jump" in the depth value of two adjacent pixels.
  
Citing the [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original publication]:''
 
  
''"We have the following requirements for our interest point extraction procedure:
+
[[Image:Range_image_border_detection.png|thumb|center|400px|Border detection on a range image (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).]]
  
# ''It must take information about borders and the surface structure into account.''
 
# ''It must select positions that can be reliably detected even if the object is observed from another perspective.''
 
# ''The points must be on positions that provide stable areas for normal estimation or the descriptor calculation in general."''
 
  
The procedure is the following: for every point in the range image, a score is computed that conveys how much the surface changes in its neighborhood (this is tuned with the ''support size'' σ, which is the diameter of the sphere used to find neighboring points). Also, the dominant direction of this change is computed. Then, this direction is compared with those of the neighbors, trying to find how stable the point is (if the directions are very different, that means the point is not stable, and that the surface around changes a lot). Points that are near the object's corners (but not exactly on the very edge) will be good keypoints, yet stable enough.
+
There are three types of borders. ''Object borders'' consist of the pixels (or points) located on the very edge of an object (the outermost points still belonging to the object). ''Shadow borders'' are points in the background on the edge of occlusions (empty areas in the background due to the objects in front covering them). Notice that, when the cloud is seen from the sensor's viewpoint, object and shadow points will seem adjacent. Finally, ''veil points'' are points interpolated between the previous two which appear in scans taken with LIDAR sensors, so we do not have to worry about them here.
  
  
 
<center><gallery widths=300px>
 
<center><gallery widths=300px>
File:NARF_keypoints.png | NARF keypoints (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).
+
File:Range_image_border_type.png | Types of borders (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).
File:NARF_keypoints_support_sizes.png | Interest regions with a support size of 20cm (up) and 1m (down) (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).
+
File:Range_image_borders_example.png | Example of object and shadow borders on a cloud.
 
</gallery></center>
 
</gallery></center>
  
  
In PCL, NARF keypoints can be found this way:
+
The algorithm basically compares a point's depth with the values of its neighbors, and if a big difference is found, we know it is due to a border. Points closer to the sensor will be marked as object borders, and the other ones as shadow borders.
 +
 
 +
PCL provides a class for extracting borders of a range image:
  
 
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
 
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
 
#include <pcl/range_image/range_image_planar.h>
 
#include <pcl/range_image/range_image_planar.h>
 
#include <pcl/features/range_image_border_extractor.h>
 
#include <pcl/features/range_image_border_extractor.h>
#include <pcl/keypoints/narf_keypoint.h>
 
 
#include <pcl/visualization/range_image_visualizer.h>
 
#include <pcl/visualization/range_image_visualizer.h>
  
Line 954: Line 938:
 
// Object for storing the point cloud.
 
// Object for storing the point cloud.
 
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
 
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
// Object for storing the keypoints' indices.
+
// Object for storing the borders.
pcl::PointCloud<int>::Ptr keypoints(new pcl::PointCloud<int>);
+
pcl::PointCloud<pcl::BorderDescription>::Ptr borders(new pcl::PointCloud<pcl::BorderDescription>);
  
 
// Read a PCD file from disk.
 
// Read a PCD file from disk.
Line 978: Line 962:
 
noiseLevel, minimumRange);
 
noiseLevel, minimumRange);
  
pcl::RangeImageBorderExtractor borderExtractor;
+
// Border extractor object.
// Keypoint detection object.
+
pcl::RangeImageBorderExtractor borderExtractor(&rangeImage);
pcl::NarfKeypoint detector(&borderExtractor);
+
 
detector.setRangeImage(&rangeImage);
+
borderExtractor.compute(*borders);
// The support size influences how big the surface of interest will be,
 
// when finding keypoints from the border information.
 
detector.getParameters().support_size = 0.2f;
 
  
detector.compute(*keypoints);
+
// Visualize the borders.
 +
pcl::visualization::RangeImageVisualizer* viewer = NULL;
 +
viewer = pcl::visualization::RangeImageVisualizer::getRangeImageBordersWidget(rangeImage,
 +
-std::numeric_limits<float>::infinity(),
 +
std::numeric_limits<float>::infinity(),
 +
false, *borders, "Borders");
  
// Visualize the keypoints.
+
while (!viewer->wasStopped())
pcl::visualization::RangeImageVisualizer viewer("NARF keypoints");
 
viewer.showRangeImage(rangeImage);
 
for (size_t i = 0; i < keypoints->points.size(); ++i)
 
 
{
 
{
viewer.markPoint(keypoints->points[i] % rangeImage.width,
+
viewer->spinOnce();
keypoints->points[i] / rangeImage.width,
 
// Set the color of the pixel to red (the background
 
// circle is already that color). All other parameters
 
// are left untouched, check the API for more options.
 
pcl::visualization::Vector3ub(1.0f, 0.0f, 0.0f));
 
}
 
 
 
while (!viewer.wasStopped())
 
{
 
viewer.spinOnce();
 
 
// Sleep 100ms to go easy on the CPU.
 
// Sleep 100ms to go easy on the CPU.
 
pcl_sleep(0.1);
 
pcl_sleep(0.1);
Line 1,010: Line 983:
  
  
[[Image:Range_image_NARF_keypoints.png|thumb|center|400px|NARF keypoints found on the range image.]]
+
[[Image:Range_image_borders.png|thumb|center|400px|Borders found on the range image.]]
  
 +
 +
You can use the extractor's <span style="color:#FF1493">"getParameters()"</span> function to get a [http://docs.pointclouds.org/trunk/structpcl_1_1_range_image_border_extractor_1_1_parameters.html pcl::RangeImageBorderExtractor::Parameters] struct with the settings that will be used.
  
 
<div style="background-color: #F8F8F8; border-style: dotted;">
 
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Range image, Border extractor, Support Size
+
* '''Input''': Range image
* '''Output''': NARF keypoints
+
* '''Output''': Borders
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/narf_keypoint_extraction.php How to extract NARF keypoint from a range image]
+
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/range_image_border_extraction.php How to extract borders from range images]
 
* '''Publication''':
 
* '''Publication''':
** [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)
+
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_narf_keypoint.html pcl::NarfKeypoint]
+
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image_border_extractor.html pcl::RangeImageBorderExtractor]
* [http://robotica.unileon.es/~victorm/PCL_NARF_keypoints.tar.gz Download]
+
* [http://robotica.unileon.es/~victorm/PCL_range_image_borders.tar.gz Download]
 
</div>
 
</div>
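
The detector returns image indices rather than 3D points. If you need the actual coordinates of the keypoints (for instance, to compute some other descriptor at those locations), you can read them back from the range image. This is only a sketch, assuming the <span style="color:#FF1493">"keypoints"</span> and <span style="color:#FF1493">"rangeImage"</span> objects from the example above:

<syntaxhighlight lang=CPP>// Sketch: convert the keypoint indices into a cloud of 3D points, assuming
// the "keypoints" and "rangeImage" objects computed in the previous example.
pcl::PointCloud<pcl::PointXYZ>::Ptr keypointCloud(new pcl::PointCloud<pcl::PointXYZ>);
keypointCloud->points.resize(keypoints->points.size());
for (size_t i = 0; i < keypoints->points.size(); ++i)
    keypointCloud->points[i].getVector3fMap() =
        rangeImage.points[keypoints->points[i]].getVector3fMap();</syntaxhighlight>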
  
  
===Computing the descriptor===

We have created the range image from a point cloud, and we have extracted the borders in order to find good keypoints. Now it is time to compute the NARF descriptor for each keypoint.

The NARF descriptor encodes information about surface changes around a point. First, a local range patch is created around the point. It is like a small range image centered at that point, aligned with the normal (it would seem as if we were looking at the point along the normal). Then, a star pattern with ''n'' beams is overlaid onto the patch, also centered at the point. For every beam, a value is computed that reflects how much the surface under it changes. The stronger the change is, and the closer to the center it is, the higher the final value will be. The ''n'' resulting values compose the final descriptor.

[[Image:NARF_descriptor.png|thumb|center|600px|Computing the NARF descriptor for a keypoint (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).]]

The descriptor right now is not invariant to rotations around the normal. To achieve this, the whole possible 360 degrees are binned into a histogram. The value of each bin is computed from the descriptor values according to the angle. Then, the bin with the highest value is considered the dominant orientation, and the descriptor is shifted according to it.

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/range_image/range_image_planar.h>
#include <pcl/features/range_image_border_extractor.h>
#include <pcl/keypoints/narf_keypoint.h>
#include <pcl/features/narf_descriptor.h>

int
main(int argc, char** argv)
{
    // Object for storing the point cloud.
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    // Object for storing the keypoints' indices.
    pcl::PointCloud<int>::Ptr keypoints(new pcl::PointCloud<int>);
    // Object for storing the NARF descriptors.
    pcl::PointCloud<pcl::Narf36>::Ptr descriptors(new pcl::PointCloud<pcl::Narf36>);

    // Read a PCD file from disk.
    if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
    {
        return -1;
    }

    // Convert the cloud to a range image.
    int imageSizeX = 640, imageSizeY = 480;
    float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);
    float focalLengthX = 525.0f, focalLengthY = focalLengthX;
    Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud->sensor_origin_[0],
                                 cloud->sensor_origin_[1],
                                 cloud->sensor_origin_[2])) *
                                 Eigen::Affine3f(cloud->sensor_orientation_);
    float noiseLevel = 0.0f, minimumRange = 0.0f;
    pcl::RangeImagePlanar rangeImage;
    rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,
               centerX, centerY, focalLengthX, focalLengthY,
               sensorPose, pcl::RangeImage::CAMERA_FRAME,
               noiseLevel, minimumRange);

    // Extract the keypoints.
    pcl::RangeImageBorderExtractor borderExtractor;
    pcl::NarfKeypoint detector(&borderExtractor);
    detector.setRangeImage(&rangeImage);
    detector.getParameters().support_size = 0.2f;
    detector.compute(*keypoints);

    // The NARF estimator needs the indices in a vector, not a cloud.
    std::vector<int> keypoints2;
    keypoints2.resize(keypoints->points.size());
    for (unsigned int i = 0; i < keypoints->size(); ++i)
        keypoints2[i] = keypoints->points[i];
    // NARF estimation object.
    pcl::NarfDescriptor narf(&rangeImage, &keypoints2);
    // Support size: choose the same value you used for keypoint extraction.
    narf.getParameters().support_size = 0.2f;
    // If true, the rotation invariant version of NARF will be used. The histogram
    // will be shifted according to the dominant orientation to provide robustness to
    // rotations around the normal.
    narf.getParameters().rotation_invariant = true;

    narf.compute(*descriptors);
}</syntaxhighlight>

<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Range image, Keypoints, Support size
* '''Output''': NARF descriptors
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/narf_feature_extraction.php How to extract NARF Features from a range image]
* '''Publication''':
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_narf_descriptor.html pcl::NarfDescriptor]
* [http://robotica.unileon.es/~victorm/PCL_NARF_descriptor.tar.gz Download]
</div>
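
Each <span style="color:#FF1493">"pcl::Narf36"</span> point stores the 36 descriptor values together with the pose (x, y, z, roll, pitch, yaw) of the local frame it was computed in, which can come in handy for pose estimation later. Here is a minimal sketch for inspecting it, assuming the <span style="color:#FF1493">"descriptors"</span> cloud from the example above:

<syntaxhighlight lang=CPP>// Sketch: print the pose stored in the first NARF descriptor, assuming the
// "descriptors" cloud computed in the previous example (and #include <iostream>).
if (!descriptors->empty())
{
    const pcl::Narf36& d = descriptors->points[0];
    std::cout << "Position: " << d.x << " " << d.y << " " << d.z << std::endl;
    std::cout << "Orientation (roll, pitch, yaw): " << d.roll << " " << d.pitch << " " << d.yaw << std::endl;
    // The feature values themselves are in d.descriptor[0] ... d.descriptor[35].
}</syntaxhighlight>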
  
  
==RoPS==

The Rotational Projection Statistics (RoPS) feature is a bit different from the other descriptors because it works with a triangle mesh, so a previous triangulation step is needed to generate this mesh from the cloud. Apart from that, most concepts are similar.

In order to compute RoPS for a keypoint, the local surface is cropped according to a support radius, so only points and triangles lying inside are taken into account. Then, a local reference frame (LRF) is computed, giving the descriptor its rotational invariance. A coordinate system is created with the point as the origin, and the axes aligned with the LRF. Then, for every axis, several steps are performed.

First, the local surface is rotated around the current axis. The angle is determined by one of the parameters, which sets the number of rotations. Then, all points in the local surface are projected onto the XY, XZ and YZ planes. For each one, statistical information about the distribution of the projected points is computed, and concatenated to form the final descriptor.

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/point_types_conversion.h>
#include <pcl/features/normal_3d.h>
#include <pcl/surface/gp3.h>
#include <pcl/features/rops_estimation.h>

// A handy typedef.
typedef pcl::Histogram<135> ROPS135;

int
main(int argc, char** argv)
{
    // Object for storing the point cloud.
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    // Object for storing the normals.
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    // Object for storing both the points and the normals.
    pcl::PointCloud<pcl::PointNormal>::Ptr cloudNormals(new pcl::PointCloud<pcl::PointNormal>);
    // Object for storing the ROPS descriptor for each point.
    pcl::PointCloud<ROPS135>::Ptr descriptors(new pcl::PointCloud<ROPS135>());

    // Read a PCD file from disk.
    if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
    {
        return -1;
    }

    // Estimate the normals.
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
    normalEstimation.setInputCloud(cloud);
    normalEstimation.setRadiusSearch(0.03);
    pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
    normalEstimation.setSearchMethod(kdtree);
    normalEstimation.compute(*normals);

    // Perform triangulation.
    pcl::concatenateFields(*cloud, *normals, *cloudNormals);
    pcl::search::KdTree<pcl::PointNormal>::Ptr kdtree2(new pcl::search::KdTree<pcl::PointNormal>);
    kdtree2->setInputCloud(cloudNormals);
    pcl::GreedyProjectionTriangulation<pcl::PointNormal> triangulation;
    pcl::PolygonMesh triangles;
    triangulation.setSearchRadius(0.025);
    triangulation.setMu(2.5);
    triangulation.setMaximumNearestNeighbors(100);
    triangulation.setMaximumSurfaceAngle(M_PI / 4); // 45 degrees.
    triangulation.setNormalConsistency(false);
    triangulation.setMinimumAngle(M_PI / 18); // 10 degrees.
    triangulation.setMaximumAngle(2 * M_PI / 3); // 120 degrees.
    triangulation.setInputCloud(cloudNormals);
    triangulation.setSearchMethod(kdtree2);
    triangulation.reconstruct(triangles);

    // Note: you should only compute descriptors for chosen keypoints. It has
    // been omitted here for simplicity.

    // RoPS estimation object.
    pcl::ROPSEstimation<pcl::PointXYZ, ROPS135> rops;
    rops.setInputCloud(cloud);
    rops.setSearchMethod(kdtree);
    rops.setRadiusSearch(0.03);
    rops.setTriangles(triangles.polygons);
    // Number of partition bins that is used for distribution matrix calculation.
    rops.setNumberOfPartitionBins(5);
    // The greater the number of rotations is, the bigger the resulting descriptor.
    // Make sure to change the histogram size accordingly.
    rops.setNumberOfRotations(3);
    // Support radius that is used to crop the local surface of the point.
    rops.setSupportRadius(0.025);

    rops.compute(*descriptors);
}</syntaxhighlight>

<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points, Triangles, Search method, Support radius, Number of rotations, Number of partition bins
* '''Output''': RoPS descriptors
* '''Publication''':
** [http://arxiv.org/pdf/1304.3192v1.pdf Rotational Projection Statistics for 3D Local Surface Description and Object Recognition] (Yulan Guo et al., 2013)
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_r_o_p_s_estimation.html pcl::ROPSEstimation]
* [http://robotica.unileon.es/~victorm/PCL_RoPS.tar.gz Download]
</div>

=Global descriptors=

Global descriptors encode object geometry. They are not computed for individual points, but for a whole cluster that represents an object. Because of this, a preprocessing step ([[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Segmentation|segmentation]]) is required, in order to retrieve possible candidates.

Global descriptors are used for object recognition and classification, geometric analysis (object type, shape...), and pose estimation.

You should also know that many local descriptors can also be used as global ones. This can be done with descriptors that use a radius to search for neighbors (as PFH does). The trick is to compute it for one single point in the object cluster, and set the radius to the maximum possible distance between any two points (so all points in the cluster are considered as neighbors).
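
As an illustration of that trick, here is a minimal sketch (not part of the original examples) that computes a single PFH descriptor for a whole cluster: the estimation is restricted to one point, and the search radius is derived from the cluster's bounding box diagonal so that every other point counts as a neighbor. The input file, radii and the choice of the first point are assumptions you should adapt to your data:

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/common/common.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/pfh.h>

int
main(int argc, char** argv)
{
    // Cloud for storing the object cluster (already segmented out of the scene).
    pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    pcl::PointCloud<pcl::PFHSignature125>::Ptr descriptor(new pcl::PointCloud<pcl::PFHSignature125>);

    // Read a PCD file from disk.
    if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
    {
        return -1;
    }

    // Estimate the normals, as usual.
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
    normalEstimation.setInputCloud(object);
    normalEstimation.setRadiusSearch(0.03);
    pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
    normalEstimation.setSearchMethod(kdtree);
    normalEstimation.compute(*normals);

    // Radius: the diagonal of the cluster's bounding box is always at least as
    // big as the distance between any two points of the cluster.
    pcl::PointXYZ minPt, maxPt;
    pcl::getMinMax3D(*object, minPt, maxPt);
    float radius = (maxPt.getVector3fMap() - minPt.getVector3fMap()).norm();

    // PFH estimation object, restricted to a single point (here, the first one).
    pcl::PFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::PFHSignature125> pfh;
    pfh.setInputCloud(object);
    pfh.setInputNormals(normals);
    pfh.setSearchMethod(kdtree);
    pcl::IndicesPtr indices(new std::vector<int>(1, 0));
    pfh.setIndices(indices);
    pfh.setRadiusSearch(radius);

    // The output will contain a single "global" PFH descriptor for the cluster.
    pfh.compute(*descriptor);
}</syntaxhighlight>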
 
  
 
==VFH==

The Viewpoint Feature Histogram is based on the FPFH. Because the latter is invariant to the object's pose, the authors decided to expand it by including information about the viewpoint. Also, the FPFH is estimated once for the whole cluster, not for every point.

The VFH is made up of two parts: a viewpoint direction component and an extended FPFH component. To compute the first one, the object's [http://en.wikipedia.org/wiki/Centroid centroid] is found, which is the point that results from averaging the X, Y and Z coordinates of all points. Then, the vector between the viewpoint (the position of the sensor) and this centroid is computed and normalized. Finally, for all points in the cluster, the angle between this vector and their normal is calculated, and the result is binned into a histogram. The vector is translated to each point when computing the angle because this makes the descriptor scale invariant.

The second component is computed like the FPFH (which results in 3 histograms for the 3 angular features, α, φ and θ), with some differences: it is only computed for the centroid, using the computed viewpoint direction vector as its normal (as the centroid, obviously, does not have a normal), and setting all the cluster's points as neighbors.

<center><gallery widths=300px>
File:VFH_viewpoint_component.png | Viewpoint component of the VFH (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).
File:VFH_extended_FPFH_component.png | Extended FPFH component of the VFH (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).
</gallery></center>

The resulting 4 histograms (1 for the viewpoint component, 3 for the extended FPFH component) are concatenated to build the final VFH descriptor. By default, the bins are normalized using the total number of points in the cluster. This makes the VFH descriptor invariant to scale.

[[Image:VFH_histogram.png|thumb|center|400px| VFH histogram (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).]]

The PCL implementation computes an additional fifth histogram with the distances of the cluster points to the centroid (the Shape Distribution Component, SDC), increasing the size of the output descriptor from 263 to 308. The SDC is taken from the CVFH descriptor that we will see in the next section, and makes the result more robust.

The VFH of an already clustered object can be computed this way:

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/vfh.h>

int
main(int argc, char** argv)
{
    // Cloud for storing the object.
    pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
    // Object for storing the normals.
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    // Object for storing the VFH descriptor.
    pcl::PointCloud<pcl::VFHSignature308>::Ptr descriptor(new pcl::PointCloud<pcl::VFHSignature308>);

    // Note: you should have performed preprocessing to cluster out the object
    // from the cloud, and save it to this individual file.

    // Read a PCD file from disk.
    if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
    {
        return -1;
    }

    // Estimate the normals.
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
    normalEstimation.setInputCloud(object);
    normalEstimation.setRadiusSearch(0.03);
    pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
    normalEstimation.setSearchMethod(kdtree);
    normalEstimation.compute(*normals);

    // VFH estimation object.
    pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> vfh;
    vfh.setInputCloud(object);
    vfh.setInputNormals(normals);
    vfh.setSearchMethod(kdtree);
    // Optionally, we can normalize the bins of the resulting histogram,
    // using the total number of points.
    vfh.setNormalizeBins(true);
    // Also, we can normalize the SDC with the maximum size found between
    // the centroid and any of the cluster's points.
    vfh.setNormalizeDistance(false);

    vfh.compute(*descriptor);
}</syntaxhighlight>

Because only one VFH descriptor is computed for the whole cluster, the size of the cloud object that stores the result will be 1.
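
To get a quick look at the resulting histogram, you can plot it with PCL's histogram visualizer. This is only a sketch, assuming the <span style="color:#FF1493">"descriptor"</span> cloud computed in the example above:

<syntaxhighlight lang=CPP>#include <pcl/visualization/histogram_visualizer.h>

// Sketch: plot the 308-bin VFH histogram computed in the previous example.
pcl::visualization::PCLHistogramVisualizer viewer;
// We need to specify the size of the descriptor beforehand.
viewer.addFeatureHistogram(*descriptor, 308);

viewer.spin();</syntaxhighlight>
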
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points (cluster), Normals, Search method, [Normalize bins], [Normalize SDC]
* '''Output''': VFH descriptor
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/vfh_estimation.php Estimating VFH signatures for a set of points]
* '''Publication''':
** [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf Fast 3D Recognition and Pose Using the Viewpoint Feature Histogram] (Radu Bogdan Rusu et al., 2010)
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_v_f_h_estimation.html pcl::VFHEstimation]
* [http://robotica.unileon.es/~victorm/PCL_VFH.tar.gz Download]
</div>
  
 
<div style="background-color: #F8F8F8; border-style: dotted;">
 
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points (cluster), Normals, Search method, Angle threshold, Curvature threshold, [Normalize bins], [Cluster tolerance], [Minimum points]
+
* '''Input''': Points, Triangles, Search method, Support radius, Number of rotations, Number of partition bins
* '''Output''': CVFH descriptors
+
* '''Output''': RoPS descriptors
 
* '''Publication''':
 
* '''Publication''':
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6130296 CAD-Model Recognition and 6DOF Pose Estimation Using 3D Cues] (requires IEEE Xplore subscription) (Aitor Aldoma et al., 2011)
+
** [http://arxiv.org/pdf/1304.3192v1.pdf Rotational Projection Statistics for 3D Local Surface Description and Object Recognition] (Yulan Guo et al., 2013)
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_c_v_f_h_estimation.html pcl::CVFHEstimation]
+
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_r_o_p_s_estimation.html pcl::ROPSEstimation]
* [http://robotica.unileon.es/~victorm/PCL_CVFH.tar.gz Download]
+
* [http://robotica.unileon.es/~victorm/PCL_RoPS.tar.gz Download]
 
</div>
 
</div>
  
  
=Global descriptors=

Global descriptors encode object geometry. They are not computed for individual points, but for a whole cluster that represents an object. Because of this, a preprocessing step ([[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Segmentation|segmentation]]) is required, in order to retrieve possible candidates.
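
As a reference, a minimal sketch of that preprocessing with Euclidean clustering could look like this (this is not part of the original pipeline; the tolerance and cluster size values are made-up examples that you would tune for your scene, and in practice you would remove the dominant planes first, as explained in the segmentation tutorial):

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/common/io.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

int
main(int argc, char** argv)
{
	// Cloud for storing the scene.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Euclidean clustering object.
	pcl::EuclideanClusterExtraction<pcl::PointXYZ> clustering;
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	kdtree->setInputCloud(cloud);
	clustering.setClusterTolerance(0.02); // 2cm: maximum distance between points of the same cluster.
	clustering.setMinClusterSize(100);
	clustering.setMaxClusterSize(25000);
	clustering.setSearchMethod(kdtree);
	clustering.setInputCloud(cloud);
	std::vector<pcl::PointIndices> clusters;
	clustering.extract(clusters);

	// Copy every cluster to its own cloud: each one is a candidate object
	// that a global descriptor can then be computed for.
	for (size_t i = 0; i < clusters.size(); ++i)
	{
		pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
		pcl::copyPointCloud(*cloud, clusters[i], *object);
	}
}</syntaxhighlight>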
  
Global descriptors are used for object recognition and classification, geometric analysis (object type, shape...), and pose estimation.
  
You should also know that many local descriptors can also be used as global ones. This can be done with descriptors that use a radius to search for neighbors (as PFH does). The trick is to compute it for one single point in the object cluster, and set the radius to the maximum possible distance between any two points (so all points in the cluster are considered as neighbors).
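
A rough sketch of that trick, using the FPFH (the choice of point, the helper names and the use of the bounding box diagonal as "maximum possible distance" are my own, not something fixed by PCL):

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/common/common.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>

int
main(int argc, char** argv)
{
	pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	pcl::PointCloud<pcl::FPFHSignature33>::Ptr descriptor(new pcl::PointCloud<pcl::FPFHSignature33>);

	// Note: the cloud is assumed to contain a single, already clustered object.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
	{
		return -1;
	}

	// Estimate the normals of the whole cluster, as usual.
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(object);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// The diagonal of the bounding box is never smaller than the distance
	// between any two points, so it can be used as the search radius.
	Eigen::Vector4f minPoint, maxPoint;
	pcl::getMinMax3D(*object, minPoint, maxPoint);
	float radius = (maxPoint - minPoint).head<3>().norm();

	// Compute the FPFH for one single point (here, simply the first one),
	// taking the whole cluster as the search surface.
	pcl::PointCloud<pcl::PointXYZ>::Ptr keypoint(new pcl::PointCloud<pcl::PointXYZ>);
	keypoint->push_back(object->points[0]);

	pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
	fpfh.setInputCloud(keypoint);
	fpfh.setSearchSurface(object);
	fpfh.setInputNormals(normals);
	fpfh.setSearchMethod(kdtree);
	fpfh.setRadiusSearch(radius);

	fpfh.compute(*descriptor); // The output will contain a single FPFHSignature33.
}</syntaxhighlight>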
  
==VFH==
  
The Viewpoint Feature Histogram is based on the FPFH. Because the latter is invariant to the object's pose, the authors decided to expand it by including information about the viewpoint. Also, the FPFH is estimated once for the whole cluster, not for every point.
  
The VFH is made up of two parts: a viewpoint direction component and an extended FPFH component. To compute the first one, the object's [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Computing_the_centroid|centroid]] is found, which is the point that results from averaging the X, Y and Z coordinates of all points. Then, the vector between the viewpoint (the position of the sensor) and this centroid is computed and normalized. Finally, for all points in the cluster, the angle between this vector and their normal is calculated, and the result is binned into an histogram. The vector is translated to each point when computing the angle because this makes the descriptor scale invariant.
  
The second component is computed like the FPFH (that results in 3 histograms for the 3 angular features, α, φ and θ), with some differences: it is only computed for the centroid, using the computed viewpoint direction vector as its normal (as the point, obviously, does not have a normal), and setting all the cluster's points as neighbors.
  
<center><gallery widths=300px>
File:VFH_viewpoint_component.png | Viewpoint component of the VFH (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).
File:VFH_extended_FPFH_component.png | Extended FPFH component of the VFH (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).
</gallery></center>


The resulting 4 histograms (1 for the viewpoint component, 3 for the extended FPFH component) are concatenated to build the final VFH descriptor. By default, the bins are normalized using the total number of points in the cluster. This makes the VFH descriptor invariant to scale.


[[Image:VFH_histogram.png|thumb|center|400px| VFH histogram (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).]]


The PCL implementation computes an additional fifth histogram with the distances of the cluster points to the centroid (the Shape Distribution Component, SDC), increasing the size of the output descriptor from 263 to 308. The SDC is taken from the CVFH descriptor that we will see in the next section, and makes the result more robust.

The VFH of an already clustered object can be computed this way:

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/vfh.h>

int
main(int argc, char** argv)
{
// Cloud for storing the object.
pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
// Object for storing the normals.
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
// Object for storing the VFH descriptor.
pcl::PointCloud<pcl::VFHSignature308>::Ptr descriptor(new pcl::PointCloud<pcl::VFHSignature308>);

// Note: you should have performed preprocessing to cluster out the object
// from the cloud, and save it to this individual file.

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
{
return -1;
}

// Estimate the normals.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(object);
normalEstimation.setRadiusSearch(0.03);
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod(kdtree);
normalEstimation.compute(*normals);

// VFH estimation object.
pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> vfh;
vfh.setInputCloud(object);
vfh.setInputNormals(normals);
vfh.setSearchMethod(kdtree);
// Optionally, we can normalize the bins of the resulting histogram,
// using the total number of points.
vfh.setNormalizeBins(true);
// Also, we can normalize the SDC with the maximum size found between
// the centroid and any of the cluster's points.
vfh.setNormalizeDistance(false);

vfh.compute(*descriptor);
}</syntaxhighlight>
  
Because only one VFH descriptor is computed for the whole cluster, the size of the cloud object that stores the result will be 1.
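
For example, this fragment (placed after the call to compute(), with the iostream header included) would print the 308 values of that single descriptor:

<syntaxhighlight lang=CPP>// The whole histogram is stored as a plain array of 308 floats
// inside the only "point" of the output cloud.
const pcl::VFHSignature308& signature = descriptor->points[0];
for (int i = 0; i < 308; ++i)
	std::cout << signature.histogram[i] << " ";
std::cout << std::endl;</syntaxhighlight>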
  
 
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points (cluster), Normals, Search method, [Normalize bins], [Normalize SDC]
* '''Output''': VFH descriptor
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/vfh_estimation.php Estimating VFH signatures for a set of points]
* '''Publication''':
** [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf Fast 3D Recognition and Pose Using the Viewpoint Feature Histogram] (Radu Bogdan Rusu et al., 2010)
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_v_f_h_estimation.html pcl::VFHEstimation]
* [http://robotica.unileon.es/~victorm/PCL_VFH.tar.gz Download]
</div>
  
  
===CVFH===

The original VFH descriptor is not robust to occlusion, other sensor artifacts, or measurement errors. If the object cluster is missing many points, the computed centroid will differ from the original one, altering the final descriptor and preventing a positive match from being found. Because of that, the Clustered Viewpoint Feature Histogram (CVFH) was introduced.
  
The idea is very simple: instead of computing a single VFH histogram for the whole cluster, the object is first divided into stable, smooth regions using [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Region_growing | region-growing segmentation]], which enforces several constraints on the distances and normal differences of the points belonging to every region. Then, a VFH is computed for every region. Thanks to this, an object can be found in a scene as long as at least one of its regions is fully visible.
  
  
<center><gallery widths=300px>
File:CVFH_occlusion.png | Typical occlusion issues in a point cloud (image from original paper).
File:CVFH_regions.png | Object regions computed for the CVFH (image from original paper).
</gallery></center>
  
  
Additionally, a Shape Distribution Component (SDC) is also computed and included. It encodes information about the distribution of the points around the region's centroid, measuring the distances. The SDC makes it possible to differentiate objects with similar characteristics (size and normal distribution), such as two planar surfaces, from each other.
  
The authors proposed to discard the histogram normalization step that is performed in VFH. This has the effect of making the descriptor dependent on scale, so an object of a certain size would not match a bigger or smaller copy of itself. It also makes CVFH more robust to occlusion.
  
CVFH is invariant to the camera roll angle, like most global descriptors. This is so because rotations about that camera axis do not change the observable geometry that descriptors are computed from, limiting the pose estimation to 5 DoF. The use of a Camera Roll Histogram (CRH) has been proposed to overcome this.
  
 
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/cvfh.h>

int
main(int argc, char** argv)
{
// Cloud for storing the object.
pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
// Object for storing the normals.
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
// Object for storing the CVFH descriptors.
pcl::PointCloud<pcl::VFHSignature308>::Ptr descriptors(new pcl::PointCloud<pcl::VFHSignature308>);

// Note: you should have performed preprocessing to cluster out the object
// from the cloud, and save it to this individual file.

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
{
return -1;
}

// Estimate the normals.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(object);
normalEstimation.setRadiusSearch(0.03);
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod(kdtree);
normalEstimation.compute(*normals);

// CVFH estimation object.
pcl::CVFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> cvfh;
cvfh.setInputCloud(object);
cvfh.setInputNormals(normals);
cvfh.setSearchMethod(kdtree);
// Set the maximum allowable deviation of the normals,
// for the region segmentation step.
cvfh.setEPSAngleThreshold(5.0 / 180.0 * M_PI); // 5 degrees.
// Set the curvature threshold (maximum disparity between curvatures),
// for the region segmentation step.
cvfh.setCurvatureThreshold(1.0);
// Set to true to normalize the bins of the resulting histogram,
// using the total number of points. Note: enabling it will make CVFH
// invariant to scale just like VFH, but the authors encourage the opposite.
cvfh.setNormalizeBins(false);

cvfh.compute(*descriptors);
}</syntaxhighlight>
  
You can further customize the segmentation step with <span style="color:#FF1493">"setClusterTolerance()"</span> (to set the maximum Euclidean distance between points in the same cluster) and <span style="color:#FF1493">"setMinPoints()"</span>. The size of the output will be equal to the number of regions the object was divided into. Also, check the functions <span style="color:#FF1493">"getCentroidClusters()"</span> and <span style="color:#FF1493">"getCentroidNormalClusters()"</span>; you can use them to get information about the centroids used to compute the different CVFH descriptors.
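
As an illustration (a fragment meant to accompany the example above; the values are arbitrary, and the exact container types should be double-checked against the API reference):

<syntaxhighlight lang=CPP>// Optional parameters for the region segmentation step:
cvfh.setClusterTolerance(0.015); // Maximum Euclidean distance between points in the same cluster.
cvfh.setMinPoints(50);           // Minimum number of points per region.

// After compute(), the centroids used for each region can be retrieved:
std::vector<Eigen::Vector3f, Eigen::aligned_allocator<Eigen::Vector3f> > centroids, centroidNormals;
cvfh.getCentroidClusters(centroids);
cvfh.getCentroidNormalClusters(centroidNormals);</syntaxhighlight>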
  
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points (cluster), Normals, Search method, Angle threshold, Curvature threshold, [Normalize bins], [Cluster tolerance], [Minimum points]
* '''Output''': CVFH descriptors
* '''Publication''':
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6130296 CAD-Model Recognition and 6DOF Pose Estimation Using 3D Cues] (requires IEEE Xplore subscription) (Aitor Aldoma et al., 2011)
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_c_v_f_h_estimation.html pcl::CVFHEstimation]
* [http://robotica.unileon.es/~victorm/PCL_CVFH.tar.gz Download]
</div>
  
  
[[Image:FPFH_CRF.png|thumb|center|300px| Classification of objects made with FPFH and CRF (image from [https://www.willowgarage.com/sites/default/files/iccv09.pdf original paper]).]]
+
====OUR-CVFH====
 +
 
 +
The Oriented, Unique and Repeatable CVFH expands the previous descriptor, adding the computation of a unique reference frame to make it more robust.
  
OUR-CVFH relies on the use of Semi-Global Unique Reference Frames (SGURFs), which are repeatable coordinate systems computed for each region. Not only do they remove the invariance to camera roll and allow the 6DoF pose to be extracted directly without additional steps, but they also improve the spatial descriptiveness.
  
The first part of the computation is akin to CVFH, but after segmentation, the points in each region are filtered once more according to the difference between their normals and the region's average normal. This results in better shaped regions, improving the estimation of the Reference Frames (RFs).
  
After this, the SGURF is computed for each region. Disambiguation is performed to decide the sign of the axes, according to the points' distribution. If this is not enough and the sign remains ambiguous, multiple RFs will need to be created to account for it. Finally, the OUR-CVFH descriptor is computed. The original Shape Distribution Component (SDC) is discarded, and the surface is now described according to the RFs.
  
  
[[Image:OUR-CVFH.png|thumb|center|600px| SGURF frame and resulting histogram of a region (image from [http://vision.deis.unibo.it/fede/papers/dagm12.pdf original paper]).]]
  
  
 
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/our_cvfh.h>

int
main(int argc, char** argv)
{
// Cloud for storing the object.
pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
// Object for storing the normals.
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
// Object for storing the OUR-CVFH descriptors.
pcl::PointCloud<pcl::VFHSignature308>::Ptr descriptors(new pcl::PointCloud<pcl::VFHSignature308>);

// Note: you should have performed preprocessing to cluster out the object
// from the cloud, and save it to this individual file.

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
{
return -1;
}

// Estimate the normals.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(object);
normalEstimation.setRadiusSearch(0.03);
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod(kdtree);
normalEstimation.compute(*normals);

// OUR-CVFH estimation object.
pcl::OURCVFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> ourcvfh;
ourcvfh.setInputCloud(object);
ourcvfh.setInputNormals(normals);
ourcvfh.setSearchMethod(kdtree);
ourcvfh.setEPSAngleThreshold(5.0 / 180.0 * M_PI); // 5 degrees.
ourcvfh.setCurvatureThreshold(1.0);
ourcvfh.setNormalizeBins(false);
// Set the minimum axis ratio between the SGURF axes. At the disambiguation phase,
// this will decide if additional Reference Frames need to be created, if ambiguous.
ourcvfh.setAxisRatio(0.8);

ourcvfh.compute(*descriptors);
}</syntaxhighlight>
You can use the <span style="color:#FF1493">"getTransforms()"</span> function to get the transformations aligning the cloud to the corresponding SGURF.
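
A sketch of how that could look (a fragment for the example above; check the API reference for the exact signature):

<syntaxhighlight lang=CPP>// One transformation per descriptor/SGURF, aligning the cluster
// with the corresponding reference frame.
std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f> > transforms;
ourcvfh.getTransforms(transforms);</syntaxhighlight>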
  
 
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points (cluster), Normals, Search method, Angle threshold, Curvature threshold, [Normalize bins], [Cluster tolerance], [Minimum points], [Axis ratio]
* '''Output''': OUR-CVFH descriptors
* '''Publication''':
** [http://vision.deis.unibo.it/fede/papers/dagm12.pdf OUR-CVFH – Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram for Object Recognition and 6DOF Pose Estimation] (Aitor Aldoma et al., 2012)
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_o_u_r_c_v_f_h_estimation.html pcl::OURCVFHEstimation]
* [http://robotica.unileon.es/~victorm/PCL_OUR-CVFH.tar.gz Download]
</div>
  
  
==ESF==
  
The Ensemble of Shape Functions (ESF) is a combination of 3 different shape functions that describe certain properties of the cloud's points: distances, angles and area. This descriptor is unusual in that it does not require normal information. Actually, it does not need any preprocessing at all, as it is robust to noise and incomplete surfaces.
  
The algorithm uses a voxel grid as an approximation of the real surface. It iterates through all the points in the cloud: for every iteration, 3 random points are chosen. For these points, the shape functions are computed:
  
* '''D2''': this function computes the distances between point pairs (3 overall). Then, for every pair, it checks if the line that connects both points lies entirely inside the surface, entirely outside (crossing free space), or both. Depending on this, the distance value will be binned to one of three possible histograms: IN, OUT or MIXED.
* '''D2 ratio''': an additional histogram for the ratio between parts of the line inside the surface, and parts outside. This value will be 0 if the line is completely outside, 1 if completely inside, and some value in between if mixed.
* '''D3''': this computes the square root of the area of the triangle formed by the 3 points. Like D2, the result is also classified as IN, OUT or MIXED, each with its own histogram.
* '''A3''': this function computes the angle formed by the points. Then, the value is binned depending on how the line opposite to the angle is (once again, as IN, OUT or MIXED).
  
  
[[Image:ESF.png|thumb|center|600px| ESF descriptor (image from [http://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf this paper]).]]
  
  
 
After the loop is over, we are left with 10 subhistograms (IN, OUT and MIXED for D2, D3 and A3, and an additional one for the ratio). Each one has 64 bins, so the size of the final ESF descriptor is 640.
<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/features/esf.h>

int
main(int argc, char** argv)
{
// Cloud for storing the object.
pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
// Object for storing the ESF descriptor.
pcl::PointCloud<pcl::ESFSignature640>::Ptr descriptor(new pcl::PointCloud<pcl::ESFSignature640>);

// Note: you should have performed preprocessing to cluster out the object
// from the cloud, and save it to this individual file.

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
{
return -1;
}

// ESF estimation object.
pcl::ESFEstimation<pcl::PointXYZ, pcl::ESFSignature640> esf;
esf.setInputCloud(object);

esf.compute(*descriptor);
}</syntaxhighlight>
 
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points (cluster)
* '''Output''': ESF descriptor
* '''Publication''':
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6181760 Ensemble of shape functions for 3D object classification] (requires IEEE Xplore subscription) (Walter Wohlkinger and Markus Vincze, 2011)
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_e_s_f_estimation.html pcl::ESFEstimation]
* [http://robotica.unileon.es/~victorm/PCL_ESF.tar.gz Download]
</div>
 

==GFPFH==

As you may have guessed, GFPFH stands for Global Fast Point Feature Histogram, the global version of the FPFH descriptor. GFPFH was designed for the task of helping a robot navigate its environment, having some context of the objects around it.

The first step before being able to compute the descriptor is surface categorization. A set of logical primitives (the classes, or categories) is created, which depends on the type of objects we expect the robot to find on the scene. For example, if we know there will be a coffee mug, we create three: one for the handle, and the other two for the outer and inner faces. Then, FPFH descriptors are computed, and everything is fed to a [https://en.wikipedia.org/wiki/Conditional_random_field Conditional Random Field] (CRF) algorithm. The CRF will label each surface with one of the previous categories, so we end up with a cloud where each point has been classified depending on the type of object (or object's region) it belongs to.


[[Image:FPFH_CRF.png|thumb|center|300px| Classification of objects made with FPFH and CRF (image from [https://www.willowgarage.com/sites/default/files/iccv09.pdf original paper]).]]


Now, the GFPFH descriptor can be computed with the result of the classification step. It will encode what the object is made of, so the robot can easily recognize it. First, an [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Octree | octree]] is created, dividing the object into voxel leaves. For every leaf, a set of probabilities is created, one for each class. Each one stores the probability of that leaf belonging to the class, and it is computed according to the number of points in that leaf that have been labelled as that class, and the total number of points. Then, for every pair of leaves in the octree, a line is cast, connecting them. Every leaf in its path is checked for occupancy, storing the result in an histogram. If the leaf is empty (free space), a value of 0 is saved. Otherwise, the leaf probabilities are used.


[[Image:GFPFH.png|thumb|center|500px| Computing the GFPFH with a voxel grid (image from [https://www.willowgarage.com/sites/default/files/iccv09.pdf original paper]).]]


The following code will compute the GFPFH for a cloud with label information. The categorization step is up to you, as it depends largely on the type of the scene, and the use you are going to give it.

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/features/gfpfh.h>

int
main(int argc, char** argv)
{
// Cloud for storing the object.
pcl::PointCloud<pcl::PointXYZL>::Ptr object(new pcl::PointCloud<pcl::PointXYZL>);
// Object for storing the GFPFH descriptor.
pcl::PointCloud<pcl::GFPFHSignature16>::Ptr descriptor(new pcl::PointCloud<pcl::GFPFHSignature16>);

// Note: you should have performed preprocessing to cluster out the object
// from the cloud, and save it to this individual file.

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZL>(argv[1], *object) != 0)
{
return -1;
}

// Note: you should now perform classification on the cloud's points. See the
// original paper for more details. For this example, we will now consider 4
// different classes, and randomly label each point as one of them.
for (size_t i = 0; i < object->points.size(); ++i)
{
object->points[i].label = 1 + i % 4;
}

// GFPFH estimation object.
pcl::GFPFHEstimation<pcl::PointXYZL, pcl::PointXYZL, pcl::GFPFHSignature16> gfpfh;
gfpfh.setInputCloud(object);
// Set the object that contains the labels for each point. Thanks to the
// PointXYZL type, we can use the same object we store the cloud in.
gfpfh.setInputLabels(object);
// Set the size of the octree leaves to 1cm (cubic).
gfpfh.setOctreeLeafSize(0.01);
// Set the number of classes the cloud has been labelled with (default is 16).
gfpfh.setNumberOfClasses(4);

gfpfh.compute(*descriptor);
}</syntaxhighlight>

<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points (cluster), Labels, Number of classes, Leaf size
* '''Output''': GFPFH descriptor
* '''Publication''':
** [https://www.willowgarage.com/sites/default/files/iccv09.pdf Detecting and Segmenting Objects for Mobile Manipulation] (Radu Bogdan Rusu et al., 2009)
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_g_f_p_f_h_estimation.html pcl::GFPFHEstimation]
* [http://robotica.unileon.es/~victorm/PCL_GFPFH.tar.gz Download]
</div>


==GRSD==

The global version of the Radius-based Surface Descriptor works in a similar fashion to GFPFH. A voxelization and a surface categorization step are performed beforehand, labelling all surface patches according to the geometric category (plane, cylinder, edge, rim, sphere), using RSD. Then, the whole cluster is classified into one of these categories, and the GRSD descriptor is computed from this.


[[Image:GRSD.png|thumb|center|400px| Classification of objects for GRSD and resulting histogram (image from [https://ias.in.tum.de/_media/spezial/bib/grsd10humanoids.pdf original paper]).]]


To compute it:

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/grsd.h>

int
main(int argc, char** argv)
{
// Object for storing the point cloud.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
// Object for storing the normals.
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
// Object for storing the GRSD descriptors for each point.
pcl::PointCloud<pcl::GRSDSignature21>::Ptr descriptors(new pcl::PointCloud<pcl::GRSDSignature21>());

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
{
return -1;
}

// Note: you would usually perform downsampling now. It has been omitted here
// for simplicity, but be aware that computation can take a long time.

// Estimate the normals.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(cloud);
normalEstimation.setRadiusSearch(0.03);
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod(kdtree);
normalEstimation.compute(*normals);

// GRSD estimation object.
pcl::GRSDEstimation<pcl::PointXYZ, pcl::Normal, pcl::GRSDSignature21> grsd;
grsd.setInputCloud(cloud);
grsd.setInputNormals(normals);
grsd.setSearchMethod(kdtree);
// Search radius, to look for neighbors. Note: the value given here has to be
// larger than the radius used to estimate the normals.
grsd.setRadiusSearch(0.05);

grsd.compute(*descriptors);
}</syntaxhighlight>

<span style="color:#606060">'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).'''''</span>

<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Points (cluster), Normals, Search method, Radius
* '''Output''': GRSD descriptor
* '''Publications''':
** [https://ias.in.tum.de/_media/spezial/bib/grsd10humanoids.pdf Hierarchical Object Geometric Categorization and Appearance Classification for Mobile Manipulation] (Zoltan-Csaba Marton et al., 2010)
** [https://ias.cs.tum.edu/_media/spezial/bib/marton11ijrr.pdf Combined 2D-3D Categorization and Classification for Multimodal Perception Systems] (Zoltan-Csaba Marton et al., 2011)
** [http://www.mi.t.u-tokyo.ac.jp/top/downloadpublication/36 Voxelized Shape and Color Histograms for RGB-D] (Zoltan-Csaba Marton et al., 2011)
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_g_r_s_d_estimation.html pcl::GRSDEstimation]
* [http://robotica.unileon.es/~victorm/PCL_GRSD.tar.gz Download]
</div>


=Saving and loading=

You can save a descriptor to a file just [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Writing_to_file|like with any other cloud type]]. One caveat, though. If you are using a descriptor that has its own custom type, like <span style="color:#FF1493">"PFHSignature125"</span>, everything will be OK. But with descriptors that do not (where you have to use <span style="color:#FF1493">"pcl::Histogram<>"</span>), you will get this error: <span style="color:#FF1493">''"POINT_TYPE_NOT_PROPERLY_REGISTERED"''</span>. In order to save or load from a file, PCL's IO functions need to know about the number, type and size of the fields. To solve this, you will have to properly register a new point type for your descriptor. For example, this will work for the [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#RoPS|RoPS]] descriptor example we saw earlier:

<syntaxhighlight lang=CPP>POINT_CLOUD_REGISTER_POINT_STRUCT(ROPS135,
                                  (float[135], histogram, histogram)
)</syntaxhighlight>

Add the previous to your code (change it accordingly), and you will be able to save and load descriptors as usual.
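
For instance, a descriptor cloud that uses a custom type can be written and read back like any other PCD (a minimal sketch; the file name is arbitrary, and for <span style="color:#FF1493">"pcl::Histogram<>"</span>-based descriptors you would add the registration macro first):

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>

int
main(int argc, char** argv)
{
	pcl::PointCloud<pcl::VFHSignature308>::Ptr descriptor(new pcl::PointCloud<pcl::VFHSignature308>);

	// ...fill the cloud by computing the descriptor, as shown in the VFH section...

	// Save it to disk.
	pcl::io::savePCDFileBinary("descriptor.pcd", *descriptor);

	// Read it back.
	pcl::PointCloud<pcl::VFHSignature308>::Ptr loaded(new pcl::PointCloud<pcl::VFHSignature308>);
	if (pcl::io::loadPCDFile<pcl::VFHSignature308>("descriptor.pcd", *loaded) != 0)
	{
		return -1;
	}
}</syntaxhighlight>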
 
=Visualization=

Sometimes it is useful to check a visual representation of a descriptor, perhaps to analyze the distribution of data over different bins. Because they are saved as histograms, this is something trivial to do. PCL offers a couple of classes to do this.

==PCLHistogramVisualizer==

<span style="color:#FF1493">"PCLHistogramVisualizer"</span> is the simplest way to plot an histogram. The class has little functionality, but it does its job. Only one call is necessary to give the histogram and its size:

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/vfh.h>
#include <pcl/visualization/histogram_visualizer.h>

int
main(int argc, char** argv)
{
// Clouds for storing everything.
pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
pcl::PointCloud<pcl::VFHSignature308>::Ptr descriptor(new pcl::PointCloud<pcl::VFHSignature308>);

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
{
return -1;
}

// Estimate the normals.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(object);
normalEstimation.setRadiusSearch(0.03);
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod(kdtree);
normalEstimation.compute(*normals);

// Estimate VFH descriptor.
pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> vfh;
vfh.setInputCloud(object);
vfh.setInputNormals(normals);
vfh.setSearchMethod(kdtree);
vfh.setNormalizeBins(true);
vfh.setNormalizeDistance(false);
vfh.compute(*descriptor);

// Plotter object.
pcl::visualization::PCLHistogramVisualizer viewer;
// We need to set the size of the descriptor beforehand.
viewer.addFeatureHistogram(*descriptor, 308);

viewer.spin();
}</syntaxhighlight>


[[Image:Histogram_visualizer_VFH.png|thumb|center|500px| VFH histogram seen with the Histogram Visualizer.]]


<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Descriptor, descriptor size, [Window size], [Background color], [Y axis range]
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_histogram_visualizer.html pcl::visualization::PCLHistogramVisualizer]
* [http://robotica.unileon.es/~victorm/PCL_histogram_visualizer.tar.gz Download]
</div>


==PCLPlotter==

This class has all the methods from <span style="color:#FF1493">"PCLHistogramVisualizer"</span> (which will be deprecated soon) plus a lot more features. The code is almost the same:

<syntaxhighlight lang=CPP>#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/vfh.h>
#include <pcl/visualization/pcl_plotter.h>

int
main(int argc, char** argv)
{
// Clouds for storing everything.
pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
pcl::PointCloud<pcl::VFHSignature308>::Ptr descriptor(new pcl::PointCloud<pcl::VFHSignature308>);

// Read a PCD file from disk.
if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
{
return -1;
}

// Estimate the normals.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(object);
normalEstimation.setRadiusSearch(0.03);
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod(kdtree);
normalEstimation.compute(*normals);

// Estimate VFH descriptor.
pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> vfh;
vfh.setInputCloud(object);
vfh.setInputNormals(normals);
vfh.setSearchMethod(kdtree);
vfh.setNormalizeBins(true);
vfh.setNormalizeDistance(false);
vfh.compute(*descriptor);

// Plotter object.
pcl::visualization::PCLPlotter plotter;
// We need to set the size of the descriptor beforehand.
plotter.addFeatureHistogram(*descriptor, 308);

plotter.plot();
}</syntaxhighlight>


[[Image:PCL_plotter_VFH.png|thumb|center|500px| VFH histogram seen with the PCLPlotter class.]]

If you have raw data (such as a vector of floats) you can use the [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_plotter.html#aaf23b2b1c2f91c517cfde387ee1b654e addHistogramData()] function to plot it as an histogram.
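
Something like this should work (a sketch; the raw values here are just random numbers, binned into 20 bins):

<syntaxhighlight lang=CPP>#include <pcl/visualization/pcl_plotter.h>
#include <cstdlib>
#include <vector>

int
main(int argc, char** argv)
{
	// Some raw data to plot (random values, only for the example).
	std::vector<double> data;
	for (int i = 0; i < 1000; ++i)
		data.push_back(static_cast<double>(rand()) / RAND_MAX);

	pcl::visualization::PCLPlotter plotter;
	// Bin the values into 20 bins and plot them as an histogram.
	plotter.addHistogramData(data, 20);

	plotter.plot();
}</syntaxhighlight>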
  
 
<div style="background-color: #F8F8F8; border-style: dotted;">
* '''Input''': Descriptor, descriptor size, [Window size], [Background color], [Y axis range]
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/pcl_plotter.php PCLPlotter]
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_plotter.html pcl::visualization::PCLPlotter]
* [http://robotica.unileon.es/~victorm/PCL_plotter.tar.gz Download]
</div>
  
==PCL Viewer==
  
This program, included with PCL, will also let you open and visualize a saved descriptor. Internally, it uses [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#PCLPlotter|PCLPlotter]]. It comes in handy because you can invoke it straight from the command line:
  
 
<syntaxhighlight lang=Bash>pcl_viewer <descriptor_file></syntaxhighlight>

Latest revision as of 16:59, 5 November 2015

Go to root: PhD-3D-Object-Tracking




It is time to learn the basics of one of the most interesting applications of point cloud processing: 3D object recognition. Akin to 2D recognition, this technique relies on finding good keypoints (characteristic points) in the cloud, and matching them to a set of previously saved ones. But 3D has several advantages over 2D: namely, we will be able to estimate with decent accuracy the exact position and orientation of the object, relative to the sensor. Also, 3D object recognition tends to be more robust to clutter (crowded scenes where objects in the front occlude objects in the background). And finally, having information about the object's shape will help with collision avoidance or grasping operations.

In this first tutorial we will see what descriptors are, how many types are available in PCL, and how to compute them.

Overview

The basis of 3D object recognition is to find a set of correspondences between two different clouds, one of them containing the object we are looking for. In order to do this, we need a way to compare points in an unambiguous manner. Until now, we have worked with points that store the XYZ coordinates, the RGB color... but none of those properties are unique enough. In two sequential scans, two points could share the same coordinates despite belonging to different surfaces, and using the color information takes us back to 2D recognition, with all the lighting-related problems.

In a previous tutorial, we talked about features, before introducing the normals. Normals are an example of a feature, because they encode information about the vicinity of the point. That is, the neighboring points are taken into account when computing them, giving us an idea of how the surrounding surface looks. But this is not enough. For a feature to be optimal, it must meet the following criteria:

  1. It must be robust to transformations: rigid transformations (the ones that do not change the distance between points) like translations and rotations must not affect the feature. Even if we play with the cloud a bit beforehand, there should be no difference.
  2. It must be robust to noise: measurement errors that cause noise should not change the feature estimation much.
  3. It must be resolution invariant: if sampled with different density (like after performing downsampling), the result must be identical or similar.

This is where descriptors come in. They are more complex (and precise) signatures of a point, that encode a lot of information about the surrounding geometry. The purpose is to unequivocally identify a point across multiple point clouds, no matter the noise, resolution or transformations. Also, some of them capture additional data about the object they belong to, like the viewpoint (that lets us retrieve the pose).


Finding correspondences between point features of two clouds (image from http://pointclouds.org/).


There are many 3D descriptors implemented into PCL. Each one has its own method for computing unique values for a point. Some use the difference between the angles of the normals of the point and its neighbors, for example. Others use the distances between the points. Because of this, some are inherently better or worse for certain purposes. A given descriptor may be scale invariant, and another one may be better with occlusions and partial views of objects. Which one you choose depends on what you want to do.

After calculating the necessary values, an additional step is performed to reduce the descriptor size: the result is binned into an histogram. To do this, the value range of each variable that makes up the descriptor is divided into n subdivisions, and the number of occurrences in each one is counted. Try to imagine a descriptor that computes a single variable, that ranges from 1 to 100. We choose to create 10 bins for it, so the first bin would gather all occurrences between 1 and 10, the second from 11 to 20, and so on. We look at the value of the variable for the first point-neighbor pair, and it is 27, so we increment the value of the third bin by 1. We keep doing this until we get a final histogram for that keypoint. The bin size must be carefully chosen depending on how descriptive that variable is (the variables do not have to share the same number of bins, and also the bins do not have to be of the same size; if for example most values from the previous example fell in the 50-100 range then it would be sensible to have more bins of smaller size in that range).

Descriptors can be classified in two main categories: global and local. The process for computing and using each one (recognition pipeline) is different, so each will be explained in its own section in this article.


Table

The following table will give you a hint of how many descriptors there are in PCL, and some of their features:

{| class="wikitable"
|+ 3D descriptors
! Name !! Type !! Size† !! Custom PointType††
|-
| PFH (Point Feature Histogram) || Local || 125 || Yes
|-
| FPFH (Fast Point Feature Histogram) || Local || 33 || Yes
|-
| RSD (Radius-Based Surface Descriptor) || Local || 289 || Yes
|-
| 3DSC (3D Shape Context) || Local || 1980 || Yes
|-
| USC (Unique Shape Context) || Local || 1960 || Yes
|-
| SHOT (Signatures of Histograms of Orientations) || Local || 352 || Yes
|-
| Spin image || Local || 153* || No
|-
| RIFT (Rotation-Invariant Feature Transform) || Local || 32* || No
|-
| NARF (Normal Aligned Radial Feature) || Local || 36 || Yes
|-
| RoPS (Rotational Projection Statistics) || Local || 135* || No
|-
| VFH (Viewpoint Feature Histogram) || Global || 308 || Yes
|-
| CVFH (Clustered Viewpoint Feature Histogram) || Global || 308 || Yes
|-
| OUR-CVFH (Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram) || Global || 308 || Yes
|-
| ESF (Ensemble of Shape Functions) || Global || 640 || Yes
|-
| GFPFH (Global Fast Point Feature Histogram) || Global || 16 || Yes
|-
| GRSD (Global Radius-Based Surface Descriptor) || Global || 21 || Yes
|}


† Values marked with an asterisk (*) indicate that the descriptor's size depends on some parameter(s), and the one given is for the default values.

†† Descriptors without their own custom PointType use the generic "pcl::Histogram<>" type. See Saving and loading.

Optionally, you can download a document with a more detailed version of the table. The page format is A4, landscape:


Local descriptors

Local descriptors are computed for individual points that we give as input. They have no notion of what an object is, they just describe how the local geometry is around that point. Usually, it is your task to choose which points you want a descriptor to be computed for: the keypoints. Most of the time, you can get away with just performing a downsampling and choosing all remaining points, but keypoint detectors are available, like the one used for NARF, or ISS.

Local descriptors are used for object recognition and registration. Now we will see which ones are implemented into PCL.

PFH

PFH stands for Point Feature Histogram. It is one of the most important descriptors offered by PCL and the basis of others such as FPFH. The PFH tries to capture information of the geometry surrounding the point by analyzing the difference between the directions of the normals in the vicinity (and because of this, an imprecise normal estimation may produce low-quality descriptors).

First, the algorithm pairs all points in the vicinity (not just the chosen keypoint with its neighbors, but also the neighbors with themselves). Then, for each pair, a fixed coordinate frame is computed from their normals. With this frame, the difference between the normals can be encoded with 3 angular variables. These variables, together with the euclidean distance between the points, are saved, and then binned to an histogram when all pairs have been computed. The final descriptor is the concatenation of the histograms of each variable (4 in total).



Computing descriptors in PCL is very easy, and the PFH is not an exception:

#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/pfh.h>

int
main(int argc, char** argv)
{
	// Object for storing the point cloud.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the normals.
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	// Object for storing the PFH descriptors for each point.
	pcl::PointCloud<pcl::PFHSignature125>::Ptr descriptors(new pcl::PointCloud<pcl::PFHSignature125>());

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Note: you would usually perform downsampling now. It has been omitted here
	// for simplicity, but be aware that computation can take a long time.

	// Estimate the normals.
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(cloud);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// PFH estimation object.
	pcl::PFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::PFHSignature125> pfh;
	pfh.setInputCloud(cloud);
	pfh.setInputNormals(normals);
	pfh.setSearchMethod(kdtree);
	// Search radius, to look for neighbors. Note: the value given here has to be
	// larger than the radius used to estimate the normals.
	pfh.setRadiusSearch(0.05);

	pfh.compute(*descriptors);
}

As you can see, PCL uses the "PFHSignature125" type to save the descriptor to. This means that the descriptor's size is 125 (the dimensionality of the feature vector). Dividing a feature in a D-dimensional space into B divisions requires a total of B^D bins. The original proposal makes use of the distance between the points, but the implementation of PCL does not, as it was not considered discriminative enough (especially in 2.5D scans, where the distance between points increases the further away they are from the sensor). Hence, each of the 3 remaining features (with one dimension each) is divided into 5 divisions, resulting in a 125-bin (5^3) histogram.

The final object that stores the computed descriptors can be handled like a normal cloud (even saved to, or read from, disk), and it has the same number of "points" as the original one. The "PFHSignature125" object at position 0, for example, stores the PFH descriptor for the "PointXYZ" point at the same position in the cloud.
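
For instance, continuing the previous example, the descriptor cloud can be saved to disk and read back like any other cloud (the file name is an arbitrary choice):

	// Save the computed PFH descriptors to a PCD file (binary format), to reuse them later.
	pcl::io::savePCDFileBinary("pfh_descriptors.pcd", *descriptors);
	// Reading them back works like with any other cloud.
	pcl::PointCloud<pcl::PFHSignature125>::Ptr loaded(new pcl::PointCloud<pcl::PFHSignature125>);
	pcl::io::loadPCDFile("pfh_descriptors.pcd", *loaded);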

For additional details about the descriptor, check the original publications listed below, or PCL's tutorial.


FPFH

PFH gives accurate results, but it has a drawback: it is too computationally expensive to run in real time. For a cloud of n keypoints, each with k neighbors considered, it has a complexity of O(nk²). Because of this, a derived descriptor was created, named FPFH (Fast Point Feature Histogram).

The FPFH considers only the direct connections between the current keypoint and its neighbors, removing the additional links between the neighbors themselves. This takes the complexity down to O(nk). The resulting, simplified histogram is referred to as SPFH (Simplified Point Feature Histogram). The reference frame and the angular variables are computed just like for PFH.


Point pairs established when computing the FPFH for a point (image from http://pointclouds.org).


To account for the loss of these extra connections, an additional step takes place after all histograms have been computed: the SPFHs of a point's neighbors are "merged" with its own, weighted according to the distance. This has the effect of giving a point surface information of points as far away as 2 times the radius used. Finally, the 3 histograms (distance is not used) are concatenated to compose the final descriptor.
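
To give a clearer idea of the weighting, the scheme from the original paper goes roughly like this (ω_i being the distance between the keypoint p and its neighbor p_i, and k the number of neighbors):

	FPFH(p) = SPFH(p) + (1 / k) · Σ_{i=1..k} (1 / ω_i) · SPFH(p_i)

That is, the farther away a neighbor is, the less its SPFH contributes to the final histogram.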

#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>

int
main(int argc, char** argv)
{
	// Object for storing the point cloud.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the normals.
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	// Object for storing the FPFH descriptors for each point.
	pcl::PointCloud<pcl::FPFHSignature33>::Ptr descriptors(new pcl::PointCloud<pcl::FPFHSignature33>());

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Note: you would usually perform downsampling now. It has been omitted here
	// for simplicity, but be aware that computation can take a long time.

	// Estimate the normals.
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(cloud);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// FPFH estimation object.
	pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
	fpfh.setInputCloud(cloud);
	fpfh.setInputNormals(normals);
	fpfh.setSearchMethod(kdtree);
	// Search radius, to look for neighbors. Note: the value given here has to be
	// larger than the radius used to estimate the normals.
	fpfh.setRadiusSearch(0.05);

	fpfh.compute(*descriptors);
}

An additional implementation of the FPFH estimation that takes advantage of multithreaded optimizations (with the OpenMP API) is available in the class "FPFHEstimationOMP". Its interface is identical to the standard unoptimized implementation, and using it will result in a big performance boost on multi-core systems. Remember to include the header "fpfh_omp.h" instead.
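
As a minimal sketch, only the estimation object changes with respect to the previous example (the thread count given here is an arbitrary choice; if I recall correctly, leaving it at 0 lets the library decide automatically):

#include <pcl/features/fpfh_omp.h>

	// FPFH estimation object, multithreaded variant (drop-in replacement for pcl::FPFHEstimation).
	pcl::FPFHEstimationOMP<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
	// Number of threads to use (arbitrary example value).
	fpfh.setNumberOfThreads(4);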


RSD

The Radius-Based Surface Descriptor encodes the radial relationship between the point and its neighborhood. For every pair formed by the keypoint and one of its neighbors, the algorithm computes the distance between them and the difference between their normals. Then, by assuming that both points lie on the surface of a sphere, said sphere is found by fitting not only the points, but also the normals (otherwise, there would be infinitely many candidate spheres). Finally, from all the point-neighbor spheres, only the ones with the maximum and minimum radii are kept and saved to the descriptor of that point.

As you may have deduced already, when two points lie on a flat surface, the sphere radius will be infinite. If, on the other hand, they lie on the curved face of a cylinder, the radius will be more or less the same as that of the cylinder. This allows us to tell objects apart with RSD. The algorithm takes a parameter that sets the maximum radius at which the points will be considered to be part of a plane.



This is the code for computing the RSD descriptor:

#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/rsd.h>

int
main(int argc, char** argv)
{
	// Object for storing the point cloud.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the normals.
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	// Object for storing the RSD descriptors for each point.
	pcl::PointCloud<pcl::PrincipalRadiiRSD>::Ptr descriptors(new pcl::PointCloud<pcl::PrincipalRadiiRSD>());

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Note: you would usually perform downsampling now. It has been omitted here
	// for simplicity, but be aware that computation can take a long time.

	// Estimate the normals.
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(cloud);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// RSD estimation object.
	pcl::RSDEstimation<pcl::PointXYZ, pcl::Normal, pcl::PrincipalRadiiRSD> rsd;
	rsd.setInputCloud(cloud);
	rsd.setInputNormals(normals);
	rsd.setSearchMethod(kdtree);
	// Search radius, to look for neighbors. Note: the value given here has to be
	// larger than the radius used to estimate the normals.
	rsd.setRadiusSearch(0.05);
	// Plane radius. Any radius larger than this is considered infinite (a plane).
	rsd.setPlaneRadius(0.1);
	// Do we want to save the full distance-angle histograms?
	rsd.setSaveHistograms(false);
	
	rsd.compute(*descriptors);
}

NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).

Optionally, you can use the "setSaveHistograms()" function to enable the saving of the full distance-angle histograms, and then use "getHistograms()" to retrieve them.
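
Once computed, the minimum and maximum radii can be read directly from the output cloud. A minimal sketch, continuing the previous example (the index 0 is an arbitrary choice):

#include <iostream>

	// Each "PrincipalRadiiRSD" stores the minimum and maximum sphere radii (in meters)
	// estimated for the corresponding point: a small r_min suggests a curved or sharp
	// area, whereas values close to the plane radius suggest a flat area.
	pcl::PrincipalRadiiRSD radii = descriptors->points[0];
	std::cout << "r_min: " << radii.r_min << " r_max: " << radii.r_max << std::endl;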


3DSC

The 3D Shape Context is a descriptor that extends its existing 2D counterpart to the third dimension. It works by creating a support structure (a sphere, to be precise) centered at the point we are computing the descriptor for, with the given search radius. The "north pole" of that sphere (the notion of "up") is aligned with the normal at that point. Then, the sphere is divided into 3D regions or bins. In the first 2 coordinates (azimuth and elevation) the divisions are equally spaced, but in the third (the radial dimension), divisions are logarithmically spaced, so they are smaller towards the center. A minimum radius can be specified to prevent very small bins that would be too sensitive to small changes in the surface.


Support structure to compute the 3DSC for a point (image from original paper).


For each bin, a weighted count is accumulated for every neighboring point that lies within. The weight depends on the volume of the bin and the local point density (number of points around the current neighbor). This gives the descriptor some degree of resolution invariance.

We have mentioned that the sphere is given the direction of the normal. This still leaves one degree of freedom (only two axes have been locked, the azimuth remains free). Because of this, the descriptor so far does not cope with rotation. To overcome this (so the same point in two different clouds has the same value), the support sphere is rotated around the normal N times (a number of degrees that corresponds with the divisions in the azimuth) and the process is repeated for each, giving a total of N descriptors for that point.

You can compute the 3DSC descriptor the following way:

#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/3dsc.h>

int
main(int argc, char** argv)
{
	// Object for storing the point cloud.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the normals.
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	// Object for storing the 3DSC descriptors for each point.
	pcl::PointCloud<pcl::ShapeContext1980>::Ptr descriptors(new pcl::PointCloud<pcl::ShapeContext1980>());

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Note: you would usually perform downsampling now. It has been omitted here
	// for simplicity, but be aware that computation can take a long time.

	// Estimate the normals.
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(cloud);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// 3DSC estimation object.
	pcl::ShapeContext3DEstimation<pcl::PointXYZ, pcl::Normal, pcl::ShapeContext1980> sc3d;
	sc3d.setInputCloud(cloud);
	sc3d.setInputNormals(normals);
	sc3d.setSearchMethod(kdtree);
	// Search radius, to look for neighbors. It will also be the radius of the support sphere.
	sc3d.setRadiusSearch(0.05);
	// The minimal radius value for the search sphere, to avoid being too sensitive
	// in bins close to the center of the sphere.
	sc3d.setMinimalRadius(0.05 / 10.0);
	// Radius used to compute the local point density for the neighbors
	// (the density is the number of points within that radius).
	sc3d.setPointDensityRadius(0.05 / 5.0);

	sc3d.compute(*descriptors);
}


USC

The Unique Shape Context descriptor extends the 3DSC by defining a local reference frame, in order to provide a unique orientation for each point. This not only improves the accuracy of the descriptor, but also reduces its size, as computing multiple descriptors to account for orientation is no longer necessary.

You can check the second publication listed below to learn more about how the LRF is computed.

#include <pcl/io/pcd_io.h>
#include <pcl/features/usc.h>

int
main(int argc, char** argv)
{
	// Object for storing the point cloud.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the USC descriptors for each point.
	pcl::PointCloud<pcl::UniqueShapeContext1960>::Ptr descriptors(new pcl::PointCloud<pcl::UniqueShapeContext1960>());

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Note: you would usually perform downsampling now. It has been omitted here
	// for simplicity, but be aware that computation can take a long time.

	// USC estimation object.
	pcl::UniqueShapeContext<pcl::PointXYZ, pcl::UniqueShapeContext1960, pcl::ReferenceFrame> usc;
	usc.setInputCloud(cloud);
	// Search radius, to look for neighbors. It will also be the radius of the support sphere.
	usc.setRadiusSearch(0.05);
	// The minimal radius value for the search sphere, to avoid being too sensitive
	// in bins close to the center of the sphere.
	usc.setMinimalRadius(0.05 / 10.0);
	// Radius used to compute the local point density for the neighbors
	// (the density is the number of points within that radius).
	usc.setPointDensityRadius(0.05 / 5.0);
	// Set the radius to compute the Local Reference Frame.
	usc.setLocalRadius(0.05);

	usc.compute(*descriptors);
}

NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk). For 1.7 and below, change UniqueShapeContext1960 to ShapeContext1980, and edit CMakeLists.txt.


SHOT

SHOT stands for Signature of Histograms of Orientations. Like 3DSC, it encodes information about the topology (surface) within a spherical support structure. This sphere is divided into 32 bins or volumes, with 8 divisions along the azimuth, 2 along the elevation, and 2 along the radius. For every volume, a one-dimensional local histogram is computed. The variable chosen is the angle between the normal of the keypoint and the normal of the current point within that volume (to be precise, the cosine of that angle, which was found to be better suited).


Support structure to compute SHOT. Only 4 azimuth divisions are shown for clarity (image from original paper).


When all local histograms have been computed, they are stitched together in a final descriptor. Like the USC descriptor, SHOT makes use of a local reference frame, making it rotation invariant. It is also robust to noise and clutter.

#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/shot.h>

int
main(int argc, char** argv)
{
	// Object for storing the point cloud.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the normals.
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	// Object for storing the SHOT descriptors for each point.
	pcl::PointCloud<pcl::SHOT352>::Ptr descriptors(new pcl::PointCloud<pcl::SHOT352>());

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Note: you would usually perform downsampling now. It has been omitted here
	// for simplicity, but be aware that computation can take a long time.

	// Estimate the normals.
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(cloud);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// SHOT estimation object.
	pcl::SHOTEstimation<pcl::PointXYZ, pcl::Normal, pcl::SHOT352> shot;
	shot.setInputCloud(cloud);
	shot.setInputNormals(normals);
	// The radius that defines which of the keypoint's neighbors are described.
	// If too large, there may be clutter, and if too small, not enough points may be found.
	shot.setRadiusSearch(0.02);

	shot.compute(*descriptors);
}

Like with FPFH, a multithreading-optimized variant is available with "SHOTEstimationOMP", which makes use of OpenMP. You need to include the header "shot_omp.h". Also, another variant that uses the texture for matching is available, "SHOTColorEstimation", with an optimized version too (see the second publication for more details). It outputs a "SHOT1344" descriptor.
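
A minimal sketch of the color variant, assuming a cloud with RGB information ("cloudColor") and normals estimated from it as before; only the lines that differ from the previous example are shown:

#include <pcl/features/shot_omp.h>

	// Cloud for storing the SHOT color descriptors.
	pcl::PointCloud<pcl::SHOT1344>::Ptr colorDescriptors(new pcl::PointCloud<pcl::SHOT1344>());

	// SHOT estimation object that also uses the texture (multithreaded variant).
	pcl::SHOTColorEstimationOMP<pcl::PointXYZRGB, pcl::Normal, pcl::SHOT1344> shotColor;
	shotColor.setInputCloud(cloudColor);
	shotColor.setInputNormals(normals);
	shotColor.setRadiusSearch(0.02);

	shotColor.compute(*colorDescriptors);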


Spin image

The Spin Image (SI) is the oldest descriptor we are going to see here. It has been around since 1997, but it still sees some use for certain applications. It was originally designed to describe surfaces made by vertices, edges and polygons, but it has since been adapted for point clouds. The descriptor is unlike all others in that the output resembles an image, which can be compared with another one using the usual image comparison methods.

The support structure used is a cylinder, centered at the point, with a given radius and height, and aligned with the normal. This cylinder is divided radially and vertically into volumes. For each one, the number of neighbors lying inside is added up, eventually producing a descriptor. Weighting and interpolation are used to improve the result. The final descriptor can be seen as a grayscale image where dark areas correspond to volumes with higher point density.


Spin images computed for 3 points of a model (image from original thesis).


#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/spin_image.h>

// A handy typedef.
typedef pcl::Histogram<153> SpinImage;

int
main(int argc, char** argv)
{
	// Object for storing the point cloud.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the normals.
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	// Object for storing the spin image for each point.
	pcl::PointCloud<SpinImage>::Ptr descriptors(new pcl::PointCloud<SpinImage>());

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Note: you would usually perform downsampling now. It has been omitted here
	// for simplicity, but be aware that computation can take a long time.

	// Estimate the normals.
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(cloud);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// Spin image estimation object.
	pcl::SpinImageEstimation<pcl::PointXYZ, pcl::Normal, SpinImage> si;
	si.setInputCloud(cloud);
	si.setInputNormals(normals);
	// Radius of the support cylinder.
	si.setRadiusSearch(0.02);
	// Set the resolution of the spin image (the number of bins along one dimension).
	// Note: you must change the output histogram size to reflect this.
	si.setImageWidth(8);

	si.compute(*descriptors);
}

The Spin Image estimation object provides more methods for tuning the estimation, so checking the API is recommended.
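
Since the output type is a generic histogram, the "pixels" of a computed spin image can be inspected directly; a small sketch, continuing the example above (the descriptor is stored as a flat array of 153 float values):

#include <iostream>

	// Print the values of the spin image computed for the first point.
	SpinImage descriptor = descriptors->points[0];
	for (int i = 0; i < 153; i++)
		std::cout << descriptor.histogram[i] << " ";
	std::cout << std::endl;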


RIFT

The Rotation-Invariant Feature Transform, like the spin image, takes some concepts from 2D features, in this case from the Scale-Invariant Feature Transform (SIFT). It is the only descriptor seen here that requires intensity information in order to be computed (it can be obtained from the RGB color values). This means, of course, that you will not be able to use RIFT with standard XYZ clouds; you also need the texture.

In the first step, a circular patch (with the given radius) is fitted on the surface the point lies on. This patch is divided into concentric rings, according to the chosen distance bin size. Then, a histogram is populated with all of the point's neighbors lying inside a sphere centered at that point, with the mentioned radius. The distance and the orientation of the intensity gradient at each point are considered. To make it rotation invariant, the angle between the gradient orientation and the vector pointing outward from the center of the patch is measured.


RIFT feature values at 3 different locations in the descriptor (image from original paper).


The authors' original implementation uses 4 rings and 8 histogram orientations, which produce a descriptor of size 32. RIFT is not robust to texture flipping, though this was never considered a big issue.

#include <pcl/io/pcd_io.h>
#include <pcl/point_types_conversion.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/intensity_gradient.h>
#include <pcl/features/rift.h>

// A handy typedef.
typedef pcl::Histogram<32> RIFT32;

int
main(int argc, char** argv)
{
	// Object for storing the point cloud with color information.
	pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudColor(new pcl::PointCloud<pcl::PointXYZRGB>);
	// Object for storing the point cloud with intensity value.
	pcl::PointCloud<pcl::PointXYZI>::Ptr cloudIntensity(new pcl::PointCloud<pcl::PointXYZI>);
	// Object for storing the intensity gradients.
	pcl::PointCloud<pcl::IntensityGradient>::Ptr gradients(new pcl::PointCloud<pcl::IntensityGradient>);
	// Object for storing the normals.
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	// Object for storing the RIFT descriptor for each point.
	pcl::PointCloud<RIFT32>::Ptr descriptors(new pcl::PointCloud<RIFT32>());

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZRGB>(argv[1], *cloudColor) != 0)
	{
		return -1;
	}

	// Note: you would usually perform downsampling now. It has been omitted here
	// for simplicity, but be aware that computation can take a long time.

	// Convert the RGB to intensity.
	pcl::PointCloudXYZRGBtoXYZI(*cloudColor, *cloudIntensity);

	// Estimate the normals.
	pcl::NormalEstimation<pcl::PointXYZI, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(cloudIntensity);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZI>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZI>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// Compute the intensity gradients.
	pcl::IntensityGradientEstimation < pcl::PointXYZI, pcl::Normal, pcl::IntensityGradient,
		pcl::common::IntensityFieldAccessor<pcl::PointXYZI> > ge;
	ge.setInputCloud(cloudIntensity);
	ge.setInputNormals(normals);
	ge.setRadiusSearch(0.03);
	ge.compute(*gradients);

	// RIFT estimation object.
	pcl::RIFTEstimation<pcl::PointXYZI, pcl::IntensityGradient, RIFT32> rift;
	rift.setInputCloud(cloudIntensity);
	rift.setSearchMethod(kdtree);
	// Set the intensity gradients to use.
	rift.setInputGradient(gradients);
	// Radius, to get all neighbors within.
	rift.setRadiusSearch(0.02);
	// Set the number of bins to use in the distance dimension.
	rift.setNrDistanceBins(4);
	// Set the number of bins to use in the gradient orientation dimension.
	rift.setNrGradientBins(8);
	// Note: you must change the output histogram size to reflect the previous values.

	rift.compute(*descriptors);
}


NARF

The Normal Aligned Radial Feature is the only descriptor here that does not take a point cloud as input. Instead, it works with range images. A range image is an image in which the value of each pixel encodes the distance to the corresponding point in the scene. When visualized, this distance is usually mapped to a color in the visible light spectrum: points that are closer to the camera would be violet, while points near the maximum sensor range would be red.

NARF also requires us to find suitable keypoints to compute the descriptor for. NARF keypoints are located near an object's corners, which also requires finding the borders (transitions from foreground to background); these are trivial to find with a range image. Because of this lengthy pipeline, I will describe the whole process in different sections.

Obtaining a range image

Because we always work with point clouds, I will now explain how you can convert one into a range image, in order to use it for the NARF descriptor. PCL provides a couple of handy classes to perform the conversion, given that you fill the camera data correctly.

A range image can be created in two ways. First, we can use spherical projection, which would give us an image similar to the ones produced by a LIDAR sensor. Second, we can use planar projection, which is better suited for camera-like sensors such as the Kinect or the Xtion, and will not have the characteristic distortion of the first one.

Spherical projection

The following code will take a point cloud and create a range image from it, using spherical projection:

#include <pcl/io/pcd_io.h>
#include <pcl/range_image/range_image.h>
#include <pcl/visualization/range_image_visualizer.h>

int
main(int argc, char** argv)
{
	// Object for storing the point cloud.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Parameters needed by the range image object:

	// Angular resolution is the angular distance between pixels.
	// Kinect: 57° horizontal FOV, 43° vertical FOV, 640x480 (chosen here).
	// Xtion: 58° horizontal FOV, 45° vertical FOV, 640x480.
	float angularResolutionX = (float)(57.0f / 640.0f * (M_PI / 180.0f));
	float angularResolutionY = (float)(43.0f / 480.0f * (M_PI / 180.0f));
	// Maximum horizontal and vertical angles. For example, for a full panoramic scan,
	// the first would be 360°. Choosing values that match the real sensor will decrease
	// computation time, but don't worry: if the values are bigger than the real ones,
	// the image will be automatically cropped to discard empty zones.
	float maxAngleX = (float)(60.0f * (M_PI / 180.0f));
	float maxAngleY = (float)(50.0f * (M_PI / 180.0f));
	// Sensor pose. Thankfully, the cloud includes the data.
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud->sensor_origin_[0],
								 cloud->sensor_origin_[1],
								 cloud->sensor_origin_[2])) *
								 Eigen::Affine3f(cloud->sensor_orientation_);
	// Noise level. If greater than 0, values of neighboring points will be averaged.
	// This would set the search radius (e.g., 0.03 == 3cm).
	float noiseLevel = 0.0f;
	// Minimum range. If set, any point closer to the sensor than this will be ignored.
	float minimumRange = 0.0f;
	// Border size. If greater than 0, a border of "unobserved" points will be left
	// in the image when it is cropped.
	int borderSize = 1;

	// Range image object.
	pcl::RangeImage rangeImage;
	rangeImage.createFromPointCloud(*cloud, angularResolutionX, angularResolutionY,
									maxAngleX, maxAngleY, sensorPose, pcl::RangeImage::CAMERA_FRAME,
									noiseLevel, minimumRange, borderSize);

	// Visualize the image.
	pcl::visualization::RangeImageVisualizer viewer("Range image");
	viewer.showRangeImage(rangeImage);
	while (!viewer.wasStopped())
	{
		viewer.spinOnce();
		// Sleep 100ms to go easy on the CPU.
		pcl_sleep(0.1);
	}
}

Here you can see an example of the output range image:


Range image of a point cloud, using spherical projection.



Planar projection

As mentioned, planar projection will give better results with clouds taken from a depth camera:

#include <pcl/io/pcd_io.h>
#include <pcl/range_image/range_image_planar.h>
#include <pcl/visualization/range_image_visualizer.h>

int
main(int argc, char** argv)
{
	// Object for storing the point cloud.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Parameters needed by the planar range image object:

	// Image size. Both Kinect and Xtion work at 640x480.
	int imageSizeX = 640;
	int imageSizeY = 480;
	// Center of projection. Here, we choose the middle of the image.
	float centerX = 640.0f / 2.0f;
	float centerY = 480.0f / 2.0f;
	// Focal length. The value seen here has been taken from the original depth images.
	// It is safe to use the same value vertically and horizontally.
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;
	// Sensor pose. Thankfully, the cloud includes the data.
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud->sensor_origin_[0],
								 cloud->sensor_origin_[1],
								 cloud->sensor_origin_[2])) *
								 Eigen::Affine3f(cloud->sensor_orientation_);
	// Noise level. If greater than 0, values of neighboring points will be averaged.
	// This would set the search radius (e.g., 0.03 == 3cm).
	float noiseLevel = 0.0f;
	// Minimum range. If set, any point closer to the sensor than this will be ignored.
	float minimumRange = 0.0f;

	// Planar range image object.
	pcl::RangeImagePlanar rangeImagePlanar;
	rangeImagePlanar.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,
			centerX, centerY, focalLengthX, focalLengthY,
			sensorPose, pcl::RangeImage::CAMERA_FRAME,
			noiseLevel, minimumRange);

	// Visualize the image.
	pcl::visualization::RangeImageVisualizer viewer("Planar range image");
	viewer.showRangeImage(rangeImagePlanar);
	while (!viewer.wasStopped())
	{
		viewer.spinOnce();
		// Sleep 100ms to go easy on the CPU.
		pcl_sleep(0.1);
	}
}


Range image of a point cloud, using planar projection.



If you prefer to do the conversion in real time while you inspect the cloud, PCL ships with an example that fetches an "openni_wrapper::DepthImage" from an OpenNI device and creates the range image from it. You can adapt the code of the example from tutorial 1 to save it to disk with the function pcl::io::saveRangeImagePlanarFilePNG().
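
For example, the planar range image from the previous snippet could be saved with a couple of extra lines (I believe the function lives in the "png_io.h" header; the file name is arbitrary):

#include <pcl/io/png_io.h>

	// Save the planar range image to disk as a PNG file.
	pcl::io::saveRangeImagePlanarFilePNG("rangeImage.png", rangeImagePlanar);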

Extracting borders

NARF keypoints are located near the edges of objects in the range image, so in order to find them, we first have to extract the borders. A border is defined as an abrupt change from foreground to background. In a range image, this can be easily seen because there is a "jump" in the depth value of two adjacent pixels.


Border detection on a range image (image from original paper).


There are three types of borders. Object borders consist of the pixels (or points) located on the very edge of an object (the outermost points still belonging to the object). Shadow borders are points in the background on the edge of occlusions (empty areas in the background due to the objects in front covering them). Notice that, when the cloud is seen from the sensor's viewpoint, object and shadow points will seem adjacent. Finally, veil points are points interpolated between the previous two which appear in scans taken with LIDAR sensors, so we do not have to worry about them here.



The algorithm basically compares a point's depth with the values of its neighbors, and if a big difference is found, we know it is due to a border. Points closer to the sensor will be marked as object borders, and the other ones as shadow borders.

PCL provides a class for extracting borders of a range image:

#include <pcl/io/pcd_io.h>
#include <pcl/range_image/range_image_planar.h>
#include <pcl/features/range_image_border_extractor.h>
#include <pcl/visualization/range_image_visualizer.h>

int
main(int argc, char** argv)
{
	// Object for storing the point cloud.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the borders.
	pcl::PointCloud<pcl::BorderDescription>::Ptr borders(new pcl::PointCloud<pcl::BorderDescription>);

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Convert the cloud to range image.
	int imageSizeX = 640, imageSizeY = 480;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud->sensor_origin_[0],
								 cloud->sensor_origin_[1],
								 cloud->sensor_origin_[2])) *
								 Eigen::Affine3f(cloud->sensor_orientation_);
	float noiseLevel = 0.0f, minimumRange = 0.0f;
	pcl::RangeImagePlanar rangeImage;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,
			centerX, centerY, focalLengthX, focalLengthY,
			sensorPose, pcl::RangeImage::CAMERA_FRAME,
			noiseLevel, minimumRange);

	// Border extractor object.
	pcl::RangeImageBorderExtractor borderExtractor(&rangeImage);

	borderExtractor.compute(*borders);

	// Visualize the borders.
	pcl::visualization::RangeImageVisualizer* viewer = NULL;
	viewer = pcl::visualization::RangeImageVisualizer::getRangeImageBordersWidget(rangeImage,
			 -std::numeric_limits<float>::infinity(),
			 std::numeric_limits<float>::infinity(),
			 false, *borders, "Borders");

	while (!viewer->wasStopped())
	{
		viewer->spinOnce();
		// Sleep 100ms to go easy on the CPU.
		pcl_sleep(0.1);
	}
}


Borders found on the range image.


You can use the extractor's "getParameters()" function to get a pcl::RangeImageBorderExtractor::Parameters struct with the settings that will be used.


Finding keypoints

Citing the original publication:

"We have the following requirements for our interest point extraction procedure:

  1. It must take information about borders and the surface structure into account.
  2. It must select positions that can be reliably detected even if the object is observed from another perspective.
  3. The points must be on positions that provide stable areas for normal estimation or the descriptor calculation in general."

The procedure is the following: for every point in the range image, a score is computed that conveys how much the surface changes in its neighborhood (this is tuned with the support size σ, which is the diameter of the sphere used to find neighboring points). Also, the dominant direction of this change is computed. Then, this direction is compared with those of the neighbors, trying to find how stable the point is (if the directions are very different, that means the point is not stable, and that the surface around it changes a lot). Points that are near the object's corners (but not exactly on the very edge) will make good keypoints, while still being stable enough.



In PCL, NARF keypoints can be found this way:

#include <pcl/io/pcd_io.h>
#include <pcl/range_image/range_image_planar.h>
#include <pcl/features/range_image_border_extractor.h>
#include <pcl/keypoints/narf_keypoint.h>
#include <pcl/visualization/range_image_visualizer.h>

int
main(int argc, char** argv)
{
	// Object for storing the point cloud.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the keypoints' indices.
	pcl::PointCloud<int>::Ptr keypoints(new pcl::PointCloud<int>);

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Convert the cloud to range image.
	int imageSizeX = 640, imageSizeY = 480;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud->sensor_origin_[0],
								 cloud->sensor_origin_[1],
								 cloud->sensor_origin_[2])) *
								 Eigen::Affine3f(cloud->sensor_orientation_);
	float noiseLevel = 0.0f, minimumRange = 0.0f;
	pcl::RangeImagePlanar rangeImage;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,
			centerX, centerY, focalLengthX, focalLengthY,
			sensorPose, pcl::RangeImage::CAMERA_FRAME,
			noiseLevel, minimumRange);

	pcl::RangeImageBorderExtractor borderExtractor;
	// Keypoint detection object.
	pcl::NarfKeypoint detector(&borderExtractor);
	detector.setRangeImage(&rangeImage);
	// The support size influences how big the surface of interest will be,
	// when finding keypoints from the border information.
	detector.getParameters().support_size = 0.2f;

	detector.compute(*keypoints);

	// Visualize the keypoints.
	pcl::visualization::RangeImageVisualizer viewer("NARF keypoints");
	viewer.showRangeImage(rangeImage);
	for (size_t i = 0; i < keypoints->points.size(); ++i)
	{
		viewer.markPoint(keypoints->points[i] % rangeImage.width,
						 keypoints->points[i] / rangeImage.width,
						 // Set the color of the pixel to red (the background
						 // circle is already that color). All other parameters
						 // are left untouched, check the API for more options.
						 pcl::visualization::Vector3ub(1.0f, 0.0f, 0.0f));
	}

	while (!viewer.wasStopped())
	{
		viewer.spinOnce();
		// Sleep 100ms to go easy on the CPU.
		pcl_sleep(0.1);
	}
}


NARF keypoints found on the range image.



Computing the descriptor

We have created the range image from a point cloud, and we have extracted the borders in order to find good keypoints. Now it is time to compute the NARF descriptor for each keypoint.

The NARF descriptor encodes information about surface changes around a point. First, a local range patch is created around the point. It is like a small range image centered at that point, aligned with the normal (it would seem as if we were looking at the point along the normal). Then, a star pattern with n beams is overlaid onto the patch, also centered at the point. For every beam, a value is computed, that reflects how much the surface under it changes. The stronger the change is, and the closer to the center it is, the higher the final value will be. The n resulting values compose the final descriptor.


Computing the NARF descriptor for a keypoint (image from original paper).


The descriptor, as described so far, is not invariant to rotations around the normal. To achieve this, the whole possible 360 degrees are binned into a histogram. The value of each bin is computed from the descriptor values according to the angle. Then, the bin with the highest value is considered the dominant orientation, and the descriptor is shifted according to it.

#include <pcl/io/pcd_io.h>
#include <pcl/range_image/range_image_planar.h>
#include <pcl/features/range_image_border_extractor.h>
#include <pcl/keypoints/narf_keypoint.h>
#include <pcl/features/narf_descriptor.h>

int
main(int argc, char** argv)
{
	// Object for storing the point cloud.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the keypoints' indices.
	pcl::PointCloud<int>::Ptr keypoints(new pcl::PointCloud<int>);
	// Object for storing the NARF descriptors.
	pcl::PointCloud<pcl::Narf36>::Ptr descriptors(new pcl::PointCloud<pcl::Narf36>);

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Convert the cloud to range image.
	int imageSizeX = 640, imageSizeY = 480;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud->sensor_origin_[0],
								 cloud->sensor_origin_[1],
								 cloud->sensor_origin_[2])) *
								 Eigen::Affine3f(cloud->sensor_orientation_);
	float noiseLevel = 0.0f, minimumRange = 0.0f;
	pcl::RangeImagePlanar rangeImage;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,
			centerX, centerY, focalLengthX, focalLengthY,
			sensorPose, pcl::RangeImage::CAMERA_FRAME,
			noiseLevel, minimumRange);

	// Extract the keypoints.
	pcl::RangeImageBorderExtractor borderExtractor;
	pcl::NarfKeypoint detector(&borderExtractor);
	detector.setRangeImage(&rangeImage);
	detector.getParameters().support_size = 0.2f;
	detector.compute(*keypoints);

	// The NARF estimator needs the indices in a vector, not a cloud.
	std::vector<int> keypoints2;
	keypoints2.resize(keypoints->points.size());
	for (unsigned int i = 0; i < keypoints->size(); ++i)
		keypoints2[i] = keypoints->points[i];
	// NARF estimation object.
	pcl::NarfDescriptor narf(&rangeImage, &keypoints2);
	// Support size: choose the same value you used for keypoint extraction.
	narf.getParameters().support_size = 0.2f;
	// If true, the rotation invariant version of NARF will be used. The histogram
	// will be shifted according to the dominant orientation to provide robustness to
	// rotations around the normal.
	narf.getParameters().rotation_invariant = true;

	narf.compute(*descriptors);
}


RoPS

The Rotational Projection Statistics (RoPS) feature is a bit different from the other descriptors because it works with a triangle mesh, so a preliminary triangulation step is needed to generate this mesh from the cloud. Apart from that, most concepts are similar.

In order to compute RoPS for a keypoint, the local surface is cropped according to a support radius, so only points and triangles lying inside are taken into account. Then, a local reference frame (LRF) is computed, giving the descriptor its rotational invariance. A coordinate system is created with the point as the origin, and the axes aligned with the LRF. Then, for every axis, several steps are performed.

First, the local surface is rotated around the current axis. The angle is determined by one of the parameters, which sets the number of rotations. Then, all points in the local surface are projected onto the XY, XZ and YZ planes. For each projection, statistical information about the distribution of the projected points is computed, and the values are concatenated to form the final descriptor.

#include <pcl/io/pcd_io.h>
#include <pcl/point_types_conversion.h>
#include <pcl/features/normal_3d.h>
#include <pcl/surface/gp3.h>
#include <pcl/features/rops_estimation.h>

// A handy typedef.
typedef pcl::Histogram<135> ROPS135;

int
main(int argc, char** argv)
{
	// Object for storing the point cloud.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the normals.
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	// Object for storing both the points and the normals.
	pcl::PointCloud<pcl::PointNormal>::Ptr cloudNormals(new pcl::PointCloud<pcl::PointNormal>);
	// Object for storing the ROPS descriptor for each point.
	pcl::PointCloud<ROPS135>::Ptr descriptors(new pcl::PointCloud<ROPS135>());

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Estimate the normals.
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(cloud);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// Perform triangulation.
	pcl::concatenateFields(*cloud, *normals, *cloudNormals);
	pcl::search::KdTree<pcl::PointNormal>::Ptr kdtree2(new pcl::search::KdTree<pcl::PointNormal>);
	kdtree2->setInputCloud(cloudNormals);
	pcl::GreedyProjectionTriangulation<pcl::PointNormal> triangulation;
	pcl::PolygonMesh triangles;
	triangulation.setSearchRadius(0.025);
	triangulation.setMu(2.5);
	triangulation.setMaximumNearestNeighbors(100);
	triangulation.setMaximumSurfaceAngle(M_PI / 4); // 45 degrees.
	triangulation.setNormalConsistency(false);
	triangulation.setMinimumAngle(M_PI / 18); // 10 degrees.
	triangulation.setMaximumAngle(2 * M_PI / 3); // 120 degrees.
	triangulation.setInputCloud(cloudNormals);
	triangulation.setSearchMethod(kdtree2);
	triangulation.reconstruct(triangles);

	// Note: you should only compute descriptors for chosen keypoints. It has
	// been omitted here for simplicity.

	// RoPs estimation object.
	pcl::ROPSEstimation<pcl::PointXYZ, ROPS135> rops;
	rops.setInputCloud(cloud);
	rops.setSearchMethod(kdtree);
	rops.setRadiusSearch(0.03);
	rops.setTriangles(triangles.polygons);
	// Number of partition bins that is used for distribution matrix calculation.
	rops.setNumberOfPartitionBins(5);
	// The greater the number of rotations is, the bigger the resulting descriptor.
	// Make sure to change the histogram size accordingly.
	rops.setNumberOfRotations(3);
	// Support radius that is used to crop the local surface of the point.
	rops.setSupportRadius(0.025);

	rops.compute(*descriptors);
}


Global descriptors

Global descriptors encode object geometry. They are not computed for individual points, but for a whole cluster that represents an object. Because of this, a preprocessing step (segmentation) is required, in order to retrieve possible candidates.

Global descriptors are used for object recognition and classification, geometric analysis (object type, shape...), and pose estimation.

You should also know that many local descriptors can also be used as global ones. This can be done with descriptors that use a radius to search for neighbors (as PFH does). The trick is to compute it for one single point in the object cluster, and set the radius to the maximum possible distance between any two points (so all points in the cluster are considered as neighbors).
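
As a rough sketch of that trick (reusing the FPFH estimation object from before, and assuming "cloud" now holds an already segmented object cluster): compute the cluster's bounding box diagonal, use it as the search radius, and restrict the estimation to a single point.

#include <pcl/common/common.h>

	// Diagonal of the cluster's bounding box, to use as search radius.
	pcl::PointXYZ minPoint, maxPoint;
	pcl::getMinMax3D(*cloud, minPoint, maxPoint);
	float diagonal = (maxPoint.getVector3fMap() - minPoint.getVector3fMap()).norm();

	// Compute the descriptor for a single (arbitrary) point of the cluster...
	pcl::IndicesPtr indices(new std::vector<int>(1, 0));
	fpfh.setIndices(indices);
	// ...with a radius big enough for every other point to count as a neighbor.
	fpfh.setRadiusSearch(diagonal);
	// The output will then contain one single, "global" FPFH descriptor.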

VFH

The Viewpoint Feature Histogram is based on the FPFH. Because the latter is invariant to the object's pose, the authors decided to expand it by including information about the viewpoint. Also, the FPFH is estimated once for the whole cluster, not for every point.

The VFH is made up of two parts: a viewpoint direction component and an extended FPFH component. To compute the first one, the object's centroid is found, which is the point that results from averaging the X, Y and Z coordinates of all points. Then, the vector between the viewpoint (the position of the sensor) and this centroid is computed and normalized. Finally, for every point in the cluster, the angle between this vector and the point's normal is calculated, and the result is binned into a histogram. The vector is translated to each point when computing the angle because this makes the descriptor scale invariant.

The second component is computed like the FPFH (that results in 3 histograms for the 3 angular features, α, φ and θ), with some differences: it is only computed for the centroid, using the computed viewpoint direction vector as its normal (as the point, obviously, does not have a normal), and setting all the cluster's points as neighbors.



The resulting 4 histograms (1 for the viewpoint component, 3 for the extended FPFH component) are concatenated to build the final VFH descriptor. By default, the bins are normalized using the total number of points in the cluster. This makes the VFH descriptor invariant to scale.


VFH histogram (image from original paper).


The PCL implementation computes an additional fifth histogram with the distances of the cluster points to the centroid (the Shape Distribution Component, SDC), increasing the size of the output descriptor from 263 to 308. The SDC is taken from the CVFH descriptor that we will see in the next section, and makes the result more robust.

The VFH of an already clustered object can be computed this way:

#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/vfh.h>

int
main(int argc, char** argv)
{
	// Cloud for storing the object.
	pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the normals.
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	// Object for storing the VFH descriptor.
	pcl::PointCloud<pcl::VFHSignature308>::Ptr descriptor(new pcl::PointCloud<pcl::VFHSignature308>);

	// Note: you should have performed preprocessing to cluster out the object
	// from the cloud, and save it to this individual file.

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
	{
		return -1;
	}

	// Estimate the normals.
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(object);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// VFH estimation object.
	pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> vfh;
	vfh.setInputCloud(object);
	vfh.setInputNormals(normals);
	vfh.setSearchMethod(kdtree);
	// Optionally, we can normalize the bins of the resulting histogram,
	// using the total number of points.
	vfh.setNormalizeBins(true);
	// Also, we can normalize the SDC with the maximum size found between
	// the centroid and any of the cluster's points.
	vfh.setNormalizeDistance(false);

	vfh.compute(*descriptor);
}

Because only one VFH descriptor is computed for the whole cluster, the size of the cloud object that stores the result will be 1.
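
Since the result is just a histogram of 308 float values, two VFH descriptors (say, "descriptorA" and "descriptorB", computed as above for two different clusters) can be compared with any histogram distance. Here is a minimal sketch using the plain Euclidean distance; in practice, a metric like Chi-Square together with a k-d tree search tends to be used instead:

#include <cmath>
#include <iostream>

	// Accumulate the squared differences between the two histograms, bin by bin.
	float distance = 0.0f;
	for (int i = 0; i < 308; i++)
	{
		float d = descriptorA->points[0].histogram[i] - descriptorB->points[0].histogram[i];
		distance += d * d;
	}
	distance = std::sqrt(distance);
	// The smaller the distance, the more similar the two objects are.
	std::cout << "Distance between descriptors: " << distance << std::endl;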


CVFH

The original VFH descriptor is not robust to occlusion or other sensor artifacts, or measurement errors. If the object cluster is missing many points, the resulting computed centroid will differ from the original, altering the final descriptor, and preventing a positive match from being found. Because of that, the Clustered Viewpoint Feature Histogram (CVFH) was introduced.

The idea is very simple: instead of computing a single VFH histogram for the whole cluster, the object is first divided into stable, smooth regions using region-growing segmentation, which enforces several constraints on the distances and normal differences of the points belonging to each region. Then, a VFH is computed for every region. Thanks to this, an object can be found in a scene as long as at least one of its regions is fully visible.



Additionally, a Shape Distribution Component (SDC) is also computed and included. It encodes information about the distribution of the points around the region's centroid, measuring the distances. The SDC makes it possible to differentiate objects with similar characteristics (size and normal distribution), such as telling two planar surfaces apart.

The authors proposed to discard the histogram normalization step that is performed in VFH. This has the effect of making the descriptor dependent on scale, so an object of a certain size would not match a bigger or smaller copy of itself. It also makes CVFH more robust to occlusion.

CVFH is invariant to the camera roll angle, like most global descriptors. This is so because rotations about that camera axis do not change the observable geometry that descriptors are computed from, limiting the pose estimation to 5 DoF. The use of a Camera Roll Histogram (CRH) has been proposed to overcome this.

#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/cvfh.h>

int
main(int argc, char** argv)
{
	// Cloud for storing the object.
	pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the normals.
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	// Object for storing the CVFH descriptors.
	pcl::PointCloud<pcl::VFHSignature308>::Ptr descriptors(new pcl::PointCloud<pcl::VFHSignature308>);

	// Note: you should have performed preprocessing to cluster out the object
	// from the cloud, and save it to this individual file.

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
	{
		return -1;
	}

	// Estimate the normals.
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(object);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// CVFH estimation object.
	pcl::CVFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> cvfh;
	cvfh.setInputCloud(object);
	cvfh.setInputNormals(normals);
	cvfh.setSearchMethod(kdtree);
	// Set the maximum allowable deviation of the normals,
	// for the region segmentation step.
	cvfh.setEPSAngleThreshold(5.0 / 180.0 * M_PI); // 5 degrees.
	// Set the curvature threshold (maximum disparity between curvatures),
	// for the region segmentation step.
	cvfh.setCurvatureThreshold(1.0);
	// Set to true to normalize the bins of the resulting histogram,
	// using the total number of points. Note: enabling it will make CVFH
	// invariant to scale just like VFH, but the authors encourage the opposite.
	cvfh.setNormalizeBins(false);

	cvfh.compute(*descriptors);
}

You can further customize the segmentation step with "setClusterTolerance()" (to set the maximum Euclidean distance between points in the same cluster) and "setMinPoints()". The size of the output will be equal to the number of regions the object was divided into. Also, check the functions "getCentroidClusters()" and "getCentroidNormalClusters()"; you can use them to get information about the centroids used to compute the different CVFH descriptors.
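
For instance, continuing the example above (the values given are arbitrary):

	// Maximum Euclidean distance between points of the same smooth region (arbitrary value).
	cvfh.setClusterTolerance(0.015f);
	// Minimum number of points a region must have to get its own descriptor (arbitrary value).
	cvfh.setMinPoints(50);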


OUR-CVFH

The Oriented, Unique and Repeatable CVFH expands the previous descriptor, adding the computation of an unique reference frame to make it more robust.

OUR-CVFH relies on the use of Semi-Global Unique Reference Frames (SGURFs), which are repeatable coordinate systems computed for each region. Not only do they remove the invariance to the camera roll angle and allow the 6 DoF pose to be extracted directly without additional steps, but they also improve the spatial descriptiveness.

The first part of the computation is akin to CVFH, but after segmentation, the points in each region are filtered once more according to the difference between their normals and the region's average normal. This results in better shaped regions, improving the estimation of the Reference Frames (RFs).

After this, the SGURF is computed for each region. Disambiguation is performed to decide the sign of the axes, according to the points' distribution. If this is not enough and the sign remains ambiguous, multiple RFs will need to be created to account for it. Finally, the OUR-CVFH descriptor is computed. The original Shape Distribution Component (SDC) is discarded, and the surface is now described according to the RFs.


SGURF frame and resulting histogram of a region (image from original paper).


#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/our_cvfh.h>

int
main(int argc, char** argv)
{
	// Cloud for storing the object.
	pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the normals.
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	// Object for storing the OUR-CVFH descriptors.
	pcl::PointCloud<pcl::VFHSignature308>::Ptr descriptors(new pcl::PointCloud<pcl::VFHSignature308>);

	// Note: you should have performed preprocessing to cluster out the object
	// from the cloud, and save it to this individual file.

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
	{
		return -1;
	}

	// Estimate the normals.
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(object);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// OUR-CVFH estimation object.
	pcl::OURCVFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> ourcvfh;
	ourcvfh.setInputCloud(object);
	ourcvfh.setInputNormals(normals);
	ourcvfh.setSearchMethod(kdtree);
	ourcvfh.setEPSAngleThreshold(5.0 / 180.0 * M_PI); // 5 degrees.
	ourcvfh.setCurvatureThreshold(1.0);
	ourcvfh.setNormalizeBins(false);
	// Set the minimum axis ratio between the SGURF axes. At the disambiguation phase,
	// this will decide if additional Reference Frames need to be created, if ambiguous.
	ourcvfh.setAxisRatio(0.8);

	ourcvfh.compute(*descriptors);
}

You can use the "getTransforms()" function to get the transformations aligning the cloud to the corresponding SGURF.
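
A minimal sketch of retrieving them (the exact container type is an assumption on my part):

	// One transformation per output descriptor, aligning the input cloud with the
	// corresponding SGURF (the aligned-allocator vector type is an assumption).
	std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f> > transforms;
	ourcvfh.getTransforms(transforms);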


ESF

The Ensemble of Shape Functions (ESF) is a combination of 3 different shape functions that describe certain properties of the cloud's points: distances, angles and area. It is unique among the descriptors seen here in that it does not require normal information. Actually, it does not need any preprocessing at all, as it is robust to noise and incomplete surfaces.

The algorithm uses a voxel grid as an approximation of the real surface. It iterates through all the points in the cloud: for every iteration, 3 random points are chosen. For these points, the shape functions are computed:

  • D2: this function computes the distances between point pairs (3 overall). Then, for every pair, it checks if the line that connects both points lies entirely inside the surface, entirely outside (crossing free space), or both. Depending on this, the distance value will be binned to one of three possible histograms: IN, OUT or MIXED.
  • D2 ratio: an additional histogram for the ratio between parts of the line inside the surface, and parts outside. This value will be 0 if the line is completely outside, 1 if completely inside, and some value in between if mixed.
  • D3: this computes the square root of the area of the triangle formed by the 3 points. Like D2, the result is also classified as IN, OUT or MIXED, each with its own histogram.
  • A3: this function computes the angle formed by the points. Then, the value is binned depending on how the line opposite to the angle is classified (once again, as IN, OUT or MIXED).


ESF descriptor (image from this paper).


After the loop is over, we are left with 10 subhistograms (IN, OUT and MIXED for D2, D3 and A3, and an additional one for the ratio). Each one has 64 bins, so the size of the final ESF descriptor is 640.

#include <pcl/io/pcd_io.h>
#include <pcl/features/esf.h>

int
main(int argc, char** argv)
{
	// Cloud for storing the object.
	pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the ESF descriptor.
	pcl::PointCloud<pcl::ESFSignature640>::Ptr descriptor(new pcl::PointCloud<pcl::ESFSignature640>);

	// Note: you should have performed preprocessing to cluster out the object
	// from the cloud, and save it to this individual file.

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
	{
		return -1;
	}

	// ESF estimation object.
	pcl::ESFEstimation<pcl::PointXYZ, pcl::ESFSignature640> esf;
	esf.setInputCloud(object);

	esf.compute(*descriptor);
}
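
If you want to inspect the resulting values (for example, to check the subhistogram layout described above), the output cloud contains a single point whose "histogram" member is a plain array of 640 floats. A minimal sketch, to be appended after the "compute()" call (it needs <iostream>):

// Print all 640 bin values of the (single) ESF descriptor.
if (!descriptor->points.empty())
{
	for (int bin = 0; bin < 640; ++bin)
		std::cout << descriptor->points[0].histogram[bin] << " ";
	std::cout << std::endl;
}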


GFPFH

As you may have guessed, GFPFH stands for Global Fast Point Feature Histogram, the global version of the FPFH descriptor. GFPFH was designed for the task of helping a robot navigate its environment, having some context of the objects around it.

The first step before being able to compute the descriptor is surface categorization. A set of logical primitives (the classes, or categories) is created, which depends on the type of objects we expect the robot to find in the scene. For example, if we know there will be a coffee mug, we create three: one for the handle, and the other two for the outer and inner faces. Then, FPFH descriptors are computed, and everything is fed to a Conditional Random Field (CRF) algorithm. The CRF will label each surface with one of the previous categories, so we end up with a cloud where each point has been classified depending on the type of object (or object region) it belongs to.


Classification of objects made with FPFH and CRF (image from original paper).


Now, the GFPFH descriptor can be computed from the result of the classification step. It will encode what the object is made of, so the robot can easily recognize it. First, an octree is created, dividing the object into voxel leaves. For every leaf, a set of probabilities is created, one for each class. Each one stores the probability of that leaf belonging to the class, and is computed from the number of points in that leaf that have been labelled with that class, relative to the total number of points. Then, for every pair of leaves in the octree, a line is cast connecting them. Every leaf in its path is checked for occupancy, and the result is stored in a histogram. If the leaf is empty (free space), a value of 0 is saved. Otherwise, the leaf probabilities are used.
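
As an illustrative sketch (not PCL API; the function and containers are hypothetical) of how the class probabilities of a single leaf could be obtained from its point labels:

#include <map>
#include <vector>

// Hypothetical helper: for one octree leaf, compute the probability of each class
// as the fraction of the leaf's points carrying that class label.
std::map<int, float>
computeLeafProbabilities(const std::vector<int>& pointLabels)
{
	std::map<int, float> probabilities;
	if (pointLabels.empty())
		return probabilities;

	// Count how many points carry each label...
	for (size_t i = 0; i < pointLabels.size(); ++i)
		probabilities[pointLabels[i]] += 1.0f;
	// ...and normalize by the total number of points in the leaf.
	for (std::map<int, float>::iterator it = probabilities.begin(); it != probabilities.end(); ++it)
		it->second /= pointLabels.size();

	return probabilities;
}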


Computing the GFPFH with a voxel grid (image from original paper).


The following code will compute the GFPFH for a cloud with label information. The categorization step is up to you, as it depends largely on the type of scene and the use you intend to give it.

#include <pcl/io/pcd_io.h>
#include <pcl/features/gfpfh.h>

int
main(int argc, char** argv)
{
	// Cloud for storing the object.
	pcl::PointCloud<pcl::PointXYZL>::Ptr object(new pcl::PointCloud<pcl::PointXYZL>);
	// Object for storing the GFPFH descriptor.
	pcl::PointCloud<pcl::GFPFHSignature16>::Ptr descriptor(new pcl::PointCloud<pcl::GFPFHSignature16>);

	// Note: you should have performed preprocessing to cluster out the object
	// from the cloud, and save it to this individual file.

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZL>(argv[1], *object) != 0)
	{
		return -1;
	}

	// Note: you should now perform classification on the cloud's points. See the
	// original paper for more details. For this example, we will just consider 4
	// different classes, and label each point with one of them in a cyclic way.
	for (size_t i = 0; i < object->points.size(); ++i)
	{
		object->points[i].label = 1 + i % 4;
	}

	// GFPFH estimation object.
	pcl::GFPFHEstimation<pcl::PointXYZL, pcl::PointXYZL, pcl::GFPFHSignature16> gfpfh;
	gfpfh.setInputCloud(object);
	// Set the object that contains the labels for each point. Thanks to the
	// PointXYZL type, we can use the same object we store the cloud in.
	gfpfh.setInputLabels(object);
	// Set the size of the octree leaves to 1cm (cubic).
	gfpfh.setOctreeLeafSize(0.01);
	// Set the number of classes the cloud has been labelled with (default is 16).
	gfpfh.setNumberOfClasses(4);

	gfpfh.compute(*descriptor);
}


GRSD

The global version of the Radius-based Surface Descriptor works in a similar fashion to GFPFH. A voxelization and a surface categorization step are performed beforehand, labelling all surface patches according to the geometric category (plane, cylinder, edge, rim, sphere), using RSD. Then, the whole cluster is classified into one of these categories, and the GRSD descriptor is computed from this.


Classification of objects for GRSD and resulting histogram (image from original paper).


To compute it:

#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/grsd.h>

int
main(int argc, char** argv)
{
	// Object for storing the point cloud.
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
	// Object for storing the normals.
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	// Object for storing the GRSD descriptor.
	pcl::PointCloud<pcl::GRSDSignature21>::Ptr descriptors(new pcl::PointCloud<pcl::GRSDSignature21>());

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) != 0)
	{
		return -1;
	}

	// Note: you would usually perform downsampling now. It has been omitted here
	// for simplicity, but be aware that computation can take a long time.

	// Estimate the normals.
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(cloud);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// GRSD estimation object.
	pcl::GRSDEstimation<pcl::PointXYZ, pcl::Normal, pcl::GRSDSignature21> grsd;
	grsd.setInputCloud(cloud);
	grsd.setInputNormals(normals);
	grsd.setSearchMethod(kdtree);
	// Search radius, to look for neighbors. Note: the value given here has to be
	// larger than the radius used to estimate the normals.
	grsd.setRadiusSearch(0.05);
	
	grsd.compute(*descriptors);
}

NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).


Saving and loading

You can save a descriptor to a file just like any other cloud type. There is one caveat, though: if you are using a descriptor that has its own custom type, like "PFHSignature125", everything will work fine. But with descriptors that do not have one (where you have to use "pcl::Histogram<>"), you will get the error "POINT_TYPE_NOT_PROPERLY_REGISTERED". In order to save to or load from a file, PCL's IO functions need to know the number, type and size of the fields. To solve this, you have to properly register a new point type for your descriptor. For example, this will work for the RoPS descriptor example we saw earlier:

POINT_CLOUD_REGISTER_POINT_STRUCT(ROPS135,
                                  (float[135], histogram, histogram)
)

Add the previous snippet to your code (changing it as needed), and you will be able to save and load the descriptors as usual.
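
For example, once the type has been registered, saving and loading could look like the following sketch (it assumes the "ROPS135" typedef and a descriptor cloud named "descriptors", as in the earlier RoPS example, plus <pcl/io/pcd_io.h>; the file name is arbitrary):

// Save the descriptor cloud to disk (binary format keeps the file smaller).
pcl::io::savePCDFileBinary("descriptors.pcd", *descriptors);

// Load it back later.
pcl::PointCloud<ROPS135>::Ptr loaded(new pcl::PointCloud<ROPS135>);
if (pcl::io::loadPCDFile<ROPS135>("descriptors.pcd", *loaded) != 0)
{
	// Reading failed.
}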

Visualization

Sometimes you may want to see a visual representation of a descriptor, for example to analyze how the data is distributed over the bins. Because descriptors are stored as histograms, this is trivial to do. PCL offers a couple of classes for it.

PCLHistogramVisualizer

"PCLHistogramVisualizer" is the simplest way to plot a histogram. The class has little functionality, but it does its job. A single call is enough, giving it the histogram and its size:

#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/vfh.h>
#include <pcl/visualization/histogram_visualizer.h>

int
main(int argc, char** argv)
{
	// Clouds for storing everything.
	pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	pcl::PointCloud<pcl::VFHSignature308>::Ptr descriptor(new pcl::PointCloud<pcl::VFHSignature308>);

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
	{
		return -1;
	}

	// Estimate the normals.
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(object);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// Estimate VFH descriptor.
	pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> vfh;
	vfh.setInputCloud(object);
	vfh.setInputNormals(normals);
	vfh.setSearchMethod(kdtree);
	vfh.setNormalizeBins(true);
	vfh.setNormalizeDistance(false);
	vfh.compute(*descriptor);

	// Plotter object.
	pcl::visualization::PCLHistogramVisualizer viewer;
	// We need to set the size of the descriptor beforehand.
	viewer.addFeatureHistogram(*descriptor, 308);

	viewer.spin();
}


VFH histogram seen with the Histogram Visualizer.



PCLPlotter

This class has all the methods from "PCLHistogramVisualizer" (which will be deprecated soon) plus a lot more features. The code is almost the same:

#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/vfh.h>
#include <pcl/visualization/pcl_plotter.h>

int
main(int argc, char** argv)
{
	// Clouds for storing everything.
	pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
	pcl::PointCloud<pcl::VFHSignature308>::Ptr descriptor(new pcl::PointCloud<pcl::VFHSignature308>);

	// Read a PCD file from disk.
	if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *object) != 0)
	{
		return -1;
	}

	// Estimate the normals.
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
	normalEstimation.setInputCloud(object);
	normalEstimation.setRadiusSearch(0.03);
	pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
	normalEstimation.setSearchMethod(kdtree);
	normalEstimation.compute(*normals);

	// Estimate VFH descriptor.
	pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> vfh;
	vfh.setInputCloud(object);
	vfh.setInputNormals(normals);
	vfh.setSearchMethod(kdtree);
	vfh.setNormalizeBins(true);
	vfh.setNormalizeDistance(false);
	vfh.compute(*descriptor);

	// Plotter object.
	pcl::visualization::PCLPlotter plotter;
	// We need to set the size of the descriptor beforehand.
	plotter.addFeatureHistogram(*descriptor, 308);

	plotter.plot();
}


VFH histogram seen with the PCLPlotter class.


If you have raw data (such as a vector of floats), you can use the "addHistogramData()" function to plot it as a histogram.
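
For instance, a minimal sketch (assuming the overload of "addHistogramData()" that takes a vector of doubles and a bin count):

#include <pcl/visualization/pcl_plotter.h>
#include <vector>

int
main(int argc, char** argv)
{
	// Some raw values to plot; here they are just generated for the example.
	std::vector<double> data;
	for (int i = 0; i < 1000; ++i)
		data.push_back((i % 100) * 0.01);

	// Plotter object.
	pcl::visualization::PCLPlotter plotter;
	// Bin the raw values into 20 bins and display them as a histogram.
	plotter.addHistogramData(data, 20);

	plotter.plot();
}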


PCL Viewer

This program, included with PCL, will also let you open and visualize a saved descriptor. Internally, it uses PCLPlotter. You can invoke the viewer from the command line like this, which makes it very handy:

pcl_viewer <descriptor_file>


VFH histogram seen with pcl_viewer.




Go to root: PhD-3D-Object-Tracking

Links to articles:

PCL/OpenNI tutorial 0: The very basics

PCL/OpenNI tutorial 1: Installing and testing

PCL/OpenNI tutorial 2: Cloud processing (basic)

PCL/OpenNI tutorial 3: Cloud processing (advanced)

PCL/OpenNI tutorial 4: 3D object recognition (descriptors)

PCL/OpenNI tutorial 5: 3D object recognition (pipeline)

PCL/OpenNI troubleshooting