<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://robotica.unileon.es/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Victorm</id>
		<title>robotica.unileon.es - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://robotica.unileon.es/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Victorm"/>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Special:Contributions/Victorm"/>
		<updated>2026-04-30T12:54:57Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.0</generator>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=TFM-Nao-Goalkeeper&amp;diff=4874</id>
		<title>TFM-Nao-Goalkeeper</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=TFM-Nao-Goalkeeper&amp;diff=4874"/>
				<updated>2016-04-20T18:54:03Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* '''Project name:''' TFM-Nao-Goalkeeper&lt;br /&gt;
* '''Dates:''' October 2010 - July 2011&lt;br /&gt;
* '''Degree:''' MSc&lt;br /&gt;
* '''Author:''' [[User:Victorm | Víctor Rodríguez]]&lt;br /&gt;
* '''Contact:''' vrodm [at] unileon [dot] es&lt;br /&gt;
* '''Tags:''' ai, state machine, robocup, goalkeeper&lt;br /&gt;
* '''Technologies:''' nao, naoqi, robocup, c++, cmake, svn, bica&lt;br /&gt;
* '''Status:''' Finished (used in [http://www.robocup2011.org RoboCup 2011])&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Nao articles=&lt;br /&gt;
&lt;br /&gt;
Mostly deprecated by now, but here they are:&lt;br /&gt;
&lt;br /&gt;
[[Nao tutorial 1: First steps]]&lt;br /&gt;
&lt;br /&gt;
[[Nao tutorial 2: First module]]&lt;br /&gt;
&lt;br /&gt;
[[Nao troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Activities&amp;diff=4837</id>
		<title>Activities</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Activities&amp;diff=4837"/>
				<updated>2016-03-01T09:35:17Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Last Events=&lt;br /&gt;
&lt;br /&gt;
==3D Scanning and Printing==&lt;br /&gt;
&lt;br /&gt;
During the last two weeks, Francisco Sedano gave a brief introduction to 3D prototyping technologies that can be applied to a wide variety of engineering applications. The training gave the students the knowledge to develop and use 3D scanning and 3D printing technologies.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:20160218_195229.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://robotica.unileon.es/mediawiki/documents/escaneado_triptico.pdf Scanning Course]&lt;br /&gt;
&lt;br /&gt;
[http://robotica.unileon.es/mediawiki/documents/impresoras_3D.pdf 3D Printing Course]&lt;br /&gt;
&lt;br /&gt;
==Francisco Lera's PhD dissertation==&lt;br /&gt;
&lt;br /&gt;
Francisco J. Lera just became the newest doctor of the group. His defense was held on December 11th, and he was awarded a &amp;quot;Sobresaliente Cum Laude&amp;quot;. Congratulations, Fran!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:tesis-fran.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RoCKIn 2015 (Lisboa) ==&lt;br /&gt;
&lt;br /&gt;
[http://rockinrobotchallenge.eu/rockin2015.php RoCKIn2015], the second competition event in the EU-funded robotics project RoCKIn, was held in Lisbon (Portugal), 21-23 November 2015.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:RoCKIn2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It was a joint event with [https://web.fe.up.pt/~robot2015/ ROBOT'2015], the Second Iberian Robotics Conference, where we also presented [[a paper]].&lt;br /&gt;
&lt;br /&gt;
==RoCKIn Event 2014 (Toulouse)==&lt;br /&gt;
&lt;br /&gt;
(main article of this event [[RoCKInChallengeToulouse | here]])&lt;br /&gt;
&lt;br /&gt;
==RoCKIn Camp 2014 (Rome)==&lt;br /&gt;
&lt;br /&gt;
(main article of this event [[RoCKIn2014 | here]])&lt;br /&gt;
&lt;br /&gt;
The team was in Rome testing the MYRABot robot. There, we learned different topics about vision, grasping, and natural language understanding. We tested the MB+ (MYRABot+) robot, a modification of MYRABot. Other robots were introduced in this competition, such as REEM from PAL Robotics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:MBPAL1.JPG|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Hack For Good 2013==&lt;br /&gt;
&lt;br /&gt;
Proposals for [http://hackforgood.net HackForGood] can be found [[proposals-hackforfood-2013 | here]].&lt;br /&gt;
&lt;br /&gt;
=Seminars=&lt;br /&gt;
&lt;br /&gt;
* European Robotics Week Conference [http://www.eu-robotics.net/eurobotics-week/events-2013/low-cost-service-robots-ule-watermelon-project-for-rockinhome-challenge.html Low Cost Robots for RoCKIn Challenge]. November 27th, 2013.&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2012|Introduction to Mobile Robot Programming (4th edition)]]. October 22nd to 26th, 2012.&lt;br /&gt;
&lt;br /&gt;
* Conference ''&amp;quot;Augmented reality for elderly medication control using a mobile robot&amp;quot;'' at the Curso de Verano [http://catedratelefonica.unileon.es/2012/07/09/participacion-de-la-catedra-en-el-curso-de-verano-envejecimiento-activo-una-aproximacion-a-la-realidad-actual/ Envejecimiento Activo].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-Curso-Verano-2012.jpeg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Conference ''&amp;quot;The role of edutainment robotics in mechatronics research and teaching&amp;quot;'' by [http://www.ipb.pt/~jllima/ José Luis Lima] and José Gonçalves from the [http://www.ipb.pt Polytechnic Institute of Bragança], on November 4, 2011.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:CharlaIPB-2011.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2011 | Mobile Robot Programming, medium level (1st edition)]]. March 2011.&lt;br /&gt;
&lt;br /&gt;
* Seminar on BICA programming in the SPITeam for the RoboCup2011. Instructors: Francisco Martín and Carlos E. Agüero from [http://www.robotica-urjc.es URJC Robotics Group].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;wikiflv width=&amp;quot;300&amp;quot; height=&amp;quot;250&amp;quot; logo=&amp;quot;true&amp;quot;&amp;gt;/videos/CursoBica.flv&amp;lt;/wikiflv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2010|Introduction to Mobile Robot Programming (3rd edition)]]. December 9 to 23, 2010.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:16122012.JPG|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2010|Introduction to Mobile Robot Programming (2nd edition)]]. March 8 to 12, 2010.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;wikiflv width=&amp;quot;300&amp;quot; height=&amp;quot;250&amp;quot; logo=&amp;quot;true&amp;quot;&amp;gt;/videos/videoSumo20100312.flv&amp;lt;/wikiflv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [http://roboticaleon2009.googlepages.com/ Mobile Robot programming]. July 1 to 7, 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Carrera.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [http://www.compcog.org/ CompCog] ESF Workshop, held at the University of León, May 26 to 27, 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Compcog09-leon.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Introductory seminar on Nao programming (instructor: [http://www.robotica-urjc.es Dr. Francisco Martín Rico]).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Workshop-paco09.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=In the Press=&lt;br /&gt;
&lt;br /&gt;
* Article in Diario de León about the [http://www.diariodeleon.es/noticias/sociedad/orbione-amigo-robot-ayuda-casa_1048422.html OrbiOne robot]. We made the front page! (February 24th, 2016)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:PortadaDiarioLeon2016.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [http://www.diariodevalladolid.es/noticias/valladolid/villanueva-augura-empresas-innoven-creceran-triple-proximos-cinco-anos_14514.html Award ''Innovadores Castilla y Leon 2015'']  (El Mundo, 2015)&lt;br /&gt;
&lt;br /&gt;
* [http://www.elmundo.es/economia/2014/12/31/54a2ee5b22601d42688b4580.html?intcmp=ULNOH002 Robots que medican y chalan], national newspaper coverage of the group's activities (El Mundo, December 2014).&lt;br /&gt;
&lt;br /&gt;
* [http://www.diariodeleon.es/noticias/leon/premio-tic_845853.html Innova Award] to the best 2014 IT project, given by local newspaper (Diario de León).&lt;br /&gt;
&lt;br /&gt;
* Article about Fernando's work on mobile robots: [http://www.fernando.casadogarcia.es/innova.jpg Suplemento Innova (Diario de León)].&lt;br /&gt;
&lt;br /&gt;
[[Image:DL-FernandoCasado.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
* Juan Felipe was awarded the [http://www.diariodeleon.es/noticias/leon/juan-felipe-garcia_694743.html Young Talent 2012] title by Diario de León.&lt;br /&gt;
* Article about using mobile robots in [http://www.la-cronica.net/2011/02/28/leon/robots-moviles-para-asistir-a-personas-dependientes-113536.htm La Crónica].&lt;br /&gt;
&lt;br /&gt;
* Conference on social robotics applied to handicapped people, organized by CRE:&lt;br /&gt;
&lt;br /&gt;
# Information about the event at [http://www.imserso.es/cresanandres_01/auxiliares/actualidad/novedades/2011/enero/IM_042486?dDocName=IM_042486 CRE's webpage].&lt;br /&gt;
&lt;br /&gt;
# Press note at [http://www.diariodeleon.es/noticias/noticia.asp?pkid=577647  Diario de León].&lt;br /&gt;
&lt;br /&gt;
* Press Note about [http://www.diariodeleon.es/noticias/noticia.asp?pkid=577402 Juan's PhD Viva].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FotoDiarioLeon2011.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Another story in the local newspaper, about a project on [http://www.diariodeleon.es/noticias/noticia.asp?pkid=571538 robotics for security].&lt;br /&gt;
&lt;br /&gt;
* The same story on local TV:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;videoflash&amp;gt;Z-lNsL-o8UY&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Another review in the local newspapers: [http://www.leonoticias.com/frontend/leonoticias/Dos-Robots-leoneses-Acudiran-A-La-Copa-De-Europa-De-Futb-vn45312-vst384 Leonoticias] and [http://www.la-cronica.net/2010/03/11/leon/ingenieros-de-la-ule-estaran-en-la-champions-de-futbol-de-robots-72130.htm La Crónica], about the team entering the [http://www.robocup-mediterranean-open.org/ Mediterranean Open] (Rome, March 18 to 20, 2010).&lt;br /&gt;
&lt;br /&gt;
* Interview in the local newspaper (Diario de León): [http://www.diariodeleon.es/noticias/noticia.asp?pkid=495839 first one] and [http://www.diariodeleon.es/noticias/noticia.asp?pkid=495838 second one].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FotoDiarioLeon.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
= RoboCup =&lt;br /&gt;
&lt;br /&gt;
* We were proud members of the Spanish team ([https://twitter.com/spiteam_robot SPITeam]) in the RoboCup Standard Platform League, providing the goalkeeper. SPITeam was a joint effort of the robotics groups of Rey Juan Carlos University, Rovira i Virgili University, and Universidad de León. The last Team Description Paper can be found [http://www.gsyc.es/~fmartin/SPiTeamApplication.pdf here]. Our last competition appearance was at [http://www.robocup2011.org/en/content.asp?PID={41EB0A0E-7EAF-4365-98F7-1D951C9B8D88}&amp;amp;PageFn=Teams Istanbul 2011].&lt;br /&gt;
&lt;br /&gt;
* We were previously proud members of the [http://www.teamchaos.es TeamChaos] RoboCup SPL team, contributing the goalie for [http://www.robocup2009.org/ RoboCup 2009] and the [http://www.robocup-german-open.de/en German Open 2009].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Teamchaos-08.JPG|center|300px]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Activities&amp;diff=4834</id>
		<title>Activities</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Activities&amp;diff=4834"/>
				<updated>2016-02-23T13:24:00Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Last Events=&lt;br /&gt;
&lt;br /&gt;
==3D Scanning and Printing==&lt;br /&gt;
&lt;br /&gt;
During the last two weeks, Francisco Sedano gave a brief introduction to 3D prototyping technologies that can be applied to a wide variety of engineering applications. The training gave the students the knowledge to develop and use 3D scanning and 3D printing technologies.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:20160218_195229.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://robotica.unileon.es/mediawiki/documents/escaneado_triptico.pdf Scanning Course]&lt;br /&gt;
&lt;br /&gt;
[http://robotica.unileon.es/mediawiki/documents/impresoras_3D.pdf 3D Printing Course]&lt;br /&gt;
&lt;br /&gt;
==Francisco Lera's PhD dissertation==&lt;br /&gt;
&lt;br /&gt;
Francisco J. Lera just became the newest doctor of the group. His defense was held on December 11th, and he was awarded a &amp;quot;Sobresaliente Cum Laude&amp;quot;. Congratulations, Fran!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:tesis-fran.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RoCKIn 2015 (Lisboa) ==&lt;br /&gt;
&lt;br /&gt;
[http://rockinrobotchallenge.eu/rockin2015.php RoCKIn2015], the second competition event in the EU-funded robotics project RoCKIn, was held in Lisbon (Portugal), 21-23 November 2015.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:RoCKIn2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It was a joint event with [https://web.fe.up.pt/~robot2015/ ROBOT'2015], the Second Iberian Robotics Conference, where we also presented [[a paper]].&lt;br /&gt;
&lt;br /&gt;
==RoCKIn Event 2014 (Toulouse)==&lt;br /&gt;
&lt;br /&gt;
(main article of this event [[RoCKInChallengeToulouse | here]])&lt;br /&gt;
&lt;br /&gt;
==RoCKIn Camp 2014 (Rome)==&lt;br /&gt;
&lt;br /&gt;
(main article of this event [[RoCKIn2014 | here]])&lt;br /&gt;
&lt;br /&gt;
The team was in Rome testing the MYRABot robot. There, we learned different topics about vision, grasping, and natural language understanding. We tested the MB+ (MYRABot+) robot, a modification of MYRABot. Other robots were introduced in this competition, such as REEM from PAL Robotics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:MBPAL1.JPG|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Hack For Good 2013==&lt;br /&gt;
&lt;br /&gt;
Proposals for [http://hackforgood.net HackForGood] can be found [[proposals-hackforfood-2013 | here]].&lt;br /&gt;
&lt;br /&gt;
=Seminars=&lt;br /&gt;
&lt;br /&gt;
* European Robotics Week Conference [http://www.eu-robotics.net/eurobotics-week/events-2013/low-cost-service-robots-ule-watermelon-project-for-rockinhome-challenge.html Low Cost Robots for RoCKIn Challenge]. November 27th, 2013.&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2012|Introduction to Mobile Robot Programming (4th edition)]]. October 22nd to 26th, 2012.&lt;br /&gt;
&lt;br /&gt;
* Conference ''&amp;quot;Augmented reality for elderly medication control using a mobile robot&amp;quot;'' at the Curso de Verano [http://catedratelefonica.unileon.es/2012/07/09/participacion-de-la-catedra-en-el-curso-de-verano-envejecimiento-activo-una-aproximacion-a-la-realidad-actual/ Envejecimiento Activo].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-Curso-Verano-2012.jpeg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Conference ''&amp;quot;The role of edutainment robotics in mechatronics research and teaching&amp;quot;'' by [http://www.ipb.pt/~jllima/ José Luis Lima] and José Gonçalves from the [http://www.ipb.pt Polytechnic Institute of Bragança], on November 4, 2011.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:CharlaIPB-2011.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2011 | Mobile Robot Programming, medium level (1st edition)]]. March 2011.&lt;br /&gt;
&lt;br /&gt;
* Seminar on BICA programming in the SPITeam for the RoboCup2011. Instructors: Francisco Martín and Carlos E. Agüero from [http://www.robotica-urjc.es URJC Robotics Group].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;wikiflv width=&amp;quot;300&amp;quot; height=&amp;quot;250&amp;quot; logo=&amp;quot;true&amp;quot;&amp;gt;/videos/CursoBica.flv&amp;lt;/wikiflv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2010|Introduction to Mobile Robot Programming (3rd edition)]]. December 9 to 23, 2010.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:16122012.JPG|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2010|Introduction to Mobile Robot Programming (2nd edition)]]. March 8 to 12, 2010.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;wikiflv width=&amp;quot;300&amp;quot; height=&amp;quot;250&amp;quot; logo=&amp;quot;true&amp;quot;&amp;gt;/videos/videoSumo20100312.flv&amp;lt;/wikiflv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [http://roboticaleon2009.googlepages.com/ Mobile Robot programming]. July 1 to 7, 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Carrera.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [http://www.compcog.org/ CompCog] ESF Workshop, held at the University of León, May 26 to 27, 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Compcog09-leon.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Introductory seminar on Nao programming (instructor: [http://www.robotica-urjc.es Dr. Francisco Martín Rico]).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Workshop-paco09.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=In the Press=&lt;br /&gt;
&lt;br /&gt;
* [http://www.diariodevalladolid.es/noticias/valladolid/villanueva-augura-empresas-innoven-creceran-triple-proximos-cinco-anos_14514.html Award ''Innovadores Castilla y Leon 2015'']  (El Mundo, 2015)&lt;br /&gt;
&lt;br /&gt;
* [http://www.elmundo.es/economia/2014/12/31/54a2ee5b22601d42688b4580.html?intcmp=ULNOH002 Robots que medican y chalan], national newspaper coverage of the group's activities (El Mundo, December 2014).&lt;br /&gt;
&lt;br /&gt;
* [http://www.diariodeleon.es/noticias/leon/premio-tic_845853.html Innova Award] to the best 2014 IT project, given by local newspaper (Diario de León).&lt;br /&gt;
&lt;br /&gt;
* Article about Fernando's work on mobile robots: [http://www.fernando.casadogarcia.es/innova.jpg Suplemento Innova (Diario de León)].&lt;br /&gt;
&lt;br /&gt;
[[Image:DL-FernandoCasado.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
* Juan Felipe was awarded the [http://www.diariodeleon.es/noticias/leon/juan-felipe-garcia_694743.html Young Talent 2012] title by Diario de León.&lt;br /&gt;
* Article about using mobile robots in [http://www.la-cronica.net/2011/02/28/leon/robots-moviles-para-asistir-a-personas-dependientes-113536.htm La Crónica].&lt;br /&gt;
&lt;br /&gt;
* Conference on social robotics applied to handicapped people, organized by CRE:&lt;br /&gt;
&lt;br /&gt;
# Information about the event at [http://www.imserso.es/cresanandres_01/auxiliares/actualidad/novedades/2011/enero/IM_042486?dDocName=IM_042486 CRE's webpage].&lt;br /&gt;
&lt;br /&gt;
# Press note at [http://www.diariodeleon.es/noticias/noticia.asp?pkid=577647  Diario de León].&lt;br /&gt;
&lt;br /&gt;
* Press Note about [http://www.diariodeleon.es/noticias/noticia.asp?pkid=577402 Juan's PhD Viva].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FotoDiarioLeon2011.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Another story in the local newspaper, about a project on [http://www.diariodeleon.es/noticias/noticia.asp?pkid=571538 robotics for security].&lt;br /&gt;
&lt;br /&gt;
* The same story on local TV:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;videoflash&amp;gt;Z-lNsL-o8UY&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Another review in the local newspapers: [http://www.leonoticias.com/frontend/leonoticias/Dos-Robots-leoneses-Acudiran-A-La-Copa-De-Europa-De-Futb-vn45312-vst384 Leonoticias] and [http://www.la-cronica.net/2010/03/11/leon/ingenieros-de-la-ule-estaran-en-la-champions-de-futbol-de-robots-72130.htm La Crónica], about the team entering the [http://www.robocup-mediterranean-open.org/ Mediterranean Open] (Rome, March 18 to 20, 2010).&lt;br /&gt;
&lt;br /&gt;
* Interview in the local newspaper (Diario de León): [http://www.diariodeleon.es/noticias/noticia.asp?pkid=495839 first one] and [http://www.diariodeleon.es/noticias/noticia.asp?pkid=495838 second one].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FotoDiarioLeon.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
= RoboCup =&lt;br /&gt;
&lt;br /&gt;
* We were proud members of the Spanish team ([https://twitter.com/spiteam_robot SPITeam]) in the RoboCup Standard Platform League, providing the goalkeeper. SPITeam was a joint effort of the robotics groups of Rey Juan Carlos University, Rovira i Virgili University, and Universidad de León. The last Team Description Paper can be found [http://www.gsyc.es/~fmartin/SPiTeamApplication.pdf here]. Our last competition appearance was at [http://www.robocup2011.org/en/content.asp?PID={41EB0A0E-7EAF-4365-98F7-1D951C9B8D88}&amp;amp;PageFn=Teams Istanbul 2011].&lt;br /&gt;
&lt;br /&gt;
* We were previously proud members of the [http://www.teamchaos.es TeamChaos] RoboCup SPL team, contributing the goalie for [http://www.robocup2009.org/ RoboCup 2009] and the [http://www.robocup-german-open.de/en German Open 2009].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Teamchaos-08.JPG|center|300px]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Home&amp;diff=4833</id>
		<title>Home</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Home&amp;diff=4833"/>
				<updated>2016-02-23T13:21:39Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the official wiki of the Robotics Group of the [http://www.unileon.es University of León] (ULE). We are a group of researchers (faculty and students) interested in creating autonomous behaviour for mobile robots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-grupo2015.jpg|center|border|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We are proud members of [http://www.redaf.es RedAF] (Red Nacional de Investigación en Agentes Físicos), [http://retecog.net ReteCog] (Red Temática de Ciencia Cognitiva), and [http://www.euron.org/members/index#257 Euron].&lt;br /&gt;
&lt;br /&gt;
==Location==&lt;br /&gt;
&lt;br /&gt;
Our lab is in the second floor of the [https://www.google.com/maps/@42.6150329,-5.5635688,203m/data=!3m1!1e3 MIC building] (Módulo de Investigación en Cibernética - Cybernetics Research Module). If you are interested in our work, you are welcome to pay us a visit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Mic2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We also hang around the [http://centros.unileon.es/eiii/ EIII] (Escuela de Ingenierías Industrial e Informática - Engineering School), which is [https://www.google.com/maps/@42.6141916,-5.5611339,203m/data=!3m1!1e3 here].&lt;br /&gt;
&lt;br /&gt;
==News==&lt;br /&gt;
&lt;br /&gt;
(February 2016) Francisco Sedano gave a brief introduction to 3D prototyping technologies that can be applied to a wide variety of engineering applications: [http://robotica.unileon.es/mediawiki/documents/escaneado_triptico.pdf Scanning Course], [http://robotica.unileon.es/mediawiki/documents/impresoras_3D.pdf 3D Printing Course]. The training gave the students the knowledge to develop and use 3D scanning and 3D printing technologies.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:20160218_195229.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(December 2015) Francisco J. Lera just became the newest doctor of the group. His defense was held on December 11th, and he was awarded a &amp;quot;Sobresaliente Cum Laude&amp;quot;. Congratulations, Fran!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:tesis-fran.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 2015) We took part in [http://rockinrobotchallenge.eu/rockin2015.php RoCKIn2015], the second competition event in the EU-funded robotics project RoCKIn. It took place in Lisbon (Portugal), from 21 to 23 November 2015.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:RoCKIn2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is the Team Description Paper of our &amp;quot;Watermelon Project&amp;quot; team for the RoCKIn@home Challenge in Lisbon.&lt;br /&gt;
&lt;br /&gt;
[http://rockincompetition.eu/node/256 RoCKIn competition]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 24, 2014) Coinciding with the [http://www.eu-robotics.net/eurobotics-week/about-eurobotics-week/ European Robotics Week], we are going to participate in the first competition event of RoCKIn. It will take place in the Cité de l'espace science museum in Toulouse, France.&lt;br /&gt;
&lt;br /&gt;
You can follow our participation [[RoCKInChallengeToulouse|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We want to express our gratitude to  [http://www.mountain.es/ Mountain] for sponsoring our group in the RoCKIn Challenge.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Lgo_mountain.png|center|330px|link=http://www.mountain.es/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We have been using Mountain hardware for research and development with our robots. The control software of the CeRVaNTeS platform runs on an [http://www.mountain.es/portatiles F-13] laptop. Soon, our MYRABot platform will also run on one of their machines.&lt;br /&gt;
&lt;br /&gt;
We want to express our sincere gratitude to [http://www.mountain.es/ Mountain] for their support to small research groups like ours.&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Activities&amp;diff=4825</id>
		<title>Activities</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Activities&amp;diff=4825"/>
				<updated>2016-02-22T12:45:34Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Last Events=&lt;br /&gt;
&lt;br /&gt;
==Francisco Lera's PhD dissertation==&lt;br /&gt;
&lt;br /&gt;
Francisco J. Lera just became the newest doctor of the group. His defense was held on December 11th, and he was awarded a &amp;quot;Sobresaliente Cum Laude&amp;quot;. Congratulations, Fran!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:tesis-fran.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RoCKIn 2015 (Lisboa) ==&lt;br /&gt;
&lt;br /&gt;
[http://rockinrobotchallenge.eu/rockin2015.php RoCKIn2015], the second competition event in the EU-funded robotics project RoCKIn, was held in Lisbon (Portugal), 21-23 November 2015.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:RoCKIn2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It was a joint event with [https://web.fe.up.pt/~robot2015/ ROBOT'2015], the Second Iberian Robotics Conference, where we also presented [[a paper]].&lt;br /&gt;
&lt;br /&gt;
==RoCKIn Event 2014 (Toulouse)==&lt;br /&gt;
&lt;br /&gt;
(main article of this event [[RoCKInChallengeToulouse | here]])&lt;br /&gt;
&lt;br /&gt;
==RoCKIn Camp 2014 (Rome)==&lt;br /&gt;
&lt;br /&gt;
(main article of this event [[RoCKIn2014 | here]])&lt;br /&gt;
&lt;br /&gt;
The team was in Rome testing the MYRABot robot. There, we learned different topics about vision, grasping, and natural language understanding. We tested the MB+ (MYRABot+) robot, a modification of MYRABot. Other robots were introduced in this competition, such as REEM from PAL Robotics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:MBPAL1.JPG|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Hack For Good 2013==&lt;br /&gt;
&lt;br /&gt;
Proposals for [http://hackforgood.net HackForGood] can be found [[proposals-hackforfood-2013 | here]].&lt;br /&gt;
&lt;br /&gt;
=Seminars=&lt;br /&gt;
&lt;br /&gt;
* European Robotics Week Conference [http://www.eu-robotics.net/eurobotics-week/events-2013/low-cost-service-robots-ule-watermelon-project-for-rockinhome-challenge.html Low Cost Robots for RoCKIn Challenge]. November 27th, 2013.&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2012|Introduction to Mobile Robot Programming (4th edition)]]. October 22nd to 26th, 2012.&lt;br /&gt;
&lt;br /&gt;
* Conference ''&amp;quot;Augmented reality for elderly medication control using a mobile robot&amp;quot;'' at the Curso de Verano [http://catedratelefonica.unileon.es/2012/07/09/participacion-de-la-catedra-en-el-curso-de-verano-envejecimiento-activo-una-aproximacion-a-la-realidad-actual/ Envejecimiento Activo].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-Curso-Verano-2012.jpeg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Conference ''&amp;quot;The role of edutainment robotics in mechatronics research and teaching&amp;quot;'' by [http://www.ipb.pt/~jllima/ José Luis Lima] and José Gonçalves from the [http://www.ipb.pt Polytechnic Institute of Bragança], on November 4, 2011.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:CharlaIPB-2011.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2011 | Mobile Robot Programming, medium level (1st edition)]]. March 2011.&lt;br /&gt;
&lt;br /&gt;
* Seminar on BICA programming in the SPITeam for the RoboCup2011. Instructors: Francisco Martín and Carlos E. Agüero from [http://www.robotica-urjc.es URJC Robotics Group].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;wikiflv width=&amp;quot;300&amp;quot; height=&amp;quot;250&amp;quot; logo=&amp;quot;true&amp;quot;&amp;gt;/videos/CursoBica.flv&amp;lt;/wikiflv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2010|Introduction to Mobile Robot Programming (3rd edition)]]. December 9 to 23, 2010.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:16122012.JPG|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2010|Introduction to Mobile Robot Programming (2nd edition)]]. March 8 to 12, 2010.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;wikiflv width=&amp;quot;300&amp;quot; height=&amp;quot;250&amp;quot; logo=&amp;quot;true&amp;quot;&amp;gt;/videos/videoSumo20100312.flv&amp;lt;/wikiflv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [http://roboticaleon2009.googlepages.com/ Mobile Robot programming]. July 1 to 7, 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Carrera.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [http://www.compcog.org/ CompCog] ESF Workshop, held at the University of León, May 26 to 27, 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Compcog09-leon.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Introductory seminar on Nao programming (instructor: [http://www.robotica-urjc.es Dr. Francisco Martín Rico]).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Workshop-paco09.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=In the Press=&lt;br /&gt;
&lt;br /&gt;
* [http://www.diariodevalladolid.es/noticias/valladolid/villanueva-augura-empresas-innoven-creceran-triple-proximos-cinco-anos_14514.html Award ''Innovadores Castilla y Leon 2015'']  (El Mundo, 2015)&lt;br /&gt;
&lt;br /&gt;
* [http://www.elmundo.es/economia/2014/12/31/54a2ee5b22601d42688b4580.html?intcmp=ULNOH002 Robots que medican y charlan], national newspaper coverage of the group's activities (El Mundo, December 2014).&lt;br /&gt;
&lt;br /&gt;
* [http://www.diariodeleon.es/noticias/leon/premio-tic_845853.html Innova Award] to the best 2014 IT project, given by local newspaper (Diario de León).&lt;br /&gt;
&lt;br /&gt;
* Article about Fernando's work on mobile robots in [http://www.fernando.casadogarcia.es/innova.jpg Suplemento Innova (Diario de León)].&lt;br /&gt;
&lt;br /&gt;
[[Image:DL-FernandoCasado.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
* Juan Felipe was awarded the [http://www.diariodeleon.es/noticias/leon/juan-felipe-garcia_694743.html Young Talent 2012] title by Diario de León.&lt;br /&gt;
* Article about using mobile robots in [http://www.la-cronica.net/2011/02/28/leon/robots-moviles-para-asistir-a-personas-dependientes-113536.htm La Crónica].&lt;br /&gt;
&lt;br /&gt;
* Conference on social robotics applied to people with disabilities. Organized by CRE:&lt;br /&gt;
&lt;br /&gt;
# Information about the event at [http://www.imserso.es/cresanandres_01/auxiliares/actualidad/novedades/2011/enero/IM_042486?dDocName=IM_042486 CRE's webpage].&lt;br /&gt;
&lt;br /&gt;
# Press note at [http://www.diariodeleon.es/noticias/noticia.asp?pkid=577647 Diario de León].&lt;br /&gt;
&lt;br /&gt;
* Press note about [http://www.diariodeleon.es/noticias/noticia.asp?pkid=577402 Juan's PhD viva].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FotoDiarioLeon2011.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Another piece in the local newspaper, about a project on [http://www.diariodeleon.es/noticias/noticia.asp?pkid=571538 robotics for security].&lt;br /&gt;
&lt;br /&gt;
* The same story on local TV:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;videoflash&amp;gt;Z-lNsL-o8UY&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Another review in the local newspapers: [http://www.leonoticias.com/frontend/leonoticias/Dos-Robots-leoneses-Acudiran-A-La-Copa-De-Europa-De-Futb-vn45312-vst384 Leonoticias] and [http://www.la-cronica.net/2010/03/11/leon/ingenieros-de-la-ule-estaran-en-la-champions-de-futbol-de-robots-72130.htm La Crónica], about the team entering the [http://www.robocup-mediterranean-open.org/ Mediterranean Open] (Rome, March 18 to 20, 2010).&lt;br /&gt;
&lt;br /&gt;
* Interview in the local newspaper (Diario de León): [http://www.diariodeleon.es/noticias/noticia.asp?pkid=495839 first one] and [http://www.diariodeleon.es/noticias/noticia.asp?pkid=495838 second one].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FotoDiarioLeon.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
= RoboCup =&lt;br /&gt;
&lt;br /&gt;
* We were proud members of the Spanish Team ([https://twitter.com/spiteam_robot SPITeam]) in the RoboCup Standard Platform League, providing the goalkeeper. SPITeam was a joint effort of the robotics groups of Rey Juan Carlos University, Rovira i Virgili University, and the Universidad de León. The last Team Description Paper can be found [http://www.gsyc.es/~fmartin/SPiTeamApplication.pdf here]. Our last competition entry was in [http://www.robocup2011.org/en/content.asp?PID={41EB0A0E-7EAF-4365-98F7-1D951C9B8D88}&amp;amp;PageFn=Teams Istanbul 2011].&lt;br /&gt;
&lt;br /&gt;
* We were previously also proud members of the [http://www.teamchaos.es TeamChaos] RoboCup SPL team, contributing the goalie for [http://www.robocup2009.org/ RoboCup 2009] and the [http://www.robocup-german-open.de/en German Open 2009].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Teamchaos-08.JPG|center|300px]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Activities&amp;diff=4824</id>
		<title>Activities</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Activities&amp;diff=4824"/>
				<updated>2016-02-22T12:45:15Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Last Events=&lt;br /&gt;
&lt;br /&gt;
==Francisco Lera's PhD dissertation==&lt;br /&gt;
&lt;br /&gt;
Francisco J. Lera just became the newest doctor in the group. His defense was held on December 11 and he was awarded a &amp;quot;Sobresaliente Cum Laude&amp;quot;. Congratulations, Fran!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:tesis-fran.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RoCKIn 2015 (Lisboa) ==&lt;br /&gt;
&lt;br /&gt;
[http://rockinrobotchallenge.eu/rockin2015.php RoCKIn2015], the second competition event of the EU-funded robotics project RoCKIn, was held in Lisbon (Portugal), 21-23 November 2015.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:RoCKIn2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It was a joint event with [https://web.fe.up.pt/~robot2015/ ROBOT'2015], the Second Iberian Robotics Conference, where we also presented [[a paper]].&lt;br /&gt;
&lt;br /&gt;
==RoCKIn Event 2014 (Toulouse)==&lt;br /&gt;
&lt;br /&gt;
(main article of this event [[RoCKInChallengeToulouse | here]])&lt;br /&gt;
&lt;br /&gt;
==RoCKIn Camp 2014 (Rome)==&lt;br /&gt;
&lt;br /&gt;
(main article of this event [[RoCKIn2014 | here]])&lt;br /&gt;
&lt;br /&gt;
The team was in Rome testing the MYRABot robot. There, we learned about topics including vision, grasping, and natural language understanding. We tested MB+ (MYRABot+), a modified version of MYRABot. Other robots, such as REEM from PAL Robotics, were also presented at this event.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:MBPAL1.JPG|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Hack For Good 2013==&lt;br /&gt;
&lt;br /&gt;
Proposals for [http://hackforgood.net HackForGood] can be found [[proposals-hackforfood-2013 | here]].&lt;br /&gt;
&lt;br /&gt;
=Seminars=&lt;br /&gt;
&lt;br /&gt;
* European Robotics Week conference [http://www.eu-robotics.net/eurobotics-week/events-2013/low-cost-service-robots-ule-watermelon-project-for-rockinhome-challenge.html Low Cost Robots for the RoCKIn Challenge], November 27, 2013.&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2012|Introduction to Mobile Robot Programming (4th edition)]]. October 22 to 26, 2012.&lt;br /&gt;
&lt;br /&gt;
* Conference ''&amp;quot;Augmented reality for elderly medication control using a mobile robot&amp;quot;'' at the Curso de Verano [http://catedratelefonica.unileon.es/2012/07/09/participacion-de-la-catedra-en-el-curso-de-verano-envejecimiento-activo-una-aproximacion-a-la-realidad-actual/ Envejecimiento Activo].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-Curso-Verano-2012.jpeg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Conference ''&amp;quot;The role of edutainment robotics in mechatronics research and teaching&amp;quot;'' by [http://www.ipb.pt/~jllima/ José Luis Lima] and José Gonçalves from the [http://www.ipb.pt Polytechnic Institute of Bragança], on November 4, 2011.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:CharlaIPB-2011.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2011 | Mobile Robot Programming, medium level (1st edition)]]. March 2011.&lt;br /&gt;
&lt;br /&gt;
* Seminar on BICA programming in the SPITeam for RoboCup 2011. Instructors: Francisco Martín and Carlos E. Agüero from the [http://www.robotica-urjc.es URJC Robotics Group].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;wikiflv width=&amp;quot;300&amp;quot; height=&amp;quot;250&amp;quot; logo=&amp;quot;true&amp;quot;&amp;gt;/videos/CursoBica.flv&amp;lt;/wikiflv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2010|Introduction to Mobile Robot Programming (3rd edition)]]. December 9 to 23, 2010.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:16122012.JPG|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2010|Introduction to Mobile Robot Programming (2nd edition)]]. March 8 to 12, 2010.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;wikiflv width=&amp;quot;300&amp;quot; height=&amp;quot;250&amp;quot; logo=&amp;quot;true&amp;quot;&amp;gt;/videos/videoSumo20100312.flv&amp;lt;/wikiflv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [http://roboticaleon2009.googlepages.com/ Mobile Robot Programming]. July 1 to 7, 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Carrera.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [http://www.compcog.org/ CompCog] ESF Workshop, held at the University of León, May 26 to 27, 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Compcog09-leon.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Introductory seminar on Nao programming (Instructor: [http://www.robotica-urjc.es Dr. Francisco Martín Rico]).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Workshop-paco09.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=In the Press=&lt;br /&gt;
&lt;br /&gt;
* [http://www.diariodevalladolid.es/noticias/valladolid/villanueva-augura-empresas-innoven-creceran-triple-proximos-cinco-anos_14514.html Award ''Innovadores Castilla y Leon 2015'']  (El Mundo, 2015)&lt;br /&gt;
&lt;br /&gt;
* [http://www.elmundo.es/economia/2014/12/31/54a2ee5b22601d42688b4580.html?intcmp=ULNOH002 Robots que medican y charlan], national newspaper coverage of the group's activities (El Mundo, December 2014).&lt;br /&gt;
&lt;br /&gt;
* [http://www.diariodeleon.es/noticias/leon/premio-tic_845853.html Innova Award] to the best 2014 IT project, given by local newspaper (Diario de León).&lt;br /&gt;
&lt;br /&gt;
* Article about Fernando's work on mobile robots in [http://www.fernando.casadogarcia.es/innova.jpg Suplemento Innova (Diario de León)].&lt;br /&gt;
&lt;br /&gt;
[[Image:DL-FernandoCasado.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
* Juan Felipe was awarded the [http://www.diariodeleon.es/noticias/leon/juan-felipe-garcia_694743.html Young Talent 2012] title by Diario de León.&lt;br /&gt;
* Article about using mobile robots in [http://www.la-cronica.net/2011/02/28/leon/robots-moviles-para-asistir-a-personas-dependientes-113536.htm La Crónica].&lt;br /&gt;
&lt;br /&gt;
* Conference on social robotics applied to people with disabilities. Organized by CRE:&lt;br /&gt;
&lt;br /&gt;
# Information about the event at [http://www.imserso.es/cresanandres_01/auxiliares/actualidad/novedades/2011/enero/IM_042486?dDocName=IM_042486 CRE's webpage].&lt;br /&gt;
&lt;br /&gt;
# Press note at [http://www.diariodeleon.es/noticias/noticia.asp?pkid=577647 Diario de León].&lt;br /&gt;
&lt;br /&gt;
* Press note about [http://www.diariodeleon.es/noticias/noticia.asp?pkid=577402 Juan's PhD viva].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FotoDiarioLeon2011.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Another piece in the local newspaper, about a project on [http://www.diariodeleon.es/noticias/noticia.asp?pkid=571538 robotics for security].&lt;br /&gt;
&lt;br /&gt;
* The same story on local TV:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;videoflash&amp;gt;Z-lNsL-o8UY&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Another review in the local newspapers: [http://www.leonoticias.com/frontend/leonoticias/Dos-Robots-leoneses-Acudiran-A-La-Copa-De-Europa-De-Futb-vn45312-vst384 Leonoticias] and [http://www.la-cronica.net/2010/03/11/leon/ingenieros-de-la-ule-estaran-en-la-champions-de-futbol-de-robots-72130.htm La Crónica], about the team entering the [http://www.robocup-mediterranean-open.org/ Mediterranean Open] (Rome, March 18 to 20, 2010).&lt;br /&gt;
&lt;br /&gt;
* Interview in the local newspaper (Diario de León): [http://www.diariodeleon.es/noticias/noticia.asp?pkid=495839 first one] and [http://www.diariodeleon.es/noticias/noticia.asp?pkid=495838 second one].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FotoDiarioLeon.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
= RoboCup =&lt;br /&gt;
&lt;br /&gt;
* We were proud members of the Spanish Team ([https://twitter.com/spiteam_robot SPITeam]) in the RoboCup Standard Platform League, providing the goalkeeper. SPITeam was a joint effort of the robotics groups of Rey Juan Carlos University, Rovira i Virgili University, and the Universidad de León. The last Team Description Paper can be found [http://www.gsyc.es/~fmartin/SPiTeamApplication.pdf here]. Our last competition entry was in [http://www.robocup2011.org/en/content.asp?PID={41EB0A0E-7EAF-4365-98F7-1D951C9B8D88}&amp;amp;PageFn=Teams Istanbul 2011].&lt;br /&gt;
&lt;br /&gt;
* We were previously also proud members of the [http://www.teamchaos.es TeamChaos] RoboCup SPL team, contributing the goalie for [http://www.robocup2009.org/ RoboCup 2009] and the [http://www.robocup-german-open.de/en German Open 2009].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Teamchaos-08.JPG|center|300px]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=RoboCup&amp;diff=4823</id>
		<title>RoboCup</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=RoboCup&amp;diff=4823"/>
				<updated>2016-02-22T12:43:07Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
This page is currently being developed&lt;br /&gt;
&lt;br /&gt;
= Members =&lt;br /&gt;
&lt;br /&gt;
* Integration (Team Leader): Francisco Javier Rodríguez Lera&lt;br /&gt;
* Navigation and Control: Francisco Martín Rico&lt;br /&gt;
* Manipulation: Fernando Casado García&lt;br /&gt;
* Navigation:  Jesús Balsa Comerón&lt;br /&gt;
* Perception: Diego García Ordás&lt;br /&gt;
* Team Director: Vicente Matellán Olivera&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Robots =&lt;br /&gt;
&lt;br /&gt;
== OrBiOne ==&lt;br /&gt;
&lt;br /&gt;
OrBiOne is the robot that we will be using for the 2016 competitions. This mobile manipulator is developed by Robotnik ([http://www.robotnik.eu/robotnik-introduces-rb-one-new-mobile-manipulator/ more info]).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;pU4avjbgFMQ&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== CeRVaNTes ==&lt;br /&gt;
&lt;br /&gt;
We are developing a new platform, a mobile bi-manipulator, that will not be ready for the 2016 competition.&lt;br /&gt;
&lt;br /&gt;
[[Projects#Cervantes_Project|CeRVaNTeS platform]] proof of concept:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;JtDIRky7EKs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Teleoperation of CeRVaNTeS using an [http://www.microsoft.com/hardware/es-es/p/xbox-360-wireless-controller-for-windows Xbox 360 wireless controller] ([[CeRVaNTeS' Teleoperation with xbox360 wireless controller (xbox360 controller+joy)|more info]]), and the CeRVaNTeS platform integrated in [http://moveit.ros.org/ moveIt!] ([[Integration of CeRVaNTeS in moveIt! (gazebo+moveIt!)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;videoflash&amp;gt;wF_gKr1KWfg#t=13&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;videoflash&amp;gt;bYtvNJHraVs&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Github Repository=&lt;br /&gt;
&lt;br /&gt;
* Individuals:&lt;br /&gt;
https://github.com/FranLera&lt;br /&gt;
&lt;br /&gt;
https://github.com/fmrico&lt;br /&gt;
&lt;br /&gt;
https://github.com/casadofer&lt;br /&gt;
&lt;br /&gt;
* Group:&lt;br /&gt;
https://github.com/Robotica-ule/MYRABot&lt;br /&gt;
&lt;br /&gt;
= Papers =&lt;br /&gt;
&lt;br /&gt;
* [http://robotica.unileon.es/mediawiki/documents/TDP_OrbioneTeam.pdf Team Description Paper]&lt;br /&gt;
&lt;br /&gt;
* The list of group publications can be found on the [[Publications | University of León robotics group publications page]] and on the URJC [http://gsyc.es/~fmartin/#publications publications] page.&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Home&amp;diff=4822</id>
		<title>Home</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Home&amp;diff=4822"/>
				<updated>2016-02-22T12:41:19Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the official wiki of the Robotics Group of the [http://www.unileon.es University of León] (ULE). We are a group of researchers (faculty and students) interested in creating autonomous behaviour for mobile robots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-grupo2015.jpg|center|border|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We are proud members of [http://www.redaf.es RedAF] (Red Nacional de Investigación en Agentes Físicos), [http://retecog.net ReteCog] (Red Temática de Ciencia Cognitiva), and [http://www.euron.org/members/index#257 Euron].&lt;br /&gt;
&lt;br /&gt;
==Location==&lt;br /&gt;
&lt;br /&gt;
Our lab is in the second floor of the [https://www.google.com/maps/@42.6150329,-5.5635688,203m/data=!3m1!1e3 MIC building] (Módulo de Investigación en Cibernética - Cybernetics Research Module). If you are interested in our work, you are welcome to pay us a visit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Mic2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We also hang around the [http://centros.unileon.es/eiii/ EIII] (Escuela de Ingenierías Industrial e Informática - Engineering School), which is [https://www.google.com/maps/@42.6141916,-5.5611339,203m/data=!3m1!1e3 here].&lt;br /&gt;
&lt;br /&gt;
==News==&lt;br /&gt;
&lt;br /&gt;
(December 2015) Francisco J. Lera just became the newest doctor in the group. His defense was held on December 11 and he was awarded a &amp;quot;Sobresaliente Cum Laude&amp;quot;. Congratulations, Fran!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:tesis-fran.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 2015) We took part in [http://rockinrobotchallenge.eu/rockin2015.php RoCKIn2015], the second competition event in the EU-funded robotics project RoCKIn. It took place in Lisbon (Portugal), from 21 to 23 November 2015.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:RoCKIn2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Team Description Paper of our &amp;quot;Watermelon Project&amp;quot; team for the RoCKIn@home Challenge in Lisbon is available at the [http://rockincompetition.eu/node/256 RoCKIn competition] page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 24, 2014) Coinciding with the [http://www.eu-robotics.net/eurobotics-week/about-eurobotics-week/ European Robotics Week], we are going to participate in the first competition event of RoCKIn. It will take place in the Cité de l'espace science museum in Toulouse, France.&lt;br /&gt;
&lt;br /&gt;
You can follow up with us [[RoCKInChallengeToulouse|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We want to express our gratitude to [http://www.mountain.es/ Mountain] for sponsoring our group in the RoCKIn Challenge.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Lgo_mountain.png|center|330px|link=http://www.mountain.es/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We have been using Mountain hardware for research and development with our robots. The control software of the CeRVaNTeS platform runs on a [http://www.mountain.es/portatiles F-13] laptop. Soon, our MYRABot platform will also mount one of their machines.&lt;br /&gt;
&lt;br /&gt;
We want to express our sincere gratitude to [http://www.mountain.es/ Mountain] for supporting small research groups like ours.&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Home&amp;diff=4821</id>
		<title>Home</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Home&amp;diff=4821"/>
				<updated>2016-02-22T12:40:10Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the official wiki of the Robotics Group of the [http://www.unileon.es University of León] (ULE). We are a group of researchers (faculty and students) interested in creating autonomous behaviour for mobile robots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-grupo2015.jpg|center|border|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We are proud members of [http://www.redaf.es RedAF] (Red Nacional de Investigación en Agentes Físicos), [http://retecog.net ReteCog] (Red Temática de Ciencia Cognitiva), and [http://www.euron.org/members/index#257 Euron].&lt;br /&gt;
&lt;br /&gt;
==Location==&lt;br /&gt;
&lt;br /&gt;
Our lab is in the second floor of the [https://www.google.com/maps/@42.6150329,-5.5635688,203m/data=!3m1!1e3 MIC building] (Módulo de Investigación en Cibernética - Cybernetics Research Module). If you are interested in our work, you are welcome to pay us a visit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Mic2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We also hang around the [http://centros.unileon.es/eiii/ EIII] (Escuela de Ingenierías Industrial e Informática - Engineering School), which is [https://www.google.com/maps/@42.6141916,-5.5611339,203m/data=!3m1!1e3 here].&lt;br /&gt;
&lt;br /&gt;
==News==&lt;br /&gt;
&lt;br /&gt;
(December 2015) Francisco J. Lera just became the newest doctor in the group. His defense was held on December 11 and he was awarded a &amp;quot;Sobresaliente Cum Laude&amp;quot;. Congratulations, Fran!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:tesis-fran.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 2015) We took part in [[rockin2015|RoCKIn2015]], the second competition event in the EU-funded robotics project RoCKIn. It took place in Lisbon (Portugal), from 21 to 23 November 2015.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:RoCKIn2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Team Description Paper of our &amp;quot;Watermelon Project&amp;quot; team for the RoCKIn@home Challenge in Lisbon is available at the [http://rockincompetition.eu/node/256 RoCKIn competition] page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 24, 2014) Coinciding with the [http://www.eu-robotics.net/eurobotics-week/about-eurobotics-week/ European Robotics Week], we are going to participate in the first competition event of RoCKIn. It will take place in the Cité de l'espace science museum in Toulouse, France.&lt;br /&gt;
&lt;br /&gt;
You can follow up with us [[RoCKInChallengeToulouse|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We want to express our gratitude to [http://www.mountain.es/ Mountain] for sponsoring our group in the RoCKIn Challenge.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Lgo_mountain.png|center|330px|link=http://www.mountain.es/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We have been using Mountain hardware for research and development with our robots. The control software of the CeRVaNTeS platform runs on a [http://www.mountain.es/portatiles F-13] laptop. Soon, our MYRABot platform will also mount one of their machines.&lt;br /&gt;
&lt;br /&gt;
We want to express our sincere gratitude to [http://www.mountain.es/ Mountain] for supporting small research groups like ours.&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Home&amp;diff=4820</id>
		<title>Home</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Home&amp;diff=4820"/>
				<updated>2016-02-22T12:38:02Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the official wiki of the Robotics Group of the [http://www.unileon.es University of León] (ULE). We are a group of researchers (faculty and students) interested in creating autonomous behaviour for mobile robots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-grupo2015.jpg|center|border|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We are proud members of [http://www.redaf.es RedAF] (Red Nacional de Investigación en Agentes Físicos), [http://retecog.net ReteCog] (Red Temática de Ciencia Cognitiva), and [http://www.euron.org/members/index#257 Euron].&lt;br /&gt;
&lt;br /&gt;
==Location==&lt;br /&gt;
&lt;br /&gt;
Our lab is in the second floor of the [https://www.google.com/maps/@42.6150329,-5.5635688,203m/data=!3m1!1e3 MIC building] (Módulo de Investigación en Cibernética - Cybernetics Research Module). If you are interested in our work, you are welcome to pay us a visit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Mic2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We also hang around the [http://centros.unileon.es/eiii/ EIII] (Escuela de Ingenierías Industrial e Informática - Engineering School), which is [https://www.google.com/maps/@42.6141916,-5.5611339,203m/data=!3m1!1e3 here].&lt;br /&gt;
&lt;br /&gt;
==News==&lt;br /&gt;
&lt;br /&gt;
(December 2015) Francisco J. Lera just became the newest doctor in the group. His defense was held on December 11 and he was awarded a &amp;quot;Sobresaliente Cum Laude&amp;quot;. Congratulations, Fran!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:tesis-fran.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 2015) We took part in [http://rockinrobotchallenge.eu/rockin2015.php RoCKIn2015], the second competition event in the EU-funded robotics project RoCKIn. It took place in Lisbon (Portugal), from 21 to 23 November 2015.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:RoCKIn2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Team Description Paper of our &amp;quot;Watermelon Project&amp;quot; team for the RoCKIn@home Challenge in Lisbon is available at the [http://rockincompetition.eu/node/256 RoCKIn competition] page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 24, 2014) Coinciding with the [http://www.eu-robotics.net/eurobotics-week/about-eurobotics-week/ European Robotics Week], we are going to participate in the first competition event of RoCKIn. It will take place in the Cité de l'espace science museum in Toulouse, France.&lt;br /&gt;
&lt;br /&gt;
You can follow up with us [[RoCKInChallengeToulouse|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We want to express our gratitude to [http://www.mountain.es/ Mountain] for sponsoring our group in the RoCKIn Challenge.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Lgo_mountain.png|center|330px|link=http://www.mountain.es/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We have been using Mountain hardware for research and development with our robots. The control software of the CeRVaNTeS platform runs on a [http://www.mountain.es/portatiles F-13] laptop. Soon, our MYRABot platform will also mount one of their machines.&lt;br /&gt;
&lt;br /&gt;
We want to express our sincere gratitude to [http://www.mountain.es/ Mountain] for supporting small research groups like ours.&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Home&amp;diff=4800</id>
		<title>Home</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Home&amp;diff=4800"/>
				<updated>2016-01-19T19:04:16Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the official wiki of the Robotics Group of the [http://www.unileon.es University of León] (ULE). We are a group of researchers (faculty and students) interested in creating autonomous behaviour for mobile robots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-grupo2015.jpg|center|border|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We are proud members of [http://www.redaf.es RedAF] (Red Nacional de Investigación en Agentes Físicos), [http://retecog.net ReteCog] (Red Temática de Ciencia Cognitiva), and [http://www.euron.org/members/index#257 Euron].&lt;br /&gt;
&lt;br /&gt;
==Location==&lt;br /&gt;
&lt;br /&gt;
Our lab is in the second floor of the [https://www.google.com/maps/@42.6150329,-5.5635688,203m/data=!3m1!1e3 MIC building] (Módulo de Investigación en Cibernética - Cybernetics Research Module). If you are interested in our work, you are welcome to pay us a visit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Mic2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We also hang around the [http://centros.unileon.es/eiii/ EIII] (Escuela de Ingenierías Industrial e Informática - Engineering School), which is [https://www.google.com/maps/@42.6141916,-5.5611339,203m/data=!3m1!1e3 here].&lt;br /&gt;
&lt;br /&gt;
==News==&lt;br /&gt;
&lt;br /&gt;
(December 2015) Francisco J. Lera just became the newest doctor in the group. His defense was held on December 11 and he was awarded a &amp;quot;Sobresaliente Cum Laude&amp;quot;. Congratulations, Fran!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:tesis-fran.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 30, 2015) The Team Description Paper of our &amp;quot;Watermelon Project&amp;quot; team for the RoCKIn@home Challenge in Lisbon (Portugal) is available at the [http://rockincompetition.eu/node/256 RoCKIn competition] page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 24, 2014) Coinciding with the [http://www.eu-robotics.net/eurobotics-week/about-eurobotics-week/ European Robotics Week], we are going to participate in the first competition event of RoCKIn. It will take place in the Cité de l'espace science museum in Toulouse, France.&lt;br /&gt;
&lt;br /&gt;
You can follow up with us [[RoCKInChallengeToulouse|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 19, 2014) CeRVaNTeS platform. Teleoperation using an [http://www.microsoft.com/hardware/es-es/p/xbox-360-wireless-controller-for-windows Xbox 360 wireless controller] ([[CeRVaNTeS' Teleoperation with xbox360 wireless controller (xbox360 controller+joy)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;wF_gKr1KWfg#t=13&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 18, 2014) CeRVaNTeS platform. Integration in [http://moveit.ros.org/ moveIt!] ([[Integration of CeRVaNTeS in moveIt! (gazebo+moveIt!)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;bYtvNJHraVs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 11, 2014)  [http://www.mountain.es/ Mountain] will sponsor our group for the next RoCKIn challenge.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Lgo_mountain.png|center|330px|link=http://www.mountain.es/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the last two months, we have been using Mountain hardware for research and development with our robots. The control software of the CeRVaNTeS platform runs on an [http://www.mountain.es/portatiles F-13] laptop. Soon, our MYRABot platform will also mount one of their machines.&lt;br /&gt;
&lt;br /&gt;
We want to express our sincere gratitude to [http://www.mountain.es/ Mountain] for their support of small research groups like ours.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(June 16, 2014) [[Projects#Cervantes_Project|CeRVaNTeS platform]]. First steps and proof of concept:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;JtDIRky7EKs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(January 25-31, 2014) RoCKIn Camp participation. Our team attended the [[RoCKIn2014|RoCKIn Camp 2014]] in Rome, Italy, where they tested the performance of the MYRABot system in a competition environment. The members also took part in lectures and practical sessions on basic software infrastructure, perception, manipulation, and human-robot interaction through speech.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:MBPAL1.JPG|center|border|250px]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Activities&amp;diff=4799</id>
		<title>Activities</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Activities&amp;diff=4799"/>
				<updated>2016-01-19T19:02:21Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Last Events=&lt;br /&gt;
&lt;br /&gt;
==Francisco Lera's PhD dissertation==&lt;br /&gt;
&lt;br /&gt;
Francisco J. Lera just became the newest doctor of the group. His defense was held on December 11th, and he received a &amp;quot;Sobresaliente Cum Laude&amp;quot;. Congratulations, Fran!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:tesis-fran.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RoCKIn 2015 (Lisboa) ==&lt;br /&gt;
&lt;br /&gt;
[http://rockinrobotchallenge.eu/rockin2015.php RoCKIn2015], the second and main competition event of the EU-funded robotics project RoCKIn, was held in Lisbon, 21-23 November 2015.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:RoCKIn2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It was held jointly with [https://web.fe.up.pt/~robot2015/ ROBOT'2015], the Second Iberian Robotics Conference, where we also presented a paper.&lt;br /&gt;
&lt;br /&gt;
==RoCKIn Event 2014 (Toulouse)==&lt;br /&gt;
&lt;br /&gt;
(main article of this event [[RoCKInChallengeToulouse | here]])&lt;br /&gt;
&lt;br /&gt;
==RoCKIn Camp 2014 (Rome)==&lt;br /&gt;
&lt;br /&gt;
(main article of this event [[RoCKIn2014 | here]])&lt;br /&gt;
&lt;br /&gt;
The team was in Rome testing the MYRABot robot and learned about vision, grasping, and natural language understanding. We tested the MB+ (MYRABot+) robot, a modified version of MYRABot. Other robots, such as REEM from PAL Robotics, also took part in the competition.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:MBPAL1.JPG|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Hack For Good 2013==&lt;br /&gt;
&lt;br /&gt;
Proposals for [http://hackforgood.net HackForGood] can be found [[proposals-hackforfood-2013 | here]].&lt;br /&gt;
&lt;br /&gt;
=Seminars=&lt;br /&gt;
&lt;br /&gt;
* European Robotics Week Conference: [http://www.eu-robotics.net/eurobotics-week/events-2013/low-cost-service-robots-ule-watermelon-project-for-rockinhome-challenge.html Low Cost Robots for RoCKIn Challenge]. November 27, 2013.&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2012|Introduction to Mobile Robot Programming (4th edition)]]. October 22 to 26, 2012.&lt;br /&gt;
&lt;br /&gt;
* Conference ''&amp;quot;Augmented reality for elderly medication control using a mobile robot&amp;quot;'' at the Curso de Verano [http://catedratelefonica.unileon.es/2012/07/09/participacion-de-la-catedra-en-el-curso-de-verano-envejecimiento-activo-una-aproximacion-a-la-realidad-actual/ Envejecimiento Activo].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-Curso-Verano-2012.jpeg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Conference ''&amp;quot;The role of edutainment robotics in mechatronics research and teaching&amp;quot;'' by [http://www.ipb.pt/~jllima/ José Luis Lima] and José Gonçalves from the [http://www.ipb.pt Polytechnic Institute of Bragança], on November 4, 2011.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:CharlaIPB-2011.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2011 | Mobile Robot Programming, medium level (1st edition)]]. March 2011.&lt;br /&gt;
&lt;br /&gt;
* Seminar on BICA programming in the SPITeam for the RoboCup2011. Instructors: Francisco Martín and Carlos E. Agüero from [http://www.robotica-urjc.es URJC Robotics Group].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;wikiflv width=&amp;quot;300&amp;quot; height=&amp;quot;250&amp;quot; logo=&amp;quot;true&amp;quot;&amp;gt;/videos/CursoBica.flv&amp;lt;/wikiflv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2010|Introduction to Mobile Robot Programming (3rd edition)]]. December 9 to 23, 2010.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:16122012.JPG|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2010|Introduction to Mobile Robot Programming (2nd edition)]]. March 8 to 12, 2010.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;wikiflv width=&amp;quot;300&amp;quot; height=&amp;quot;250&amp;quot; logo=&amp;quot;true&amp;quot;&amp;gt;/videos/videoSumo20100312.flv&amp;lt;/wikiflv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [http://roboticaleon2009.googlepages.com/ Mobile Robot programming]. July 1 to 7, 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Carrera.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [http://www.compcog.org/ CompCog] ESF Workshop, held at the University of León, May 26 to 27, 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Compcog09-leon.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Introductory seminar on Nao programming (instructor: [http://www.robotica-urjc.es Dr. Francisco Martín Rico]).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Workshop-paco09.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=In the Press=&lt;br /&gt;
&lt;br /&gt;
* [http://www.diariodevalladolid.es/noticias/valladolid/villanueva-augura-empresas-innoven-creceran-triple-proximos-cinco-anos_14514.html Award ''Innovadores Castilla y Leon 2015'']  (El Mundo, 2015)&lt;br /&gt;
&lt;br /&gt;
* [http://www.elmundo.es/economia/2014/12/31/54a2ee5b22601d42688b4580.html?intcmp=ULNOH002 Robots que medican y chalan], coverage of the group's activities by the national newspaper El Mundo (December 2014).&lt;br /&gt;
&lt;br /&gt;
* [http://www.diariodeleon.es/noticias/leon/premio-tic_845853.html Innova Award] to the best 2014 IT project, given by local newspaper (Diario de León).&lt;br /&gt;
&lt;br /&gt;
* Article about Fernando's work on mobile robots: [http://www.fernando.casadogarcia.es/innova.jpg Suplemento Innova (Diario de León)].&lt;br /&gt;
&lt;br /&gt;
[[Image:DL-FernandoCasado.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
* Juan Felipe was awarded the [http://www.diariodeleon.es/noticias/leon/juan-felipe-garcia_694743.html Young Talent 2012] title by Diario de León.&lt;br /&gt;
* Article about using mobile robots in [http://www.la-cronica.net/2011/02/28/leon/robots-moviles-para-asistir-a-personas-dependientes-113536.htm La Crónica].&lt;br /&gt;
&lt;br /&gt;
* Conference on social robotics applied to handicapped people. Organized by CRE:&lt;br /&gt;
&lt;br /&gt;
# Information about the event at [http://www.imserso.es/cresanandres_01/auxiliares/actualidad/novedades/2011/enero/IM_042486?dDocName=IM_042486 CRE's webpage].&lt;br /&gt;
&lt;br /&gt;
# Press note at [http://www.diariodeleon.es/noticias/noticia.asp?pkid=577647  Diario de León].&lt;br /&gt;
&lt;br /&gt;
* Press Note about [http://www.diariodeleon.es/noticias/noticia.asp?pkid=577402 Juan's PhD Viva].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FotoDiarioLeon2011.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Another piece in the local newspaper, about a project on [http://www.diariodeleon.es/noticias/noticia.asp?pkid=571538 robotics for security].&lt;br /&gt;
&lt;br /&gt;
* The same story on local TV:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;videoflash&amp;gt;Z-lNsL-o8UY&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Another review in the local newspapers: [http://www.leonoticias.com/frontend/leonoticias/Dos-Robots-leoneses-Acudiran-A-La-Copa-De-Europa-De-Futb-vn45312-vst384 Leonnoticias], and [http://www.la-cronica.net/2010/03/11/leon/ingenieros-de-la-ule-estaran-en-la-champions-de-futbol-de-robots-72130.htm La Crónica], about the team entering the [http://www.robocup-mediterranean-open.org/ Mediterranean Open] (Rome, March 18 to 20, 2010).&lt;br /&gt;
&lt;br /&gt;
* Interview in the local newspaper (Diario de León): [http://www.diariodeleon.es/noticias/noticia.asp?pkid=495839 first one] and [http://www.diariodeleon.es/noticias/noticia.asp?pkid=495838 second one].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FotoDiarioLeon.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
= RoboCup =&lt;br /&gt;
&lt;br /&gt;
* We were proud members of the Spanish Team ([https://twitter.com/spiteam_robot SPITeam]) in the RoboCup Standard Platform League, providing the goalkeeper. SPITeam was a joint effort of the robotics groups of Rey Juan Carlos University, Rovira i Virgili University, and the University of León. The last Team Description Paper can be found [http://www.gsyc.es/~fmartin/SPiTeamApplication.pdf here]. Our last competition entry was in [http://www.robocup2011.org/en/content.asp?PID={41EB0A0E-7EAF-4365-98F7-1D951C9B8D88}&amp;amp;PageFn=Teams Istanbul 2011].&lt;br /&gt;
&lt;br /&gt;
* We were previously also proud members of the [http://www.teamchaos.es TeamChaos] RoboCup SPL team, contributing the goalie for [http://www.robocup2009.org/ RoboCup 2009] and the [http://www.robocup-german-open.de/en German Open 2009].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Teamchaos-08.JPG|center|300px]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Activities&amp;diff=4798</id>
		<title>Activities</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Activities&amp;diff=4798"/>
				<updated>2016-01-19T19:01:34Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Last Events=&lt;br /&gt;
&lt;br /&gt;
==Francisco Lera's PhD dissertation==&lt;br /&gt;
&lt;br /&gt;
Francisco J. Lera just became the newest doctor of the group. His defense was held on December 11th, and he received a &amp;quot;Sobresaliente Cum Laude&amp;quot;. Congratulations, Fran!&lt;br /&gt;
&lt;br /&gt;
[[Image:tesis-fran.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
==RoCKIn 2015 (Lisboa) ==&lt;br /&gt;
&lt;br /&gt;
[http://rockinrobotchallenge.eu/rockin2015.php RoCKIn2015], the second and main competition event of the EU-funded robotics project RoCKIn, was held in Lisbon, 21-23 November 2015.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:RoCKIn2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
It was held jointly with [https://web.fe.up.pt/~robot2015/ ROBOT'2015], the Second Iberian Robotics Conference, where we also presented a paper.&lt;br /&gt;
&lt;br /&gt;
==RoCKIn Event 2014 (Toulouse)==&lt;br /&gt;
&lt;br /&gt;
(main article of this event [[RoCKInChallengeToulouse | here]])&lt;br /&gt;
&lt;br /&gt;
==RoCKIn Camp 2014 (Rome)==&lt;br /&gt;
&lt;br /&gt;
(main article of this event [[RoCKIn2014 | here]])&lt;br /&gt;
&lt;br /&gt;
The team was in Rome testing the MYRABot robot and learned about vision, grasping, and natural language understanding. We tested the MB+ (MYRABot+) robot, a modified version of MYRABot. Other robots, such as REEM from PAL Robotics, also took part in the competition.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:MBPAL1.JPG|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Hack For Good 2013==&lt;br /&gt;
&lt;br /&gt;
Proposals for [http://hackforgood.net HackForGood] can be found [[proposals-hackforfood-2013 | here]].&lt;br /&gt;
&lt;br /&gt;
=Seminars=&lt;br /&gt;
&lt;br /&gt;
* European Robotics Week Conference: [http://www.eu-robotics.net/eurobotics-week/events-2013/low-cost-service-robots-ule-watermelon-project-for-rockinhome-challenge.html Low Cost Robots for RoCKIn Challenge]. November 27, 2013.&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2012|Introduction to Mobile Robot Programming (4th edition)]]. October 22 to 26, 2012.&lt;br /&gt;
&lt;br /&gt;
* Conference ''&amp;quot;Augmented reality for elderly medication control using a mobile robot&amp;quot;'' at the Curso de Verano [http://catedratelefonica.unileon.es/2012/07/09/participacion-de-la-catedra-en-el-curso-de-verano-envejecimiento-activo-una-aproximacion-a-la-realidad-actual/ Envejecimiento Activo].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-Curso-Verano-2012.jpeg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Conference ''&amp;quot;The role of edutainment robotics in mechatronics research and teaching&amp;quot;'' by [http://www.ipb.pt/~jllima/ José Luis Lima] and José Gonçalves from the [http://www.ipb.pt Polytechnic Institute of Bragança], on November 4, 2011.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:CharlaIPB-2011.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2011 | Mobile Robot Programming, medium level (1st edition)]]. March 2011.&lt;br /&gt;
&lt;br /&gt;
* Seminar on BICA programming in the SPITeam for the RoboCup2011. Instructors: Francisco Martín and Carlos E. Agüero from [http://www.robotica-urjc.es URJC Robotics Group].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;wikiflv width=&amp;quot;300&amp;quot; height=&amp;quot;250&amp;quot; logo=&amp;quot;true&amp;quot;&amp;gt;/videos/CursoBica.flv&amp;lt;/wikiflv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2010|Introduction to Mobile Robot Programming (3rd edition)]]. December 9 to 23, 2010.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:16122012.JPG|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [[ProgramacionRobots2010|Introduction to Mobile Robot Programming (2nd edition)]]. March 8 to 12, 2010.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;wikiflv width=&amp;quot;300&amp;quot; height=&amp;quot;250&amp;quot; logo=&amp;quot;true&amp;quot;&amp;gt;/videos/videoSumo20100312.flv&amp;lt;/wikiflv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Course [http://roboticaleon2009.googlepages.com/ Mobile Robot programming]. July 1 to 7, 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Carrera.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [http://www.compcog.org/ CompCog] ESF Workshop, held at the University of León, May 26 to 27, 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Compcog09-leon.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Introductory seminar on Nao programming (instructor: [http://www.robotica-urjc.es Dr. Francisco Martín Rico]).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Workshop-paco09.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=In the Press=&lt;br /&gt;
&lt;br /&gt;
* [http://www.diariodevalladolid.es/noticias/valladolid/villanueva-augura-empresas-innoven-creceran-triple-proximos-cinco-anos_14514.html Award ''Innovadores Castilla y Leon 2015'']  (El Mundo, 2015)&lt;br /&gt;
&lt;br /&gt;
* [http://www.elmundo.es/economia/2014/12/31/54a2ee5b22601d42688b4580.html?intcmp=ULNOH002 Robots que medican y chalan], coverage of the group's activities by the national newspaper El Mundo (December 2014).&lt;br /&gt;
&lt;br /&gt;
* [http://www.diariodeleon.es/noticias/leon/premio-tic_845853.html Innova Award] to the best 2014 IT project, given by local newspaper (Diario de León).&lt;br /&gt;
&lt;br /&gt;
* Article about Fernando's work on mobile robots: [http://www.fernando.casadogarcia.es/innova.jpg Suplemento Innova (Diario de León)].&lt;br /&gt;
&lt;br /&gt;
[[Image:DL-FernandoCasado.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
* Juan Felipe was awarded the [http://www.diariodeleon.es/noticias/leon/juan-felipe-garcia_694743.html Young Talent 2012] title by Diario de León.&lt;br /&gt;
* Article about using mobile robots in [http://www.la-cronica.net/2011/02/28/leon/robots-moviles-para-asistir-a-personas-dependientes-113536.htm La Crónica].&lt;br /&gt;
&lt;br /&gt;
* Conference on social robotics applied to handicapped people. Organized by CRE:&lt;br /&gt;
&lt;br /&gt;
# Information about the event at [http://www.imserso.es/cresanandres_01/auxiliares/actualidad/novedades/2011/enero/IM_042486?dDocName=IM_042486 CRE's webpage].&lt;br /&gt;
&lt;br /&gt;
# Press note at [http://www.diariodeleon.es/noticias/noticia.asp?pkid=577647  Diario de León].&lt;br /&gt;
&lt;br /&gt;
* Press Note about [http://www.diariodeleon.es/noticias/noticia.asp?pkid=577402 Juan's PhD Viva].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FotoDiarioLeon2011.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Another piece in the local newspaper, about a project on [http://www.diariodeleon.es/noticias/noticia.asp?pkid=571538 robotics for security].&lt;br /&gt;
&lt;br /&gt;
* The same story on local TV:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;videoflash&amp;gt;Z-lNsL-o8UY&amp;lt;/videoflash&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Another review in the local newspapers: [http://www.leonoticias.com/frontend/leonoticias/Dos-Robots-leoneses-Acudiran-A-La-Copa-De-Europa-De-Futb-vn45312-vst384 Leonnoticias], and [http://www.la-cronica.net/2010/03/11/leon/ingenieros-de-la-ule-estaran-en-la-champions-de-futbol-de-robots-72130.htm La Crónica], about the team entering the [http://www.robocup-mediterranean-open.org/ Mediterranean Open] (Rome, March 18 to 20, 2010).&lt;br /&gt;
&lt;br /&gt;
* Interview in the local newspaper (Diario de León): [http://www.diariodeleon.es/noticias/noticia.asp?pkid=495839 first one] and [http://www.diariodeleon.es/noticias/noticia.asp?pkid=495838 second one].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FotoDiarioLeon.jpg|center|300px]]&lt;br /&gt;
&lt;br /&gt;
= RoboCup =&lt;br /&gt;
&lt;br /&gt;
* We were proud members of the Spanish Team ([https://twitter.com/spiteam_robot SPITeam]) in the RoboCup Standard Platform League, providing the goalkeeper. SPITeam was a joint effort of the robotics groups of Rey Juan Carlos University, Rovira i Virgili University, and the University of León. The last Team Description Paper can be found [http://www.gsyc.es/~fmartin/SPiTeamApplication.pdf here]. Our last competition entry was in [http://www.robocup2011.org/en/content.asp?PID={41EB0A0E-7EAF-4365-98F7-1D951C9B8D88}&amp;amp;PageFn=Teams Istanbul 2011].&lt;br /&gt;
&lt;br /&gt;
* We were previously also proud members of the [http://www.teamchaos.es TeamChaos] RoboCup SPL team, contributing the goalie for [http://www.robocup2009.org/ RoboCup 2009] and the [http://www.robocup-german-open.de/en German Open 2009].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Teamchaos-08.JPG|center|300px]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=RoboCup&amp;diff=4797</id>
		<title>RoboCup</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=RoboCup&amp;diff=4797"/>
				<updated>2016-01-19T19:00:23Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
This page is currently under development.&lt;br /&gt;
&lt;br /&gt;
= Members =&lt;br /&gt;
&lt;br /&gt;
* Francisco Javier Rodríguez Lera&lt;br /&gt;
* Francisco Martín Rico&lt;br /&gt;
* Vicente Matellán Olivera&lt;br /&gt;
* Fernando Casado García&lt;br /&gt;
* Jesús Balsa Comerón&lt;br /&gt;
* Diego García Ordás&lt;br /&gt;
&lt;br /&gt;
= Robots =&lt;br /&gt;
&lt;br /&gt;
* OrBiOne&lt;br /&gt;
&lt;br /&gt;
* Cervantes&lt;br /&gt;
&lt;br /&gt;
= Papers =&lt;br /&gt;
&lt;br /&gt;
Intention to participate&lt;br /&gt;
&lt;br /&gt;
Description paper&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=People&amp;diff=4796</id>
		<title>People</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=People&amp;diff=4796"/>
				<updated>2016-01-19T18:58:26Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Current members=&lt;br /&gt;
&lt;br /&gt;
The robotics group of the University of León currently comprises:&lt;br /&gt;
&lt;br /&gt;
'''Faculty'''&lt;br /&gt;
# [http://robotica.unileon.es/~vmo Vicente Matellán Olivera] (head of the group)&lt;br /&gt;
# [https://www.linkedin.com/pub/camino-fern%C3%A1ndez-llamas/2b/169/74 Camino Fernández Llamas]&lt;br /&gt;
# Francisco Jesús Rodríguez Sedano&lt;br /&gt;
&lt;br /&gt;
'''Postdocs'''&lt;br /&gt;
# [[Miguel-A-Conde | Miguel Ángel Conde González]]&lt;br /&gt;
# [[Francisco-phd|Francisco Javier Rodríguez Lera]] (Augmented Reality in Assistance Environments) &lt;br /&gt;
# [https://www.linkedin.com/pub/juan-felipe-garc%C3%ADa-sierra/65/44a/715?trk=pub-pbmap Juan Felipe García Sierra]&lt;br /&gt;
&lt;br /&gt;
'''PhD students'''&lt;br /&gt;
# Gonzalo Esteban Costales (Haptic interaction) (October 2011 - )&lt;br /&gt;
# [[User:Victorm | Víctor Rodríguez]] ([[PhD-3D-Object-Tracking|3D object recognition and tracking]]) (October 2011 - )&lt;br /&gt;
# [[User:Fernando | Fernando Casado García]] ([[Mobile manipulation]]) (September 2013 - )&lt;br /&gt;
# Diego García Ordás (3D object recognition and tracking) (January 2016 - )&lt;br /&gt;
&lt;br /&gt;
'''Undergrad students'''&lt;br /&gt;
# Jesús Balsa Comerón&lt;br /&gt;
# Claudia Álvarez Aparicio&lt;br /&gt;
# Pablo Blanco Medina&lt;br /&gt;
# Laura Inyesto Alonso&lt;br /&gt;
&lt;br /&gt;
=Past members=&lt;br /&gt;
&lt;br /&gt;
==Visitors==&lt;br /&gt;
# [https://www.linkedin.com/pub/t%C3%A2nia-carrera/72/164/1a3 Tânia Carrera] from Bragança, Portugal ([[ Tania-TFM-ROS04|Kinect with ROS and PCL]]) (February - June 2013)&lt;br /&gt;
# [http://souzamarcelo.weebly.com Marcelo de Souza] from Santa Catarina State University, [http://www.udesc.br UDESC]/CEAVI, Brasil ([[Marcelo-TurtleBot|Navigation Algorithms for Turtlebot]]) (October 2012 - February 2013)&lt;br /&gt;
# [http://homepage.univie.ac.at/andrea.ravignani/ Andrea Ravignani] from the University of Wien, Austria ([[Comparative-Cognition|Comparative Cognition]]) (October - December 2012)&lt;br /&gt;
# [http://neithan.weebly.com/ Jesús Martínez Gómez] from the University of Castilla la Mancha (2008 - 2009)&lt;br /&gt;
&lt;br /&gt;
==2014-2015==&lt;br /&gt;
&lt;br /&gt;
* Summer research residence&lt;br /&gt;
# Rubén Rodríguez Fernández ([[Ruben-RV-ROS01|Turtlebot Navigation]]) (July 2013 - )&lt;br /&gt;
&lt;br /&gt;
==2013-2014==&lt;br /&gt;
&lt;br /&gt;
* Master students&lt;br /&gt;
# [https://www.linkedin.com/pub/aitana-alonso-nogueira/5b/174/572?trk=pub-pbmap Aitana Alonso Nogueira] ([[Aitana-TFM-kinect-ROS05|Human tracking and recognition]]) ([[Aitana-TFM-Android-MYRA|Evolución de la Solución de Asistencia MYRA para su Despliegue en Plataformas Android]]) (September 2013 - September 2014)&lt;br /&gt;
* Summer research residence (undergrad)&lt;br /&gt;
# [https://es.linkedin.com/pub/jose-%C3%A1lvaro-fern%C3%A1ndez-%C3%A1lvarez/65/a00/344 Jose Álvaro Fernández Álvarez] ([[Alvaro-RV-HAPTICROS01|Haptics and ROS]]) (July - September 2013)&lt;br /&gt;
&lt;br /&gt;
==2012-2013==&lt;br /&gt;
* PhD students&lt;br /&gt;
# [https://es.linkedin.com/pub/fernando-garc%C3%ADa-d%C3%ADaz-calvo/3b/b74/b16 Fernando García Diaz-Calvo] (Autonomous behavior generation in mobile robots) (October 2012 - discontinued)&lt;br /&gt;
* Master students&lt;br /&gt;
# Carlos Rodríguez Hernández ([[Carlos-TFM-MYRABot01|MYRA Robot: Hardware Update]]) (January - September 2013)&lt;br /&gt;
* Undergrad students&lt;br /&gt;
# [https://es.linkedin.com/pub/david-llamas-abad/88/548/9b6 David Llamas Abad] ([[David-PFC-ROS3|AR in ROS]])&lt;br /&gt;
# Julia Pérez del Río ([[JuliaP-PFC-Modular01|Modular robotics]])&lt;br /&gt;
&lt;br /&gt;
==2011-2012==&lt;br /&gt;
* Master students&lt;br /&gt;
# Isidoro Gayo Vélez ([[Isidoro-TFM-Rovio02|Using rovio for tele-surveillance]]) (January 2011 - discontinued)&lt;br /&gt;
# [https://www.linkedin.com/pub/alvaro-botas-mu%C3%B1oz/3a/517/90 Álvaro Botas Muñoz] ([[AlvaroB-TFM-ROS01 |AR Drone]]) (November 2011 - July 2012)&lt;br /&gt;
# [[User:Fernando | Fernando Casado García]] ([[Fernando-TFM-ROS02|Controlling Turtlebot with ROS]]) (January 2012 - July 2012)&lt;br /&gt;
* Undergrad students&lt;br /&gt;
# Jesús Balsa Comerón ([[JesusB-PFC-Nao01|Nao Control]]) (March 2011 - discontinued)&lt;br /&gt;
&lt;br /&gt;
==2010-2011==&lt;br /&gt;
* PhD students&lt;br /&gt;
# [https://www.linkedin.com/pub/juan-felipe-garc%C3%ADa-sierra/65/44a/715?trk=pub-pbmap Juan Felipe García Sierra] ([https://buleria.unileon.es/handle/10612/815 Aportaciones a la computación de atención visual y aplicación al control de un robot humanoide])&lt;br /&gt;
* Master students&lt;br /&gt;
# Pablo Lobato Criado ([[PabloL-Rovio04|Navigation with ROS and Rovio]])&lt;br /&gt;
# [https://es.linkedin.com/pub/v%C3%ADctor-manuel-gonz%C3%A1lez-mateos/38/356/827 Víctor Manuel González Mateos] ([[VictorM-TFM-Rovio03|Navigation with Rovio]])&lt;br /&gt;
# Julio San Román Gutiérrez (Augmented reality on turtlebot) (March 2010 - discontinued)&lt;br /&gt;
# [[User:Victorm | Víctor Rodríguez]] ([[TFM-Nao-Goalkeeper|Implementing a RoboCup goalie with Nao using BICA]]) (October 2010 - July 2011)&lt;br /&gt;
* Undergrad students&lt;br /&gt;
# [https://es.linkedin.com/pub/elena-ortega-fern%C3%A1ndez/35/aa/b17 Elena Ortega Fernández] ([[ElenaO-RV-proy1|NaoQI and BICA]]) (November 2010 - September 2011)&lt;br /&gt;
# [https://www.linkedin.com/pub/alvaro-botas-mu%C3%B1oz/3a/517/90 Álvaro Botas Muñoz] ([[AlvaroB-PFC-Drone02|AR Drone]]) (November 2010 - September 2011)&lt;br /&gt;
* Other&lt;br /&gt;
# [https://www.linkedin.com/pub/mario-castro-de-lera/14/886/3b9 Mario Castro de Lera] from the Robotics course of the Master in Cybernetics ([[MarioCL-Drone01|AR Drone]])&lt;br /&gt;
# [https://co.linkedin.com/pub/angel-rojo-roman/45/54a/0 Ángel Rojo Román] from the subject of Sistemas informáticos (Computer Systems) ([[AngelR-PFC-Drone01|AR Drone]]) (November 2010 - September 2011)&lt;br /&gt;
&lt;br /&gt;
==2009-2010==&lt;br /&gt;
* Master students&lt;br /&gt;
# Julio San Román Gutiérrez ([[JulioSR-TFM-Rovio01|Augmented reality for teleoperation of a Rovio robot]])&lt;br /&gt;
# [https://es.linkedin.com/in/vfidalgo Víctor Fidalgo Villar] ([[VictorFV-TFM-Cenital|Ball tracking with overhead camera]])&lt;br /&gt;
* Undergrad students&lt;br /&gt;
# [https://es.linkedin.com/pub/pablo-flecha-guti%C3%A9rrez/24/907/973 Pablo Flecha Gutiérrez]&lt;br /&gt;
# [https://es.linkedin.com/pub/david-mart%C3%ADnez-mart%C3%ADnez/61/656/176 David Martínez Martínez] ([[DMartinez-PFC-Nao|Teleoperation of Nao robot]])&lt;br /&gt;
# [https://es.linkedin.com/in/hectorquintian Héctor Quintián Pardo] and Alba Rodríguez from Artificial Intelligence, Industrial Engineering (2nd cycle) ([[HAQR-IA-Rovio|Teleoperation of Rovio robot]]) (2009 - 2010, 2nd semester)&lt;br /&gt;
* Summer research residence (undergrad)&lt;br /&gt;
# [https://es.linkedin.com/pub/elena-ortega-fern%C3%A1ndez/35/aa/b17 Elena Ortega Fernández] ([[ElenaO-RV-proy1|NaoQI and BICA]]) (July - September 2010)&lt;br /&gt;
# [https://es.linkedin.com/in/hectorquintian Héctor Quintián Pardo] ([[HectorQ-RV-proy2|Teleoperation of Rovio robot]]) (July - September 2010)&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Home&amp;diff=4795</id>
		<title>Home</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Home&amp;diff=4795"/>
				<updated>2016-01-19T18:53:30Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the official wiki of the Robotics Group of the [http://www.unileon.es University of León] (ULE). We are a group of researchers (faculty and students) interested in creating autonomous behaviour for mobile robots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-grupo2015.jpg|center|border|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We are proud members of [http://www.redaf.es RedAF] (Red Nacional de Investigación en Agentes Físicos), [http://retecog.net ReteCog] (Red Temática de Ciencia Cognitiva), and [http://www.euron.org/members/index#257 Euron].&lt;br /&gt;
&lt;br /&gt;
==Location==&lt;br /&gt;
&lt;br /&gt;
Our lab is on the second floor of the [https://www.google.com/maps/@42.6150329,-5.5635688,203m/data=!3m1!1e3 MIC building] (Módulo de Investigación en Cibernética - Cybernetics Research Module). If you are interested in our work, you are welcome to pay us a visit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Mic2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We also hang around the [http://centros.unileon.es/eiii/ EIII] (Escuela de Ingenierías Industrial e Informática - Engineering School), which is [https://www.google.com/maps/@42.6141916,-5.5611339,203m/data=!3m1!1e3 here].&lt;br /&gt;
&lt;br /&gt;
==News==&lt;br /&gt;
&lt;br /&gt;
(December, 2015) Francisco J. Lera just became the newest doctor of the group. His defense was held on December 11th and he was awarded a &amp;quot;Sobresaliente Cum Laude&amp;quot;. Congratulations, Fran!&lt;br /&gt;
&lt;br /&gt;
[[Image:tesis-fran.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 30, 2015) This is the Team Description Paper of our &amp;quot;Watermelon Project&amp;quot; team for the RoCKIn@home Challenge in Lisbon (Portugal).&lt;br /&gt;
&lt;br /&gt;
[http://rockincompetition.eu/node/256 RoCKIn competition]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 24, 2014) Coinciding with the [http://www.eu-robotics.net/eurobotics-week/about-eurobotics-week/ European Robotics Week], we are going to participate in the first competition event of RoCKIn. It will take place in the Cité de l'espace science museum in Toulouse, France.&lt;br /&gt;
&lt;br /&gt;
You can follow our participation [[RoCKInChallengeToulouse|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 19, 2014) CeRVaNTeS platform. Teleoperation using a [http://www.microsoft.com/hardware/es-es/p/xbox-360-wireless-controller-for-windows Xbox 360 wireless controller] ([[CeRVaNTeS' Teleoperation with xbox360 wireless controller (xbox360 controller+joy)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;wF_gKr1KWfg#t=13&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 18, 2014) CeRVaNTeS platform. Integration in [http://moveit.ros.org/ moveIt!] ([[Integration of CeRVaNTeS in moveIt! (gazebo+moveIt!)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;bYtvNJHraVs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 11, 2014)  [http://www.mountain.es/ Mountain] will sponsor our group for the next RoCKIn challenge.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Lgo_mountain.png|center|330px|link=http://www.mountain.es/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the last two months, we have been using Mountain hardware for research and development with our robots. The control software of the CeRVaNTeS platform runs on a [http://www.mountain.es/portatiles F-13] laptop. Soon, our MYRABot platform will also be equipped with one of their machines.&lt;br /&gt;
&lt;br /&gt;
We want to express our sincere gratitude to [http://www.mountain.es/ Mountain] for their support of small research groups like ours.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(June 16, 2014) [[Projects#Cervantes_Project|CeRVaNTeS platform]]. First steps and proof of concept:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;JtDIRky7EKs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(January 25-31, 2014) RoCKIn camp participation. Our team attended the [[RoCKIn2014|RoCKIn Camp 2014]] in Rome, Italy, where they tested the performance of the MYRABot system in a competition environment. The members also participated in the lectures and practical sessions on basic software infrastructure, perception, manipulation, and human-robot interaction through speech.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:MBPAL1.JPG|center|border|250px]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=People&amp;diff=4783</id>
		<title>People</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=People&amp;diff=4783"/>
				<updated>2016-01-14T07:49:35Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Current members=&lt;br /&gt;
&lt;br /&gt;
The robotics group of the University of León is currently made up of:&lt;br /&gt;
&lt;br /&gt;
'''Faculty'''&lt;br /&gt;
# [http://robotica.unileon.es/~vmo Vicente Matellán Olivera] (head of the group)&lt;br /&gt;
# [https://www.linkedin.com/pub/camino-fern%C3%A1ndez-llamas/2b/169/74 Camino Fernández Llamas]&lt;br /&gt;
# Francisco J. Rodríguez Sedano&lt;br /&gt;
&lt;br /&gt;
'''Postdocs'''&lt;br /&gt;
# [[Miguel-A-Conde | Miguel Ángel Conde]]&lt;br /&gt;
# [[Francisco-phd|Francisco Javier Rodríguez Lera]] (Augmented Reality in Assistance Environments) &lt;br /&gt;
# [https://www.linkedin.com/pub/juan-felipe-garc%C3%ADa-sierra/65/44a/715?trk=pub-pbmap Juan Felipe García Sierra]&lt;br /&gt;
&lt;br /&gt;
'''PhD students'''&lt;br /&gt;
# Gonzalo Esteban Costales (Haptic interaction) (October 2011 - )&lt;br /&gt;
# [[User:Victorm | Víctor Rodríguez]] ([[PhD-3D-Object-Tracking|3D object recognition and tracking]]) (October 2011 - )&lt;br /&gt;
# [[User:Fernando | Fernando Casado García]] ([[Mobile manipulation]]) (September 2013 - )&lt;br /&gt;
&lt;br /&gt;
'''Undergrad students'''&lt;br /&gt;
# Jesús Balsa Comerón&lt;br /&gt;
# Claudia Álvarez Aparicio&lt;br /&gt;
# Pablo Blanco Medina&lt;br /&gt;
# Laura Inyesto Alonso&lt;br /&gt;
&lt;br /&gt;
=Past members=&lt;br /&gt;
&lt;br /&gt;
==Visitors==&lt;br /&gt;
# [https://www.linkedin.com/pub/t%C3%A2nia-carrera/72/164/1a3 Tânia Carrera] from Bragança, Portugal ([[ Tania-TFM-ROS04|Kinect with ROS and PCL]]) (February - June 2013)&lt;br /&gt;
# [http://souzamarcelo.weebly.com Marcelo de Souza] from Santa Catarina State University, [http://www.udesc.br UDESC]/CEAVI, Brasil ([[Marcelo-TurtleBot|Navigation Algorithms for Turtlebot]]) (October 2012 - February 2013)&lt;br /&gt;
# [http://homepage.univie.ac.at/andrea.ravignani/ Andrea Ravignani] from the University of Vienna, Austria ([[Comparative-Cognition|Comparative Cognition]]) (October - December 2012)&lt;br /&gt;
# [http://neithan.weebly.com/ Jesús Martínez Gómez] from the University of Castilla la Mancha (2008 - 2009)&lt;br /&gt;
&lt;br /&gt;
==2014-2015==&lt;br /&gt;
&lt;br /&gt;
* Summer research residence&lt;br /&gt;
# Rubén Rodríguez Fernández ([[Ruben-RV-ROS01|Turtlebot Navigation]]) (July 2013 - )&lt;br /&gt;
&lt;br /&gt;
==2013-2014==&lt;br /&gt;
&lt;br /&gt;
* Master students&lt;br /&gt;
# [https://www.linkedin.com/pub/aitana-alonso-nogueira/5b/174/572?trk=pub-pbmap Aitana Alonso Nogueira] ([[Aitana-TFM-kinect-ROS05|Human tracking and recognition]]) ([[Aitana-TFM-Android-MYRA|Evolución de la Solución de Asistencia MYRA para su Despliegue en Plataformas Android]]) (September 2013 - September 2014)&lt;br /&gt;
* Summer research residence (undergrad)&lt;br /&gt;
# [https://es.linkedin.com/pub/jose-%C3%A1lvaro-fern%C3%A1ndez-%C3%A1lvarez/65/a00/344 Jose Álvaro Fernández Álvarez] ([[Alvaro-RV-HAPTICROS01|Haptics and ROS]]) (July - September 2013)&lt;br /&gt;
&lt;br /&gt;
==2012-2013==&lt;br /&gt;
* PhD students&lt;br /&gt;
# [https://es.linkedin.com/pub/fernando-garc%C3%ADa-d%C3%ADaz-calvo/3b/b74/b16 Fernando García Díaz-Calvo] (Autonomous behavior generation in mobile robots) (October 2012 - discontinued)&lt;br /&gt;
* Master students&lt;br /&gt;
# Carlos Rodríguez Hernández ([[Carlos-TFM-MYRABot01|MYRA Robot: Hardware Update]]) (January - September 2013)&lt;br /&gt;
* Undergrad students&lt;br /&gt;
# [https://es.linkedin.com/pub/david-llamas-abad/88/548/9b6 David Llamas Abad] ([[David-PFC-ROS3|AR in ROS]])&lt;br /&gt;
# Julia Pérez del Río ([[JuliaP-PFC-Modular01|Modular robotics]])&lt;br /&gt;
&lt;br /&gt;
==2011-2012==&lt;br /&gt;
* Master students&lt;br /&gt;
# Isidoro Gayo Vélez ([[Isidoro-TFM-Rovio02|Using rovio for tele-surveillance]]) (January 2011 - discontinued)&lt;br /&gt;
# [https://www.linkedin.com/pub/alvaro-botas-mu%C3%B1oz/3a/517/90 Álvaro Botas Muñoz] ([[AlvaroB-TFM-ROS01 |AR Drone]]) (November 2011 - July 2012)&lt;br /&gt;
# [[User:Fernando | Fernando Casado García]] ([[Fernando-TFM-ROS02|Controlling Turtlebot with ROS]]) (January 2012 - July 2012)&lt;br /&gt;
* Undergrad students&lt;br /&gt;
# Jesús Balsa Comerón ([[JesusB-PFC-Nao01|Nao Control]]) (March 2011 - discontinued)&lt;br /&gt;
&lt;br /&gt;
==2010-2011==&lt;br /&gt;
* PhD students&lt;br /&gt;
# [https://www.linkedin.com/pub/juan-felipe-garc%C3%ADa-sierra/65/44a/715?trk=pub-pbmap Juan Felipe García Sierra] ([https://buleria.unileon.es/handle/10612/815 Aportaciones a la computación de atención visual y aplicación al control de un robot humanoide])&lt;br /&gt;
* Master students&lt;br /&gt;
# Pablo Lobato Criado ([[PabloL-Rovio04|Navigation with ROS and Rovio]])&lt;br /&gt;
# [https://es.linkedin.com/pub/v%C3%ADctor-manuel-gonz%C3%A1lez-mateos/38/356/827 Víctor Manuel González Mateos] ([[VictorM-TFM-Rovio03|Navigation with Rovio]])&lt;br /&gt;
# Julio San Román Gutiérrez (Augmented reality on turtlebot) (March 2010 - discontinued)&lt;br /&gt;
# [[User:Victorm | Víctor Rodríguez]] ([[TFM-Nao-Goalkeeper|Implementing a RoboCup goalie with Nao using BICA]]) (October 2010 - July 2011)&lt;br /&gt;
* Undergrad students&lt;br /&gt;
# [https://es.linkedin.com/pub/elena-ortega-fern%C3%A1ndez/35/aa/b17 Elena Ortega Fernández] ([[ElenaO-RV-proy1|NaoQI and BICA]]) (November 2010 - September 2011)&lt;br /&gt;
# [https://www.linkedin.com/pub/alvaro-botas-mu%C3%B1oz/3a/517/90 Álvaro Botas Muñoz] ([[AlvaroB-PFC-Drone02|AR Drone]]) (November 2010 - September 2011)&lt;br /&gt;
* Other&lt;br /&gt;
# [https://www.linkedin.com/pub/mario-castro-de-lera/14/886/3b9 Mario Castro de Lera] from the Robotics course of the Master in Cybernetics ([[MarioCL-Drone01|AR Drone]])&lt;br /&gt;
# [https://co.linkedin.com/pub/angel-rojo-roman/45/54a/0 Ángel Rojo Román] from the subject of Sistemas informáticos (Computer Systems) ([[AngelR-PFC-Drone01|AR Drone]]) (November 2010 - September 2011)&lt;br /&gt;
&lt;br /&gt;
==2009-2010==&lt;br /&gt;
* Master students&lt;br /&gt;
# Julio San Román Gutiérrez ([[JulioSR-TFM-Rovio01|Augmented reality for teleoperation of a Rovio robot]])&lt;br /&gt;
# [https://es.linkedin.com/in/vfidalgo Víctor Fidalgo Villar] ([[VictorFV-TFM-Cenital|Ball tracking with overhead camera]])&lt;br /&gt;
* Undergrad students&lt;br /&gt;
# [https://es.linkedin.com/pub/pablo-flecha-guti%C3%A9rrez/24/907/973 Pablo Flecha Gutiérrez]&lt;br /&gt;
# [https://es.linkedin.com/pub/david-mart%C3%ADnez-mart%C3%ADnez/61/656/176 David Martínez Martínez] ([[DMartinez-PFC-Nao|Teleoperation of Nao robot]])&lt;br /&gt;
# [https://es.linkedin.com/in/hectorquintian Héctor Quintián Pardo] and Alba Rodríguez from Artificial Intelligence, Industrial Engineering (2nd cycle) ([[HAQR-IA-Rovio|Teleoperation of Rovio robot]]) (2009 - 2010, 2nd semester)&lt;br /&gt;
* Summer research residence (undergrad)&lt;br /&gt;
# [https://es.linkedin.com/pub/elena-ortega-fern%C3%A1ndez/35/aa/b17 Elena Ortega Fernández] ([[ElenaO-RV-proy1|NaoQI and BICA]]) (July - September 2010)&lt;br /&gt;
# [https://es.linkedin.com/in/hectorquintian Héctor Quintián Pardo] ([[HectorQ-RV-proy2|Teleoperation of Rovio robot]]) (July - September 2010)&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Home&amp;diff=4782</id>
		<title>Home</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Home&amp;diff=4782"/>
				<updated>2016-01-14T07:48:36Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the official wiki of the Robotics Group of the [http://www.unileon.es University of León] (ULE). We are a group of researchers (faculty and students) interested in creating autonomous behaviour for mobile robots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-grupo2015.jpg|center|border|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We are proud members of [http://www.redaf.es RedAF] (Red Nacional de Investigación en Agentes Físicos), [http://retecog.net ReteCog] (Red Temática de Ciencia Cognitiva), and [http://www.euron.org/members/index#257 Euron].&lt;br /&gt;
&lt;br /&gt;
==Location==&lt;br /&gt;
&lt;br /&gt;
Our lab is on the second floor of the [https://www.google.com/maps/@42.6150329,-5.5635688,203m/data=!3m1!1e3 MIC building] (Módulo de Investigación en Cibernética - Cybernetics Research Module). If you are interested in our work, you are welcome to pay us a visit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Mic2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We also hang around the [http://centros.unileon.es/eiii/ EIII] (Escuela de Ingenierías Industrial e Informática - Engineering School), which is [https://www.google.com/maps/@42.6141916,-5.5611339,203m/data=!3m1!1e3 here].&lt;br /&gt;
&lt;br /&gt;
==News==&lt;br /&gt;
&lt;br /&gt;
(September 30, 2015) This is the Team Description Paper of our &amp;quot;Watermelon Project&amp;quot; team for the next RoCKIn@home Challenge. This competition will take place in Lisbon (Portugal).&lt;br /&gt;
&lt;br /&gt;
[http://rockincompetition.eu/node/256 RoCKIn competition]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(February 6, 2015) We are happy to announce that we have been accepted into the [http://www.dis.uniroma1.it/~rockin/node/33 RoCKIn Camp 2015], which will take place in Peccioli (Pisa, Italy) next March!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 24, 2014) Coinciding with the [http://www.eu-robotics.net/eurobotics-week/about-eurobotics-week/ European Robotics Week], we are going to participate in the first competition event of RoCKIn. It will take place in the Cité de l'espace science museum in Toulouse, France.&lt;br /&gt;
&lt;br /&gt;
You can follow our participation [[RoCKInChallengeToulouse|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 19, 2014) CeRVaNTeS platform. Teleoperation using a [http://www.microsoft.com/hardware/es-es/p/xbox-360-wireless-controller-for-windows Xbox 360 wireless controller] ([[CeRVaNTeS' Teleoperation with xbox360 wireless controller (xbox360 controller+joy)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;wF_gKr1KWfg#t=13&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 15, 2014) On September 27th, we attended the [http://makerfaireleon.com/ Mini Maker Faire] event in León as Makers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:mmf_logo_leon.png|center|500px|link=http://makerfaireleon.com/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about the Maker Movement is available in [https://www.thegrommet.com/blog/the-maker-movement-infographic/ this infographic] and on the [http://makerfaire.com/ official website].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 18, 2014) CeRVaNTeS platform. Integration in [http://moveit.ros.org/ moveIt!] ([[Integration of CeRVaNTeS in moveIt! (gazebo+moveIt!)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;bYtvNJHraVs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 11, 2014)  [http://www.mountain.es/ Mountain] will sponsor our group for the next RoCKIn challenge.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Lgo_mountain.png|center|330px|link=http://www.mountain.es/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the last two months, we have been using Mountain hardware for research and development with our robots. The control software of the CeRVaNTeS platform runs on a [http://www.mountain.es/portatiles F-13] laptop. Soon, our MYRABot platform will also be equipped with one of their machines.&lt;br /&gt;
&lt;br /&gt;
We want to express our sincere gratitude to [http://www.mountain.es/ Mountain] for their support of small research groups like ours.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(June 16, 2014) [[Projects#Cervantes_Project|CeRVaNTeS platform]]. First steps and proof of concept:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;JtDIRky7EKs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(January 25-31, 2014) RoCKIn camp participation. Our team attended the [[RoCKIn2014|RoCKIn Camp 2014]] in Rome, Italy, where they tested the performance of the MYRABot system in a competition environment. The members also participated in the lectures and practical sessions on basic software infrastructure, perception, manipulation, and human-robot interaction through speech.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:MBPAL1.JPG|center|border|250px]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Home&amp;diff=4781</id>
		<title>Home</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Home&amp;diff=4781"/>
				<updated>2016-01-14T07:43:52Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the official wiki of the Robotics Group of the [http://www.unileon.es University of León] (ULE). We are a group of researchers (faculty and students) interested in creating autonomous behaviour for mobile robots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-grupo2015.jpg|center|border|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We are proud members of [http://www.redaf.es RedAF] (Red Nacional de Investigación en Agentes Físicos), [http://retecog.net ReteCog] (Red Temática de Ciencia Cognitiva), and [http://www.euron.org/members/index#257 Euron].&lt;br /&gt;
&lt;br /&gt;
==Location==&lt;br /&gt;
&lt;br /&gt;
Our lab is on the second floor of the [https://www.google.com/maps/@42.6150329,-5.5635688,203m/data=!3m1!1e3 MIC building] (Módulo de Investigación en Cibernética - Cybernetics Research Module) at the University of León:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Mic2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can also find us around the [http://centros.unileon.es/eiii/ EIII] (Escuela de Ingenierías Industrial e Informática - Engineering School). If you are interested in our work, you are welcome to pay us a visit.&lt;br /&gt;
&lt;br /&gt;
==News==&lt;br /&gt;
&lt;br /&gt;
(September 30, 2015) This is the Team Description Paper of our &amp;quot;Watermelon Project&amp;quot; team for the next RoCKIn@home Challenge. This competition will take place in Lisbon (Portugal).&lt;br /&gt;
&lt;br /&gt;
[http://rockincompetition.eu/node/256 RoCKIn competition]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(February 6, 2015) We are happy to announce that we have been accepted into the [http://www.dis.uniroma1.it/~rockin/node/33 RoCKIn Camp 2015], which will take place in Peccioli (Pisa, Italy) next March!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 24, 2014) Coinciding with the [http://www.eu-robotics.net/eurobotics-week/about-eurobotics-week/ European Robotics Week], we are going to participate in the first competition event of RoCKIn. It will take place in the Cité de l'espace science museum in Toulouse, France.&lt;br /&gt;
&lt;br /&gt;
You can follow our participation [[RoCKInChallengeToulouse|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 19, 2014) CeRVaNTeS platform. Teleoperation using a [http://www.microsoft.com/hardware/es-es/p/xbox-360-wireless-controller-for-windows Xbox 360 wireless controller] ([[CeRVaNTeS' Teleoperation with xbox360 wireless controller (xbox360 controller+joy)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;wF_gKr1KWfg#t=13&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 15, 2014) On September 27th, we attended the [http://makerfaireleon.com/ Mini Maker Faire] event in León as Makers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:mmf_logo_leon.png|center|500px|link=http://makerfaireleon.com/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about the Maker Movement is available in [https://www.thegrommet.com/blog/the-maker-movement-infographic/ this infographic] and on the [http://makerfaire.com/ official website].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 18, 2014) CeRVaNTeS platform. Integration in [http://moveit.ros.org/ moveIt!] ([[Integration of CeRVaNTeS in moveIt! (gazebo+moveIt!)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;bYtvNJHraVs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 11, 2014)  [http://www.mountain.es/ Mountain] will sponsor our group for the next RoCKIn challenge.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Lgo_mountain.png|center|330px|link=http://www.mountain.es/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the last two months, we have been using Mountain hardware for research and development with our robots. The control software of the CeRVaNTeS platform runs on a [http://www.mountain.es/portatiles F-13] laptop. Soon, our MYRABot platform will also be equipped with one of their machines.&lt;br /&gt;
&lt;br /&gt;
We want to express our sincere gratitude to [http://www.mountain.es/ Mountain] for their support of small research groups like ours.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(June 16, 2014) [[Projects#Cervantes_Project|CeRVaNTeS platform]]. First steps and proof of concept:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;JtDIRky7EKs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(January 25-31, 2014) RoCKIn camp participation. Our team attended the [[RoCKIn2014|RoCKIn Camp 2014]] in Rome, Italy, where they tested the performance of the MYRABot system in a competition environment. The members also participated in the lectures and practical sessions on basic software infrastructure, perception, manipulation, and human-robot interaction through speech.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:MBPAL1.JPG|center|border|250px]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Home&amp;diff=4780</id>
		<title>Home</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Home&amp;diff=4780"/>
				<updated>2016-01-09T11:21:12Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the official wiki of the Robotics Group of the [http://www.unileon.es University of León] (ULE). We are a group of researchers (faculty and students) interested in creating autonomous behaviour for mobile robots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-grupo2015.jpg|center|border|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We are proud members of [http://www.redaf.es RedAF] (Red Nacional de Investigación en Agentes Físicos), [http://retecog.net ReteCog] (Red Temática de Ciencia Cognitiva), and [http://www.euron.org/members/index#257 Euron].&lt;br /&gt;
&lt;br /&gt;
==Location==&lt;br /&gt;
&lt;br /&gt;
Our lab is on the second floor of the [https://www.google.com/maps/@42.6150329,-5.5635688,203m/data=!3m1!1e3 MIC building] (Módulo de Investigación en Cibernética - Cybernetics Research Module) at the University of León:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Mic2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can also find us around the [http://centros.unileon.es/eiii/ EIII] (Escuela de Ingenierías Industrial e Informática, Engineering School). If you are interested in our work, you are welcome to pay us a visit.&lt;br /&gt;
&lt;br /&gt;
==News==&lt;br /&gt;
&lt;br /&gt;
(September 30, 2015) This is the Team Description Paper of our &amp;quot;Watermelon Project&amp;quot; team for the next RoCKIn@home Challenge. This competition will take place in Lisbon (Portugal).&lt;br /&gt;
&lt;br /&gt;
[http://rockincompetition.eu/node/256 RoCKIn competition]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(February 6, 2015) We are happy to announce that we have been accepted into the [http://www.dis.uniroma1.it/~rockin/node/33 RoCKIn Camp 2015], which will take place in Peccioli (Pisa, Italy) next March!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 24, 2014) Coinciding with the [http://www.eu-robotics.net/eurobotics-week/about-eurobotics-week/ European Robotics Week], we are going to participate in the first competition event of RoCKIn. It will take place in the Cité de l'espace science museum in Toulouse, France.&lt;br /&gt;
&lt;br /&gt;
You can follow our participation [[RoCKInChallengeToulouse|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 19, 2014) CeRVaNTeS platform. Teleoperation using a [http://www.microsoft.com/hardware/es-es/p/xbox-360-wireless-controller-for-windows Xbox 360 wireless controller] ([[CeRVaNTeS' Teleoperation with xbox360 wireless controller (xbox360 controller+joy)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;wF_gKr1KWfg#t=13&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 15, 2014) On September 27th, we attended the [http://makerfaireleon.com/ Mini Maker Faire] event in León as Makers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:mmf_logo_leon.png|center|500px|link=http://makerfaireleon.com/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about the Maker Movement is available in [https://www.thegrommet.com/blog/the-maker-movement-infographic/ this infographic] and on the [http://makerfaire.com/ official website].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 18, 2014) CeRVaNTeS platform. Integration in [http://moveit.ros.org/ moveIt!] ([[Integration of CeRVaNTeS in moveIt! (gazebo+moveIt!)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;bYtvNJHraVs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 11, 2014)  [http://www.mountain.es/ Mountain] will sponsor our group for the next RoCKIn challenge.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Lgo_mountain.png|center|330px|link=http://www.mountain.es/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the last two months, we have been using Mountain hardware for research and development with our robots. The control software of the CeRVaNTeS platform runs on a [http://www.mountain.es/portatiles F-13] laptop. Soon, our MYRABot platform will also be equipped with one of their machines.&lt;br /&gt;
&lt;br /&gt;
We want to express our sincere gratitude to [http://www.mountain.es/ Mountain] for their support of small research groups like ours.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(June 16, 2014) [[Projects#Cervantes_Project|CeRVaNTeS platform]]. First steps and proof of concept:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;JtDIRky7EKs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(January 25-31, 2014) RoCKIn Camp participation. Our team attended the [[RoCKIn2014|RoCKIn Camp 2014]] in Rome, Italy, where they tested the performance of the MYRABot system in a competition environment. The members also participated in the lectures and practical sessions on basic software infrastructure, perception, manipulation, and human-robot interaction through speech.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:MBPAL1.JPG|center|border|250px]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Home&amp;diff=4779</id>
		<title>Home</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Home&amp;diff=4779"/>
				<updated>2016-01-08T22:42:23Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the official wiki of the Robotics Group of the [http://www.unileon.es University of León] (ULE). We are a group of researchers (faculty and students) interested in creating autonomous behaviour for mobile robots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-grupo2015.jpg|center|border|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We are proud members of [http://www.redaf.es RedAF] (Red Nacional de Investigación en Agentes Físicos), [http://retecog.net ReteCog] (Red Temática de Ciencia Cognitiva), and [http://www.euron.org/members/index#257 Euron].&lt;br /&gt;
&lt;br /&gt;
==Location==&lt;br /&gt;
&lt;br /&gt;
Our lab is on the second floor of the MIC building (Módulo de Investigación en Cibernética - Cybernetics Research Module) at the University of León:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Mic2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can also find us around the [http://centros.unileon.es/eiii/ EIII] (Escuela de Ingenierías Industrial e Informática, Engineering School). If you are interested in our work, you are welcome to pay us a visit.&lt;br /&gt;
&lt;br /&gt;
==News==&lt;br /&gt;
&lt;br /&gt;
(September 30, 2015) This is the Team Description Paper of our &amp;quot;Watermelon Project&amp;quot; team for the next RoCKIn@home Challenge. This competition will take place in Lisbon (Portugal).&lt;br /&gt;
&lt;br /&gt;
[http://rockincompetition.eu/node/256 RoCKIn competition]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(February 6, 2015) We are happy to announce that we have been accepted into the [http://www.dis.uniroma1.it/~rockin/node/33 RoCKIn Camp 2015], which will take place in Peccioli (Pisa, Italy) next March!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 24, 2014) Coinciding with the [http://www.eu-robotics.net/eurobotics-week/about-eurobotics-week/ European Robotics Week], we are going to participate in the first RoCKIn competition event. It will take place at the Cité de l'espace science museum in Toulouse, France.&lt;br /&gt;
&lt;br /&gt;
You can follow our progress [[RoCKInChallengeToulouse|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 19, 2014) CeRVaNTeS platform. Teleoperation using an [http://www.microsoft.com/hardware/es-es/p/xbox-360-wireless-controller-for-windows Xbox 360 wireless controller] ([[CeRVaNTeS' Teleoperation with xbox360 wireless controller (xbox360 controller+joy)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;wF_gKr1KWfg#t=13&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 15, 2014) On September 27th, we will be attending the [http://makerfaireleon.com/ Mini Maker Faire] event in León as Makers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:mmf_logo_leon.png|center|500px|link=http://makerfaireleon.com/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about the Maker Movement is available in [https://www.thegrommet.com/blog/the-maker-movement-infographic/ this infographic] and on the [http://makerfaire.com/ official website].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 18, 2014) CeRVaNTeS platform. Integration in [http://moveit.ros.org/ moveIt!] ([[Integration of CeRVaNTeS in moveIt! (gazebo+moveIt!)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;bYtvNJHraVs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 11, 2014)  [http://www.mountain.es/ Mountain] will sponsor our group for the next RoCKIn challenge.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Lgo_mountain.png|center|330px|link=http://www.mountain.es/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the last two months, we have been using Mountain hardware for research and development with our robots. The control software of the CeRVaNTeS platform runs on a [http://www.mountain.es/portatiles F-13] laptop. Soon, our MYRABot platform will also mount one of their machines.&lt;br /&gt;
&lt;br /&gt;
We want to express our sincere gratitude to [http://www.mountain.es/ Mountain] for their support of small research groups like ours.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(June 16, 2014) [[Projects#Cervantes_Project|CeRVaNTeS platform]]. First steps and proof of concept:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;JtDIRky7EKs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(January 25-31, 2014) RoCKIn Camp participation. Our team attended the [[RoCKIn2014|RoCKIn Camp 2014]] in Rome, Italy, where they tested the performance of the MYRABot system in a competition environment. The members also participated in the lectures and practical sessions on basic software infrastructure, perception, manipulation, and human-robot interaction through speech.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:MBPAL1.JPG|center|border|250px]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Home&amp;diff=4778</id>
		<title>Home</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Home&amp;diff=4778"/>
				<updated>2016-01-08T22:37:20Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the official wiki of the Robotics Group of the [http://www.unileon.es University of León] (ULE). We are a group of researchers (faculty and students) interested in creating autonomous behaviour for mobile robots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Foto-grupo2015.jpg|center|border|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We are proud members of [http://www.redaf.es RedAF] (Red Nacional de Investigación en Agentes Físicos), [http://retecog.net ReteCog] (Red Temática de Ciencia Cognitiva), and [http://www.euron.org/members/index#257 Euron].&lt;br /&gt;
&lt;br /&gt;
==Location==&lt;br /&gt;
&lt;br /&gt;
Our lab is on the second floor of the MIC building (Módulo de Investigación en Cibernética - Cybernetics Research Module) at the University of León:&lt;br /&gt;
&lt;br /&gt;
[[Image:Mic2015.jpg|center|border|300px]]&lt;br /&gt;
&lt;br /&gt;
You can also find us around the [http://centros.unileon.es/eiii/ EIII] (Escuela de Ingenierías Industrial e Informática, Engineering School). If you are interested in our work, you are welcome to pay us a visit.&lt;br /&gt;
&lt;br /&gt;
==News==&lt;br /&gt;
&lt;br /&gt;
(September 30, 2015) This is the Team Description Paper of our &amp;quot;Watermelon Project&amp;quot; team for the next RoCKIn@home Challenge. This competition will take place in Lisbon (Portugal).&lt;br /&gt;
&lt;br /&gt;
[http://rockincompetition.eu/node/256 RoCKIn competition]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(February 6, 2015) We are happy to announce that we have been accepted into the [http://www.dis.uniroma1.it/~rockin/node/33 RoCKIn Camp 2015], which will take place in Peccioli (Pisa, Italy) next March!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 24, 2014) Coinciding with the [http://www.eu-robotics.net/eurobotics-week/about-eurobotics-week/ European Robotics Week], we are going to participate in the first RoCKIn competition event. It will take place at the Cité de l'espace science museum in Toulouse, France.&lt;br /&gt;
&lt;br /&gt;
You can follow our progress [[RoCKInChallengeToulouse|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 19, 2014) CeRVaNTeS platform. Teleoperation using an [http://www.microsoft.com/hardware/es-es/p/xbox-360-wireless-controller-for-windows Xbox 360 wireless controller] ([[CeRVaNTeS' Teleoperation with xbox360 wireless controller (xbox360 controller+joy)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;wF_gKr1KWfg#t=13&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 15, 2014) On September 27th, we will be attending the [http://makerfaireleon.com/ Mini Maker Faire] event in León as Makers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:mmf_logo_leon.png|center|500px|link=http://makerfaireleon.com/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about the Maker Movement is available in [https://www.thegrommet.com/blog/the-maker-movement-infographic/ this infographic] and on the [http://makerfaire.com/ official website].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 18, 2014) CeRVaNTeS platform. Integration in [http://moveit.ros.org/ moveIt!] ([[Integration of CeRVaNTeS in moveIt! (gazebo+moveIt!)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;bYtvNJHraVs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 11, 2014)  [http://www.mountain.es/ Mountain] will sponsor our group for the next RoCKIn challenge.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Lgo_mountain.png|center|330px|link=http://www.mountain.es/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the last two months, we have been using Mountain hardware for research and development with our robots. The control software of the CeRVaNTeS platform runs on a [http://www.mountain.es/portatiles F-13] laptop. Soon, our MYRABot platform will also mount one of their machines.&lt;br /&gt;
&lt;br /&gt;
We want to express our sincere gratitude to [http://www.mountain.es/ Mountain] for their support of small research groups like ours.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(June 16, 2014) [[Projects#Cervantes_Project|CeRVaNTeS platform]]. First steps and proof of concept:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;JtDIRky7EKs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(January 25-31, 2014) RoCKIn Camp participation. Our team attended the [[RoCKIn2014|RoCKIn Camp 2014]] in Rome, Italy, where they tested the performance of the MYRABot system in a competition environment. The members also participated in the lectures and practical sessions on basic software infrastructure, perception, manipulation, and human-robot interaction through speech.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:MBPAL1.JPG|center|border|250px]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=Home&amp;diff=4758</id>
		<title>Home</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=Home&amp;diff=4758"/>
				<updated>2015-11-24T08:03:41Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the official wiki of the Robotics Group of the [http://www.unileon.es University of León] (ULE). We are a group of researchers (faculty and students) interested in creating autonomous behaviour for mobile robots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Robotics group ULE.jpg|center|border|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We are proud members of [http://www.redaf.es RedAF] (Red Nacional de Investigación en Agentes Físicos), [http://retecog.net ReteCog] (Red Temática de Ciencia Cognitiva), and [http://www.euron.org/members/index#257 Euron].&lt;br /&gt;
&lt;br /&gt;
==Location==&lt;br /&gt;
&lt;br /&gt;
We are usually in the F6 lab of the Escuela de Ingenierías Industrial e Informática ([http://centros.unileon.es/eiii/ EIII]) in León (Spain). If you are interested in our work, you are welcome to pay us a visit.&lt;br /&gt;
&lt;br /&gt;
==News==&lt;br /&gt;
&lt;br /&gt;
(September 30, 2015) This is the Team Description Paper of our &amp;quot;Watermelon Project&amp;quot; team for the next RoCKIn@home Challenge. This competition will take place in Lisbon (Portugal).&lt;br /&gt;
&lt;br /&gt;
[http://rockincompetition.eu/node/256 RoCKIn competition]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(February 6, 2015) We are happy to announce that we have been accepted into the [http://www.dis.uniroma1.it/~rockin/node/33 RoCKIn Camp 2015], which will take place in Peccioli (Pisa, Italy) next March!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(November 24, 2014) Coinciding with the [http://www.eu-robotics.net/eurobotics-week/about-eurobotics-week/ European Robotics Week], we are going to participate in the first RoCKIn competition event. It will take place at the Cité de l'espace science museum in Toulouse, France.&lt;br /&gt;
&lt;br /&gt;
You can follow our progress [[RoCKInChallengeToulouse|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 19, 2014) CeRVaNTeS platform. Teleoperation using an [http://www.microsoft.com/hardware/es-es/p/xbox-360-wireless-controller-for-windows Xbox 360 wireless controller] ([[CeRVaNTeS' Teleoperation with xbox360 wireless controller (xbox360 controller+joy)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;wF_gKr1KWfg#t=13&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(September 15, 2014) On September 27th, we will be attending the [http://makerfaireleon.com/ Mini Maker Faire] event in León as Makers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:mmf_logo_leon.png|center|500px|link=http://makerfaireleon.com/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about the Maker Movement is available in [https://www.thegrommet.com/blog/the-maker-movement-infographic/ this infographic] and on the [http://makerfaire.com/ official website].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 18, 2014) CeRVaNTeS platform. Integration in [http://moveit.ros.org/ moveIt!] ([[Integration of CeRVaNTeS in moveIt! (gazebo+moveIt!)|more info]]):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;bYtvNJHraVs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(July 11, 2014)  [http://www.mountain.es/ Mountain] will sponsor our group for the next RoCKIn challenge.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Lgo_mountain.png|center|330px|link=http://www.mountain.es/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the last two months, we have been using Mountain hardware for research and development with our robots. The control software of the CeRVaNTeS platform runs on a [http://www.mountain.es/portatiles F-13] laptop. Soon, our MYRABot platform will also mount one of their machines.&lt;br /&gt;
&lt;br /&gt;
We want to express our sincere gratitude to [http://www.mountain.es/ Mountain] for their support of small research groups like ours.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(June 16, 2014) [[Projects#Cervantes_Project|CeRVaNTeS platform]]. First steps and proof of concept:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;videoflash&amp;gt;JtDIRky7EKs&amp;lt;/videoflash&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border: solid 0px white; width: 50%&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
||&lt;br /&gt;
----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(January 25-31, 2014) RoCKIn Camp participation. Our team attended the [[RoCKIn2014|RoCKIn Camp 2014]] in Rome, Italy, where they tested the performance of the MYRABot system in a competition environment. The members also participated in the lectures and practical sessions on basic software infrastructure, perception, manipulation, and human-robot interaction through speech.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:MBPAL1.JPG|center|border|250px]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4757</id>
		<title>PCL/OpenNI tutorial 2: Cloud processing (basic)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4757"/>
				<updated>2015-11-18T19:47:11Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Point_cloud.png|thumb|right|200px|Rendering of a point cloud, showing a desktop.]]&lt;br /&gt;
&lt;br /&gt;
All right: in the [[PCL/OpenNI tutorial 1: Installing and testing| previous tutorial]] you installed OpenNI and PCL. You managed to get your device working. You compiled and ran the example program, watched as it fetched one frame after another, and even saved a couple of point clouds to disk. Now you may be wondering: &amp;quot;What now? What are depth sensors useful for? What can I do with point clouds?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Well, depth cameras have a lot of useful applications. For example, you could use them as [http://tech.integrate.biz/kinect_mocap.htm motion tracking] hardware and save thousands of dollars in professional gear. The original Kinect was actually designed as an alternative to physical controllers (e.g., gamepads) in games. [https://frantracerkinectft.codeplex.com/ Hand gestures] can also be recognized, so people have successfully implemented control interfaces similar to the ones seen in Minority Report.&lt;br /&gt;
&lt;br /&gt;
Working low-level with point clouds offers interesting possibilities too. As they are essentially a 3D &amp;quot;scan&amp;quot; of the scene, you could capture your whole room as a [http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php continuous 3D mesh], then import it into your modelling software of choice and use it as a reference for a better reconstruction. You could use it to measure distances, or to detect obstacles, in a robot navigation system. But the most exciting application by far, and the one most research is focused on, is 3D object recognition and pose estimation, even in [https://www.youtube.com/watch?feature=player_detailpage&amp;amp;v=quGhaggn3cQ#t=349 real time]. This would allow a robot to fulfill tasks like going to the kitchen to get you a cup of coffee.&lt;br /&gt;
&lt;br /&gt;
Most of the advanced stuff that can be done with a point cloud requires some previous steps, like filtering, reconstruction or normal estimation. In this tutorial, I will introduce you to the basics of point cloud processing, and leave the complicated methods for the [[PCL/OpenNI tutorial 3: Cloud processing (advanced) | next tutorial]]. I will explain what every technique does and what it should be used for, and include PCL code snippets so you can check how to implement it in your programs.&lt;br /&gt;
&lt;br /&gt;
=Point clouds=&lt;br /&gt;
&lt;br /&gt;
First, a little explanation. A point cloud as taken from a depth sensor consists of a series of points in 3D space, as simple as that. A &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::PointCloud&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; object stores the points inside a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;std::vector&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; structure. The cloud object exposes some functions that let you get information about them, like the point count. As you can see, the class is templated so you can instantiate it for many types of points. For example, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, which stores three floats for the X, Y and Z coordinates, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGB&amp;quot;&amp;lt;/span&amp;gt;, which also stores color (texture) information for each point, the kind we would get with Kinect or Xtion RGB-D cameras. A list of [http://pointclouds.org/documentation/tutorials/adding_custom_ptype.php#what-pointt-types-are-available-in-pcl point types] and a basic description of the [http://pointclouds.org/documentation/tutorials/basic_structures.php cloud object] are available on PCL's website.&lt;br /&gt;
&lt;br /&gt;
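As a quick illustration, here is a minimal sketch of creating a cloud and filling it by hand (the coordinate values are arbitrary, taken from the PCD example further below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_cloud.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt; cloud;&lt;br /&gt;
&lt;br /&gt;
	// Add two points by hand.&lt;br /&gt;
	cloud.push_back(pcl::PointXYZ(0.73412f, -1.12643f, 0.82218f));&lt;br /&gt;
	cloud.push_back(pcl::PointXYZ(0.44739f, -0.34735f, -0.04624f));&lt;br /&gt;
&lt;br /&gt;
	// Print the point count (2 in this case).&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; cloud.size() &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;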
Clouds will usually be handled with pointers, as they tend to be large. PCL uses the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;boost::shared_ptr&amp;quot;&amp;lt;/span&amp;gt; class from the [https://en.wikipedia.org/wiki/Boost_%28C%2B%2B_libraries%29 Boost C++ libraries], which keeps track of the references to a certain object. Every time a copy of a shared pointer is created, a reference counter is automatically increased; when a pointer is destroyed, it is decreased. If the counter ever reaches 0, it means that no one is holding an active reference to the object, and its destructor is called to free the memory. This can take a while to get used to, and you will be mixing up cloud objects and cloud pointers until then, but do not worry.&lt;br /&gt;
&lt;br /&gt;
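The reference-counting behaviour can be sketched like this (a minimal example, not tied to any sensor):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/point_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_cloud.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// &amp;quot;Ptr&amp;quot; is a typedef for a shared pointer to the cloud type.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	{&lt;br /&gt;
		// Copying the pointer increases the reference count to 2.&lt;br /&gt;
		pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr copy = cloud;&lt;br /&gt;
	}	// &amp;quot;copy&amp;quot; goes out of scope here; the count drops back to 1.&lt;br /&gt;
&lt;br /&gt;
	// When &amp;quot;cloud&amp;quot; is destroyed at the end of main(), the count&lt;br /&gt;
	// reaches 0 and the cloud's memory is freed automatically.&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;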
==PCD files==&lt;br /&gt;
&lt;br /&gt;
When clouds are saved to disk, a [http://pointclouds.org/documentation/tutorials/pcd_file_format.php PCD] (Point Cloud Data) file is produced. You can choose to do so in binary or plain-text format. The former is faster to work with, whereas the latter lets you inspect the cloud by opening it with a common text editor. If you do so, you may see something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;# .PCD v.7 - Point Cloud Data file format&lt;br /&gt;
VERSION .7&lt;br /&gt;
FIELDS x y z&lt;br /&gt;
SIZE 4 4 4&lt;br /&gt;
TYPE F F F&lt;br /&gt;
WIDTH 2&lt;br /&gt;
HEIGHT 1&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 2&lt;br /&gt;
DATA ascii&lt;br /&gt;
0.73412 -1.12643 0.82218&lt;br /&gt;
0.44739 -0.34735 -0.04624&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells us that the cloud consists of 2 points of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; (because each point has only three fields, the X, Y and Z coordinates, each stored as a 4-byte float), and lists their coordinates. It also stores the viewpoint information (the position of the points relative to the sensor) as a translation plus an orientation quaternion. Knowing the position of the camera can be useful for certain procedures, like normal estimation.&lt;br /&gt;
&lt;br /&gt;
===Reading from file===&lt;br /&gt;
&lt;br /&gt;
Reading a point cloud from a PCD file and storing it in memory can be done with a single function call. You must specify the correct point type. For example, for .pcd files saved with the OpenNI viewer application that we saw in the [[PCL/OpenNI tutorial 1: Installing and testing | previous tutorial]], you could use either &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGBA&amp;quot;&amp;lt;/span&amp;gt; or, if you are not going to use the color information, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I have not bothered to include proper error/argument checking, but you should do it in your programs. Be sure to check the [http://docs.pointclouds.org PCL API] every time I introduce you to one of its classes or methods.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/reading_pcd.php Reading Point Cloud data from PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_read.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Writing to file===&lt;br /&gt;
&lt;br /&gt;
Saving a point cloud to a file is as easy as reading one. You can choose between plain readable (ASCII) and binary format. The former can be read and altered with a text editor; the latter is faster.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Write it back to disk under a different name.&lt;br /&gt;
	// Another possibility would be &amp;quot;savePCDFileBinary()&amp;quot;.&lt;br /&gt;
	pcl::io::savePCDFileASCII(&amp;quot;output.pcd&amp;quot;, *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/writing_pcd.php Writing Point Cloud data to PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_write.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you want a better alternative for loading or saving PCD files (some of the functions we saw are a bit outdated, but they do the job), take a look at the [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_reader.html PCDReader] and [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_writer.html PCDWriter] classes.&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
Of course, PCL ships with a dedicated program to load and visualize point clouds (among many other things), &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''pcl_viewer''&amp;lt;/span&amp;gt;. You can run it from a terminal with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_viewer &amp;lt;file 1&amp;gt; [file 2] ... [file N]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you load several files, each cloud will be rendered with a different color to help you tell them apart. Internally, the program uses the [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_visualizer.html pcl::visualization::PCLVisualizer] class. Press the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''H'''&amp;lt;/span&amp;gt; key while it is running to print the help to the terminal. Keys I tend to find useful are &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''1'''&amp;lt;/span&amp;gt;, which chooses a different color for the cloud (useful if the current one is too hard to see), &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''R'''&amp;lt;/span&amp;gt;, which resets the camera, and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''J'''&amp;lt;/span&amp;gt;, which saves a PNG snapshot.&lt;br /&gt;
&lt;br /&gt;
In your own code, it is possible to iterate through the cloud and print the information of each point to the console, but this is cumbersome. Sometimes you just want to visualize the cloud to check something. PCL can create a window for you and display the cloud in it, letting users pan and zoom their way around. The simplest method is the following one:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Cloud viewer object. You can set the window title.&lt;br /&gt;
	pcl::visualization::CloudViewer viewer(argv[1]);&lt;br /&gt;
	viewer.showCloud(cloud);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		// Do nothing but wait.&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CloudViewer&amp;quot;&amp;lt;/span&amp;gt; has fewer features than &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLVisualizer&amp;quot;&amp;lt;/span&amp;gt;, but it is easier to set up. The program will remain active until you close the window with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt; or press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/cloud_viewer.php The CloudViewer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/pcl_visualizer.php PCLVisualizer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/walkthrough.php Visualization library overview]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_cloud_viewer.html pcl::visualization::CloudViewer]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_visualize_cloud.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Concatenating two clouds==&lt;br /&gt;
&lt;br /&gt;
There are two types of concatenation with point clouds:&lt;br /&gt;
&lt;br /&gt;
* '''Points''': You initially have two clouds, &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;, with &amp;quot;N&amp;quot; and &amp;quot;M&amp;quot; points each. A third cloud &amp;quot;C&amp;quot; is created, which has &amp;quot;N + M&amp;quot; points (all points of the first two clouds). For this to work, the point type must be the same for all clouds (&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt;, or whatever).&lt;br /&gt;
* '''Fields''': With this, you can join the fields of two clouds with different types. The only constraint is that both must have the same number of points. For example, if you have a cloud of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; type and another of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type with &amp;quot;N&amp;quot; points each, you can concatenate them to create a cloud of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointNormal&amp;quot;&amp;lt;/span&amp;gt; (that stores X, Y and Z coordinates, plus the normal) that also has &amp;quot;N&amp;quot; points.&lt;br /&gt;
&lt;br /&gt;
Point concatenation is the easier of the two because the cloud's addition operator (+) is overloaded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudC(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Create cloud &amp;quot;C&amp;quot;, with the points of both &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;.&lt;br /&gt;
	*cloudC = (*cloudA) + (*cloudB);&lt;br /&gt;
	// Now cloudC-&amp;gt;points.size() equals cloudA-&amp;gt;points.size() + cloudB-&amp;gt;points.size().&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_clouds.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Field concatenation is done with a function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudPoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr cloudNormals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudPoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Compute the normals of the cloud (do not worry, we will see this later).&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloudPoints);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.05);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*cloudNormals);&lt;br /&gt;
&lt;br /&gt;
	// Concatenate the fields (PointXYZ + Normal = PointNormal).&lt;br /&gt;
	pcl::concatenateFields(*cloudPoints, *cloudNormals, *cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Print the data to standard output.&lt;br /&gt;
	for (size_t currentPoint = 0; currentPoint &amp;lt; cloudAll-&amp;gt;points.size(); currentPoint++)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tXYZ:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].x &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].y &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tNormal:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[0] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[1] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[2] &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__common.html#gac6add803f86fd16a998dce541e9ef402 pcl::concatenateFields()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_fields.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Matrix transformations==&lt;br /&gt;
&lt;br /&gt;
Clouds can be directly manipulated with [https://en.wikipedia.org/wiki/Transformation_matrix transformations] (translation, rotation, scaling). 3D transformations use a 4x4 matrix, which can be a little complicated to understand (especially when it comes to rotations), so consult a book or a [http://web.iitd.ac.in/~hegde/cad/lecture/L6_3dtrans.pdf tutorial] before trying to use one. The transformed cloud is saved to another object, so you can keep both versions and use them as you wish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/transforms.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr transformed(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Transformation matrix object, initialized to the identity matrix&lt;br /&gt;
	// (a null transformation).&lt;br /&gt;
	Eigen::Matrix4f transformation = Eigen::Matrix4f::Identity();&lt;br /&gt;
&lt;br /&gt;
	// Set a rotation around the Z axis (right hand rule).&lt;br /&gt;
	float theta = 90.0f * (M_PI / 180.0f); // 90 degrees.&lt;br /&gt;
	transformation(0, 0) = cos(theta);&lt;br /&gt;
	transformation(0, 1) = -sin(theta);&lt;br /&gt;
	transformation(1, 0) = sin(theta);&lt;br /&gt;
	transformation(1, 1) = cos(theta);&lt;br /&gt;
&lt;br /&gt;
	// Set a translation on the X axis.&lt;br /&gt;
	transformation(0, 3) = 1.0f; // 1 meter (positive direction).&lt;br /&gt;
&lt;br /&gt;
	pcl::transformPointCloud(*cloud, *transformed, transformation);&lt;br /&gt;
&lt;br /&gt;
	// Visualize both the original and the result.&lt;br /&gt;
	pcl::visualization::PCLVisualizer viewer(argv[1]);&lt;br /&gt;
	viewer.addPointCloud(cloud, &amp;quot;original&amp;quot;);&lt;br /&gt;
	// The transformed one's points will be red in color.&lt;br /&gt;
	pcl::visualization::PointCloudColorHandlerCustom&amp;lt;pcl::PointXYZ&amp;gt; colorHandler(transformed, 255, 0, 0);&lt;br /&gt;
	viewer.addPointCloud(transformed, colorHandler, &amp;quot;transformed&amp;quot;);&lt;br /&gt;
	// Add 3D colored axes to help see the transformation.&lt;br /&gt;
	viewer.addCoordinateSystem(1.0, 0);&lt;br /&gt;
&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Cloud_transformation.png|thumb|center|600px|Rotation and translation applied to a cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the rest of the overloads, because you can also use an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Affine3&amp;quot;&amp;lt;/span&amp;gt; or an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Quaternion&amp;quot;&amp;lt;/span&amp;gt; instead of an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Matrix4&amp;quot;&amp;lt;/span&amp;gt; object. Another important note: the function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;addCoordinateSystem()&amp;quot;&amp;lt;/span&amp;gt; with the parameters seen above is marked as deprecated in newer versions of PCL. From those versions onward, you will have to use something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;viewer.addCoordinateSystem(1.0, &amp;quot;reference&amp;quot;, 0);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
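&lt;br /&gt;
For example, the rotation and translation from above can be built with an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Affine3f&amp;quot;&amp;lt;/span&amp;gt; object instead of filling the matrix by hand. A sketch (note that &amp;quot;translation()&amp;quot; and &amp;quot;rotate()&amp;quot; are Eigen calls, not PCL ones, and that a one-point cloud is created by hand so the example stands alone):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/point_cloud.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/transforms.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr transformed(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	cloud-&amp;gt;push_back(pcl::PointXYZ(1.0f, 0.0f, 0.0f));&lt;br /&gt;
&lt;br /&gt;
	// Same transformation as before, built with Eigen's geometry module:&lt;br /&gt;
	// rotate 90 degrees around Z, then translate 1 meter on X.&lt;br /&gt;
	Eigen::Affine3f transformation = Eigen::Affine3f::Identity();&lt;br /&gt;
	transformation.translation() &amp;lt;&amp;lt; 1.0f, 0.0f, 0.0f;&lt;br /&gt;
	transformation.rotate(Eigen::AngleAxisf(M_PI / 2.0f, Eigen::Vector3f::UnitZ()));&lt;br /&gt;
&lt;br /&gt;
	// transformPointCloud() is overloaded to accept the affine object directly.&lt;br /&gt;
	pcl::transformPointCloud(*cloud, *transformed, transformation);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;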
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Transformation&lt;br /&gt;
* '''Output''': Transformed points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/matrix_transform.php Using a matrix to transform a point cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#aefbe4956d1c8fb785a97df6708d57c56 pcl::transformPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_matrix_transform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Using indices==&lt;br /&gt;
&lt;br /&gt;
Many of PCL's algorithms return indices. Indices are just a list of references to some of the points of a cloud (not the points themselves with all their data, but their positions in the cloud). For example, a cylinder segmentation algorithm will output a list of the points that were considered to fit the cylinder model. You can then use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::ExtractIndices&amp;quot;&amp;lt;/span&amp;gt; class to extract the indexed points to a second cloud for further work. Its &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; function will instruct it to extract the points NOT indexed instead (so you can, for example, remove all cylinder-shaped clusters from the cloud).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/extract_indices.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudExtracted(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudAll) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Plane segmentation (do not worry, we will see this later).&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	segmentation.setInputCloud(cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the indices.&lt;br /&gt;
	pcl::PointIndices::Ptr pointIndices(new pcl::PointIndices);&lt;br /&gt;
&lt;br /&gt;
	segmentation.segment(*pointIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	// Object for extracting points from a list of indices.&lt;br /&gt;
	pcl::ExtractIndices&amp;lt;pcl::PointXYZ&amp;gt; extract;&lt;br /&gt;
	extract.setInputCloud(cloudAll);&lt;br /&gt;
	extract.setIndices(pointIndices);&lt;br /&gt;
	// We will extract the points that are NOT indexed (the ones that are not in a plane).&lt;br /&gt;
	extract.setNegative(true);&lt;br /&gt;
	extract.filter(*cloudExtracted);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Indices, [Negative mode]&lt;br /&gt;
* '''Output''': Points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/extract_indices.php Extracting indices from a PointCloud]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_point_indices.html pcl::PointIndices]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_extract_indices.html pcl::ExtractIndices]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_extract_indices.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Removing NaNs==&lt;br /&gt;
&lt;br /&gt;
The cloud retrieved from the sensor may contain several kinds of measurement errors and/or inaccuracies. One of them is the presence of [https://en.wikipedia.org/wiki/NaN NaN (Not a Number)] values in the coordinates of some points, as you can see in the following file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=text&amp;gt;# .PCD v0.7 - Point Cloud Data file format&lt;br /&gt;
VERSION 0.7&lt;br /&gt;
FIELDS x y z rgba&lt;br /&gt;
SIZE 4 4 4 4&lt;br /&gt;
TYPE F F F U&lt;br /&gt;
COUNT 1 1 1 1&lt;br /&gt;
WIDTH 640&lt;br /&gt;
HEIGHT 480&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 307200&lt;br /&gt;
DATA ascii&lt;br /&gt;
nan nan nan 10135463&lt;br /&gt;
nan nan nan 10398635&lt;br /&gt;
nan nan nan 10070692&lt;br /&gt;
nan nan nan 10268071&lt;br /&gt;
...&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cloud objects have a public member named &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;is_dense&amp;quot;&amp;lt;/span&amp;gt; that is true if all points have valid, finite values. A NaN indicates that there was a problem measuring the distance to that point, maybe because the sensor was too close or too far, or because the surface was reflective.&lt;br /&gt;
&lt;br /&gt;
Sadly, many of PCL's algorithms will fail if even a single NaN is present in the cloud you feed them as input, with a message like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Assertion `point_representation_-&amp;gt;isValid (point) &amp;amp;&amp;amp; &amp;quot;Invalid (NaN, Inf) point coordinates given to radiusSearch!&amp;quot;' failed.&amp;quot;''&amp;lt;/span&amp;gt;. If this happens, you will have to remove such values from the cloud. You can use a program like the following to &amp;quot;clean&amp;quot; a cloud from all NaN points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/filter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	if (argc != 3)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tUsage: &amp;quot; &amp;lt;&amp;lt; argv[0] &amp;lt;&amp;lt; &amp;quot; &amp;lt;input cloud&amp;gt; &amp;lt;output cloud&amp;gt;&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// The mapping tells you to what points of the old cloud the new ones correspond,&lt;br /&gt;
	// but we will not use it.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; mapping;&lt;br /&gt;
	pcl::removeNaNFromPointCloud(*cloud, *cloud, mapping);&lt;br /&gt;
&lt;br /&gt;
	// Save it back.&lt;br /&gt;
	pcl::io::savePCDFileASCII(argv[2], *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem with this method is that it will not keep the cloud organized. All clouds store a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;width&amp;quot;&amp;lt;/span&amp;gt; and a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;height&amp;quot;&amp;lt;/span&amp;gt; variable. In ''unorganized'' clouds, the width equals the total number of points, and the height is set to 1. In ''organized'' clouds (like the ones taken from camera-like sensors such as Kinect or Xtion), the width and the height match the resolution of the image the sensor works with. Points are laid out in rows like in the depth image, and every one of them corresponds to a pixel. The member function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;isOrganized()&amp;quot;&amp;lt;/span&amp;gt; will return true if the height is greater than 1.&lt;br /&gt;
&lt;br /&gt;
Because removing NaNs will change the number of points of the cloud, it can no longer be kept organized with the original width/height ratio, so the function will set the height to 1. This is not a big issue, as only a few of PCL's algorithms work explicitly with organized clouds (and they mostly use it for optimization), but you must take it into account.&lt;br /&gt;
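&lt;br /&gt;
As a minimal illustration (a hand-built 2x2 cloud, not real sensor data), you can watch these fields change when the NaNs are removed:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/point_cloud.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/filter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;limits&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Build a tiny 2x2 organized cloud by hand, with one NaN point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	cloud-&amp;gt;width = 2;&lt;br /&gt;
	cloud-&amp;gt;height = 2;&lt;br /&gt;
	cloud-&amp;gt;is_dense = false; // It may contain NaNs.&lt;br /&gt;
	cloud-&amp;gt;points.resize(4);&lt;br /&gt;
	cloud-&amp;gt;points[0] = pcl::PointXYZ(0.0f, 0.0f, 1.0f);&lt;br /&gt;
	cloud-&amp;gt;points[1] = pcl::PointXYZ(1.0f, 0.0f, 1.0f);&lt;br /&gt;
	cloud-&amp;gt;points[2] = pcl::PointXYZ(0.0f, 1.0f, 1.0f);&lt;br /&gt;
	float nan = std::numeric_limits&amp;lt;float&amp;gt;::quiet_NaN();&lt;br /&gt;
	cloud-&amp;gt;points[3] = pcl::PointXYZ(nan, nan, nan);&lt;br /&gt;
&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; cloud-&amp;gt;isOrganized() &amp;lt;&amp;lt; std::endl; // Prints 1 (height is 2).&lt;br /&gt;
&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; mapping;&lt;br /&gt;
	pcl::removeNaNFromPointCloud(*cloud, *cloud, mapping);&lt;br /&gt;
&lt;br /&gt;
	// The NaN point is gone and the height has been reset to 1.&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; cloud-&amp;gt;isOrganized() &amp;lt;&amp;lt; std::endl; // Prints 0.&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; cloud-&amp;gt;points.size() &amp;lt;&amp;lt; std::endl; // Prints 3.&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;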
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Cloud&lt;br /&gt;
* '''Output''': Cloud without NaNs&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__filters.html#gac463283a9e9c18a66d3d29b28a575064 pcl::removeNaNFromPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_remove_NaNs.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Saving a PNG==&lt;br /&gt;
&lt;br /&gt;
PCL can save the values of a point cloud to a PNG image file. Obviously, this can only be done with organized clouds, because the rows and columns of the resulting image correspond exactly to the ones of the cloud. For example, if you have a point cloud taken from a sensor like Kinect or Xtion, you can use this to retrieve the 640x480 RGB image that matches the cloud.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/png_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGB&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Save the image (cloud must be organized).&lt;br /&gt;
	pcl::io::savePNGFile(&amp;quot;output.png&amp;quot;, *cloud, &amp;quot;rgb&amp;quot;);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;savePNGFile()&amp;quot;&amp;lt;/span&amp;gt; has several overloads; in this case, we have chosen the one that lets you specify which fields to save. If you omit that parameter, the function will save the RGB fields by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__io.html#gabd03e43fac3048635512f447c8bc50b3 pcl::io::savePNGFile()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_save_PNG.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computing the centroid==&lt;br /&gt;
&lt;br /&gt;
The [https://en.wikipedia.org/wiki/Centroid centroid] of a cloud is the point whose coordinates are the mean of those of all points in the cloud. You could say it is the &amp;quot;center of mass&amp;quot;, and it has several uses in certain algorithms. If you intend to compute the actual center of gravity of a [[PCL/OpenNI_tutorial_3:_Cloud_processing_%28advanced%29#Segmentation|clustered]] object though, remember that the sensor does not retrieve the part of the object that is hidden from the camera, like the back faces that are occluded by the front ones, or the inside. You only have the part of the surface that faces the camera.&lt;br /&gt;
&lt;br /&gt;
The centroid of a cloud can be computed with a single function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/centroid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object to store the centroid coordinates.&lt;br /&gt;
	Eigen::Vector4f centroid;&lt;br /&gt;
	&lt;br /&gt;
	pcl::compute3DCentroid(*cloud, centroid);&lt;br /&gt;
&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;The XYZ coordinates of the centroid are: (&amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[0] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[1] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[2] &amp;lt;&amp;lt; &amp;quot;).&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#a23daec3829d2d4100a2f185372b3753a pcl::compute3DCentroid()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_compute_centroid.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Feature estimation=&lt;br /&gt;
&lt;br /&gt;
A ''feature'' of a point is some characteristic that can help us to tell it apart from a different point. You could say that the XYZ coordinates are characteristic enough, but when trying to match points across two different, yet corresponding, clouds (for example, taken from different viewpoints), they become useless. We need something better, something that will have similar values when computed for the same spot of a surface in both sets.&lt;br /&gt;
&lt;br /&gt;
Normals are an example of a very simple feature. 3D [http://pointclouds.org/documentation/tutorials/how_features_work.php features] will be important later in our tutorials, when we talk about [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]] (more detailed &amp;quot;signatures&amp;quot; of points used for close matching).&lt;br /&gt;
&lt;br /&gt;
==Normal estimation==&lt;br /&gt;
&lt;br /&gt;
As you may remember from geometry class, the [https://en.wikipedia.org/wiki/Surface_normal normal] of a plane is a unit vector that is perpendicular to it. The normal of a surface at a point is defined as the vector perpendicular to the plane that is tangent to the surface at that point. Surface normals can be calculated for the points of a cloud, too. They are considered a feature, albeit not a very discriminative one.&lt;br /&gt;
&lt;br /&gt;
I will not go into detail about the math of the estimation method; you just have to know that it uses the nearest neighbors (the points that are closest to the one we are calculating the normal for) to find the tangent plane and the normal vector. You can customize the method with the search radius (think of a sphere of that radius, centered on the point; all neighboring points that lie within it will be used for the computation) and the viewpoint (by default, the output normals are directionless; by assuming that all vectors must point towards the camera - because otherwise they would belong to surfaces that are not visible from the sensor - they can all be re-oriented accordingly).&lt;br /&gt;
&lt;br /&gt;
Normals are also important because they give us information about the ''curvature'' of the surface at some point, which can be used to our advantage. Many of PCL's algorithms will require us to provide the normals of the input cloud. To estimate them, you can use the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// For every point, use all neighbors in a radius of 3cm.&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	// A kd-tree is a data structure that makes searches efficient. More about it later.&lt;br /&gt;
	// The normal estimation object will use it to find nearest neighbors.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, normals are stored in &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointCloud&amp;quot;&amp;lt;/span&amp;gt; objects too, instantiated to the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type. If you ever extract some indices from a cloud before feeding them to an algorithm, be sure to do the same with the normals. I have not made use of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setViewPoint()&amp;quot;&amp;lt;/span&amp;gt; function, but you get the idea.&lt;br /&gt;
&lt;br /&gt;
Instead of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setRadiusSearch()&amp;quot;&amp;lt;/span&amp;gt;, you can use &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setKSearch()&amp;quot;&amp;lt;/span&amp;gt;, which takes an integer, ''K'': the normal of each point will then be computed from its ''K'' nearest neighbors. Bear in mind that this can produce odd results for points that are not in dense areas, since some of the &amp;quot;neighbors&amp;quot; will lie too far away.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Normal_estimation.png|thumb|center|600px|Visualization of the normals computed for a point cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Radius | Neighbor count, [Indices], [Viewpoint]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation.php Estimating Surface Normals in a Point Cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_normal_estimation.html pcl::NormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Integral images===&lt;br /&gt;
&lt;br /&gt;
Integral images are a method for normal estimation that works only on organized clouds. The algorithm treats the cloud as a depth image and creates rectangular areas over which the normals are computed, exploiting the relationship between neighboring &amp;quot;pixels&amp;quot; (points), without the need to run lookups on a search structure like a tree. Because of this it is very efficient, and the normals are usually computed in a split second. If you are performing real-time computations on clouds taken from an RGB-D sensor, this is the approach you should use, because the previous method will probably take several seconds to finish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/integral_image_normal.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::IntegralImageNormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// Other estimation methods: COVARIANCE_MATRIX, AVERAGE_DEPTH_CHANGE, SIMPLE_3D_GRADIENT.&lt;br /&gt;
	// They determine the smoothness of the result, and the running time.&lt;br /&gt;
	normalEstimation.setNormalEstimationMethod(normalEstimation.AVERAGE_3D_GRADIENT);&lt;br /&gt;
	// Depth threshold for computing object borders based on depth changes, in meters.&lt;br /&gt;
	normalEstimation.setMaxDepthChangeFactor(0.02f);&lt;br /&gt;
	// Factor that influences the size of the area used to smooth the normals.&lt;br /&gt;
	normalEstimation.setNormalSmoothingSize(10.0f);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Estimation method, [Smoothing factor], [Depth threshold], [Rectangle size], [Border policy], [Depth dependent smoothing]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation_using_integral_images.php Normal Estimation Using Integral Images]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://islab.ulsan.ac.kr/files/announcement/411/Adaptive%20Neighborhood%20Selection%20for%20Real-Time%20Surface%20Normal.pdf Adaptive Neighborhood Selection for Real-Time Surface Normal Estimation from Organized Point Cloud Data Using Integral Images] (S. Holzer et al., 2012)&lt;br /&gt;
** [https://www.willowgarage.com/sites/default/files/holz_rgbd_seg.pdf Real-Time Plane Segmentation using RGB-D Cameras] (Dirk Holz et al., 2012)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_integral_image_normal_estimation.html pcl::IntegralImageNormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals_integral.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Decomposition=&lt;br /&gt;
&lt;br /&gt;
When I talk about ''decomposition'', I mean the process of creating an ordered, organized data structure from the points of the cloud, with some purpose in mind, like making further analysis easier or applying filters.&lt;br /&gt;
&lt;br /&gt;
==''k''-d tree==&lt;br /&gt;
&lt;br /&gt;
A [https://en.wikipedia.org/wiki/K-d_tree ''k''-d tree] (''k''-dimensional tree) is a data structure that organizes a set of points in a ''k''-dimensional space, in a way that makes range search operations very efficient (for example, finding the nearest neighbor of a point, which is the point that is closest to it in space; or finding all neighbors within a radius).&lt;br /&gt;
&lt;br /&gt;
It is a binary tree, that is, every non-leaf node has exactly two child nodes, the &amp;quot;left&amp;quot; and &amp;quot;right&amp;quot; ones. Each level splits space along a specific dimension. For example, in 3-dimensional space, at the root node (first level) the points would be split based on the first dimension, X (points with greater X coordinates go to the right subtree, points with lesser values to the left one). At the second level (the nodes we just created), the split would be done on the Y axis, following the same criteria. At the third level (the grandchildren), we would use the Z axis. At the fourth level, we would be back to the X axis, and so on. Usually, the median point is chosen as the root at every level.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Kdtree.png|thumb|center|700px|Example of a 2D ''k''-d tree (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In PCL, creating a ''k''-d tree from a cloud is a straightforward and fast operation.&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
The following code snippet shows you how to use a ''k''-d tree to perform efficient searches:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/search/kdtree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// kd-tree object.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt; kdtree;&lt;br /&gt;
	kdtree.setInputCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of this point&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices(5);&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances(5);&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (kdtree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	if (kdtree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another alternative would be &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;KdTreeFLANN&amp;quot;&amp;lt;/span&amp;gt;, which makes use of [http://www.cs.ubc.ca/research/flann/ FLANN] (Fast Library for Approximate Nearest Neighbors).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/kdtree_search.php How to use a KdTree to search]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_kd_tree.html pcl::search::KdTree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_kd_tree_f_l_a_n_n.html pcl::KdTreeFLANN]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_kdtree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Octree==&lt;br /&gt;
&lt;br /&gt;
Like the ''k''-d tree, the [https://en.wikipedia.org/wiki/Octree octree] is a hierarchical tree data structure useful for searches, but also for compression or downsampling. Each octree node (called a ''voxel'', a term that will sound familiar to users of 3D engines, and which could be described as a &amp;quot;3D pixel&amp;quot;, that is, a cube) has either eight children or none. The root node describes a cubic bounding box that encapsulates all points. At every level, it is subdivided by a factor of 2, resulting in an increased voxel resolution. That way, we can selectively give more resolution to certain zones.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Octree.png|thumb|center|700px|Depiction of an octree (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
Downsampling will be covered in a different section, so now I will show you how to perform searches with an octree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree object, with a resolution of 128: the resolution is the side&lt;br /&gt;
	// length of the smallest voxels, at the lowest octree level.&lt;br /&gt;
	pcl::octree::OctreePointCloudSearch&amp;lt;pcl::PointXYZ&amp;gt; octree(128);&lt;br /&gt;
	octree.setInputCloud(cloud);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	// We will find all neighbors within the same voxel as this point.&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices;&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (octree.voxelSearch(point, pointIndices))&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors in the same voxel:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of the point.&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances;&lt;br /&gt;
	if (octree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	// The point DOES have to belong in the cloud.&lt;br /&gt;
	if (octree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, the code is mostly interchangeable with that of the ''k''-d tree.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree.php Spatial Partitioning and Search Operations with Octrees]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_octree.html pcl::search::Octree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_search.html pcl::octree::OctreePointCloudSearch]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Compressing/Decompressing===&lt;br /&gt;
&lt;br /&gt;
On the other hand, this is the code needed to compress and decompress a point cloud with PCL:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/compression/octree_pointcloud_compression.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr decompressedCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree compressor object.&lt;br /&gt;
	// Check /usr/include/pcl-&amp;lt;version&amp;gt;/pcl/compression/compression_profiles.h for more profiles.&lt;br /&gt;
	// The second parameter enables the output of information.&lt;br /&gt;
	pcl::io::OctreePointCloudCompression&amp;lt;pcl::PointXYZ&amp;gt; octreeCompression(pcl::io::MED_RES_ONLINE_COMPRESSION_WITHOUT_COLOR, true);&lt;br /&gt;
	// Stringstream that will hold the compressed cloud.&lt;br /&gt;
	std::stringstream compressedData;&lt;br /&gt;
&lt;br /&gt;
	// Compress the cloud (you would save the stream to disk).&lt;br /&gt;
	octreeCompression.encodePointCloud(cloud, compressedData);&lt;br /&gt;
&lt;br /&gt;
	// Decompress the cloud.&lt;br /&gt;
	octreeCompression.decodePointCloud(compressedData, decompressedCloud);&lt;br /&gt;
&lt;br /&gt;
	// Display the decompressed cloud.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Octree compression&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(decompressedCloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeCompression.png | Original point cloud.&lt;br /&gt;
File:AfterCompression.png | Point cloud retrieved after octree compression.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the process has decreased the number of points in the cloud. The default parameters of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;OctreePointCloudCompression&amp;quot;&amp;lt;/span&amp;gt; object establish an octree resolution (at the lowest level) of 1cm, and a coordinate precision of 1mm. These values can of course be changed to suit your needs; simplifying the cloud in this way is a useful operation that we will cover later.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Resolution, Compression profile&lt;br /&gt;
* '''Output''': Compressed stream&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/compression.php Point Cloud Compression]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1io_1_1_octree_point_cloud_compression.html pcl::io::OctreePointCloudCompression]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_compress.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Detecting changes===&lt;br /&gt;
&lt;br /&gt;
PCL has an octree-based object that allows you to detect changes between two point clouds (for example, two clouds taken one after another with an OpenNI device). This object stores the trees of both clouds at the same time and compares them, returning a list of indices of the points that are new in the second cloud (that is, they did not exist in the first one):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Change detector object, with a resolution of 128&lt;br /&gt;
	// (resolution at lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudChangeDetector&amp;lt;pcl::PointXYZ&amp;gt; octree(128.0f);&lt;br /&gt;
&lt;br /&gt;
	// Add cloudA to the octree.&lt;br /&gt;
	octree.setInputCloud(cloudA);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
	// The change detector object is able to store two clouds at the same time;&lt;br /&gt;
	// with this we can reset the buffer but keep the previous tree saved.&lt;br /&gt;
	octree.switchBuffers();&lt;br /&gt;
	// Now, add cloudB to the octree like we did before.&lt;br /&gt;
	octree.setInputCloud(cloudB);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; newPoints;&lt;br /&gt;
	// Get a vector with the indices of all points that are new in cloud B,&lt;br /&gt;
	// when compared with the ones that existed in cloud A.&lt;br /&gt;
	octree.getPointIndicesFromNewVoxels(newPoints);&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; newPoints.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point (&amp;quot; &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].x &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].y &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].z&lt;br /&gt;
				  &amp;lt;&amp;lt; &amp;quot;) was not in cloud A but is in cloud B&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; newPoints.size() &amp;lt;&amp;lt; &amp;quot; new points in total.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
} &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': First cloud, Second cloud, Resolution&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree_change.php Spatial change detection on unorganized point cloud data]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_change_detector.html pcl::octree::OctreePointCloudChangeDetector]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_change_detection.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Filtering=&lt;br /&gt;
&lt;br /&gt;
The raw point cloud, as returned from the sensor, usually needs some kind of polishing before being handed to processing algorithms. Normally, this is due to the low precision of the sensor, which leads to noise, incorrect measurements, holes in surfaces... Sometimes it is just the opposite: the cloud has so many points that processing it would take forever. Luckily, several filtering methods exist to &amp;quot;fix&amp;quot; many recurring errors in point clouds.&lt;br /&gt;
&lt;br /&gt;
==Passthrough filter==&lt;br /&gt;
&lt;br /&gt;
A passthrough filter removes from the cloud every point whose value does not fall within a certain user-given range. For example, if you wanted to discard all points farther away than, say, 3 meters, you would run the filter on the Z coordinate with a range of [0, 3]. This filter can be useful to discard unneeded objects from the cloud, but you may have to switch to a different reference frame if the default one (relative to the sensor) is not suitable. For example, filtering on the Y value to remove all points not sitting on a table will yield unwanted results if the camera was at an odd angle.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/passthrough.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::PassThrough&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Filter out all points with Z values not in the [0-2] range.&lt;br /&gt;
	filter.setFilterFieldName(&amp;quot;z&amp;quot;);&lt;br /&gt;
	filter.setFilterLimits(0.0, 2.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforePassthrough.png | Original point cloud.&lt;br /&gt;
File:AfterPassthrough.png | Point cloud after applying a passthrough filter with Z in [0,2].&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setFilterLimitsNegative()&amp;quot;&amp;lt;/span&amp;gt; function (not seen in the example) will tell the filter object whether to return the points inside the range (&amp;quot;false&amp;quot;, the default value) or outside it (&amp;quot;true&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Filter field, Filter limits, [Negative mode]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/passthrough.php Filtering a PointCloud using a PassThrough filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_pass_through.html pcl::PassThrough]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_passthrough_filter.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conditional removal==&lt;br /&gt;
&lt;br /&gt;
Another way of getting the exact same result as in the previous example is to use a conditional removal filter. We can build any kind of condition we want for the values of the points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/conditional_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We must build a condition.&lt;br /&gt;
	// An &amp;quot;And&amp;quot; condition requires all tests to be true; &amp;quot;Or&amp;quot; conditions are also available.&lt;br /&gt;
	pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;::Ptr condition(new pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// First test, the point's Z value must be greater than (GT) 0.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::GT, 0.0)));&lt;br /&gt;
	// Second test, the point's Z value must be less than (LT) 2.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::LT, 2.0)));&lt;br /&gt;
	// Checks available: GT, GE, LT, LE, EQ.&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::ConditionalRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setCondition(condition);&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// If true, points that do not pass the filter will be set to a certain value (default NaN).&lt;br /&gt;
	// If false, they will be just removed, but that could break the structure of the cloud&lt;br /&gt;
	// (organized clouds are clouds taken from camera-like sensors that return a matrix-like image).&lt;br /&gt;
	filter.setKeepOrganized(true);&lt;br /&gt;
	// If keep organized was set true, points that failed the test will have their values set to this.&lt;br /&gt;
	filter.setUserFilterValue(0.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Condition, [Keep organized]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_conditional_removal.html pcl::ConditionalRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_conditional_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Outlier removal==&lt;br /&gt;
&lt;br /&gt;
Outliers are lonely points that are scattered here and there in the cloud, like annoying mosquitoes. They are a product of the sensor's inaccuracy, which registers measurements where there should not be any. Outliers are considered undesired noise because they can introduce errors in calculations, for example in normal estimation. Hence, trimming them out of the cloud will not only make computations faster (the fewer points there are, the less time they take, as we will see with downsampling) but also help us get more precise values.&lt;br /&gt;
&lt;br /&gt;
===Radius-based===&lt;br /&gt;
&lt;br /&gt;
The radius-based outlier removal is the simplest method of all. You must specify a search radius and the minimum number of neighbors that a point must have to avoid being labeled as an outlier. The algorithm then iterates through all the points (which can be slow if the cloud is big) and performs the check: if fewer than that number of neighbors are found within the radius, the point is removed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/radius_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::RadiusOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Every point must have 10 neighbors within 15cm, or it will be removed.&lt;br /&gt;
	filter.setRadiusSearch(0.15);&lt;br /&gt;
	filter.setMinNeighborsInRadius(10);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All PCL classes that inherit from &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::FilterIndices&amp;quot;&amp;lt;/span&amp;gt; (like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::RadiusOutlierRemoval&amp;quot;&amp;lt;/span&amp;gt;) provide the function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt;, which, if set to true, makes the filter return the points that did NOT pass the test.&lt;br /&gt;
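For illustration, the check the algorithm performs can be sketched in plain C++ without PCL's search structures (hypothetical names, a naive O(n&amp;sup2;) neighbor search, and unlike PCL it does not count the point itself as a neighbor):&lt;br /&gt;

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical minimal point type, for illustration only.
struct Point { float x, y, z; };

// Keep only the points that have at least minNeighbors other points
// within the given search radius (a naive O(n^2) version of the filter).
std::vector<Point> radiusOutlierRemoval(const std::vector<Point>& cloud,
                                        float radius, int minNeighbors)
{
	std::vector<Point> output;
	for (std::size_t i = 0; i < cloud.size(); ++i)
	{
		int neighbors = 0;
		for (std::size_t j = 0; j < cloud.size(); ++j)
		{
			if (i == j) continue;
			float dx = cloud[i].x - cloud[j].x;
			float dy = cloud[i].y - cloud[j].y;
			float dz = cloud[i].z - cloud[j].z;
			if (std::sqrt(dx * dx + dy * dy + dz * dz) <= radius)
				++neighbors;
		}
		if (neighbors >= minNeighbors)
			output.push_back(cloud[i]);
	}
	return output;
}
```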
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius, Neighbor count&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_radius_outlier_removal.html pcl::RadiusOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_radius_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Statistical===&lt;br /&gt;
&lt;br /&gt;
The statistical outlier removal process is a bit more refined. First, for every point, the mean distance to its ''K'' nearest neighbors is computed. Then, if we assume that the resulting distribution of mean distances is a [https://en.wikipedia.org/wiki/Normal_distribution normal (gaussian) distribution] with a mean ''μ'' and a standard deviation ''σ'', we can safely remove all points whose mean distance falls outside the interval given by the global mean plus a multiple of the standard deviation. Basically, it runs a statistical analysis of the distances between neighboring points, and trims all those that are not considered &amp;quot;normal&amp;quot; (you define what &amp;quot;normal&amp;quot; means with the parameters of the algorithm).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/statistical_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::StatisticalOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Set number of neighbors to consider to 50.&lt;br /&gt;
	filter.setMeanK(50);&lt;br /&gt;
	// Set standard deviation multiplier to 1.&lt;br /&gt;
	// Points with a distance larger than 1 standard deviation of the mean distance will be outliers.&lt;br /&gt;
	filter.setStddevMulThresh(1.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can make use of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; with this filter too.&lt;br /&gt;
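For illustration, the statistical analysis described above can be sketched in plain C++ without PCL (hypothetical names, a naive neighbor search, and it assumes the cloud has more than ''k'' points):&lt;br /&gt;

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical minimal point type, for illustration only.
struct Point { float x, y, z; };

// Naive statistical outlier removal: for every point, compute the mean
// distance to its k nearest neighbors; then remove the points whose mean
// distance exceeds (global mean + stddevMult * standard deviation).
std::vector<Point> statisticalOutlierRemoval(const std::vector<Point>& cloud,
                                             int k, float stddevMult)
{
	const std::size_t n = cloud.size();
	std::vector<float> meanDistances(n, 0.0f);
	for (std::size_t i = 0; i < n; ++i)
	{
		// Distances from point i to every other point, sorted ascending.
		std::vector<float> d;
		for (std::size_t j = 0; j < n; ++j)
		{
			if (i == j) continue;
			float dx = cloud[i].x - cloud[j].x;
			float dy = cloud[i].y - cloud[j].y;
			float dz = cloud[i].z - cloud[j].z;
			d.push_back(std::sqrt(dx * dx + dy * dy + dz * dz));
		}
		std::sort(d.begin(), d.end());
		float sum = 0.0f;
		for (int m = 0; m < k; ++m) sum += d[m];
		meanDistances[i] = sum / k;
	}
	// Global mean and standard deviation of the per-point mean distances.
	float mean = 0.0f;
	for (float d : meanDistances) mean += d;
	mean /= n;
	float variance = 0.0f;
	for (float d : meanDistances) variance += (d - mean) * (d - mean);
	float threshold = mean + stddevMult * std::sqrt(variance / n);

	std::vector<Point> output;
	for (std::size_t i = 0; i < n; ++i)
		if (meanDistances[i] <= threshold)
			output.push_back(cloud[i]);
	return output;
}
```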
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Neighbor count, Deviation multiplier&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/statistical_outlier.php Removing outliers using a StatisticalOutlierRemoval filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_statistical_outlier_removal.html pcl::StatisticalOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_statistical_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Resampling==&lt;br /&gt;
&lt;br /&gt;
Resampling means changing the number of points of the cloud, either by reducing (downsampling) or increasing (upsampling). Both have a purpose, as we will see.&lt;br /&gt;
&lt;br /&gt;
===Downsampling===&lt;br /&gt;
&lt;br /&gt;
A sensor like Kinect or Xtion produces a cloud with 307200 (640x480) points. Believe it or not, that is a lot of information, and the new Kinect is already out there, working at a higher resolution. Keep in mind that, if we were to perform a simple operation on every point of a cloud, it would be ''O(n)'', ''n'' being the number of points. If we also had to compare every point with its ''k'' nearest neighbors, it would be ''O(nk)''. You may have noticed by now that some of the algorithms I showed you take several seconds to run on your PC; now you know why.&lt;br /&gt;
&lt;br /&gt;
Optimization techniques are being implemented as I write this to allow PCL to employ the computer's GPU (graphics card) to perform the computations using [https://en.wikipedia.org/wiki/CUDA CUDA]. Many classes already provide an alternative in the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::gpu&amp;quot;&amp;lt;/span&amp;gt; namespace. This offers dramatic improvements in speed, but it is not always a possibility.&lt;br /&gt;
&lt;br /&gt;
Another option is to reduce the complexity of the cloud, that is, to remove all points that are not needed for the purpose at hand. There are several ways to do this. You could trim all outliers from the cloud, then perform segmentation to extract only the parts that interest you. Another common operation is downsampling, which will give you back a cloud that is, for all intents and purposes, equivalent to the original one, despite having fewer points.&lt;br /&gt;
&lt;br /&gt;
Downsampling is done via ''voxelization''. I already explained what voxels were in the [[#Octree | octree section]]. The cloud is divided in multiple cube-shaped regions with the desired resolution. Then, all points inside every voxel are processed so only one remains. The simplest way would be to randomly select one of them, but a more accurate approach would be to compute the [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Computing_the_centroid|centroid]], which is the point whose coordinates are the mean values of all the points that belong to the voxel.&lt;br /&gt;
&lt;br /&gt;
If done with sensible parameters, downsampling will yield results that are precise enough to work with, while requiring much less CPU power.&lt;br /&gt;
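The centroid-based voxelization described above can be sketched in plain C++ without PCL (the ''Point'' struct and ''voxelDownsample'' function are hypothetical names, written just for this illustration):&lt;br /&gt;

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <tuple>
#include <utility>
#include <vector>

// Hypothetical minimal point type, for illustration only.
struct Point { float x, y, z; };

// Downsample by voxelization: group the points into cubic voxels of the
// given size and replace all the points in a voxel with their centroid.
std::vector<Point> voxelDownsample(const std::vector<Point>& cloud, float voxelSize)
{
	// Accumulate a coordinate sum and a point count per voxel,
	// keyed by the voxel's integer grid indices.
	std::map<std::tuple<int, int, int>, std::pair<Point, int>> voxels;
	for (const Point& p : cloud)
	{
		auto key = std::make_tuple((int)std::floor(p.x / voxelSize),
		                           (int)std::floor(p.y / voxelSize),
		                           (int)std::floor(p.z / voxelSize));
		auto& entry = voxels[key];
		entry.first.x += p.x;
		entry.first.y += p.y;
		entry.first.z += p.z;
		entry.second += 1;
	}
	// Emit one centroid per non-empty voxel.
	std::vector<Point> output;
	for (const auto& v : voxels)
	{
		int count = v.second.second;
		output.push_back({v.second.first.x / count,
		                  v.second.first.y / count,
		                  v.second.first.z / count});
	}
	return output;
}
```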
&lt;br /&gt;
====Voxel grid====&lt;br /&gt;
&lt;br /&gt;
A straightforward way of downsampling a cloud:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/voxel_grid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::VoxelGrid&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// We set the size of every voxel to be 1x1x1cm&lt;br /&gt;
	// (only one point per every cubic centimeter will survive).&lt;br /&gt;
	filter.setLeafSize(0.01f, 0.01f, 0.01f);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeDownsampling.png | Original point cloud.&lt;br /&gt;
File:AfterDownsampling.png | Point cloud after being downsampled with a leaf size of 1cm.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Downsampled cloud&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/voxel_grid.php Downsampling a PointCloud using a VoxelGrid filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_voxel_grid.html pcl::VoxelGrid]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Uniform sampling====&lt;br /&gt;
&lt;br /&gt;
This class does basically the same thing, but it outputs the indices of the points that survived the process. This is a common way of choosing keypoints when computing [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/uniform_sampling.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Uniform sampling object.&lt;br /&gt;
	pcl::UniformSampling&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// We set the size of every voxel to be 1x1x1cm&lt;br /&gt;
	// (only one point per every cubic centimeter will survive).&lt;br /&gt;
	filter.setRadiusSearch(0.01f);&lt;br /&gt;
	// We need an additional object to store the indices of surviving points.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt; keypointIndices;&lt;br /&gt;
&lt;br /&gt;
	filter.compute(keypointIndices);&lt;br /&gt;
	pcl::copyPointCloud(*cloud, keypointIndices.points, *filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_uniform_sampling.html pcl::UniformSampling]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample_uniform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Upsampling===&lt;br /&gt;
&lt;br /&gt;
Upsampling is a form of [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Reconstruction| surface reconstruction]], but I put it here. When you have fewer points than you would like, upsampling can help you to recover the original surface(s), by [https://en.wikipedia.org/wiki/Upsampling interpolating] the ones you currently have, which is just a sophisticated guess. The result will never be a hundred percent accurate, but sometimes it is the only choice. So, before downsampling your clouds, be sure to keep a copy of the original data!&lt;br /&gt;
&lt;br /&gt;
PCL uses the [https://en.wikipedia.org/wiki/Moving_least_squares Moving Least Squares (MLS)] algorithm for this. The code looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filtering object.&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Object for searching.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// Upsampling method. Other possibilities are DISTINCT_CLOUD, RANDOM_UNIFORM_DENSITY&lt;br /&gt;
	// and VOXEL_GRID_DILATION. NONE disables upsampling. Check the API for details.&lt;br /&gt;
	filter.setUpsamplingMethod(pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt;::SAMPLE_LOCAL_PLANE);&lt;br /&gt;
	// Radius around each point, where the local plane will be sampled.&lt;br /&gt;
	filter.setUpsamplingRadius(0.03);&lt;br /&gt;
	// Sampling step size. Bigger values will yield less (if any) new points.&lt;br /&gt;
	filter.setUpsamplingStepSize(0.02);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:PCL_upsampling_original.png | Original point cloud (coffee mug on a table).&lt;br /&gt;
File:PCL_upsampling_downsampled.png | Downsampled with a leaf size of 1cm.&lt;br /&gt;
File:PCL_upsampling_local_plane.png | Upsampled using SAMPLE_LOCAL_PLANE (upsampling radius 3cm, step size 2cm).&lt;br /&gt;
File:PCL_upsampling_uniform_density.png | Upsampled using RANDOM_UNIFORM_DENSITY (target point density of 300 per 3cm radius). &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html#ac29ad97b98353d64ce64e2ff924f7d20 setUpsamplingMethod()] API to see all upsampling methods available, and the parameters you need to set for them to work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, Upsampling method, [Upsampling parameters]&lt;br /&gt;
* '''Output''': Upsampled cloud&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_upsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Reconstruction=&lt;br /&gt;
&lt;br /&gt;
==Surface smoothing==&lt;br /&gt;
&lt;br /&gt;
As stated, depth sensors are not very accurate, and the resulting clouds have measurement errors, outliers, holes in surfaces, etc. Surfaces can be reconstructed by means of an algorithm that iterates through all the points and interpolates the data, trying to guess what the original surface looked like. As with upsampling, PCL uses the [https://en.wikipedia.org/wiki/Moving_least_squares MLS] algorithm and class. Performing this step is important, because the resulting cloud's normals will be more accurate.&lt;br /&gt;
&lt;br /&gt;
Surface smoothing is easily done with the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// The output will also contain the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr smoothedCloud(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Smoothing object (we choose what point types we want as input and output).&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointNormal&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// If true, the surface and normal are approximated using a polynomial estimation&lt;br /&gt;
	// (if false, only a tangent one).&lt;br /&gt;
	filter.setPolynomialFit(true);&lt;br /&gt;
	// We can tell the algorithm to also compute smoothed normals (optional).&lt;br /&gt;
	filter.setComputeNormals(true);&lt;br /&gt;
	// kd-tree object for performing searches.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*smoothedCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output cloud will also contain the improved normals of the smoothed cloud.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, [Polynomial fit], [Compute normals]&lt;br /&gt;
* '''Output''': Smoothed cloud, [Smoothed normals]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/resampling.php Smoothing and normal estimation based on polynomial reconstruction]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_surface_smoothing.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)&amp;diff=4754</id>
		<title>PCL/OpenNI tutorial 4: 3D object recognition (descriptors)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)&amp;diff=4754"/>
				<updated>2015-11-05T16:59:49Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is time to learn the basics of one of the most interesting applications of point cloud processing: 3D object recognition. Akin to 2D recognition, this technique relies on finding good keypoints (characteristic points) in the cloud, and matching them to a set of previously saved ones. But 3D has several advantages over 2D: namely, we will be able to estimate with decent accuracy the exact position and orientation of the object, relative to the sensor. Also, 3D object recognition tends to be more robust to clutter (crowded scenes where objects in the front occlude objects in the background). And finally, having information about the object's shape will help with collision avoidance or grasping operations.&lt;br /&gt;
&lt;br /&gt;
In this first tutorial we will see what descriptors are, how many types are available in PCL, and how to compute them.&lt;br /&gt;
&lt;br /&gt;
=Overview=&lt;br /&gt;
&lt;br /&gt;
The basis of 3D object recognition is to find a set of correspondences between two different clouds, one of them containing the object we are looking for. In order to do this, we need a way to compare points in an unambiguous manner. Until now, we have worked with points that store the XYZ coordinates, the RGB color... but none of those properties are unique enough. In two sequential scans, two points could share the same coordinates despite belonging to different surfaces, and using the color information takes us back to 2D recognition, with all the lighting-related problems.&lt;br /&gt;
&lt;br /&gt;
In a previous tutorial, we talked about [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Feature_estimation | features]], before introducing the normals. Normals are an example of a feature, because they encode information about the vicinity of the point: the neighboring points are taken into account when computing them, giving us an idea of what the surrounding surface looks like. But this is not enough. For a feature to be optimal, it must meet the following criteria:&lt;br /&gt;
&lt;br /&gt;
# '''It must be robust to transformations''': rigid transformations (the ones that do not change the distance between points) like translations and rotations must not affect the feature. Even if we play with the cloud a bit beforehand, there should be no difference.&lt;br /&gt;
# '''It must be robust to noise''': measurement errors that cause noise should not change the feature estimation much.&lt;br /&gt;
# '''It must be resolution invariant''': if sampled with different density (like after performing downsampling), the result must be identical or similar.&lt;br /&gt;
&lt;br /&gt;
This is where descriptors come in. They are more complex (and precise) signatures of a point, that encode a lot of information about the surrounding geometry. The purpose is to unequivocally identify a point across multiple point clouds, no matter the noise, resolution or transformations. Also, some of them capture additional data about the object they belong to, like the viewpoint (that lets us retrieve the pose).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Correspondence.jpg|thumb|center|600px|Finding correspondences between point features of two clouds (image from http://pointclouds.org/).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are many 3D descriptors implemented into PCL. Each one has its own method for computing unique values for a point. Some use the difference between the angles of the normals of the point and its neighbors, for example. Others use the distances between the points. Because of this, some are inherently better or worse for certain purposes. A given descriptor may be scale invariant, and another one may be better with occlusions and partial views of objects. Which one you choose depends on what you want to do.&lt;br /&gt;
&lt;br /&gt;
After calculating the necessary values, an additional step is performed to reduce the descriptor size: the result is binned into an [https://en.wikipedia.org/wiki/Histogram histogram]. To do this, the value range of each variable that makes up the descriptor is divided into ''n'' subdivisions, and the number of occurrences in each one is counted. Try to imagine a descriptor that computes a single variable, that ranges from 1 to 100. We choose to create 10 bins for it, so the first bin would gather all occurrences between 1 and 10, the second from 11 to 20, and so on. We look at the value of the variable for the first point-neighbor pair, and it is 27, so we increment the value of the third bin by 1. We keep doing this until we get a final histogram for that keypoint. The bin size must be carefully chosen depending on how descriptive that variable is (the variables do not have to share the same number of bins, and also the bins do not have to be of the same size; if for example most values from the previous example fell in the 50-100 range then it would be sensible to have more bins of smaller size in that range).&lt;br /&gt;
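As an illustration of the binning step, here is a minimal sketch in plain C++ (the function name and the equal-sized bins are my own choices for the example; as noted above, real descriptors may use bins of different sizes):&lt;br /&gt;

```cpp
#include <cassert>
#include <vector>

// Bin a set of feature values from the range [minValue, maxValue) into a
// fixed number of equally sized bins, counting the occurrences in each one.
std::vector<int> binIntoHistogram(const std::vector<float>& values,
                                  float minValue, float maxValue, int numBins)
{
	std::vector<int> histogram(numBins, 0);
	float binWidth = (maxValue - minValue) / numBins;
	for (float v : values)
	{
		int bin = (int)((v - minValue) / binWidth);
		if (bin < 0) bin = 0;                  // clamp values out of range
		if (bin >= numBins) bin = numBins - 1;
		++histogram[bin];
	}
	return histogram;
}
```

With the ranges from the example above (values between 1 and 100 split into 10 bins), the value 27 indeed lands in the third bin.&lt;br /&gt;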
&lt;br /&gt;
Descriptors can be classified in two main categories: global and local. The process for computing and using each one (recognition pipeline) is different, so each will be explained in its own section in this article.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/how_features_work.php How 3D Features work in PCL]&lt;br /&gt;
** [https://github.com/PointCloudLibrary/pcl/wiki/Overview-and-Comparison-of-Features  Overview and Comparison of Features]&lt;br /&gt;
** [http://www.pointclouds.org/assets/icra2013/pcl_features_icra13.pdf How does a good feature look like?]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [https://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf Point Cloud Library: Three-Dimensional Object Recognition and 6 DoF Pose Estimation] (Aitor Aldoma et al., 2012)&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Table==&lt;br /&gt;
&lt;br /&gt;
The following table will give you a hint of how many descriptors there are in PCL, and some of their characteristics:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot; style=&amp;quot;margin: auto; text-align:center&amp;quot; cellpadding=&amp;quot;15&amp;quot;&lt;br /&gt;
|+ 3D descriptors&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Name&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Type&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Size&amp;lt;sup&amp;gt;†&amp;lt;/sup&amp;gt;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Custom PointType&amp;lt;sup&amp;gt;††&amp;lt;/sup&amp;gt;&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#PFH|PFH]] (Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 125&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#FPFH|FPFH]] (Fast Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 33&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RSD|RSD]] (Radius-Based Surface Descriptor)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 289&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#3DSC|3DSC]] (3D Shape Context)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 1980&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#USC|USC]] (Unique Shape Context)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 1960&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#SHOT|SHOT]] (Signatures of Histograms of Orientations)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 352&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#Spin_image|Spin image]]&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 153*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RIFT|RIFT]] (Rotation-Invariant Feature Transform)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 32*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#NARF|NARF]] (Normal Aligned Radial Feature)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 36&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RoPS|RoPS]] (Rotational Projection Statistics)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 135*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#VFH|VFH]] (Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#CVFH|CVFH]] (Clustered Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#OUR-CVFH|OUR-CVFH]] (Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#ESF|ESF]] (Ensemble of Shape Functions)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 640&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#GFPFH|GFPFH]] (Global Fast Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 16&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#GRSD|GRSD]] (Global Radius-Based Surface Descriptor)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 21&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
† Values marked with an asterisk (*) indicate that the descriptor's size depends on some parameter(s), and the one given is for the default values.&lt;br /&gt;
&lt;br /&gt;
†† Descriptors without their own custom PointType use the generic &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::Histogram&amp;lt;&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; type. See [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Saving_and_loading|Saving and loading]].&lt;br /&gt;
&lt;br /&gt;
Optionally, you can download a document with a more detailed version of the table. The page format is A4, landscape:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Downloads''':&lt;br /&gt;
** [http://robotica.unileon.es/~victorm/3D_descriptors_comparison.odt Table as original ODT] (uses font embedding, requires LibreOffice 4)&lt;br /&gt;
** [http://robotica.unileon.es/~victorm/3D_descriptors_comparison.pdf Table as PDF]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Local descriptors=&lt;br /&gt;
&lt;br /&gt;
Local descriptors are computed for the individual points that we give as input. They have no notion of what an object is; they just describe the local geometry around each point. Usually, it is your task to choose the points you want descriptors computed for: the ''keypoints''. Most of the time, you can get away with simply downsampling the cloud and using all remaining points, but dedicated keypoint detectors are also available, like the one used for [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Finding_keypoints|NARF]], or [[PCL/OpenNI_tutorial_5:_3D_object_recognition_%28pipeline%29#ISS|ISS]].&lt;br /&gt;
&lt;br /&gt;
Local descriptors are used for object recognition and [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Registration | registration]]. Now we will see which ones are implemented in PCL.&lt;br /&gt;
&lt;br /&gt;
==PFH==&lt;br /&gt;
&lt;br /&gt;
PFH stands for Point Feature Histogram. It is one of the most important descriptors offered by PCL, and the basis of others such as FPFH. The PFH tries to capture information about the geometry surrounding the point by analyzing the differences between the directions of the normals in the vicinity (because of this, an imprecise normal estimation may produce low-quality descriptors).&lt;br /&gt;
&lt;br /&gt;
First, the algorithm pairs all points in the vicinity (not just the chosen keypoint with its neighbors, but also the neighbors with each other). Then, for each pair, a [https://en.wikipedia.org/wiki/Darboux_frame fixed coordinate frame] is computed from their normals. With this frame, the difference between the normals can be encoded with 3 angular variables. These variables, together with the Euclidean distance between the points, are saved, and then binned into a histogram once all pairs have been computed. The final descriptor is the concatenation of the histograms of the variables (4 in total).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:PFH_neighbors.png | Point pairs established when computing the PFH for a point (image from http://pointclouds.org).&lt;br /&gt;
File:PFH_frame.png | Fixed coordinate frame and angular features computed for one of the pairs (image from http://pointclouds.org).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Computing descriptors in PCL is very easy, and the PFH is not an exception:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/pfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the PFH descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PFHSignature125&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::PFHSignature125&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// PFH estimation object.&lt;br /&gt;
	pcl::PFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::PFHSignature125&amp;gt; pfh;&lt;br /&gt;
	pfh.setInputCloud(cloud);&lt;br /&gt;
	pfh.setInputNormals(normals);&lt;br /&gt;
	pfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	pfh.setRadiusSearch(0.05);&lt;br /&gt;
&lt;br /&gt;
	pfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, PCL uses the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt; type to store the descriptor. This means that the descriptor's size is 125 (the dimensionality of the feature vector). Dividing each of the ''D'' dimensions of a feature into ''B'' divisions requires a total of B&amp;lt;sup&amp;gt;D&amp;lt;/sup&amp;gt; bins. The original proposal makes use of the distance between the points, but PCL's implementation does not, as it was not considered discriminative enough (especially in 2.5D scans, where the distance between points increases the further away they are from the sensor). This leaves the 3 angular features, each divided into 5 divisions, resulting in a 125-dimensional (5&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;) vector.&lt;br /&gt;
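&lt;br /&gt;
The arithmetic above is easy to verify with a trivial standalone snippet (not PCL code):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;cassert&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main()&lt;br /&gt;
{&lt;br /&gt;
	// B divisions per dimension, D dimensions: B^D bins in total.&lt;br /&gt;
	const int B = 5; // divisions per angular feature&lt;br /&gt;
	const int D = 3; // three angular features (the distance is not used)&lt;br /&gt;
	int size = 1;&lt;br /&gt;
	for (int i = 0; i &amp;lt; D; ++i)&lt;br /&gt;
		size *= B;&lt;br /&gt;
	assert(size == 125); // matches the size of PFHSignature125&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;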
&lt;br /&gt;
The final object that stores the computed descriptors can be handled like a normal cloud (even saved to, or read from, disk), and it has the same number of &amp;quot;points&amp;quot; as the original one. The &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt; object at position 0, for example, stores the PFH descriptor for the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; point at the same position in the cloud.&lt;br /&gt;
&lt;br /&gt;
For additional details about the descriptor, check the original publications listed below, or PCL's tutorial.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius&lt;br /&gt;
* '''Output''': PFH descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/pfh_estimation.php Point Feature Histograms (PFH) descriptors]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://ias.in.tum.de/_media/spezial/bib/rusu08ias.pdf Persistent Point Feature Histograms for 3D Point Clouds] (Radu Bogdan Rusu et al., 2008)&lt;br /&gt;
** [http://files.rbrusu.com/publications/RusuPhDThesis.pdf Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments] (Radu Bogdan Rusu, 2009, section 4.4, page 50)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_p_f_h_estimation.html pcl::PFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_PFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===FPFH===&lt;br /&gt;
&lt;br /&gt;
PFH gives accurate results, but it has a drawback: it is too computationally expensive to run in real time. For a cloud of ''n'' keypoints, each with ''k'' neighbors considered, it has a [https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations complexity] of ''O(nk&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;)''. Because of this, a derived descriptor was created: the FPFH (Fast Point Feature Histogram).&lt;br /&gt;
&lt;br /&gt;
The FPFH considers only the direct connections between the current keypoint and its neighbors, removing the additional links between neighbors. This brings the complexity down to ''O(nk)'', and the resulting histogram is accordingly referred to as SPFH (Simplified Point Feature Histogram). The reference frame and the angular variables are computed just as in the PFH.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FPFH_neighbors.png|thumb|center|400px|Point pairs established when computing the FPFH for a point (image from http://pointclouds.org).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To account for the loss of these extra connections, an additional step takes place after all histograms have been computed: the SPFHs of a point's neighbors are &amp;quot;merged&amp;quot; with its own, weighted according to their distance. This has the effect of giving a point surface information from points as far away as twice the search radius. Finally, the 3 histograms (distance is not used) are concatenated to compose the final descriptor.&lt;br /&gt;
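&lt;br /&gt;
The merging step can be sketched as follows (a simplified illustration with one-bin histograms rather than the real 33-bin ones; following the original FPFH paper, each neighbor's SPFH is weighted by the inverse of its distance to the point):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;cassert&amp;gt;&lt;br /&gt;
#include &amp;lt;cstddef&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Merge a point's SPFH with its neighbors' SPFHs, each weighted by the&lt;br /&gt;
// inverse of the neighbor's distance to the point.&lt;br /&gt;
float mergeSPFH(float ownSPFH, const std::vector&amp;lt;float&amp;gt; &amp;amp;neighborSPFH, const std::vector&amp;lt;float&amp;gt; &amp;amp;distances)&lt;br /&gt;
{&lt;br /&gt;
	float sum = 0.0f;&lt;br /&gt;
	for (std::size_t i = 0; i &amp;lt; neighborSPFH.size(); ++i)&lt;br /&gt;
		sum += neighborSPFH[i] / distances[i];&lt;br /&gt;
	return ownSPFH + sum / neighborSPFH.size();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main()&lt;br /&gt;
{&lt;br /&gt;
	// A closer neighbor (smaller distance) contributes more to the final value.&lt;br /&gt;
	float near = mergeSPFH(1.0f, {2.0f}, {0.5f});&lt;br /&gt;
	float far = mergeSPFH(1.0f, {2.0f}, {2.0f});&lt;br /&gt;
	assert(near &amp;gt; far);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;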
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/fpfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the FPFH descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::FPFHSignature33&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::FPFHSignature33&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// FPFH estimation object.&lt;br /&gt;
	pcl::FPFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33&amp;gt; fpfh;&lt;br /&gt;
	fpfh.setInputCloud(cloud);&lt;br /&gt;
	fpfh.setInputNormals(normals);&lt;br /&gt;
	fpfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	fpfh.setRadiusSearch(0.05);&lt;br /&gt;
&lt;br /&gt;
	fpfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An additional implementation of the FPFH estimation that takes advantage of multithreading (via the [https://en.wikipedia.org/wiki/OpenMP OpenMP API]) is available in the class &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;FPFHEstimationOMP&amp;quot;&amp;lt;/span&amp;gt;. Its interface is identical to that of the standard implementation, and using it results in a big performance boost on multi-core systems. Remember to include the header &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;fpfh_omp.h&amp;quot;&amp;lt;/span&amp;gt; instead.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius&lt;br /&gt;
* '''Output''': FPFH descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/fpfh_estimation.php Fast Point Feature Histograms (FPFH) descriptors]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://files.rbrusu.com/publications/Rusu09ICRA.pdf Fast Point Feature Histograms (FPFH) for 3D Registration] (Radu Bogdan Rusu et al., 2009)&lt;br /&gt;
** [http://files.rbrusu.com/publications/RusuPhDThesis.pdf Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments] (Radu Bogdan Rusu, 2009, section 4.5, page 57)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_f_p_f_h_estimation.html pcl::FPFHEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_f_p_f_h_estimation_o_m_p.html pcl::FPFHEstimationOMP]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_FPFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RSD==&lt;br /&gt;
&lt;br /&gt;
The Radius-Based Surface Descriptor encodes the radial relationship of a point and its neighborhood. For every pair formed by the keypoint and one of its neighbors, the algorithm computes the distance between them and the difference between their normals. Then, assuming that both points lie on the surface of a sphere, said sphere is found by fitting not only the points but also the normals (otherwise, there would be infinitely many candidate spheres). Finally, from all the point-neighbor spheres, only the ones with the maximum and minimum radii are kept and saved to that point's descriptor.&lt;br /&gt;
&lt;br /&gt;
As you may have deduced already, when two points lie on a flat surface, the fitted sphere's radius is infinite. If, on the other hand, they lie on the curved face of a cylinder, the radius is more or less the same as that of the cylinder. This allows us to tell objects apart with RSD. The algorithm takes a parameter that sets the maximum radius at which the points are still considered to be part of a plane.&lt;br /&gt;
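&lt;br /&gt;
A quick way to see the geometry behind this (a hedged sketch, not the actual PCL fitting code): two points at distance ''d'' whose normals differ by an angle ''α'' lie on a circle of radius roughly ''d / (2 sin(α/2))'', which grows towards infinity as the normals become parallel.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;cassert&amp;gt;&lt;br /&gt;
#include &amp;lt;cmath&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Radius of the circle through two points at distance d whose normals&lt;br /&gt;
// (taken as radial directions) differ by angle alpha (radians).&lt;br /&gt;
double sphereRadius(double d, double alpha)&lt;br /&gt;
{&lt;br /&gt;
	return d / (2.0 * std::sin(alpha / 2.0));&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main()&lt;br /&gt;
{&lt;br /&gt;
	const double PI = std::acos(-1.0);&lt;br /&gt;
	// On a cylinder of radius 1, a chord spanning 60 degrees has length 1.&lt;br /&gt;
	assert(std::fabs(sphereRadius(1.0, PI / 3.0) - 1.0) &amp;lt; 1e-9);&lt;br /&gt;
	// Nearly parallel normals yield a huge radius (a near-plane).&lt;br /&gt;
	assert(sphereRadius(0.05, 1e-6) &amp;gt; 1000.0);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;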
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:RSD_sphere.png | Sphere that fits two points with their normals (image from [http://ias.in.tum.de/_media/events/rgbd2011/marton.pdf original presentation]).&lt;br /&gt;
File:RSD_values.png | RSD values for a cloud; points on a flat surface have maximum values (image from [http://ias.in.tum.de/_media/events/rgbd2011/marton.pdf original presentation]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is the code for computing the RSD descriptor:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rsd.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the RSD descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PrincipalRadiiRSD&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::PrincipalRadiiRSD&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// RSD estimation object.&lt;br /&gt;
	pcl::RSDEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::PrincipalRadiiRSD&amp;gt; rsd;&lt;br /&gt;
	rsd.setInputCloud(cloud);&lt;br /&gt;
	rsd.setInputNormals(normals);&lt;br /&gt;
	rsd.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	rsd.setRadiusSearch(0.05);&lt;br /&gt;
	// Plane radius. Any radius larger than this is considered infinite (a plane).&lt;br /&gt;
	rsd.setPlaneRadius(0.1);&lt;br /&gt;
	// Do we want to save the full distance-angle histograms?&lt;br /&gt;
	rsd.setSaveHistograms(false);&lt;br /&gt;
	&lt;br /&gt;
	rsd.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Optionally, you can use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setSaveHistograms()&amp;quot;&amp;lt;/span&amp;gt; function to enable the saving of the full distance-angle histograms, and then use &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getHistograms()&amp;quot;&amp;lt;/span&amp;gt; to retrieve them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius, Maximum Radius, [Subdivisions], [Save full histograms]&lt;br /&gt;
* '''Output''': RSD descriptors&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.391.7398&amp;amp;rep=rep1&amp;amp;type=pdf General 3D Modelling of Novel Objects from a Single View] (Zoltan-Csaba Marton et al., 2010)&lt;br /&gt;
** [https://ias.cs.tum.edu/_media/spezial/bib/marton11ijrr.pdf Combined 2D-3D Categorization and Classification for Multimodal Perception Systems] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_r_s_d_estimation.html pcl::RSDEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RSD.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==3DSC==&lt;br /&gt;
&lt;br /&gt;
The 3D Shape Context is a descriptor that extends its existing 2D counterpart to the third dimension. It works by creating a support structure (a sphere, to be precise) centered at the point we are computing the descriptor for, with the given search radius. The &amp;quot;north pole&amp;quot; of that sphere (the notion of &amp;quot;up&amp;quot;) is aligned with the normal at that point. Then, the sphere is divided into 3D regions or bins. In the first 2 coordinates (azimuth and elevation) the divisions are equally spaced, but in the third (the radial dimension) they are logarithmically spaced, so the bins are smaller towards the center. A minimum radius can be specified to prevent very small bins that would be too sensitive to small changes in the surface.&lt;br /&gt;
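&lt;br /&gt;
The logarithmic radial spacing can be sketched as follows (an illustrative snippet, not the actual PCL code; the boundaries are placed uniformly in log space between the minimum and maximum radii, following Frome et al., 2004):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;cassert&amp;gt;&lt;br /&gt;
#include &amp;lt;cmath&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Compute J+1 radial bin boundaries between rMin and rMax, spaced&lt;br /&gt;
// uniformly in log space so bins shrink towards the center.&lt;br /&gt;
std::vector&amp;lt;double&amp;gt; radialBoundaries(double rMin, double rMax, int J)&lt;br /&gt;
{&lt;br /&gt;
	std::vector&amp;lt;double&amp;gt; r(J + 1);&lt;br /&gt;
	for (int j = 0; j &amp;lt;= J; ++j)&lt;br /&gt;
		r[j] = std::exp(std::log(rMin) + (static_cast&amp;lt;double&amp;gt;(j) / J) * std::log(rMax / rMin));&lt;br /&gt;
	return r;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main()&lt;br /&gt;
{&lt;br /&gt;
	std::vector&amp;lt;double&amp;gt; r = radialBoundaries(0.005, 0.05, 4);&lt;br /&gt;
	// The endpoints match the given radii, and the innermost bin is&lt;br /&gt;
	// narrower than the outermost one.&lt;br /&gt;
	assert(std::fabs(r[0] - 0.005) &amp;lt; 1e-12);&lt;br /&gt;
	assert(std::fabs(r[4] - 0.05) &amp;lt; 1e-12);&lt;br /&gt;
	assert((r[1] - r[0]) &amp;lt; (r[4] - r[3]));&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;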
&lt;br /&gt;
&lt;br /&gt;
[[Image:3DSC_support_structure.png|thumb|center|200px|Support structure to compute the 3DSC for a point (image from [http://www.cs.jhu.edu/~misha/Papers/Frome04.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For each bin, a weighted count is accumulated for every neighboring point that lies within it. The weight depends on the volume of the bin and on the local point density (the number of points around the current neighbor). This gives the descriptor some degree of resolution invariance.&lt;br /&gt;
&lt;br /&gt;
We have mentioned that the sphere is aligned with the normal. This still leaves one degree of freedom (only two axes have been locked; the azimuth remains free), so the descriptor so far is not rotation invariant. To overcome this (so that the same point in two different clouds gets the same value), the support sphere is rotated around the normal ''N'' times (in increments matching the azimuth divisions) and the process is repeated for each rotation, giving a total of ''N'' descriptors for that point.&lt;br /&gt;
&lt;br /&gt;
You can compute the 3DSC descriptor the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/3dsc.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the 3DSC descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ShapeContext1980&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::ShapeContext1980&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// 3DSC estimation object.&lt;br /&gt;
	pcl::ShapeContext3DEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::ShapeContext1980&amp;gt; sc3d;&lt;br /&gt;
	sc3d.setInputCloud(cloud);&lt;br /&gt;
	sc3d.setInputNormals(normals);&lt;br /&gt;
	sc3d.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. It will also be the radius of the support sphere.&lt;br /&gt;
	sc3d.setRadiusSearch(0.05);&lt;br /&gt;
	// The minimal radius value for the search sphere, to avoid being too sensitive&lt;br /&gt;
	// in bins close to the center of the sphere.&lt;br /&gt;
	sc3d.setMinimalRadius(0.05 / 10.0);&lt;br /&gt;
	// Radius used to compute the local point density for the neighbors&lt;br /&gt;
	// (the density is the number of points within that radius).&lt;br /&gt;
	sc3d.setPointDensityRadius(0.05 / 5.0);&lt;br /&gt;
&lt;br /&gt;
	sc3d.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius, Minimal radius, Point density radius&lt;br /&gt;
* '''Output''': 3DSC descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.cs.jhu.edu/~misha/Papers/Frome04.pdf Recognizing Objects in Range Data Using Regional Point Descriptors] (Andrea Frome et al., 2004)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_shape_context3_d_estimation.html pcl::ShapeContext3DEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_3DSC.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===USC===&lt;br /&gt;
&lt;br /&gt;
The Unique Shape Context descriptor extends the 3DSC by defining a local reference frame that provides a unique orientation for each point. This not only improves the accuracy of the descriptor, but also reduces its size, as computing multiple descriptors to account for orientation is no longer necessary.&lt;br /&gt;
&lt;br /&gt;
You can check the second publication listed below to learn more about how the LRF is computed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/usc.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the USC descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::UniqueShapeContext1960&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::UniqueShapeContext1960&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// USC estimation object.&lt;br /&gt;
	pcl::UniqueShapeContext&amp;lt;pcl::PointXYZ, pcl::UniqueShapeContext1960, pcl::ReferenceFrame&amp;gt; usc;&lt;br /&gt;
	usc.setInputCloud(cloud);&lt;br /&gt;
	// Search radius, to look for neighbors. It will also be the radius of the support sphere.&lt;br /&gt;
	usc.setRadiusSearch(0.05);&lt;br /&gt;
	// The minimal radius value for the search sphere, to avoid being too sensitive&lt;br /&gt;
	// in bins close to the center of the sphere.&lt;br /&gt;
	usc.setMinimalRadius(0.05 / 10.0);&lt;br /&gt;
	// Radius used to compute the local point density for the neighbors&lt;br /&gt;
	// (the density is the number of points within that radius).&lt;br /&gt;
	usc.setPointDensityRadius(0.05 / 5.0);&lt;br /&gt;
	// Set the radius to compute the Local Reference Frame.&lt;br /&gt;
	usc.setLocalRadius(0.05);&lt;br /&gt;
&lt;br /&gt;
	usc.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk). For 1.7 and below, change UniqueShapeContext1960 to ShapeContext1980, and edit CMakeLists.txt.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius, Minimal radius, Point density radius, Local radius&lt;br /&gt;
* '''Output''': USC descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/3dor10.pdf Unique Shape Context for 3D Data Description] (Federico Tombari et al., 2010)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_unique_shape_context.html pcl::UniqueShapeContext]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_USC.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==SHOT==&lt;br /&gt;
&lt;br /&gt;
SHOT stands for Signature of Histograms of Orientations. Like 3DSC, it encodes information about the topology (surface) within a spherical support structure. This sphere is divided into 32 bins or volumes, with 8 divisions along the azimuth, 2 along the elevation, and 2 along the radius. For every volume, a one-dimensional local histogram is computed. The variable chosen is the angle between the normal of the keypoint and the normal of each point within that volume (to be precise, the cosine, which was found to be more suitable).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SHOT_support_structure.png|thumb|center|200px|Support structure to compute SHOT. Only 4 azimuth divisions are shown for clarity (image from [http://vision.deis.unibo.it/fede/papers/eccv10.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When all local histograms have been computed, they are stitched together into the final descriptor. Like the USC descriptor, SHOT makes use of a local reference frame, making it rotation invariant. It is also robust to noise and clutter.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/shot.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the SHOT descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// SHOT estimation object.&lt;br /&gt;
	pcl::SHOTEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::SHOT352&amp;gt; shot;&lt;br /&gt;
	shot.setInputCloud(cloud);&lt;br /&gt;
	shot.setInputNormals(normals);&lt;br /&gt;
	// The radius that defines which of the keypoint's neighbors are described.&lt;br /&gt;
	// If too large, there may be clutter, and if too small, not enough points may be found.&lt;br /&gt;
	shot.setRadiusSearch(0.02);&lt;br /&gt;
&lt;br /&gt;
	shot.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As with FPFH, a multithreading-optimized variant that makes use of OpenMP is available as &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOTEstimationOMP&amp;quot;&amp;lt;/span&amp;gt;; you need to include the header &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;shot_omp.h&amp;quot;&amp;lt;/span&amp;gt;. Another variant, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOTColorEstimation&amp;quot;&amp;lt;/span&amp;gt;, also uses the texture for matching, and likewise has an OpenMP-optimized version (see the second publication for more details). It outputs a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOT1344&amp;quot;&amp;lt;/span&amp;gt; descriptor.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Radius&lt;br /&gt;
* '''Output''': SHOT descriptors&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/eccv10.pdf Unique Signatures of Histograms for Local Surface Description] (Federico Tombari et al., 2010)&lt;br /&gt;
** [http://www.vision.deis.unibo.it/fede/papers/icip11.pdf A Combined Texture-Shape Descriptor for Enhanced 3D Feature Matching] (Federico Tombari et al., 2011)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_estimation.html pcl::SHOTEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_color_estimation.html pcl::SHOTColorEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_estimation_o_m_p.html pcl::SHOTEstimationOMP]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_color_estimation_o_m_p.html pcl::SHOTColorEstimationOMP]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_SHOT.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Spin image==&lt;br /&gt;
&lt;br /&gt;
The Spin Image (SI) is the oldest descriptor we are going to see here. It has been around since 1997, but it still sees some use in certain applications. It was originally designed to describe surfaces made of vertices, edges and polygons, but it has since been adapted for point clouds. The descriptor is unlike all the others in that its output resembles an image, which can be compared with another one using the usual image-matching means.&lt;br /&gt;
&lt;br /&gt;
The support structure used is a cylinder, centered at the point, with a given radius and height, and aligned with the normal. This cylinder is divided radially and vertically into volumes. For each one, the number of neighbors lying inside is added up, eventually producing a descriptor. Weighting and interpolation are used to improve the result. The final descriptor can be seen as a grayscale image where dark areas correspond to volumes with higher point density.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Spin images.png|thumb|center|400px|Spin images computed for 3 points of a model (image from [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.4190&amp;amp;rep=rep1&amp;amp;type=pdf original thesis]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/spin_image.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;153&amp;gt; SpinImage;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the spin image for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;SpinImage&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;SpinImage&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Spin image estimation object.&lt;br /&gt;
	pcl::SpinImageEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, SpinImage&amp;gt; si;&lt;br /&gt;
	si.setInputCloud(cloud);&lt;br /&gt;
	si.setInputNormals(normals);&lt;br /&gt;
	// Radius of the support cylinder.&lt;br /&gt;
	si.setRadiusSearch(0.02);&lt;br /&gt;
	// Set the resolution of the spin image (the number of bins along one dimension).&lt;br /&gt;
	// Note: you must change the output histogram size to reflect this.&lt;br /&gt;
	si.setImageWidth(8);&lt;br /&gt;
&lt;br /&gt;
	si.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Spin Image estimation object provides more methods for tuning the estimation, so checking the API is recommended.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Radius, Image resolution&lt;br /&gt;
* '''Output''': Spin images&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.4190&amp;amp;rep=rep1&amp;amp;type=pdf Spin-Images: A Representation for 3-D Surface Matching] (Andrew Edie Johnson, 1997)&lt;br /&gt;
** [http://www.cs.jhu.edu/~misha/Papers/Johnson99.pdf Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes] (Andrew Edie Johnson and Martial Hebert, 1999)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_spin_image_estimation.html pcl::SpinImageEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_spin_image.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RIFT==&lt;br /&gt;
&lt;br /&gt;
The Rotation-Invariant Feature Transform, like the spin image, takes some concepts from 2D features, in this case from the Scale-Invariant Feature Transform ([https://en.wikipedia.org/wiki/Scale-invariant_feature_transform SIFT]). It is the only descriptor seen here that requires intensity information, which can be obtained from the RGB color values. This means, of course, that you will not be able to use RIFT with plain XYZ clouds; you also need the texture.&lt;br /&gt;
&lt;br /&gt;
In the first step, a circular patch (with the given radius) is fitted on the surface the point lies on. This patch is divided into concentric rings, according to the chosen distance bin size. Then, a histogram is populated with all of the point's neighbors lying inside a sphere centered at that point, with the mentioned radius. For each neighbor, its distance and the orientation of the intensity gradient at that point are considered. To make the descriptor rotation invariant, the angle between the gradient orientation and the vector pointing outward from the center of the patch is measured.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:RIFT.png|thumb|center|600px|RIFT feature values at 3 different locations in the descriptor (image from [http://www.cs.illinois.edu/~slazebni/publications/pami05.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The authors' original implementation uses 4 rings and 8 histogram orientations, producing a descriptor of size 32. RIFT is not robust to texture flipping, though the authors never considered this a big issue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types_conversion.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/intensity_gradient.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rift.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;32&amp;gt; RIFT32;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud with color information.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cloudColor(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
	// Object for storing the point cloud with intensity value.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZI&amp;gt;::Ptr cloudIntensity(new pcl::PointCloud&amp;lt;pcl::PointXYZI&amp;gt;);&lt;br /&gt;
	// Object for storing the intensity gradients.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::IntensityGradient&amp;gt;::Ptr gradients(new pcl::PointCloud&amp;lt;pcl::IntensityGradient&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the RIFT descriptor for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;RIFT32&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;RIFT32&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGB&amp;gt;(argv[1], *cloudColor) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Convert the RGB to intensity.&lt;br /&gt;
	pcl::PointCloudXYZRGBtoXYZI(*cloudColor, *cloudIntensity);&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZI, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloudIntensity);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZI&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZI&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Compute the intensity gradients.&lt;br /&gt;
	pcl::IntensityGradientEstimation&amp;lt;pcl::PointXYZI, pcl::Normal, pcl::IntensityGradient,&lt;br /&gt;
		pcl::common::IntensityFieldAccessor&amp;lt;pcl::PointXYZI&amp;gt; &amp;gt; ge;&lt;br /&gt;
	ge.setInputCloud(cloudIntensity);&lt;br /&gt;
	ge.setInputNormals(normals);&lt;br /&gt;
	ge.setRadiusSearch(0.03);&lt;br /&gt;
	ge.compute(*gradients);&lt;br /&gt;
&lt;br /&gt;
	// RIFT estimation object.&lt;br /&gt;
	pcl::RIFTEstimation&amp;lt;pcl::PointXYZI, pcl::IntensityGradient, RIFT32&amp;gt; rift;&lt;br /&gt;
	rift.setInputCloud(cloudIntensity);&lt;br /&gt;
	rift.setSearchMethod(kdtree);&lt;br /&gt;
	// Set the intensity gradients to use.&lt;br /&gt;
	rift.setInputGradient(gradients);&lt;br /&gt;
	// Radius, to get all neighbors within.&lt;br /&gt;
	rift.setRadiusSearch(0.02);&lt;br /&gt;
	// Set the number of bins to use in the distance dimension.&lt;br /&gt;
	rift.setNrDistanceBins(4);&lt;br /&gt;
	// Set the number of bins to use in the gradient orientation dimension.&lt;br /&gt;
	rift.setNrGradientBins(8);&lt;br /&gt;
	// Note: you must change the output histogram size to reflect the previous values.&lt;br /&gt;
&lt;br /&gt;
	rift.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Intensity gradients, Radius, Distance bins, Gradient bins&lt;br /&gt;
* '''Output''': RIFT descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.cs.illinois.edu/~slazebni/publications/pami05.pdf A Sparse Texture Representation Using Local Affine Regions] (Svetlana Lazebnik et al., 2005)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_r_i_f_t_estimation.html pcl::RIFTEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_intensity_gradient_estimation.html pcl::IntensityGradientEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_intensity_gradient.html pcl::IntensityGradient]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RIFT.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==NARF==&lt;br /&gt;
&lt;br /&gt;
The Normal Aligned Radial Feature is the only descriptor here that does not take a point cloud as input: it works with range images instead. A range image is an image in which the value of each pixel encodes the distance to the corresponding point in the scene. When visualized, this distance is commonly mapped to a color in the [https://en.wikipedia.org/wiki/Visible_spectrum visible light spectrum]: points closer to the camera appear violet, while points near the maximum sensor range appear red.&lt;br /&gt;
&lt;br /&gt;
NARF also requires us to find suitable keypoints to compute the descriptor for. NARF keypoints are located near an object's corners, which in turn requires finding the borders (transitions from foreground to background); these are trivial to find in a range image. Because of this lengthy pipeline, I will describe the whole process in separate sections.&lt;br /&gt;
&lt;br /&gt;
===Obtaining a range image===&lt;br /&gt;
&lt;br /&gt;
Because we always work with point clouds, I will now explain how you can convert one into a range image, in order to use it for the NARF descriptor. PCL provides a couple of handy classes to perform the conversion, provided you fill in the camera parameters correctly.&lt;br /&gt;
&lt;br /&gt;
A range image can be created in two ways. First, we can use spherical projection, which gives an image similar to those produced by a LIDAR sensor. Second, we can use planar projection, which is better suited to camera-like sensors such as the Kinect or the Xtion, and does not suffer from the characteristic distortion of the first.&lt;br /&gt;
&lt;br /&gt;
====Spherical projection====&lt;br /&gt;
&lt;br /&gt;
The following code will take a point cloud and create a range image from it, using spherical projection:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Parameters needed by the range image object:&lt;br /&gt;
&lt;br /&gt;
	// Angular resolution is the angular distance between pixels.&lt;br /&gt;
	// Kinect: 57° horizontal FOV, 43° vertical FOV, 640x480 (chosen here).&lt;br /&gt;
	// Xtion: 58° horizontal FOV, 45° vertical FOV, 640x480.&lt;br /&gt;
	float angularResolutionX = (float)(57.0f / 640.0f * (M_PI / 180.0f));&lt;br /&gt;
	float angularResolutionY = (float)(43.0f / 480.0f * (M_PI / 180.0f));&lt;br /&gt;
	// Maximum horizontal and vertical angles. For example, for a full panoramic scan,&lt;br /&gt;
	// the first would be 360°. Choosing values that match the real sensor will&lt;br /&gt;
	// decrease computation time; if the values are bigger than the real ones,&lt;br /&gt;
	// the image will be automatically cropped to discard empty zones.&lt;br /&gt;
	float maxAngleX = (float)(60.0f * (M_PI / 180.0f));&lt;br /&gt;
	float maxAngleY = (float)(50.0f * (M_PI / 180.0f));&lt;br /&gt;
	// Sensor pose. Thankfully, the cloud includes the data.&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	// Noise level. If greater than 0, values of neighboring points will be averaged.&lt;br /&gt;
	// This would set the search radius (e.g., 0.03 == 3cm).&lt;br /&gt;
	float noiseLevel = 0.0f;&lt;br /&gt;
	// Minimum range. If set, any point closer to the sensor than this will be ignored.&lt;br /&gt;
	float minimumRange = 0.0f;&lt;br /&gt;
	// Border size. If greater than 0, a border of &amp;quot;unobserved&amp;quot; points will be left&lt;br /&gt;
	// in the image when it is cropped.&lt;br /&gt;
	int borderSize = 1;&lt;br /&gt;
&lt;br /&gt;
	// Range image object.&lt;br /&gt;
	pcl::RangeImage rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloud(*cloud, angularResolutionX, angularResolutionY,&lt;br /&gt;
									maxAngleX, maxAngleY, sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
									noiseLevel, minimumRange, borderSize);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the image.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;Range image&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImage);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here you can see an example of the output range image:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_spherical.png|thumb|center|400px|Range image of a point cloud, using spherical projection.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Angular resolution, Maximum angle, Sensor pose, Coordinate frame, Noise level, Minimum range, Border size&lt;br /&gt;
* '''Output''': Range image (spherical projection)&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_creation.php How to create a range image from a point cloud]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_visualization.php How to visualize a range image]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image.html pcl::RangeImage]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_spherical.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Planar projection====&lt;br /&gt;
&lt;br /&gt;
As mentioned, planar projection will give better results with clouds taken from a depth camera:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Parameters needed by the planar range image object:&lt;br /&gt;
&lt;br /&gt;
	// Image size. Both Kinect and Xtion work at 640x480.&lt;br /&gt;
	int imageSizeX = 640;&lt;br /&gt;
	int imageSizeY = 480;&lt;br /&gt;
	// Center of projection. Here, we choose the middle of the image.&lt;br /&gt;
	float centerX = 640.0f / 2.0f;&lt;br /&gt;
	float centerY = 480.0f / 2.0f;&lt;br /&gt;
	// Focal length. The value seen here has been taken from the original depth images.&lt;br /&gt;
	// It is safe to use the same value vertically and horizontally.&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	// Sensor pose. Thankfully, the cloud includes the data.&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	// Noise level. If greater than 0, values of neighboring points will be averaged.&lt;br /&gt;
	// This would set the search radius (e.g., 0.03 == 3cm).&lt;br /&gt;
	float noiseLevel = 0.0f;&lt;br /&gt;
	// Minimum range. If set, any point closer to the sensor than this will be ignored.&lt;br /&gt;
	float minimumRange = 0.0f;&lt;br /&gt;
&lt;br /&gt;
	// Planar range image object.&lt;br /&gt;
	pcl::RangeImagePlanar rangeImagePlanar;&lt;br /&gt;
	rangeImagePlanar.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the image.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;Planar range image&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImagePlanar);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_planar.png|thumb|center|400px|Range image of a point cloud, using planar projection.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Image size, Projection center, Focal length, Sensor pose, Coordinate frame, Noise level, Minimum range&lt;br /&gt;
* '''Output''': Range image (planar projection)&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_creation.php How to create a range image from a point cloud]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_visualization.php How to visualize a range image]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image_planar.html pcl::RangeImagePlanar]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_planar.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you prefer to do the conversion in real time while you inspect the cloud, PCL ships with an [https://github.com/PointCloudLibrary/pcl/tree/master/doc/tutorials/content/sources/openni_range_image_visualization example] that fetches an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;openni_wrapper::DepthImage&amp;quot;&amp;lt;/span&amp;gt; from an OpenNI device and creates the range image from it. You can adapt the code of the [[PCL/OpenNI_tutorial_1:_Installing_and_testing#Testing | example]] from tutorial 1 to save it to disk with the function [http://docs.pointclouds.org/trunk/group__io.html#ga7291a029cdcde32ca3639d07dc6491b9 pcl::io::saveRangeImagePlanarFilePNG()].&lt;br /&gt;
&lt;br /&gt;
===Extracting borders===&lt;br /&gt;
&lt;br /&gt;
NARF keypoints are located near the edges of objects in the range image, so in order to find them, we first have to extract the borders. A border is defined as an abrupt change from foreground to background. In a range image, this can be easily seen because there is a &amp;quot;jump&amp;quot; in the depth value of two adjacent pixels.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_border_detection.png|thumb|center|400px|Border detection on a range image (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are three types of borders. ''Object borders'' consist of the pixels (or points) located on the very edge of an object (the outermost points still belonging to the object). ''Shadow borders'' are points in the background on the edge of occlusions (empty areas in the background due to the objects in front covering them). Notice that, when the cloud is seen from the sensor's viewpoint, object and shadow points will seem adjacent. Finally, ''veil points'' are points interpolated between the previous two which appear in scans taken with LIDAR sensors, so we do not have to worry about them here.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:Range_image_border_type.png | Types of borders (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
File:Range_image_borders_example.png | Example of object and shadow borders on a cloud.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The algorithm basically compares a point's depth value with those of its neighbors; if a big difference is found, it is due to a border. Points closer to the sensor are marked as object borders, and the ones farther away as shadow borders.&lt;br /&gt;
&lt;br /&gt;
PCL provides a class for extracting borders of a range image:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the borders.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::BorderDescription&amp;gt;::Ptr borders(new pcl::PointCloud&amp;lt;pcl::BorderDescription&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = imageSizeX / 2.0f, centerY = imageSizeY / 2.0f;&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Border extractor object.&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor(&amp;amp;rangeImage);&lt;br /&gt;
&lt;br /&gt;
	borderExtractor.compute(*borders);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the borders.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer* viewer = NULL;&lt;br /&gt;
	viewer = pcl::visualization::RangeImageVisualizer::getRangeImageBordersWidget(rangeImage,&lt;br /&gt;
			 -std::numeric_limits&amp;lt;float&amp;gt;::infinity(),&lt;br /&gt;
			 std::numeric_limits&amp;lt;float&amp;gt;::infinity(),&lt;br /&gt;
			 false, *borders, &amp;quot;Borders&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_borders.png|thumb|center|400px|Borders found on the range image.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can use the extractor's &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getParameters()&amp;quot;&amp;lt;/span&amp;gt; function to get a [http://docs.pointclouds.org/trunk/structpcl_1_1_range_image_border_extractor_1_1_parameters.html pcl::RangeImageBorderExtractor::Parameters] struct with the settings that will be used.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image&lt;br /&gt;
* '''Output''': Borders&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/range_image_border_extraction.php How to extract borders from range images]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image_border_extractor.html pcl::RangeImageBorderExtractor]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_borders.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Finding keypoints===&lt;br /&gt;
&lt;br /&gt;
Citing the [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original publication]:&lt;br /&gt;
&lt;br /&gt;
''&amp;quot;We have the following requirements for our interest point extraction procedure:''&lt;br /&gt;
&lt;br /&gt;
# ''It must take information about borders and the surface structure into account.''&lt;br /&gt;
# ''It must select positions that can be reliably detected even if the object is observed from another perspective.''&lt;br /&gt;
# ''The points must be on positions that provide stable areas for normal estimation or the descriptor calculation in general.&amp;quot;''&lt;br /&gt;
&lt;br /&gt;
The procedure is as follows: for every point in the range image, a score is computed that conveys how much the surface changes in its neighborhood (this is tuned with the ''support size'' σ, the diameter of the sphere used to find neighboring points), together with the dominant direction of that change. This direction is then compared with those of the neighbors to determine how stable the point is (if the directions differ a lot, the point is not stable, because the surface around it changes in many ways). Points near an object's corners (but not exactly on the very edge) make good keypoints: distinctive, yet stable enough.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:NARF_keypoints.png | NARF keypoints (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
File:NARF_keypoints_support_sizes.png | Interest regions with a support size of 20cm (up) and 1m (down) (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In PCL, NARF keypoints can be found this way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/narf_keypoint.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the keypoints' indices.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt;::Ptr keypoints(new pcl::PointCloud&amp;lt;int&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = imageSizeX / 2.0f, centerY = imageSizeY / 2.0f;&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor;&lt;br /&gt;
	// Keypoint detection object.&lt;br /&gt;
	pcl::NarfKeypoint detector(&amp;amp;borderExtractor);&lt;br /&gt;
	detector.setRangeImage(&amp;amp;rangeImage);&lt;br /&gt;
	// The support size influences how big the surface of interest will be,&lt;br /&gt;
	// when finding keypoints from the border information.&lt;br /&gt;
	detector.getParameters().support_size = 0.2f;&lt;br /&gt;
&lt;br /&gt;
	detector.compute(*keypoints);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the keypoints.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;NARF keypoints&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImage);&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; keypoints-&amp;gt;points.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		viewer.markPoint(keypoints-&amp;gt;points[i] % rangeImage.width,&lt;br /&gt;
						 keypoints-&amp;gt;points[i] / rangeImage.width,&lt;br /&gt;
						 // Set the color of the pixel to red (the background&lt;br /&gt;
						 // circle is already that color). All other parameters&lt;br /&gt;
						 // are left untouched, check the API for more options.&lt;br /&gt;
						 pcl::visualization::Vector3ub(255, 0, 0));&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_NARF_keypoints.png|thumb|center|400px|NARF keypoints found on the range image.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image, Border extractor, Support Size&lt;br /&gt;
* '''Output''': NARF keypoints&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/narf_keypoint_extraction.php How to extract NARF keypoint from a range image]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_narf_keypoint.html pcl::NarfKeypoint]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_NARF_keypoints.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing the descriptor===&lt;br /&gt;
&lt;br /&gt;
We have created the range image from a point cloud, and we have extracted the borders in order to find good keypoints. Now it is time to compute the NARF descriptor for each keypoint.&lt;br /&gt;
&lt;br /&gt;
The NARF descriptor encodes information about surface changes around a point. First, a local range patch is created around the point. It is like a small range image centered at that point, aligned with the normal (it would seem as if we were looking at the point along the normal). Then, a star pattern with ''n'' beams is overlaid onto the patch, also centered at the point. For every beam, a value is computed, that reflects how much the surface under it changes. The stronger the change is, and the closer to the center it is, the higher the final value will be. The ''n'' resulting values compose the final descriptor.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:NARF_descriptor.png|thumb|center|600px|Computing the NARF descriptor for a keypoint (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The descriptor as described so far is not invariant to rotations around the normal. To achieve that, the whole 360 degrees are binned into a histogram: the value of each bin is computed from the descriptor values according to the angle. The bin with the highest value is taken as the dominant orientation, and the descriptor is shifted accordingly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/narf_keypoint.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/narf_descriptor.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the keypoints' indices.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt;::Ptr keypoints(new pcl::PointCloud&amp;lt;int&amp;gt;);&lt;br /&gt;
	// Object for storing the NARF descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Narf36&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::Narf36&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = imageSizeX / 2.0f, centerY = imageSizeY / 2.0f;&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Extract the keypoints.&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor;&lt;br /&gt;
	pcl::NarfKeypoint detector(&amp;amp;borderExtractor);&lt;br /&gt;
	detector.setRangeImage(&amp;amp;rangeImage);&lt;br /&gt;
	detector.getParameters().support_size = 0.2f;&lt;br /&gt;
	detector.compute(*keypoints);&lt;br /&gt;
&lt;br /&gt;
	// The NARF estimator needs the indices in a vector, not a cloud.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; keypoints2;&lt;br /&gt;
	keypoints2.resize(keypoints-&amp;gt;points.size());&lt;br /&gt;
	for (unsigned int i = 0; i &amp;lt; keypoints-&amp;gt;size(); ++i)&lt;br /&gt;
		keypoints2[i] = keypoints-&amp;gt;points[i];&lt;br /&gt;
	// NARF estimation object.&lt;br /&gt;
	pcl::NarfDescriptor narf(&amp;amp;rangeImage, &amp;amp;keypoints2);&lt;br /&gt;
	// Support size: choose the same value you used for keypoint extraction.&lt;br /&gt;
	narf.getParameters().support_size = 0.2f;&lt;br /&gt;
	// If true, the rotation invariant version of NARF will be used. The histogram&lt;br /&gt;
	// will be shifted according to the dominant orientation to provide robustness to&lt;br /&gt;
	// rotations around the normal.&lt;br /&gt;
	narf.getParameters().rotation_invariant = true;&lt;br /&gt;
&lt;br /&gt;
	narf.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image, Key points, Support Size&lt;br /&gt;
* '''Output''': NARF descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/narf_feature_extraction.php How to extract NARF Features from a range image]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_narf_descriptor.html pcl::NarfDescriptor]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_NARF_descriptor.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RoPS==&lt;br /&gt;
&lt;br /&gt;
The Rotational Projection Statistics (RoPS) feature is a bit different from the other descriptors because it works with a triangle mesh, so a previous triangulation step is needed for generating this mesh from the cloud. Apart from that, most concepts are similar.&lt;br /&gt;
&lt;br /&gt;
In order to compute RoPS for a keypoint, the local surface is cropped according to a support radius, so only points and triangles lying inside are taken into account. Then, a local reference frame (LRF) is computed, giving the descriptor its rotational invariance. A coordinate system is created with the point as the origin, and the axes aligned with the LRF. Then, for every axis, several steps are performed.&lt;br /&gt;
&lt;br /&gt;
First, the local surface is rotated around the current axis; the rotation step is determined by a parameter that sets the number of rotations. Then, all points in the local surface are projected onto the XY, XZ and YZ planes. For each projection, statistical information about the distribution of the projected points is computed, and everything is concatenated to form the final descriptor.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types_conversion.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/gp3.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rops_estimation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;135&amp;gt; ROPS135;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing both the points and the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr cloudNormals(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
	// Object for storing the ROPS descriptor for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;ROPS135&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;ROPS135&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Perform triangulation.&lt;br /&gt;
	pcl::concatenateFields(*cloud, *normals, *cloudNormals);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointNormal&amp;gt;::Ptr kdtree2(new pcl::search::KdTree&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
	kdtree2-&amp;gt;setInputCloud(cloudNormals);&lt;br /&gt;
	pcl::GreedyProjectionTriangulation&amp;lt;pcl::PointNormal&amp;gt; triangulation;&lt;br /&gt;
	pcl::PolygonMesh triangles;&lt;br /&gt;
	triangulation.setSearchRadius(0.025);&lt;br /&gt;
	triangulation.setMu(2.5);&lt;br /&gt;
	triangulation.setMaximumNearestNeighbors(100);&lt;br /&gt;
	triangulation.setMaximumSurfaceAngle(M_PI / 4); // 45 degrees.&lt;br /&gt;
	triangulation.setNormalConsistency(false);&lt;br /&gt;
	triangulation.setMinimumAngle(M_PI / 18); // 10 degrees.&lt;br /&gt;
	triangulation.setMaximumAngle(2 * M_PI / 3); // 120 degrees.&lt;br /&gt;
	triangulation.setInputCloud(cloudNormals);&lt;br /&gt;
	triangulation.setSearchMethod(kdtree2);&lt;br /&gt;
	triangulation.reconstruct(triangles);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should only compute descriptors for chosen keypoints. It has&lt;br /&gt;
	// been omitted here for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// RoPs estimation object.&lt;br /&gt;
	pcl::ROPSEstimation&amp;lt;pcl::PointXYZ, ROPS135&amp;gt; rops;&lt;br /&gt;
	rops.setInputCloud(cloud);&lt;br /&gt;
	rops.setSearchMethod(kdtree);&lt;br /&gt;
	rops.setRadiusSearch(0.03);&lt;br /&gt;
	rops.setTriangles(triangles.polygons);&lt;br /&gt;
	// Number of partition bins that is used for distribution matrix calculation.&lt;br /&gt;
	rops.setNumberOfPartitionBins(5);&lt;br /&gt;
	// The greater the number of rotations is, the bigger the resulting descriptor.&lt;br /&gt;
	// Make sure to change the histogram size accordingly.&lt;br /&gt;
	rops.setNumberOfRotations(3);&lt;br /&gt;
	// Support radius that is used to crop the local surface of the point.&lt;br /&gt;
	rops.setSupportRadius(0.025);&lt;br /&gt;
&lt;br /&gt;
	rops.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Triangles, Search method, Support radius, Number of rotations, Number of partition bins&lt;br /&gt;
* '''Output''': RoPS descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://arxiv.org/pdf/1304.3192v1.pdf Rotational Projection Statistics for 3D Local Surface Description and Object Recognition] (Yulan Guo et al., 2013)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_r_o_p_s_estimation.html pcl::ROPSEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RoPS.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Global descriptors=&lt;br /&gt;
&lt;br /&gt;
Global descriptors encode object geometry. They are not computed for individual points, but for a whole cluster that represents an object. Because of this, a preprocessing step ([[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Segmentation|segmentation]]) is required, in order to retrieve possible candidates.&lt;br /&gt;
&lt;br /&gt;
Global descriptors are used for object recognition and classification, geometric analysis (object type, shape...), and pose estimation.&lt;br /&gt;
&lt;br /&gt;
Many local descriptors can also be used as global ones. This works with descriptors that use a radius to search for neighbors (as PFH does): the trick is to compute the descriptor for a single point of the object cluster, setting the radius to the maximum possible distance between any two points, so that all points in the cluster are considered neighbors.&lt;br /&gt;
&lt;br /&gt;
==VFH==&lt;br /&gt;
&lt;br /&gt;
The Viewpoint Feature Histogram is based on the FPFH. Because the latter is invariant to the object's pose, the authors extended it with information about the viewpoint. Also, the FPFH is estimated only once for the whole cluster, not for every point.&lt;br /&gt;
&lt;br /&gt;
The VFH is made up of two parts: a viewpoint direction component and an extended FPFH component. To compute the first one, the object's [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Computing_the_centroid|centroid]] is found: the point that results from averaging the X, Y and Z coordinates of all points. Then, the vector between the viewpoint (the position of the sensor) and this centroid is computed and normalized. Finally, for every point in the cluster, the angle between this vector and the point's normal is calculated, and the result is binned into a histogram. The vector is translated to each point when computing the angle because this makes the descriptor scale invariant.&lt;br /&gt;
&lt;br /&gt;
The second component is computed like the FPFH (that results in 3 histograms for the 3 angular features, α, φ and θ), with some differences: it is only computed for the centroid, using the computed viewpoint direction vector as its normal (as the point, obviously, does not have a normal), and setting all the cluster's points as neighbors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:VFH_viewpoint_component.png | Viewpoint component of the VFH (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).&lt;br /&gt;
File:VFH_extended_FPFH_component.png | Extended FPFH component of the VFH (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The resulting 4 histograms (1 for the viewpoint component, 3 for the extended FPFH component) are concatenated to build the final VFH descriptor. By default, the bins are normalized using the total number of points in the cluster. This makes the VFH descriptor invariant to scale.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:VFH_histogram.png|thumb|center|400px| VFH histogram (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The PCL implementation computes an additional fifth histogram with the distances of the cluster points to the centroid (the Shape Distribution Component, SDC), increasing the size of the output descriptor from 263 to 308. The SDC is taken from the CVFH descriptor that we will see in the next section, and makes the result more robust.&lt;br /&gt;
&lt;br /&gt;
The VFH of an already clustered object can be computed this way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the VFH descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// VFH estimation object.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Optionally, we can normalize the bins of the resulting histogram,&lt;br /&gt;
	// using the total number of points.&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	// Also, we can normalize the SDC with the maximum size found between&lt;br /&gt;
	// the centroid and any of the cluster's points.&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Because only one VFH descriptor is computed for the whole cluster, the size of the cloud object that stores the result will be 1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, [Normalize bins], [Normalize SDC]&lt;br /&gt;
* '''Output''': VFH descriptor&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/vfh_estimation.php Estimating VFH signatures for a set of points]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf Fast 3D Recognition and Pose Using the Viewpoint Feature Histogram] (Radu Bogdan Rusu et al., 2010)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_v_f_h_estimation.html pcl::VFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_VFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===CVFH===&lt;br /&gt;
&lt;br /&gt;
The original VFH descriptor is not robust to occlusion, other sensor artifacts, or measurement errors. If the object cluster is missing many points, the computed centroid will differ from the original one, altering the final descriptor and preventing a positive match from being found. Because of this, the Clustered Viewpoint Feature Histogram (CVFH) was introduced.&lt;br /&gt;
&lt;br /&gt;
The idea is very simple: instead of computing a single VFH histogram for the whole cluster, the object is first divided into stable, smooth regions using [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Region_growing | region-growing segmentation]], which enforces several constraints on the distances and normal differences of the points belonging to each region. Then, a VFH is computed for every region. Thanks to this, an object can be found in a scene as long as at least one of its regions is fully visible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:CVFH_occlusion.png | Typical occlusion issues in a point cloud (image from original paper).&lt;br /&gt;
File:CVFH_regions.png | Object regions computed for the CVFH (image from original paper).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additionally, a Shape Distribution Component (SDC) is also computed and included. It encodes information about the distribution of the points around the region's centroid, measuring their distances to it. The SDC makes it possible to differentiate objects with similar characteristics (size and normal distribution), such as two planar surfaces.&lt;br /&gt;
&lt;br /&gt;
The authors also proposed discarding the histogram normalization step performed in VFH. This makes the descriptor dependent on scale, so an object of a certain size will not match a bigger or smaller copy of itself. It also makes CVFH more robust to occlusion.&lt;br /&gt;
&lt;br /&gt;
CVFH is invariant to the camera roll angle, like most global descriptors. This is so because rotations about that camera axis do not change the observable geometry that descriptors are computed from, limiting the pose estimation to 5 DoF. The use of a Camera Roll Histogram (CRH) has been proposed to overcome this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/cvfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the CVFH descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// CVFH estimation object.&lt;br /&gt;
	pcl::CVFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; cvfh;&lt;br /&gt;
	cvfh.setInputCloud(object);&lt;br /&gt;
	cvfh.setInputNormals(normals);&lt;br /&gt;
	cvfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Set the maximum allowable deviation of the normals,&lt;br /&gt;
	// for the region segmentation step.&lt;br /&gt;
	cvfh.setEPSAngleThreshold(5.0 / 180.0 * M_PI); // 5 degrees.&lt;br /&gt;
	// Set the curvature threshold (maximum disparity between curvatures),&lt;br /&gt;
	// for the region segmentation step.&lt;br /&gt;
	cvfh.setCurvatureThreshold(1.0);&lt;br /&gt;
	// Set to true to normalize the bins of the resulting histogram,&lt;br /&gt;
	// using the total number of points. Note: enabling it will make CVFH&lt;br /&gt;
	// invariant to scale just like VFH, but the authors encourage the opposite.&lt;br /&gt;
	cvfh.setNormalizeBins(false);&lt;br /&gt;
&lt;br /&gt;
	cvfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can further customize the segmentation step with &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setClusterTolerance()&amp;quot;&amp;lt;/span&amp;gt; (which sets the maximum Euclidean distance between points in the same cluster) and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setMinPoints()&amp;quot;&amp;lt;/span&amp;gt;. The size of the output will be equal to the number of regions the object was divided into. Also, check the functions &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getCentroidClusters()&amp;quot;&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getCentroidNormalClusters()&amp;quot;&amp;lt;/span&amp;gt;: you can use them to retrieve information about the centroids used to compute the different CVFH descriptors.&lt;br /&gt;
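&lt;br /&gt;
For example, a hypothetical continuation of the snippet above might look like this (the parameter values are only illustrative, not recommendations):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Optional: tune the region segmentation step (illustrative values).&lt;br /&gt;
cvfh.setClusterTolerance(0.015); // Maximum Euclidean distance between points of the same region.&lt;br /&gt;
cvfh.setMinPoints(50); // Minimum number of points per region.&lt;br /&gt;
cvfh.compute(*descriptors);&lt;br /&gt;
&lt;br /&gt;
// Retrieve the centroid of every region (one per computed descriptor).&lt;br /&gt;
std::vector&amp;lt;Eigen::Vector3f, Eigen::aligned_allocator&amp;lt;Eigen::Vector3f&amp;gt; &amp;gt; centroids;&lt;br /&gt;
cvfh.getCentroidClusters(centroids);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;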
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Angle threshold, Curvature threshold, [Normalize bins], [Cluster tolerance], [Minimum points]&lt;br /&gt;
* '''Output''': CVFH descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6130296 CAD-Model Recognition and 6DOF Pose Estimation Using 3D Cues] (requires IEEE Xplore subscription) (Aitor Aldoma et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_c_v_f_h_estimation.html pcl::CVFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_CVFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====OUR-CVFH====&lt;br /&gt;
&lt;br /&gt;
The Oriented, Unique and Repeatable CVFH expands the previous descriptor, adding the computation of a unique reference frame to make it more robust.&lt;br /&gt;
&lt;br /&gt;
OUR-CVFH relies on Semi-Global Unique Reference Frames (SGURFs), which are repeatable coordinate systems computed for each region. Not only do they remove the invariance to camera roll and allow the 6 DoF pose to be extracted directly without additional steps, but they also improve the spatial descriptiveness.&lt;br /&gt;
&lt;br /&gt;
The first part of the computation is akin to CVFH, but after segmentation, the points in each region are filtered once more according to the difference between their normals and the region's average normal. This results in better shaped regions, improving the estimation of the Reference Frames (RFs).&lt;br /&gt;
&lt;br /&gt;
After this, the SGURF is computed for each region. Disambiguation is performed to decide the sign of the axes, according to the points' distribution. If this is not enough and the sign remains ambiguous, multiple RFs will need to be created to account for it. Finally, the OUR-CVFH descriptor is computed. The original Shape Distribution Component (SDC) is discarded, and the surface is now described according to the RFs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:OUR-CVFH.png|thumb|center|600px| SGURF frame and resulting histogram of a region (image from [http://vision.deis.unibo.it/fede/papers/dagm12.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/our_cvfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the OUR-CVFH descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// OUR-CVFH estimation object.&lt;br /&gt;
	pcl::OURCVFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; ourcvfh;&lt;br /&gt;
	ourcvfh.setInputCloud(object);&lt;br /&gt;
	ourcvfh.setInputNormals(normals);&lt;br /&gt;
	ourcvfh.setSearchMethod(kdtree);&lt;br /&gt;
	ourcvfh.setEPSAngleThreshold(5.0 / 180.0 * M_PI); // 5 degrees.&lt;br /&gt;
	ourcvfh.setCurvatureThreshold(1.0);&lt;br /&gt;
	ourcvfh.setNormalizeBins(false);&lt;br /&gt;
	// Set the minimum axis ratio between the SGURF axes. At the disambiguation phase,&lt;br /&gt;
	// this will decide if additional Reference Frames need to be created, if ambiguous.&lt;br /&gt;
	ourcvfh.setAxisRatio(0.8);&lt;br /&gt;
&lt;br /&gt;
	ourcvfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getTransforms()&amp;quot;&amp;lt;/span&amp;gt; function to get the transformations aligning the cloud to the corresponding SGURF.&lt;br /&gt;
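&lt;br /&gt;
A minimal sketch, continuing the example above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Retrieve the transformations that align the cloud with each region's SGURF.&lt;br /&gt;
std::vector&amp;lt;Eigen::Matrix4f, Eigen::aligned_allocator&amp;lt;Eigen::Matrix4f&amp;gt; &amp;gt; transforms;&lt;br /&gt;
ourcvfh.getTransforms(transforms);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;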
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Angle threshold, Curvature threshold, [Normalize bins], [Cluster tolerance], [Minimum points], [Axis ratio]&lt;br /&gt;
* '''Output''': OUR-CVFH descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/dagm12.pdf OUR-CVFH – Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram for Object Recognition and 6DOF Pose Estimation] (Aitor Aldoma et al., 2012)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_o_u_r_c_v_f_h_estimation.html pcl::OURCVFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_OUR-CVFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==ESF==&lt;br /&gt;
&lt;br /&gt;
The Ensemble of Shape Functions (ESF) is a combination of 3 different shape functions that describe certain properties of the cloud's points: distances, angles and areas. This descriptor is unique in that it does not require normal information. In fact, it does not need any preprocessing at all, as it is robust to noise and incomplete surfaces.&lt;br /&gt;
&lt;br /&gt;
The algorithm uses a voxel grid as an approximation of the real surface. It iterates through all the points in the cloud: for every iteration, 3 random points are chosen. For these points, the shape functions are computed:&lt;br /&gt;
&lt;br /&gt;
* '''D2''': this function computes the distances between point pairs (3 overall). Then, for every pair, it checks if the line that connects both points lies entirely inside the surface, entirely outside (crossing free space), or both. Depending on this, the distance value will be binned to one of three possible histograms: IN, OUT or MIXED.&lt;br /&gt;
* '''D2 ratio''': an additional histogram for the ratio between parts of the line inside the surface, and parts outside. This value will be 0 if the line is completely outside, 1 if completely inside, and some value in between if mixed.&lt;br /&gt;
* '''D3''': this computes the square root of the area of the triangle formed by the 3 points. Like D2, the result is also classified as IN, OUT or MIXED, each with its own histogram.&lt;br /&gt;
* '''A3''': this function computes the angle formed by the 3 points. The value is then binned depending on how the line opposite the angle is classified (once again, as IN, OUT or MIXED).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ESF.png|thumb|center|600px| ESF descriptor (image from [http://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf this paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After the loop is over, we are left with 10 subhistograms (IN, OUT and MIXED for D2, D3 and A3, and an additional one for the ratio). Each one has 64 bins, so the size of the final ESF descriptor is 640.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/esf.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the ESF descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ESFSignature640&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::ESFSignature640&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// ESF estimation object.&lt;br /&gt;
	pcl::ESFEstimation&amp;lt;pcl::PointXYZ, pcl::ESFSignature640&amp;gt; esf;&lt;br /&gt;
	esf.setInputCloud(object);&lt;br /&gt;
&lt;br /&gt;
	esf.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster)&lt;br /&gt;
* '''Output''': ESF descriptor&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6181760 Ensemble of shape functions for 3D object classification] (requires IEEE Xplore subscription) (Walter Wohlkinger and Markus Vincze, 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_e_s_f_estimation.html pcl::ESFEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_ESF.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==GFPFH==&lt;br /&gt;
&lt;br /&gt;
As you may have guessed, GFPFH stands for Global Fast Point Feature Histogram, the global version of the FPFH descriptor. GFPFH was designed for the task of helping a robot navigate its environment, having some context of the objects around it.&lt;br /&gt;
&lt;br /&gt;
The first step before being able to compute the descriptor is surface categorization. A set of logical primitives (the classes, or categories) is created, which depends on the type of objects we expect the robot to find in the scene. For example, if we know there will be a coffee mug, we create three: one for the handle, and two more for the outer and inner faces. Then, FPFH descriptors are computed, and everything is fed to a [https://en.wikipedia.org/wiki/Conditional_random_field Conditional Random Field] (CRF) algorithm. The CRF will label each surface with one of the previous categories, so we end up with a cloud where each point has been classified depending on the type of object (or object's region) it belongs to.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FPFH_CRF.png|thumb|center|300px| Classification of objects made with FPFH and CRF (image from [https://www.willowgarage.com/sites/default/files/iccv09.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now, the GFPFH descriptor can be computed from the result of the classification step. It will encode what the object is made of, so the robot can easily recognize it. First, an [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Octree | octree]] is created, dividing the object into voxel leaves. For every leaf, a set of probabilities is created, one for each class. Each one stores the probability of that leaf belonging to the class, and is computed from the number of points in that leaf that have been labelled as that class, and the total number of points. Then, for every pair of leaves in the octree, a line is cast connecting them. Every leaf in its path is checked for occupancy, and the result is stored in a histogram. If the leaf is empty (free space), a value of 0 is saved. Otherwise, the leaf probabilities are used.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:GFPFH.png|thumb|center|500px| Computing the GFPFH with a voxel grid (image from [https://www.willowgarage.com/sites/default/files/iccv09.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following code will compute the GFPFH for a cloud with label information. The categorization step is up to you, as it depends largely on the type of scene and the intended use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/gfpfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZL&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZL&amp;gt;);&lt;br /&gt;
	// Object for storing the GFPFH descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::GFPFHSignature16&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::GFPFHSignature16&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZL&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you should now perform classification on the cloud's points. See the&lt;br /&gt;
	// original paper for more details. For this example, we will now consider 4&lt;br /&gt;
	// different classes, and randomly label each point as one of them.&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; object-&amp;gt;points.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		object-&amp;gt;points[i].label = 1 + i % 4;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// GFPFH estimation object.&lt;br /&gt;
	pcl::GFPFHEstimation&amp;lt;pcl::PointXYZL, pcl::PointXYZL, pcl::GFPFHSignature16&amp;gt; gfpfh;&lt;br /&gt;
	gfpfh.setInputCloud(object);&lt;br /&gt;
	// Set the object that contains the labels for each point. Thanks to the&lt;br /&gt;
	// PointXYZL type, we can use the same object we store the cloud in.&lt;br /&gt;
	gfpfh.setInputLabels(object);&lt;br /&gt;
	// Set the size of the octree leaves to 1cm (cubic).&lt;br /&gt;
	gfpfh.setOctreeLeafSize(0.01);&lt;br /&gt;
	// Set the number of classes the cloud has been labelled with (default is 16).&lt;br /&gt;
	gfpfh.setNumberOfClasses(4);&lt;br /&gt;
&lt;br /&gt;
	gfpfh.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Labels, Number of classes, Leaf size&lt;br /&gt;
* '''Output''': GFPFH descriptor&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [https://www.willowgarage.com/sites/default/files/iccv09.pdf Detecting and Segmenting Objects for Mobile Manipulation] (Radu Bogdan Rusu et al., 2009)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_g_f_p_f_h_estimation.html pcl::GFPFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_GFPFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==GRSD==&lt;br /&gt;
&lt;br /&gt;
The global version of the Radius-based Surface Descriptor works in a similar fashion to GFPFH. A voxelization and a surface categorization step are performed beforehand, labelling every surface patch with a geometric category (plane, cylinder, edge, rim, sphere) obtained with RSD. Then, the whole cluster is classified into one of these categories, and the GRSD descriptor is computed from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:GRSD.png|thumb|center|400px| Classification of objects for GRSD and resulting histogram (image from [https://ias.in.tum.de/_media/spezial/bib/grsd10humanoids.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To compute it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/grsd.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the GRSD descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::GRSDSignature21&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::GRSDSignature21&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// GRSD estimation object.&lt;br /&gt;
	pcl::GRSDEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::GRSDSignature21&amp;gt; grsd;&lt;br /&gt;
	grsd.setInputCloud(cloud);&lt;br /&gt;
	grsd.setInputNormals(normals);&lt;br /&gt;
	grsd.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	grsd.setRadiusSearch(0.05);&lt;br /&gt;
	&lt;br /&gt;
	grsd.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Radius&lt;br /&gt;
* '''Output''': GRSD descriptor&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [https://ias.in.tum.de/_media/spezial/bib/grsd10humanoids.pdf Hierarchical Object Geometric Categorization and Appearance Classification for Mobile Manipulation] (Zoltan-Csaba Marton et al., 2010)&lt;br /&gt;
** [https://ias.cs.tum.edu/_media/spezial/bib/marton11ijrr.pdf Combined 2D-3D Categorization and Classification for Multimodal Perception Systems] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
** [http://www.mi.t.u-tokyo.ac.jp/top/downloadpublication/36 Voxelized Shape and Color Histograms for RGB-D] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_g_r_s_d_estimation.html pcl::GRSDEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_GRSD.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Saving and loading=&lt;br /&gt;
&lt;br /&gt;
You can save a descriptor to a file just [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Writing_to_file|like with any other cloud type]]. One caveat, though. If you are using a descriptor that has its own custom type, like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt;, everything will be OK. But with descriptors that do not (where you have to use &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::Histogram&amp;lt;&amp;gt;&amp;quot;&amp;lt;/span&amp;gt;), you will get this error: &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;POINT_TYPE_NOT_PROPERLY_REGISTERED&amp;quot;''&amp;lt;/span&amp;gt;. In order to save or load from a file, PCL's IO functions need to know about the number, type and size of the fields. To solve this, you will have to properly register a new point type for your descriptor. For example, this will work for the [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#RoPS|RoPS]] descriptor example we saw earlier:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;POINT_CLOUD_REGISTER_POINT_STRUCT(ROPS135,&lt;br /&gt;
                                  (float[135], histogram, histogram)&lt;br /&gt;
)&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the previous snippet to your code (adapted accordingly), and you will be able to save and load descriptors as usual.&lt;br /&gt;
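&lt;br /&gt;
For example, once the type is registered, something like the following should work for the descriptors computed in the RoPS example (the filename is just an example):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Save the RoPS descriptors computed earlier...&lt;br /&gt;
pcl::io::savePCDFile(&amp;quot;rops.pcd&amp;quot;, *descriptors);&lt;br /&gt;
&lt;br /&gt;
// ...and load them back later.&lt;br /&gt;
pcl::PointCloud&amp;lt;ROPS135&amp;gt;::Ptr loaded(new pcl::PointCloud&amp;lt;ROPS135&amp;gt;);&lt;br /&gt;
pcl::io::loadPCDFile(&amp;quot;rops.pcd&amp;quot;, *loaded);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;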
&lt;br /&gt;
=Visualization=&lt;br /&gt;
&lt;br /&gt;
Sometimes it is desirable to inspect a visual representation of a descriptor, perhaps to analyze the distribution of values over the different bins. Because descriptors are stored as histograms, this is trivial to do. PCL offers a couple of classes for it.&lt;br /&gt;
&lt;br /&gt;
==PCLHistogramVisualizer==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLHistogramVisualizer&amp;quot;&amp;lt;/span&amp;gt; is the simplest way to plot a histogram. The class has little functionality, but it does its job. Only one call is needed, passing the histogram and its size:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/histogram_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Clouds for storing everything.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Estimate VFH descriptor.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
&lt;br /&gt;
	// Plotter object.&lt;br /&gt;
	pcl::visualization::PCLHistogramVisualizer viewer;&lt;br /&gt;
	// We need to set the size of the descriptor beforehand.&lt;br /&gt;
	viewer.addFeatureHistogram(*descriptor, 308);&lt;br /&gt;
&lt;br /&gt;
	viewer.spin();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Histogram_visualizer_VFH.png|thumb|center|500px| VFH histogram seen with the Histogram Visualizer.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Descriptor, descriptor size, [Window size], [Background color], [Y axis range]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_histogram_visualizer.html pcl::visualization::PCLHistogramVisualizer]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_histogram_visualizer.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==PCLPlotter==&lt;br /&gt;
&lt;br /&gt;
This class has all the methods from &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLHistogramVisualizer&amp;quot;&amp;lt;/span&amp;gt; (which will be deprecated soon) plus a lot more features. The code is almost the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_plotter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Clouds for storing everything.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Estimate VFH descriptor.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
&lt;br /&gt;
	// Plotter object.&lt;br /&gt;
	pcl::visualization::PCLPlotter plotter;&lt;br /&gt;
	// We need to set the size of the descriptor beforehand.&lt;br /&gt;
	plotter.addFeatureHistogram(*descriptor, 308);&lt;br /&gt;
&lt;br /&gt;
	plotter.plot();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:PCL_plotter_VFH.png|thumb|center|500px| VFH histogram seen with the PCLPlotter class.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you have raw data (such as a vector of floats), you can use the [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_plotter.html#aaf23b2b1c2f91c517cfde387ee1b654e addHistogramData()] function to plot it as a histogram.&lt;br /&gt;
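&lt;br /&gt;
A minimal sketch (the data values are made up for illustration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;vector&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_plotter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Some raw values (hypothetical data).&lt;br /&gt;
	double values[] = { 1.0, 2.0, 2.0, 3.0, 5.0, 5.0, 5.0, 8.0 };&lt;br /&gt;
	std::vector&amp;lt;double&amp;gt; data(values, values + 8);&lt;br /&gt;
&lt;br /&gt;
	pcl::visualization::PCLPlotter plotter;&lt;br /&gt;
	// Bin the values into 10 bins and plot them as a histogram.&lt;br /&gt;
	plotter.addHistogramData(data, 10);&lt;br /&gt;
&lt;br /&gt;
	plotter.plot();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;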
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Descriptor, descriptor size, [Window size], [Background color], [Y axis range]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/pcl_plotter.php PCLPlotter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_plotter.html pcl::visualization::PCLPlotter]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_plotter.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==PCL Viewer==&lt;br /&gt;
&lt;br /&gt;
This program, included with PCL, will also let you open and visualize a saved descriptor. Internally, it uses [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#PCLPlotter|PCLPlotter]]. Because you can invoke the viewer straight from the command line, it comes in handy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_viewer &amp;lt;descriptor_file&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:PCL_viewer_VFH.png|thumb|center|500px| VFH histogram seen with ''pcl_viewer''.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_1:_Installing_and_testing&amp;diff=4753</id>
		<title>PCL/OpenNI tutorial 1: Installing and testing</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_1:_Installing_and_testing&amp;diff=4753"/>
				<updated>2015-11-05T16:53:34Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Kinect.jpg|thumb|right|200px|Microsoft Kinect device.]]&lt;br /&gt;
&lt;br /&gt;
This series of tutorials will explain the usage of a depth camera like [https://en.wikipedia.org/wiki/Kinect Kinect] for &amp;quot;serious&amp;quot; research purposes. As you may know, Kinect is in fact an affordable depth sensor, developed with technology from [https://en.wikipedia.org/wiki/PrimeSense PrimeSense] and based on the infrared structured light method. It also has a regular camera (which makes it an RGB-D device), a microphone and a motorized pivot. Its use is not limited to playing with an Xbox 360 console: you can plug it into a computer and use it like any other sensor. Many open-source drivers and frameworks are available.&lt;br /&gt;
&lt;br /&gt;
Since its release in November 2010, it has gained a lot of popularity, especially among the scientific community. Many researchers have procured one because, despite its low cost (about $100), it has proven to be a powerful solution for depth-sensing projects. Current research focuses on real-time surface mapping, object recognition and tracking, and localization. Impressive results (like the [http://research.microsoft.com/en-us/projects/surfacerecon/ KinectFusion] project from Microsoft) are already possible.&lt;br /&gt;
&lt;br /&gt;
The new [https://en.wikipedia.org/wiki/Xbox_One Xbox One] ships with an upgraded version, [https://en.wikipedia.org/wiki/Kinect_for_Xbox_One Kinect v2], with enhanced resolution that is able to detect your facial expression, measure your heart rate, and track every one of your fingers. A PC development-ready version ([https://en.wikipedia.org/wiki/Kinect_for_Xbox_One#Kinect_for_Windows Kinect for Windows v2]) was released in July 2014, but it could only be used with the official Windows SDK (open source support [https://github.com/OpenKinect/libfreenect2 exists] but is still young). In October 2014, an adapter that lets you plug the standard Kinect v2 into a PC was released, so the development version of the sensor was discontinued in April 2015. Now you can just buy the [http://www.microsoftstore.com/store/msusa/en_US/pdp/Kinect-for-Xbox-One/productID.307499400 standard one] for the console plus the [http://www.microsoftstore.com/store/msusa/en_US/pdp/Kinect-Adapter-for-Windows/productID.308803600 adapter].&lt;br /&gt;
&lt;br /&gt;
I will explain the installation and usage of one of the &amp;quot;legacy&amp;quot; Kinect 1.0 devices with a common PC, and the possibilities it offers. I will do it in an easy-to-understand way, intended for students who have just acquired one and want to start from scratch.&lt;br /&gt;
&lt;br /&gt;
Keep in mind that the software we are going to use (the Point Cloud Library and OpenNI drivers) will also let you use any other device like the [https://www.asus.com/3D-Sensor/Xtion_PRO/ Xtion PRO] or [https://www.asus.com/3D-Sensor/Xtion_PRO_LIVE/ Xtion PRO LIVE] from ASUS (the PRO only has a depth sensor; the PRO LIVE is an RGB-D camera) without changing a line of code.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: The tutorials are written for Linux platforms. Also, 64-bit versions seem to work better than 32-bit.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Requirements=&lt;br /&gt;
&lt;br /&gt;
You will need the following:&lt;br /&gt;
&lt;br /&gt;
* A common Kinect device, out of the box. You can buy it at your local electronics shop, or online. It also includes a free copy of ''[https://en.wikipedia.org/wiki/Kinect_Adventures! Kinect Adventures!]'', which is useless if you do not own the console. Microsoft has released a ''Kinect for Windows'' device, a normal-looking Kinect that is no longer compatible with the Xbox 360 and only works with their official SDK, intended for developers only. Also, as I stated earlier, you can use an ASUS Xtion interchangeably.&lt;br /&gt;
* A computer running Linux (Debian or Ubuntu preferably).&lt;br /&gt;
* A medium-sized room. Kinect has some limitations for depth measurement: 40 cm minimum, 8 m maximum (make it 6).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: ''Kinect for Windows'' may have problems working with open source drivers on Linux.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Connecting everything=&lt;br /&gt;
&lt;br /&gt;
Kinect does not work with a common USB port. Its power consumption is a bit higher because of the motor, so Microsoft came up with a connector that combines USB and power supply. Old Xbox 360 models needed a special adapter; the newer S models already have this port. Luckily, Kinect comes with the official adapter out of the box (otherwise, you will have to [http://www.amazon.com/Power-Supply-Cable-Kinect-Xbox-360/dp/B004S7GA46/ref=sr_1_1?ie=UTF8&amp;amp;qid=1378288327&amp;amp;sr=8-1 buy one]).&lt;br /&gt;
&lt;br /&gt;
Just plug the adapter into any power socket, and the USB connector into your computer. Let's check that it is detected by typing this in a terminal:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;lsusb&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output should list the following devices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;Bus 001 Device 005: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor&lt;br /&gt;
Bus 001 Device 006: ID 045e:02ad Microsoft Corp. Xbox NUI Audio&lt;br /&gt;
Bus 001 Device 007: ID 045e:02ae Microsoft Corp. Xbox NUI Camera&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are using an Xtion, you should see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;Bus 001 Device 004: ID 1d27:0601 ASUS&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;0601&amp;quot; identifies the new Xtion model, &amp;quot;0600&amp;quot; the older one. Both should work the same. But try to avoid [[PCL/OpenNI_troubleshooting#ASUS_Xtion_PRO|USB 3.0 ports]]!&lt;br /&gt;
&lt;br /&gt;
=Installing the software=&lt;br /&gt;
&lt;br /&gt;
There is more than one way to get your Kinect working on your PC, and start developing applications for it:&lt;br /&gt;
&lt;br /&gt;
* [https://www.microsoft.com/en-us/download/details.aspx?id=40278 Kinect for Windows SDK] and [https://www.microsoft.com/en-us/download/details.aspx?id=40276 Developer Toolkit]: released on June 16, 2011 as a non-commercial SDK intended for application development. Version 1.8, the last there will ever be now that Kinect v2 is out, was released in September 2013. Because it comes from Microsoft, it is obviously the easiest way to get everything working. Sadly, there is no Linux version.&lt;br /&gt;
* [https://github.com/OpenKinect/libfreenect libfreenect] library: from the [http://openkinect.org/wiki/Main_Page OpenKinect] project, it is intended to be a free and open source alternative to the official drivers. libfreenect is used by projects like [https://github.com/ofTheo/ofxKinect ofxKinect], an addon for the [http://www.openframeworks.cc/ openFrameworks] toolkit (and as of version 0.8, included in the core package) that runs on Linux and OS X. ofxKinect packs a nice example application to show the RGB and point cloud taken from Kinect.&lt;br /&gt;
* [https://github.com/PrimeSense/Sensor PrimeSense drivers]: they were released as open source after the [https://en.wikipedia.org/wiki/OpenNI OpenNI] project was created, along with the motion tracking middleware, ''NITE'', and the SDK. NI stands for Natural Interaction, and the project tried to enforce a common standard for human input using Kinect-like sensors. These official drivers are used by [http://wiki.ros.org/openni_kinect ROS] (the Robot Operating System, a massive collection of libraries and tools for robotic researchers) and [http://pointclouds.org/ PCL] (the Point Cloud Library, with everything needed for 3D point cloud processing). Sadly, version 2.0 of the OpenNI SDK dropped support for Kinect on Linux due to licensing issues, so for now PCL relies on legacy 1.x versions. Also, Apple bought PrimeSense in November 2013, and in April 2014 OpenNI's webpage was closed. The source is now being maintained by [http://structure.io/openni a third party].&lt;br /&gt;
* [https://github.com/avin2/SensorKinect SensorKinect]: a modified version of the official PrimeSense drivers, used for example by [https://github.com/gameoverhack/ofxOpenNI ofxOpenNI] (another openFrameworks addon). Last updated in 2012.&lt;br /&gt;
&lt;br /&gt;
For this tutorial, we are going to use PCL with the OpenNI drivers, so owners of an Xtion can also get it to work.&lt;br /&gt;
&lt;br /&gt;
==Precompiled PCL for Ubuntu==&lt;br /&gt;
&lt;br /&gt;
If you have a valid [http://wiki.ros.org/ROS/Installation installation of ROS] (through their repository), you do not have to install anything. Both the OpenNI and PrimeSense drivers, as well as PCL, should already be installed. You can check it with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install libpcl-1.7-all libpcl-1.7-all-dev libopenni-dev libopenni-sensor-primesense-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The previous command should state that all packages are already installed (change the PCL version number as needed), or install them if not. If you get an error about overwriting some file, check [[PCL/OpenNI_troubleshooting#Trying_to_overwrite_X.2C_which_is_also_in_package_Y|this]].&lt;br /&gt;
&lt;br /&gt;
In case you do not want ROS, there is a [https://launchpad.net/~v-launchpad-jochen-sprickerhof-de/+archive/ubuntu/pcl PPA] (Personal Package Archive, a private repository) which has everything we need. Add it to your sources, and install everything:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo add-apt-repository ppa:v-launchpad-jochen-sprickerhof-de/pcl&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install build-essential cmake libpcl1.7 libpcl-dev pcl-tools&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Trying to mix ROS and PCL repositories and packages can cause some errors, so choose one of them and stick with it. Check the [[PCL/OpenNI troubleshooting]] page, because your Kinect may not work by default on 32-bit systems.&lt;br /&gt;
&lt;br /&gt;
==Compiling PCL from source==&lt;br /&gt;
&lt;br /&gt;
For Linux distributions without a precompiled version of PCL, you will need to compile it yourself. This actually has an advantage: you can customize the build options and choose what you want. Also, the resulting binaries and libraries should be a bit faster. And you always get the latest version! The instructions are [http://pointclouds.org/documentation/tutorials/compiling_pcl_posix.php here], but the steps are easy, so I will show them to you.&lt;br /&gt;
&lt;br /&gt;
===Installing the dependencies===&lt;br /&gt;
&lt;br /&gt;
Some of PCL's dependencies can be installed via the package manager. Others will require additional work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash enclose=&amp;quot;div&amp;quot;&amp;gt;sudo apt-get install build-essential cmake cmake-curses-gui libboost-all-dev libeigen3-dev libflann-dev libvtk5-dev libvtk5-qt4-dev libglew-dev libxmu-dev libsuitesparse-dev libqhull-dev libpcap-dev libxmu-dev libxi-dev libgtest-dev libqt4-opengl-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The trunk (1.8) version of PCL adds support for [https://en.wikipedia.org/wiki/VTK VTK] 6 and [https://en.wikipedia.org/wiki/Qt_%28software%29 Qt] 5, so if you want to use them, you must install the following packages (say yes if you are asked to remove VTK 5 and Qt 4 packages):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash enclose=&amp;quot;div&amp;quot;&amp;gt;sudo apt-get install libvtk6-dev libqt5opengl5-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
PCL will not build if you mix them, e.g., using VTK 5 with Qt 5 will give you an error halfway through the compilation.&lt;br /&gt;
&lt;br /&gt;
====JDK====&lt;br /&gt;
&lt;br /&gt;
OpenNI requires Sun's official JDK (Java Development Kit), which is no longer available in the official apt repositories. You can use [http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html unofficial ones], or do it manually. For the latter, go to the [http://www.oracle.com/technetwork/java/javase/downloads/index.html Java SE downloads] page (SE means Standard Edition) and download the latest version (e.g., &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''jdk-8u66-linux-x64.tar.gz''&amp;lt;/span&amp;gt;). Extract it, then move the contents to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/lib/jvm/''&amp;lt;/span&amp;gt; so it is available system-wide:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo mkdir -p /usr/lib/jvm/&lt;br /&gt;
sudo cp -r jdk1.8.0_66/ /usr/lib/jvm/&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, make it the default choice to compile and run Java programs. Remember to change the version number as needed!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo update-alternatives --install &amp;quot;/usr/bin/java&amp;quot; &amp;quot;java&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/java&amp;quot; 1&lt;br /&gt;
sudo update-alternatives --install &amp;quot;/usr/bin/javac&amp;quot; &amp;quot;javac&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/javac&amp;quot; 1&lt;br /&gt;
sudo update-alternatives --install &amp;quot;/usr/bin/jar&amp;quot; &amp;quot;jar&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/jar&amp;quot; 1&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To be sure, use the following commands to manually select the correct option, in case there is more than one choice:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo update-alternatives --config java&lt;br /&gt;
sudo update-alternatives --config javac&lt;br /&gt;
sudo update-alternatives --config jar&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are still not sure, run them and display the version, making sure it is the one you installed:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;java -version&lt;br /&gt;
javac -version&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sun's JDK is now installed.&lt;br /&gt;
&lt;br /&gt;
====OpenNI====&lt;br /&gt;
&lt;br /&gt;
PCL uses OpenNI and the PrimeSense drivers to get data from devices like the Kinect or Xtion. It is optional, but it would not make much sense not to install it, would it? If you are using Ubuntu, add the [[#Precompiled PCL for Ubuntu | PPA]] and install &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-dev''&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-sensor-primesense-dev''&amp;lt;/span&amp;gt;, which should already be the case. Otherwise, fetch the [https://github.com/OpenNI/OpenNI OpenNI] and [https://github.com/PrimeSense/Sensor PrimeSense Sensor] sources from GitHub (download them as a .zip; the link is on the right). Extract them, and install the dependencies:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install python libusb-1.0-0-dev freeglut3-dev doxygen graphviz&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Go to the directory where you extracted OpenNI (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''OpenNI-master/''&amp;lt;/span&amp;gt; for me), and open a terminal in the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/CreateRedist/''&amp;lt;/span&amp;gt; subdirectory. Issue:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./RedistMaker&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When it finishes, and if there are no errors (check the [[PCL/OpenNI troubleshooting]] page if you get any), go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/Redist/OpenNI-Bin-Dev-Linux-x64-v1.5.7.10/''&amp;lt;/span&amp;gt; (or your equivalent), and install (you must be root):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ./install.sh&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, go to the directory where you extracted the PrimeSense drivers (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Sensor-master/''&amp;lt;/span&amp;gt; for me), and repeat the exact same procedure (go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/CreateRedist/''&amp;lt;/span&amp;gt;, issue &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''./RedistMaker''&amp;lt;/span&amp;gt;, go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/Redist/Sensor-Bin-Linux-x64-v5.1.6.6/''&amp;lt;/span&amp;gt;, issue &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''sudo ./install.sh''&amp;lt;/span&amp;gt;). Congratulations, you have now installed OpenNI.&lt;br /&gt;
&lt;br /&gt;
====CUDA====&lt;br /&gt;
&lt;br /&gt;
Like OpenNI, [https://en.wikipedia.org/wiki/CUDA nVidia CUDA] is an optional dependency that will allow PCL to use your GPU (Graphics Processing Unit, that is, your graphics card) for certain computations. It is mandatory for tools like KinFu (do not bother unless you have at least a 400-series card with 1.5 GB of VRAM).&lt;br /&gt;
&lt;br /&gt;
Some distributions provide packages for CUDA in the repositories. For example, in Ubuntu:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install nvidia-cuda-dev nvidia-cuda-toolkit&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to install it manually (which is incompatible with the previous method), go to the [https://developer.nvidia.com/cuda-downloads CUDA downloads] page, which is self-explanatory, and get the small .deb file or the huge .run toolkit/SDK installer file for your system (you should have installed the nVidia drivers already, but the installer will also do it for you if needed).&lt;br /&gt;
&lt;br /&gt;
If you chose the .deb file, install it using the method of your choice, like the Gdebi package manager, or through the console:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo dpkg -i &amp;lt;package.deb&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The .deb does not contain the actual CUDA files; it just adds nVidia's repository to your software lists. Now you must install everything:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get update&lt;br /&gt;
sudo apt-get install cuda&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If, on the other hand, you downloaded the .run file, give it execution permissions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;chmod +x cuda_7.0.28_linux.run&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And install it. You can use the default options, although if you have a working nVidia graphics driver for your card, you may want to say &amp;quot;no&amp;quot; when the installer offers to install it for you:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ./cuda_7.0.28_linux.run&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The environment variables need to be changed so your system can find CUDA's libraries and binaries. Open &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''/etc/ld.so.conf''&amp;lt;/span&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo nano /etc/ld.so.conf&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And append one of these two lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;/usr/local/cuda/lib64 # Add this on 64-bit only.&lt;br /&gt;
/usr/local/cuda/lib # Add this on 32-bit only.&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+O'''&amp;lt;/span&amp;gt; and Enter, exit with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+X'''&amp;lt;/span&amp;gt;. Reload the cache of the dynamic linker with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ldconfig&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, append CUDA's bin directory to your PATH. Do this by editing your local &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''.bashrc''&amp;lt;/span&amp;gt; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;nano ~/.bashrc&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And append this line:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;export PATH=$PATH:/usr/local/cuda/bin&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CUDA is now installed.&lt;br /&gt;
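&lt;br /&gt;
You can verify the setup by asking the CUDA compiler for its version (open a new terminal first, so the updated PATH is loaded):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;nvcc --version&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;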
&lt;br /&gt;
===Getting the source===&lt;br /&gt;
&lt;br /&gt;
Every dependency is installed, so it is time to download PCL's source code. First, you must choose whether to install the stable or the experimental branch of PCL. The stable branch is the latest official release and is guaranteed to work without problems. The experimental branch may occasionally give you a compilation error, but it offers some interesting features that stable users will have to wait some months for. Apart from that, both are built the same way.&lt;br /&gt;
&lt;br /&gt;
To get the stable version, go to the [https://github.com/PointCloudLibrary/pcl/releases downloads] page, get &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''pcl-pcl-1.7.2.tar.gz''&amp;lt;/span&amp;gt; or whatever the latest release is, and extract it somewhere. For the experimental version, use Git:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install git&lt;br /&gt;
git clone https://github.com/PointCloudLibrary/pcl PCL-trunk-Source&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The compiled trunk version of PCL will take up more than 8 GB, so make sure you put the source folder on a partition with enough free space!&lt;br /&gt;
&lt;br /&gt;
===Compiling===&lt;br /&gt;
&lt;br /&gt;
Go to the PCL source directory (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''pcl-pcl-1.7.2/''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''PCL-trunk-Source/''&amp;lt;/span&amp;gt;), and create a new subdirectory to keep the build files in:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;mkdir build&lt;br /&gt;
cd build&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now it is time to configure the project using CMake. We will tell it to build in Release mode (fully optimized, no debug capabilities) for now, and customize the rest of the options later:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake -DCMAKE_BUILD_TYPE=Release ..&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CMake should be able to find every dependency, and thus build every subsystem except for the ones marked as &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Disabled by default&amp;quot;''&amp;lt;/span&amp;gt;. If you are happy with that, you can build now; otherwise, let's invoke CMake's curses interface to change a couple of things (mind the final dot):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;ccmake .&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Ccmake_GUI_PCL.png|thumb|center|900px|Interface of ''ccmake''.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here you can change the build options. The program usage can be found at the bottom of the screen. Try turning all functionality &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;ON&amp;quot;&amp;lt;/span&amp;gt;. The most important thing, in case you want to use CUDA, is to enable it and give CMake the path to your SDK. Go to the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CUDA_SDK_ROOT_DIR&amp;quot;&amp;lt;/span&amp;gt; option and enter the correct path (probably &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/local/cuda/''&amp;lt;/span&amp;gt; or similar).&lt;br /&gt;
&lt;br /&gt;
When you are done, press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; to configure and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''G'''&amp;lt;/span&amp;gt; to generate and exit the tool. Sometimes, the options you change can activate previously omitted parameters, or prompt some warning text. Just press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''E'''&amp;lt;/span&amp;gt; when you are finished reading the message, and keep pressing &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; until it lets you generate (new parameters will be marked with an asterisk, so you can check them and decide whether or not you want further customization).&lt;br /&gt;
&lt;br /&gt;
If you are done configuring, it is time to build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;make&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: Additionally, you can append the parameter -jX to speed up the compilation, X being the number of cores or processors of your PC, plus one.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
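&lt;br /&gt;
For example, on a quad-core machine, the build command would be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;make -j5&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;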
&lt;br /&gt;
Remember that, at any time, you can manually force the project to be reconfigured and built from scratch by emptying the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''build/''&amp;lt;/span&amp;gt; directory with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;rm -rf ./*&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Installing===&lt;br /&gt;
&lt;br /&gt;
It will take some time to compile PCL (up to a few hours if your PC is not powerful enough). When it is finished, install it system-wide with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo make install&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should then reboot and proceed to the next section to see whether your computer now recognizes (and uses) your Kinect device.&lt;br /&gt;
&lt;br /&gt;
=Testing (OpenNI viewer)=&lt;br /&gt;
&lt;br /&gt;
We are going to write a simple example program that will fetch data from the Kinect or Xtion and present it to the user, using the PCL library. It will also allow you to save the current frame (as a point cloud) to disk. If you feel lazy, you can download it [http://robotica.unileon.es/~victorm/PCL_openni_viewer.tar.gz here], and skip the next two sections. Otherwise, create a new directory anywhere on your hard disk and proceed.&lt;br /&gt;
&lt;br /&gt;
==CMakeLists.txt==&lt;br /&gt;
&lt;br /&gt;
Inside that directory, create a new text file named &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''CMakeLists.txt''&amp;lt;/span&amp;gt;. PCL-based programs use the CMake build system, too. Open it with any editor and paste the following content:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CMake&amp;gt;cmake_minimum_required(VERSION 2.8 FATAL_ERROR)&lt;br /&gt;
 &lt;br /&gt;
project(PCL_openni_viewer)&lt;br /&gt;
 &lt;br /&gt;
find_package(PCL 1.7 REQUIRED)&lt;br /&gt;
 &lt;br /&gt;
include_directories(${PCL_INCLUDE_DIRS})&lt;br /&gt;
link_directories(${PCL_LIBRARY_DIRS})&lt;br /&gt;
add_definitions(${PCL_DEFINITIONS})&lt;br /&gt;
 &lt;br /&gt;
set(PCL_BUILD_TYPE Release)&lt;br /&gt;
 &lt;br /&gt;
file(GLOB PCL_openni_viewer_SRC&lt;br /&gt;
    &amp;quot;src/*.h&amp;quot;&lt;br /&gt;
    &amp;quot;src/*.cpp&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
add_executable(openniViewer ${PCL_openni_viewer_SRC})&lt;br /&gt;
 &lt;br /&gt;
target_link_libraries (openniViewer ${PCL_LIBRARIES})&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CMake syntax is quite self-explanatory. We ask for a CMake version 2.8 installation, minimum. We declare a new project named &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCL_openni_viewer&amp;quot;&amp;lt;/span&amp;gt;. We tell CMake to check for the presence of the PCL library development files, version 1.7. If our system cannot meet the CMake and PCL version requirements, the process will fail.&lt;br /&gt;
&lt;br /&gt;
Next, we feed the compiler and linker the directories where PCL includes and libraries can be found, and the defined symbols. We tell CMake to use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Release&amp;quot;&amp;lt;/span&amp;gt; build type, which will activate certain optimizations depending on the compiler we use. Other build types are available, like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Debug&amp;quot;&amp;lt;/span&amp;gt;, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;MinSizeRel&amp;quot;&amp;lt;/span&amp;gt;, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;RelWithDebInfo&amp;quot;&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Finally, we create a variable, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCL_openni_viewer_SRC&amp;quot;&amp;lt;/span&amp;gt;, that will store the list of files to be compiled (though we will only have one). We create a new binary to be compiled from these source files, and we link it with the PCL library.&lt;br /&gt;
&lt;br /&gt;
Check the [https://cmake.org/Wiki/CMake_Useful_Variables CMake help] for more interesting options.&lt;br /&gt;
&lt;br /&gt;
==main.cpp==&lt;br /&gt;
&lt;br /&gt;
We told CMake it could find the source files in a &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''src/''&amp;lt;/span&amp;gt; subdirectory, so let's keep our word and create it. Then, add a new &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''main.cpp''&amp;lt;/span&amp;gt; file inside and paste the following lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Original code by Geoffrey Biggs, taken from the PCL tutorial in&lt;br /&gt;
// http://pointclouds.org/documentation/tutorials/pcl_visualizer.php&lt;br /&gt;
&lt;br /&gt;
// Simple OpenNI viewer that also allows writing the current scene to a .pcd file&lt;br /&gt;
// when pressing SPACE.&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;pcl/io/openni_grabber.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/console/parse.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
using namespace std;&lt;br /&gt;
using namespace pcl;&lt;br /&gt;
&lt;br /&gt;
PointCloud&amp;lt;PointXYZRGBA&amp;gt;::Ptr cloudptr(new PointCloud&amp;lt;PointXYZRGBA&amp;gt;); // A cloud that will store color info.&lt;br /&gt;
PointCloud&amp;lt;PointXYZ&amp;gt;::Ptr fallbackCloud(new PointCloud&amp;lt;PointXYZ&amp;gt;);    // A fallback cloud with just depth data.&lt;br /&gt;
boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt; viewer;                 // Point cloud viewer object.&lt;br /&gt;
Grabber* openniGrabber;                                               // OpenNI grabber that takes data from the device.&lt;br /&gt;
unsigned int filesSaved = 0;                                          // For the numbering of the clouds saved to disk.&lt;br /&gt;
bool saveCloud(false), noColor(false);                                // Program control.&lt;br /&gt;
&lt;br /&gt;
void&lt;br /&gt;
printUsage(const char* programName)&lt;br /&gt;
{&lt;br /&gt;
	cout &amp;lt;&amp;lt; &amp;quot;Usage: &amp;quot; &amp;lt;&amp;lt; programName &amp;lt;&amp;lt; &amp;quot; [options]&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;Options:\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t&amp;lt;none&amp;gt;     start capturing from an OpenNI device.\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t-v FILE    visualize the given .pcd file.\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t-h         shows this help.\n&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// This function is called every time the device has new data.&lt;br /&gt;
void&lt;br /&gt;
grabberCallback(const PointCloud&amp;lt;PointXYZRGBA&amp;gt;::ConstPtr&amp;amp; cloud)&lt;br /&gt;
{&lt;br /&gt;
	if (! viewer-&amp;gt;wasStopped())&lt;br /&gt;
		viewer-&amp;gt;showCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	if (saveCloud)&lt;br /&gt;
	{&lt;br /&gt;
		stringstream stream;&lt;br /&gt;
		stream &amp;lt;&amp;lt; &amp;quot;inputCloud&amp;quot; &amp;lt;&amp;lt; filesSaved &amp;lt;&amp;lt; &amp;quot;.pcd&amp;quot;;&lt;br /&gt;
		string filename = stream.str();&lt;br /&gt;
		if (io::savePCDFile(filename, *cloud, true) == 0)&lt;br /&gt;
		{&lt;br /&gt;
			filesSaved++;&lt;br /&gt;
			cout &amp;lt;&amp;lt; &amp;quot;Saved &amp;quot; &amp;lt;&amp;lt; filename &amp;lt;&amp;lt; &amp;quot;.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		}&lt;br /&gt;
		else PCL_ERROR(&amp;quot;Problem saving %s.\n&amp;quot;, filename.c_str());&lt;br /&gt;
&lt;br /&gt;
		saveCloud = false;&lt;br /&gt;
	}&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// For detecting when SPACE is pressed.&lt;br /&gt;
void&lt;br /&gt;
keyboardEventOccurred(const visualization::KeyboardEvent&amp;amp; event,&lt;br /&gt;
					  void* nothing)&lt;br /&gt;
{&lt;br /&gt;
	if (event.getKeySym() == &amp;quot;space&amp;quot; &amp;amp;&amp;amp; event.keyDown())&lt;br /&gt;
		saveCloud = true;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Creates, initializes and returns a new viewer.&lt;br /&gt;
boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt;&lt;br /&gt;
createViewer()&lt;br /&gt;
{&lt;br /&gt;
	boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt; v&lt;br /&gt;
	(new visualization::CloudViewer(&amp;quot;OpenNI viewer&amp;quot;));&lt;br /&gt;
	v-&amp;gt;registerKeyboardCallback(keyboardEventOccurred);&lt;br /&gt;
&lt;br /&gt;
	return (v);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	if (console::find_argument(argc, argv, &amp;quot;-h&amp;quot;) &amp;gt;= 0)&lt;br /&gt;
	{&lt;br /&gt;
		printUsage(argv[0]);&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	bool justVisualize(false);&lt;br /&gt;
	string filename;&lt;br /&gt;
	if (console::find_argument(argc, argv, &amp;quot;-v&amp;quot;) &amp;gt;= 0)&lt;br /&gt;
	{&lt;br /&gt;
		if (argc != 3)&lt;br /&gt;
		{&lt;br /&gt;
			printUsage(argv[0]);&lt;br /&gt;
			return -1;&lt;br /&gt;
		}&lt;br /&gt;
&lt;br /&gt;
		filename = argv[2];&lt;br /&gt;
		justVisualize = true;&lt;br /&gt;
	}&lt;br /&gt;
	else if (argc != 1)&lt;br /&gt;
	{&lt;br /&gt;
		printUsage(argv[0]);&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// First mode, open and show a cloud from disk.&lt;br /&gt;
	if (justVisualize)&lt;br /&gt;
	{&lt;br /&gt;
		// Try with color information...&lt;br /&gt;
		try&lt;br /&gt;
		{&lt;br /&gt;
			io::loadPCDFile&amp;lt;PointXYZRGBA&amp;gt;(filename.c_str(), *cloudptr);&lt;br /&gt;
		}&lt;br /&gt;
		catch (const PCLException&amp;amp; e1)&lt;br /&gt;
		{&lt;br /&gt;
			try&lt;br /&gt;
			{&lt;br /&gt;
				// ...and if it fails, fall back to just depth.&lt;br /&gt;
				io::loadPCDFile&amp;lt;PointXYZ&amp;gt;(filename.c_str(), *fallbackCloud);&lt;br /&gt;
			}&lt;br /&gt;
			catch (const PCLException&amp;amp; e2)&lt;br /&gt;
			{&lt;br /&gt;
				return -1;&lt;br /&gt;
			}&lt;br /&gt;
&lt;br /&gt;
			noColor = true;&lt;br /&gt;
		}&lt;br /&gt;
&lt;br /&gt;
		cout &amp;lt;&amp;lt; &amp;quot;Loaded &amp;quot; &amp;lt;&amp;lt; filename &amp;lt;&amp;lt; &amp;quot;.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		if (noColor)&lt;br /&gt;
			cout &amp;lt;&amp;lt; &amp;quot;This cloud has no RGBA color information present.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		else cout &amp;lt;&amp;lt; &amp;quot;This cloud has RGBA color information present.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
	}&lt;br /&gt;
	// Second mode, start fetching and displaying frames from the device.&lt;br /&gt;
	else&lt;br /&gt;
	{&lt;br /&gt;
		openniGrabber = new OpenNIGrabber();&lt;br /&gt;
		if (openniGrabber == 0)&lt;br /&gt;
			return -1;&lt;br /&gt;
		boost::function&amp;lt;void (const PointCloud&amp;lt;PointXYZRGBA&amp;gt;::ConstPtr&amp;amp;)&amp;gt; f =&lt;br /&gt;
			boost::bind(&amp;amp;grabberCallback, _1);&lt;br /&gt;
		openniGrabber-&amp;gt;registerCallback(f);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	viewer = createViewer();&lt;br /&gt;
&lt;br /&gt;
	if (justVisualize)&lt;br /&gt;
	{&lt;br /&gt;
		if (noColor)&lt;br /&gt;
			viewer-&amp;gt;showCloud(fallbackCloud);&lt;br /&gt;
		else viewer-&amp;gt;showCloud(cloudptr);&lt;br /&gt;
	}&lt;br /&gt;
	else openniGrabber-&amp;gt;start();&lt;br /&gt;
&lt;br /&gt;
	// Main loop.&lt;br /&gt;
	while (! viewer-&amp;gt;wasStopped())&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::seconds(1));&lt;br /&gt;
&lt;br /&gt;
	if (! justVisualize)&lt;br /&gt;
		openniGrabber-&amp;gt;stop();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save and close.&lt;br /&gt;
&lt;br /&gt;
==Compiling==&lt;br /&gt;
&lt;br /&gt;
Follow the same steps you used to build PCL. That is, create a new &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''build/''&amp;lt;/span&amp;gt; subdirectory next to the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''src/''&amp;lt;/span&amp;gt; one. Open a terminal there and issue:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake -DCMAKE_BUILD_TYPE=Release ..&lt;br /&gt;
make&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Executing==&lt;br /&gt;
&lt;br /&gt;
Still from the same terminal, run the compiled example program:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./openniViewer&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a few seconds, the main window will appear and the application will start grabbing frames from the device. You can inspect the current point cloud using the mouse: hold the left button to rotate, the right one (or the mouse wheel) to zoom, and the middle one to pan the camera around. At first, you may see only a black screen, or some big colored axes, but no cloud. Try zooming out to see the whole scene. Another useful key is &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''R'''&amp;lt;/span&amp;gt;, which resets the camera parameters when pressed. Use it whenever you notice that zooming has become slow after some camera movement, or if you still cannot see the cloud. See the [http://pointclouds.org/documentation/tutorials/pcl_visualizer.php PCLVisualizer] tutorial for additional controls and features.&lt;br /&gt;
&lt;br /&gt;
Whenever you feel ready, press the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''SPACE'''&amp;lt;/span&amp;gt; key. The program will pause for a fraction of a second and the output &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Saved inputCloud0.pcd.&amp;quot;''&amp;lt;/span&amp;gt; will appear on the console. Check the current folder to see that file &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''inputCloud0.pcd''&amp;lt;/span&amp;gt; has indeed been written. You can now close the program with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Next, run it again, passing the following arguments:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./openniViewer -v inputCloud0.pcd&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will tell the program not to take data from the device, but to read it from the saved point cloud file instead. After it loads, you will be presented with the same scene you saved to disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: PCD data is saved relative to the sensor. No matter how much you have manipulated the view, it will reset to default when you load the file.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
At this point, your Kinect device should be working and getting depth data for you. There is a collection of excellent [http://pointclouds.org/documentation/tutorials/ tutorials] for PCL on the official webpage. I encourage you to finish them all before proceeding with your experiments with the Kinect sensor. You can also find a good introduction/tutorial to the PCL library [http://www.pointclouds.org/assets/uploads/PCL-IAS13_Introduction.pdf here].&lt;br /&gt;
&lt;br /&gt;
If you use an ASUS Xtion PRO device instead, you should have gotten everything to work without problems or additional steps (except maybe for [[PCL/OpenNI_troubleshooting#ASUS_Xtion_PRO | this one]]).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4752</id>
		<title>PCL/OpenNI tutorial 2: Cloud processing (basic)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4752"/>
				<updated>2015-11-05T16:51:09Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Point_cloud.png|thumb|right|200px|Rendering of a point cloud, showing a desktop.]]&lt;br /&gt;
&lt;br /&gt;
All right: in the [[PCL/OpenNI tutorial 1: Installing and testing| previous tutorial]] you installed OpenNI and PCL. You managed to get your device working. You compiled and ran the example program, watched as it fetched one frame after another, and even saved a couple of point clouds to disk. Now you may be wondering: &amp;quot;What now? What are depth sensors good for? What can I do with point clouds?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Well, depth cameras have a lot of useful applications. For example, you could use them as [http://tech.integrate.biz/kinect_mocap.htm motion tracking] hardware and save yourself thousands of dollars in professional gear. The original Kinect was actually designed as an alternative to physical controllers (e.g., gamepads) in games. [https://frantracerkinectft.codeplex.com/ Hand gestures] can also be recognized, so people have managed to successfully implement control interfaces similar to what could be seen in Minority Report.&lt;br /&gt;
&lt;br /&gt;
Working low-level with point clouds offers interesting possibilities too. As they are essentially a 3D &amp;quot;scan&amp;quot; of the scene, you could capture your whole room as a [http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php continuous 3D mesh], then import it into your modelling software of choice and use it as a reference for a better reconstruction. You could use it to measure distances, or detect obstacles, for a robot navigation system. But the most exciting alternative by far, and the one where most research is currently focused, is to perform 3D object recognition and pose estimation, even in [https://www.youtube.com/watch?feature=player_detailpage&amp;amp;v=quGhaggn3cQ#t=349 real time]. This allows a robot to fulfill tasks like going to the kitchen to get you a cup of coffee.&lt;br /&gt;
&lt;br /&gt;
Most of the advanced stuff that can be done with a point cloud requires some previous steps, like filtering, reconstruction or normal estimation. In this tutorial, I will introduce you to the basics of point cloud processing, and leave the complicated methods for the [[PCL/OpenNI tutorial 3: Cloud processing (advanced) | next tutorial]]. I will explain what every technique does and what it should be used for, and include PCL code snippets so you can check how to implement it in your programs.&lt;br /&gt;
&lt;br /&gt;
=Point clouds=&lt;br /&gt;
&lt;br /&gt;
First, a little explanation. A point cloud as taken from a depth sensor consists of a series of points in 3D space, as simple as that. A &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::PointCloud&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; object stores the points inside a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;std::vector&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; structure. The cloud object exposes some functions that let you get information about them, like the point count. As you can see, the class is templated so you can instantiate it for many types of points. For example, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, which stores three floats for the X, Y and Z coordinates, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGB&amp;quot;&amp;lt;/span&amp;gt;, which also stores color (texture) information for each point, the kind we would get with Kinect or Xtion RGB-D cameras. A list of [http://pointclouds.org/documentation/tutorials/adding_custom_ptype.php#what-pointt-types-are-available-in-pcl point types] and a basic description of the [http://pointclouds.org/documentation/tutorials/basic_structures.php cloud object] are available on PCL's website.&lt;br /&gt;
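&lt;br /&gt;
To make this concrete, here is a minimal, PCL-free sketch of what a cloud boils down to (an illustration only, not PCL code; the real &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::PointCloud&amp;quot;&amp;lt;/span&amp;gt; class adds width, height, viewpoint and many helper methods on top):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Hypothetical stand-in for pcl::PointXYZ: three floats per point.&lt;br /&gt;
struct PointXYZ { float x, y, z; };&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// The real pcl::PointCloud&amp;lt;PointT&amp;gt; also keeps its points in a std::vector&amp;lt;PointT&amp;gt;.&lt;br /&gt;
	std::vector&amp;lt;PointXYZ&amp;gt; cloud;&lt;br /&gt;
	PointXYZ p1 = {0.73412f, -1.12643f, 0.82218f};&lt;br /&gt;
	PointXYZ p2 = {0.44739f, -0.34735f, -0.04624f};&lt;br /&gt;
	cloud.push_back(p1);&lt;br /&gt;
	cloud.push_back(p2);&lt;br /&gt;
&lt;br /&gt;
	// The equivalent of cloud-&amp;gt;points.size() in PCL.&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;Point count: &amp;quot; &amp;lt;&amp;lt; cloud.size() &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;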
&lt;br /&gt;
Clouds will usually be handled with pointers, as they tend to be large. PCL uses the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;boost::shared_ptr&amp;quot;&amp;lt;/span&amp;gt; class from the [https://en.wikipedia.org/wiki/Boost_%28C%2B%2B_libraries%29 Boost C++ libraries], which keeps track of the references to a certain object. Every time you pass a shared pointer and a copy of it is created, a reference counter is automatically increased. When the pointer is destroyed, it is decreased. If the counter ever reaches 0, it means that no one is holding an active reference to the object, and its destructor is called to free memory. This can take a while to get used to, and you will be mixing cloud objects and cloud object pointers until then, but do not worry.&lt;br /&gt;
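&lt;br /&gt;
The reference counting is easy to see in a small, self-contained example. This sketch uses C++11's &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;std::shared_ptr&amp;quot;&amp;lt;/span&amp;gt;, which behaves just like Boost's, so no PCL headers are needed to try it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;cassert&amp;gt;&lt;br /&gt;
#include &amp;lt;memory&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// One owner of the object: the reference count is 1.&lt;br /&gt;
	std::shared_ptr&amp;lt;int&amp;gt; a(new int(42));&lt;br /&gt;
	assert(a.use_count() == 1);&lt;br /&gt;
&lt;br /&gt;
	{&lt;br /&gt;
		// Copying the pointer increases the count...&lt;br /&gt;
		std::shared_ptr&amp;lt;int&amp;gt; b = a;&lt;br /&gt;
		assert(a.use_count() == 2);&lt;br /&gt;
	} // ...and destroying the copy decreases it again.&lt;br /&gt;
&lt;br /&gt;
	assert(a.use_count() == 1);&lt;br /&gt;
	// When the last owner is destroyed, the count reaches 0 and the int is freed.&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;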
&lt;br /&gt;
==PCD files==&lt;br /&gt;
&lt;br /&gt;
When clouds are saved to disk, a [http://pointclouds.org/documentation/tutorials/pcd_file_format.php PCD] (Point Cloud Data) file is produced. You can choose to do so in binary or plain text format. The first one is faster to work with, whereas the second one will let you inspect the cloud by opening it with a common text editor. If you do so, you may see something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;# .PCD v.7 − Point Cloud Data file format&lt;br /&gt;
VERSION .7&lt;br /&gt;
FIELDS x y z&lt;br /&gt;
SIZE 4 4 4&lt;br /&gt;
TYPE F F F&lt;br /&gt;
WIDTH 2&lt;br /&gt;
HEIGHT 1&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 2&lt;br /&gt;
DATA ascii&lt;br /&gt;
0.73412 -1.12643 0.82218&lt;br /&gt;
0.44739 -0.34735 -0.04624&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells us that the cloud consists of 2 points of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; (because each point has only 3 fields: the X, Y and Z coordinates, each stored in a 4-byte float), and lists their coordinates. It also stores the viewpoint information (the relative pose of the sensor with respect to the points) as a translation plus an orientation quaternion. Knowing the position of the camera can be useful for certain procedures like normal estimation.&lt;br /&gt;
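&lt;br /&gt;
Each line after &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;DATA ascii&amp;quot;&amp;lt;/span&amp;gt; holds the fields of one point, separated by whitespace. Should you ever need to parse one yourself (PCL normally does it for you), a hypothetical sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;cassert&amp;gt;&lt;br /&gt;
#include &amp;lt;sstream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// The first data line of the PCD example above: the x, y and z fields.&lt;br /&gt;
	std::istringstream line(&amp;quot;0.73412 -1.12643 0.82218&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
	float x, y, z;&lt;br /&gt;
	line &amp;gt;&amp;gt; x &amp;gt;&amp;gt; y &amp;gt;&amp;gt; z;&lt;br /&gt;
&lt;br /&gt;
	// The three floats are now ready to be used as a point's coordinates.&lt;br /&gt;
	assert(x &amp;gt; 0.0f &amp;amp;&amp;amp; y &amp;lt; 0.0f &amp;amp;&amp;amp; z &amp;gt; 0.0f);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;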
&lt;br /&gt;
===Reading from file===&lt;br /&gt;
&lt;br /&gt;
Reading a point cloud from a PCD file and storing it in memory can be done with a single function call. You must specify the correct point type. For example, for .pcd files saved with the OpenNI viewer application that we saw in the [[PCL/OpenNI tutorial 1: Installing and testing | previous tutorial]], you could use either &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGBA&amp;quot;&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; if you are not going to use the color information.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I have not bothered to include proper error/argument checking, but you should do it in your programs. Be sure to check the [http://docs.pointclouds.org PCL API] every time I introduce you to one of its classes or methods.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/reading_pcd.php Reading Point Cloud data from PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_read.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Writing to file===&lt;br /&gt;
&lt;br /&gt;
Saving a point cloud to a file is as easy as reading it. You can choose between doing it in plain readable (ASCII) or binary format. The first one can be read and altered with a text editor, whereas the second one is faster to work with.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Write it back to disk under a different name.&lt;br /&gt;
	// Another possibility would be &amp;quot;savePCDFileBinary()&amp;quot;.&lt;br /&gt;
	pcl::io::savePCDFileASCII(&amp;quot;output.pcd&amp;quot;, *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/writing_pcd.php Writing Point Cloud data to PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_write.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you want a better alternative for loading or saving PCD files (some of the functions we saw are a bit outdated, but they do the job), take a look at the [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_reader.html PCDReader] and [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_writer.html PCDWriter] classes.&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
Of course, PCL ships with a dedicated program to load and visualize point clouds and a lot of other stuff, &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''pcl_viewer''&amp;lt;/span&amp;gt;. You can run it from a terminal with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_viewer &amp;lt;file 1&amp;gt; [file 2] ... [file N]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you load several files, each cloud will be rendered with a different color to help you tell them apart. Internally, the program uses the [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_visualizer.html pcl::visualization::PCLVisualizer] class. Press the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''H'''&amp;lt;/span&amp;gt; key while it is running to print the help on the terminal. Keys I tend to find useful are &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''1'''&amp;lt;/span&amp;gt;, which chooses a different color for the cloud (useful if the current one is too hard to see), &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''R'''&amp;lt;/span&amp;gt;, which resets the camera, and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''J'''&amp;lt;/span&amp;gt;, which saves a PNG snapshot.&lt;br /&gt;
&lt;br /&gt;
In your own code, it is possible to iterate through the cloud and print the information of each point to console, but this is cumbersome. Sometimes you just want to visualize the cloud to check something. PCL can create a window for you and display it, letting users pan and zoom their way around. The simplest method is the following one:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Cloud viewer object. You can set the window title.&lt;br /&gt;
	pcl::visualization::CloudViewer viewer(argv[1]);&lt;br /&gt;
	viewer.showCloud(cloud);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		// Do nothing but wait.&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CloudViewer&amp;quot;&amp;lt;/span&amp;gt; has fewer features than &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLVisualizer&amp;quot;&amp;lt;/span&amp;gt; but it is easier to set up. The program will remain active until you close the window with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt; or press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/cloud_viewer.php The CloudViewer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/pcl_visualizer.php PCLVisualizer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/walkthrough.php Visualization library overview]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_cloud_viewer.html pcl::visualization::CloudViewer]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_visualize_cloud.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Concatenating two clouds==&lt;br /&gt;
&lt;br /&gt;
There are two types of concatenation with point clouds:&lt;br /&gt;
&lt;br /&gt;
* '''Points''':  You initially have two clouds, &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;, with &amp;quot;N&amp;quot; and &amp;quot;M&amp;quot; points respectively. A third cloud is created (&amp;quot;C&amp;quot;), that has &amp;quot;N + M&amp;quot; points (all points of the first two clouds). For this to work, the point type must be the same for all clouds (&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt;, or whatever).&lt;br /&gt;
* '''Fields''': With this, you can join the fields of two clouds with different types. The only constraint is that both must have the same number of points. For example, if you have a cloud of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; type and another of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type with &amp;quot;N&amp;quot; points each, you can concatenate them to create a cloud of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointNormal&amp;quot;&amp;lt;/span&amp;gt; (that stores X, Y and Z coordinates, plus the normal) that also has &amp;quot;N&amp;quot; points.&lt;br /&gt;
&lt;br /&gt;
Point concatenation is the easiest one because you can use the cloud's addition operator (+), which is overloaded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudC(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Create cloud &amp;quot;C&amp;quot;, with the points of both &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;.&lt;br /&gt;
	*cloudC = (*cloudA) + (*cloudB);&lt;br /&gt;
	// Now cloudC-&amp;gt;points.size() equals cloudA-&amp;gt;points.size() + cloudB-&amp;gt;points.size().&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_clouds.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Field concatenation is done with a function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudPoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr cloudNormals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudPoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Compute the normals of the cloud (do not worry, we will see this later).&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloudPoints);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.05);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*cloudNormals);&lt;br /&gt;
&lt;br /&gt;
	// Concatenate the fields (PointXYZ + Normal = PointNormal).&lt;br /&gt;
	pcl::concatenateFields(*cloudPoints, *cloudNormals, *cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Print the data to standard output.&lt;br /&gt;
	for (size_t currentPoint = 0; currentPoint &amp;lt; cloudAll-&amp;gt;points.size(); currentPoint++)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tXYZ:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].x &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].y &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tNormal:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[0] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[1] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[2] &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__common.html#gac6add803f86fd16a998dce541e9ef402 pcl::concatenateFields()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_fields.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Matrix transformations==&lt;br /&gt;
&lt;br /&gt;
Clouds can be directly manipulated with [https://en.wikipedia.org/wiki/Transformation_matrix transformations] (translation, rotation, scaling). 3D transformations use a 4x4 matrix, which can be a little complicated to understand (especially when it comes to rotations), so consult a book or [http://web.iitd.ac.in/~hegde/cad/lecture/L6_3dtrans.pdf tutorial] before trying to use one. The transformed cloud is saved to a second object, so you keep both versions and can use either as you wish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/transforms.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr transformed(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Transformation matrix object, initialized to the identity matrix&lt;br /&gt;
	// (a null transformation).&lt;br /&gt;
	Eigen::Matrix4f transformation = Eigen::Matrix4f::Identity();&lt;br /&gt;
&lt;br /&gt;
	// Set a rotation around the Z axis.&lt;br /&gt;
	float theta = M_PI / 2; // 90 degrees.&lt;br /&gt;
	transformation(0, 0) = cos(theta);&lt;br /&gt;
	transformation(0, 1) = -sin(theta);&lt;br /&gt;
	transformation(1, 0) = sin(theta);&lt;br /&gt;
	transformation(1, 1) = cos(theta);&lt;br /&gt;
&lt;br /&gt;
	// Set a translation on the X axis.&lt;br /&gt;
	transformation(0, 3) = 1.0f; // 1 meter (positive direction).&lt;br /&gt;
&lt;br /&gt;
	pcl::transformPointCloud(*cloud, *transformed, transformation);&lt;br /&gt;
&lt;br /&gt;
	// Visualize both the original and the result.&lt;br /&gt;
	pcl::visualization::PCLVisualizer viewer(argv[1]);&lt;br /&gt;
	viewer.addPointCloud(cloud, &amp;quot;original&amp;quot;);&lt;br /&gt;
	// The transformed one's points will be red in color.&lt;br /&gt;
	pcl::visualization::PointCloudColorHandlerCustom&amp;lt;pcl::PointXYZ&amp;gt; colorHandler(transformed, 255, 0, 0);&lt;br /&gt;
	viewer.addPointCloud(transformed, colorHandler, &amp;quot;transformed&amp;quot;);&lt;br /&gt;
	// Add 3D colored axes to help see the transformation.&lt;br /&gt;
	viewer.addCoordinateSystem(1.0, 0);&lt;br /&gt;
&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Cloud_transformation.png|thumb|center|600px|Rotation and translation applied to a cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the rest of the functions, because you can also use an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Affine3&amp;quot;&amp;lt;/span&amp;gt; or an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Quaternion&amp;quot;&amp;lt;/span&amp;gt; instead of an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Matrix4&amp;quot;&amp;lt;/span&amp;gt; object. Another important note: the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;addCoordinateSystem()&amp;quot;&amp;lt;/span&amp;gt; overload with the parameters seen above is deprecated in newer versions of PCL; in those versions you will have to use something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;viewer.addCoordinateSystem(1.0, &amp;quot;reference&amp;quot;, 0);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Transformation&lt;br /&gt;
* '''Output''': Transformed points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/matrix_transform.php Using a matrix to transform a point cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#aefbe4956d1c8fb785a97df6708d57c56 pcl::transformPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_matrix_transform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Using indices==&lt;br /&gt;
&lt;br /&gt;
Many of PCL's algorithms return indices. Indices are just a list of some of the points of a cloud (not the points themselves with all their data, but just their index in the cloud). For example, a cylinder segmentation algorithm will give you as output a list of points that were considered to fit the cylinder model. You can then use the indices to extract the points to a second cloud for further work with the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::ExtractIndices&amp;quot;&amp;lt;/span&amp;gt; class. Its &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; function will instruct it to extract the points NOT indexed (so you can remove all cylinder-shaped clusters from the cloud).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/extract_indices.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudExtracted(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudAll) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Plane segmentation (do not worry, we will see this later).&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	segmentation.setInputCloud(cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the indices.&lt;br /&gt;
	pcl::PointIndices::Ptr pointIndices(new pcl::PointIndices);&lt;br /&gt;
&lt;br /&gt;
	segmentation.segment(*pointIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	// Object for extracting points from a list of indices.&lt;br /&gt;
	pcl::ExtractIndices&amp;lt;pcl::PointXYZ&amp;gt; extract;&lt;br /&gt;
	extract.setInputCloud(cloudAll);&lt;br /&gt;
	extract.setIndices(pointIndices);&lt;br /&gt;
	// We will extract the points that are NOT indexed (the ones that are not in a plane).&lt;br /&gt;
	extract.setNegative(true);&lt;br /&gt;
	extract.filter(*cloudExtracted);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Indices, [Negative mode]&lt;br /&gt;
* '''Output''': Points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/extract_indices.php Extracting indices from a PointCloud]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_point_indices.html pcl::PointIndices]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_extract_indices.html pcl::ExtractIndices]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_extract_indices.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Removing NaNs==&lt;br /&gt;
&lt;br /&gt;
The cloud retrieved from the sensor may contain several kinds of measurement errors and inaccuracies. One of them is the presence of [https://en.wikipedia.org/wiki/NaN NaN (Not a Number)] values in the coordinates of some points, as you can see in the following file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;# .PCD v0.7 - Point Cloud Data file format&lt;br /&gt;
VERSION 0.7&lt;br /&gt;
FIELDS x y z rgba&lt;br /&gt;
SIZE 4 4 4 4&lt;br /&gt;
TYPE F F F U&lt;br /&gt;
COUNT 1 1 1 1&lt;br /&gt;
WIDTH 640&lt;br /&gt;
HEIGHT 480&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 307200&lt;br /&gt;
DATA ascii&lt;br /&gt;
nan nan nan 10135463&lt;br /&gt;
nan nan nan 10398635&lt;br /&gt;
nan nan nan 10070692&lt;br /&gt;
nan nan nan 10268071&lt;br /&gt;
...&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cloud objects have a boolean member, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;is_dense&amp;quot;&amp;lt;/span&amp;gt;, that is true if all points have valid, finite values. A NaN indicates that there was a problem measuring the distance to that point, maybe because the sensor was too close or too far, or because the surface was reflective.&lt;br /&gt;
&lt;br /&gt;
Sadly, many of PCL's algorithms will fail if even a single NaN is present in the cloud you feed them as input, with a message like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Assertion `point_representation_-&amp;gt;isValid (point) &amp;amp;&amp;amp; &amp;quot;Invalid (NaN, Inf) point coordinates given to radiusSearch!&amp;quot;' failed.&amp;quot;''&amp;lt;/span&amp;gt;. If this happens, you will have to remove such values from the cloud. You can use a program like the following to &amp;quot;clean&amp;quot; a cloud from all NaN points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/filter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	if (argc != 3)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tUsage: &amp;quot; &amp;lt;&amp;lt; argv[0] &amp;lt;&amp;lt; &amp;quot; &amp;lt;input cloud&amp;gt; &amp;lt;output cloud&amp;gt;&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// The mapping tells you which point of the old cloud each new point&lt;br /&gt;
	// corresponds to, but we will not use it here.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; mapping;&lt;br /&gt;
	pcl::removeNaNFromPointCloud(*cloud, *cloud, mapping);&lt;br /&gt;
&lt;br /&gt;
	// Save it back.&lt;br /&gt;
	pcl::io::savePCDFileASCII(argv[2], *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem with this method is that it will not keep the cloud organized. All clouds store a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;width&amp;quot;&amp;lt;/span&amp;gt; and a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;height&amp;quot;&amp;lt;/span&amp;gt; variable. In ''unorganized'' clouds, the total number of points is the same as the width, and the height is set to 1. In ''organized'' clouds (like the ones taken from camera-like sensors such as Kinect or Xtion), the width and the height match the pixel resolution of the image the sensor works with. Points are laid out in rows, as in the depth image, and every one of them corresponds to a pixel. The member function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;isOrganized()&amp;quot;&amp;lt;/span&amp;gt; will return true if the height is greater than 1.&lt;br /&gt;
&lt;br /&gt;
Because removing NaNs will change the number of points of the cloud, it can no longer be kept organized with the original width/height ratio, so the function will set the height to 1. This is not a big issue, as only a few of PCL's algorithms work explicitly with organized clouds (and they mostly use it for optimization), but you must take it into account.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Cloud&lt;br /&gt;
* '''Output''': Cloud without NaNs&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__filters.html#gac463283a9e9c18a66d3d29b28a575064 pcl::removeNaNFromPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_remove_NaNs.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Saving a PNG==&lt;br /&gt;
&lt;br /&gt;
PCL offers the possibility of saving the values of a point cloud to a PNG image file. Obviously, this can only be done with organized clouds, because the rows and columns of the resulting image correspond exactly to those of the cloud. For example, if you have a point cloud taken from a sensor like Kinect or Xtion, you can use this to retrieve the 640x480 RGB image that matches the cloud.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/png_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGB&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Save the image (cloud must be organized).&lt;br /&gt;
	pcl::io::savePNGFile(&amp;quot;output.png&amp;quot;, *cloud, &amp;quot;rgb&amp;quot;);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;savePNGFile()&amp;quot;&amp;lt;/span&amp;gt; has several implementations; in this case, we have chosen the one that lets you specify which fields to save. If you omit that parameter, the function will save the RGB fields by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__io.html#gabd03e43fac3048635512f447c8bc50b3 pcl::io::savePNGFile()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_save_PNG.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computing the centroid==&lt;br /&gt;
&lt;br /&gt;
The [https://en.wikipedia.org/wiki/Centroid centroid] of a cloud is a point with coordinates that result from computing the mean of the values of all points in the cloud. You could say it is the &amp;quot;center of mass&amp;quot;, and it has several uses for certain algorithms. If you intend to compute the actual center of gravity of a [[PCL/OpenNI_tutorial_3:_Cloud_processing_%28advanced%29#Segmentation|clustered]] object though, remember that the sensor does not retrieve the part of the object that is hidden from the camera, like the back faces that are occluded by the front ones, or the inside. You only have the part of the surface that faces the camera.&lt;br /&gt;
&lt;br /&gt;
The centroid of a cloud can be computed with a single function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/centroid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object to store the centroid coordinates.&lt;br /&gt;
	Eigen::Vector4f centroid;&lt;br /&gt;
	&lt;br /&gt;
	pcl::compute3DCentroid(*cloud, centroid);&lt;br /&gt;
&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;The XYZ coordinates of the centroid are: (&amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[0] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[1] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[2] &amp;lt;&amp;lt; &amp;quot;).&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#a23daec3829d2d4100a2f185372b3753a pcl::compute3DCentroid()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_compute_centroid.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Feature estimation=&lt;br /&gt;
&lt;br /&gt;
A ''feature'' of a point is some characteristic that can help us to tell it apart from a different point. Of course, you could say that the XYZ coordinates would be characteristic enough, but when matching the data of two different, yet corresponding, clouds, they become useless. We need something better, something that has similar values when computed for the same spot of a surface in both sets.&lt;br /&gt;
&lt;br /&gt;
Normals are an example of a very simple feature. 3D [http://pointclouds.org/documentation/tutorials/how_features_work.php features] will be important later in our tutorials, when we talk about [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]] (more detailed &amp;quot;signatures&amp;quot; of points used for close matching).&lt;br /&gt;
&lt;br /&gt;
==Normal estimation==&lt;br /&gt;
&lt;br /&gt;
As you may remember from geometry class, the [https://en.wikipedia.org/wiki/Surface_normal normal] of a plane is a unit vector that is perpendicular to it. The normal of a surface at a point is defined as the vector perpendicular to the plane tangent to the surface at that point. Surface normals can be calculated for the points of a cloud, too. They are considered a feature, albeit not a very discriminative one.&lt;br /&gt;
&lt;br /&gt;
I will not go into detail with the math of the estimation method; you just have to know that it uses the nearest neighbors (the points closest to the one whose normal we are calculating) to find the tangent plane and the normal vector. You can customize the method with the search radius (think of a sphere of that radius, centered on the point; all neighboring points that lie within it will be used for the computation) and the viewpoint (by default, the output normals are directionless; by assuming that all vectors must point towards the camera - because otherwise they would belong to surfaces that are not visible from the sensor - they can all be re-oriented consistently).&lt;br /&gt;
&lt;br /&gt;
Normals are also important because they give us information about the ''curvature'' of the surface at some point, which can be used to our advantage. Many of PCL's algorithms will require us to provide the normals of the input cloud. To estimate them, you can use the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// For every point, use all neighbors in a radius of 3cm.&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	// A kd-tree is a data structure that makes searches efficient. More about it later.&lt;br /&gt;
	// The normal estimation object will use it to find nearest neighbors.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, normals are stored in &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointCloud&amp;quot;&amp;lt;/span&amp;gt; objects too, instantiated to the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type. If you ever extract some indices from a cloud before feeding them to an algorithm, be sure to do the same with the normals. I have not made use of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setViewPoint()&amp;quot;&amp;lt;/span&amp;gt; function, but you get the idea.&lt;br /&gt;
&lt;br /&gt;
Instead of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setRadiusSearch()&amp;quot;&amp;lt;/span&amp;gt;, you can use &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setKSearch()&amp;quot;&amp;lt;/span&amp;gt;, which takes an integer, ''K'': the normal is then computed from a point's ''K'' nearest neighbors. Note that this can produce odd results for points in sparse areas, since some of those &amp;quot;neighbors&amp;quot; may be quite far away.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Normal_estimation.png|thumb|center|600px|Visualization of the normals computed for a point cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Radius | Neighbor count, [Indices], [Viewpoint]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation.php Estimating Surface Normals in a Point Cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_normal_estimation.html pcl::NormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Integral images===&lt;br /&gt;
&lt;br /&gt;
Integral images are a method for normal estimation that works only on organized clouds. The algorithm treats the cloud as a depth image and computes normals over rectangular neighborhoods, exploiting the relationship between adjacent &amp;quot;pixels&amp;quot; (points), without the need for lookups in a search structure like a tree. Because of this it is very efficient, and the normals are usually computed in a split second. If you are performing real-time computations on clouds taken from an RGB-D sensor, this is the approach you should use, because the previous method will probably take several seconds to finish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/integral_image_normal.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::IntegralImageNormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// Other estimation methods: COVARIANCE_MATRIX, AVERAGE_DEPTH_CHANGE, SIMPLE_3D_GRADIENT.&lt;br /&gt;
	// They determine the smoothness of the result, and the running time.&lt;br /&gt;
	normalEstimation.setNormalEstimationMethod(normalEstimation.AVERAGE_3D_GRADIENT);&lt;br /&gt;
	// Depth threshold for computing object borders based on depth changes, in meters.&lt;br /&gt;
	normalEstimation.setMaxDepthChangeFactor(0.02f);&lt;br /&gt;
	// Factor that influences the size of the area used to smooth the normals.&lt;br /&gt;
	normalEstimation.setNormalSmoothingSize(10.0f);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Estimation method, [Smoothing factor], [Depth threshold], [Rectangle size], [Border policy], [Depth dependent smoothing]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation_using_integral_images.php Normal Estimation Using Integral Images]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://islab.ulsan.ac.kr/files/announcement/411/Adaptive%20Neighborhood%20Selection%20for%20Real-Time%20Surface%20Normal.pdf Adaptive Neighborhood Selection for Real-Time Surface Normal Estimation from Organized Point Cloud Data Using Integral Images] (S. Holzer et al., 2012)&lt;br /&gt;
** [https://www.willowgarage.com/sites/default/files/holz_rgbd_seg.pdf Real-Time Plane Segmentation using RGB-D Cameras] (Dirk Holz et al., 2012)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_integral_image_normal_estimation.html pcl::IntegralImageNormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals_integral.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Decomposition=&lt;br /&gt;
&lt;br /&gt;
When I talk about ''decomposition'', I mean the process of creating an ordered, organized data structure from the points of the cloud, with some purpose in mind, such as making further analysis easier or applying filters.&lt;br /&gt;
&lt;br /&gt;
==''k''-d tree==&lt;br /&gt;
&lt;br /&gt;
A [https://en.wikipedia.org/wiki/K-d_tree ''k''-d tree] (''k''-dimensional tree) is a data structure that organizes a set of points in a ''k''-dimensional space, in a way that makes range search operations very efficient (for example, finding the nearest neighbor of a point, which is the point that is closest to it in space; or finding all neighbors within a radius).&lt;br /&gt;
&lt;br /&gt;
It is a binary tree: every non-leaf node has up to two children, the &amp;quot;left&amp;quot; and &amp;quot;right&amp;quot; ones. Each level splits space on a specific dimension. For example, in 3-dimensional space, the root node (first level) splits the points on the first dimension, X (points with greater X values go to the right subtree, points with lesser values to the left one). At the second level (the nodes we just created), the split is done on the Y axis, following the same criterion. At the third level (the grandchildren), we use the Z axis. At the fourth level, we get back to the X axis, and so on. Usually, the point with the median value on the current axis is chosen as the node at each level.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Kdtree.png|thumb|center|700px|Example of a 2D ''k''-d tree (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In PCL, creating a ''k''-d tree from a cloud is a straightforward and fast operation.&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
The following code snippet shows you how to use a ''k''-d tree to perform efficient searches:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/search/kdtree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// kd-tree object.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt; kdtree;&lt;br /&gt;
	kdtree.setInputCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of this point&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices(5);&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances(5);&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (kdtree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	if (kdtree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another alternative would be &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;KdTreeFLANN&amp;quot;&amp;lt;/span&amp;gt;, which makes use of [http://www.cs.ubc.ca/research/flann/ FLANN] (Fast Library for Approximate Nearest Neighbors).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/kdtree_search.php How to use a KdTree to search]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_kd_tree.html pcl::search::KdTree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_kd_tree_f_l_a_n_n.html pcl::KdTreeFLANN]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_kdtree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Octree==&lt;br /&gt;
&lt;br /&gt;
Like the ''k''-d tree, the [https://en.wikipedia.org/wiki/Octree octree] is a hierarchical tree data structure useful for searches, but also for compression or downsampling. Each octree node (called a ''voxel'', a term that will sound familiar to users of 3D engines, and which could be described as a &amp;quot;3D pixel&amp;quot;, that is, a cube) has either eight children or none. The root node describes a cubic bounding box that encapsulates all points. At every level down the tree, each cell is subdivided by a factor of 2 on every axis, which doubles the voxel resolution. That way, we can selectively give more resolution to certain zones.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Octree.png|thumb|center|700px|Depiction of an octree (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
Downsampling will be covered in a different section, so now I will show you how to perform searches with an octree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree object, with a voxel resolution of 128&lt;br /&gt;
	// (side length of the voxels at the lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudSearch&amp;lt;pcl::PointXYZ&amp;gt; octree(128);&lt;br /&gt;
	octree.setInputCloud(cloud);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	// We will find all neighbors within the same voxel as this point.&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices;&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (octree.voxelSearch(point, pointIndices))&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors in the same voxel:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of the point.&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances;&lt;br /&gt;
	if (octree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	// The point DOES have to belong in the cloud.&lt;br /&gt;
	if (octree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, the code is mostly interchangeable with that of the ''k''-d tree.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree.php Spatial Partitioning and Search Operations with Octrees]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_octree.html pcl::search::Octree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_search.html pcl::octree::OctreePointCloudSearch]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Compressing/Decompressing===&lt;br /&gt;
&lt;br /&gt;
On the other hand, this is the code needed to compress and decompress a point cloud with PCL:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/compression/octree_pointcloud_compression.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr decompressedCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree compressor object.&lt;br /&gt;
	// Check /usr/include/pcl-&amp;lt;version&amp;gt;/pcl/compression/compression_profiles.h for more profiles.&lt;br /&gt;
	// The second parameter enables the output of information.&lt;br /&gt;
	pcl::io::OctreePointCloudCompression&amp;lt;pcl::PointXYZ&amp;gt; octreeCompression(pcl::io::MED_RES_ONLINE_COMPRESSION_WITHOUT_COLOR, true);&lt;br /&gt;
	// Stringstream that will hold the compressed cloud.&lt;br /&gt;
	std::stringstream compressedData;&lt;br /&gt;
&lt;br /&gt;
	// Compress the cloud (you would save the stream to disk).&lt;br /&gt;
	octreeCompression.encodePointCloud(cloud, compressedData);&lt;br /&gt;
&lt;br /&gt;
	// Decompress the cloud.&lt;br /&gt;
	octreeCompression.decodePointCloud(compressedData, decompressedCloud);&lt;br /&gt;
&lt;br /&gt;
	// Display the decompressed cloud.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Octree compression&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(decompressedCloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeCompression.png | Original point cloud.&lt;br /&gt;
File:AfterCompression.png | Point cloud retrieved after octree compression.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the process has decreased the number of points in the cloud. The default parameters of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;OctreePointCloudCompression&amp;quot;&amp;lt;/span&amp;gt; object establish an octree resolution (at the lowest level) of 1cm, and a coordinate precision of 1mm. These values can of course be changed to suit your needs, but simplifying the cloud in this way is a useful operation that we will cover later.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Resolution, Compression profile&lt;br /&gt;
* '''Output''': Compressed stream&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/compression.php Point Cloud Compression]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1io_1_1_octree_point_cloud_compression.html pcl::io::OctreePointCloudCompression]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_compress.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Detecting changes===&lt;br /&gt;
&lt;br /&gt;
PCL has an octree-based object that lets you detect changes between two point clouds (for example, two clouds taken one after another with an OpenNI device). This object stores the trees of both clouds at the same time and compares them, returning a list of indices of the points that are new in the second cloud (that is, they did not exist in the first):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Change detector object, with a voxel resolution of 128&lt;br /&gt;
	// (side length of the voxels at the lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudChangeDetector&amp;lt;pcl::PointXYZ&amp;gt; octree(128.0f);&lt;br /&gt;
&lt;br /&gt;
	// Add cloudA to the octree.&lt;br /&gt;
	octree.setInputCloud(cloudA);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
	// The change detector object is able to store two clouds at the same time;&lt;br /&gt;
	// with this we can reset the buffer but keep the previous tree saved.&lt;br /&gt;
	octree.switchBuffers();&lt;br /&gt;
	// Now, add cloudB to the octree like we did before.&lt;br /&gt;
	octree.setInputCloud(cloudB);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; newPoints;&lt;br /&gt;
	// Get a vector with the indices of all points that are new in cloud B,&lt;br /&gt;
	// when compared with the ones that existed in cloud A.&lt;br /&gt;
	octree.getPointIndicesFromNewVoxels(newPoints);&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; newPoints.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point (&amp;quot; &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].x &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].y &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].z&lt;br /&gt;
				  &amp;lt;&amp;lt; &amp;quot;) was not in cloud A but is in cloud B&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; newPoints.size() &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
} &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': First cloud, Second cloud, Resolution&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree_change.php Spatial change detection on unorganized point cloud data]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_change_detector.html pcl::octree::OctreePointCloudChangeDetector]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_change_detection.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Filtering=&lt;br /&gt;
&lt;br /&gt;
The raw point cloud, as returned from the sensor, usually needs some kind of polishing before being handed to processing algorithms. Normally, this is due to the low precision of the sensor, which leads to noise, incorrect measurements, and holes in surfaces. Sometimes it is just the opposite: the cloud has so many points that processing it would take forever. Luckily, several filtering methods exist to &amp;quot;fix&amp;quot; many recurring errors in point clouds.&lt;br /&gt;
&lt;br /&gt;
==Passthrough filter==&lt;br /&gt;
&lt;br /&gt;
A passthrough filter will remove any point from the cloud whose values do not fall within a certain user-given range. For example, if you wanted to discard all points farther away than, say, 3 meters, you would run the filter on the Z coordinate with a range of [0,3]. This filter can be useful to discard unneeded objects from the cloud, but you may have to adopt a different reference frame if the default one (relative to the sensor) is not suitable. For example, filtering on the Y value to remove all points not sitting on a table will yield unwanted results if the camera was at an odd angle.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/passthrough.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::PassThrough&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Filter out all points with Z values not in the [0-2] range.&lt;br /&gt;
	filter.setFilterFieldName(&amp;quot;z&amp;quot;);&lt;br /&gt;
	filter.setFilterLimits(0.0, 2.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforePassthrough.png | Original point cloud.&lt;br /&gt;
File:AfterPassthrough.png | Point cloud after applying a passthrough filter with Z in [0,2].&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setFilterLimitsNegative()&amp;quot;&amp;lt;/span&amp;gt; function (not seen in the example) will tell the filter object whether to return the points inside the range (&amp;quot;false&amp;quot;, the default value) or outside it (&amp;quot;true&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Filter field, Filter limits, [Negative mode]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/passthrough.php Filtering a PointCloud using a PassThrough filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_pass_through.html pcl::PassThrough]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_passthrough_filter.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conditional removal==&lt;br /&gt;
&lt;br /&gt;
Another way of getting the exact same result as in the previous example is to use a conditional removal filter. We can build any condition we want on the values of the point:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/conditional_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We must build a condition.&lt;br /&gt;
	// An &amp;quot;And&amp;quot; condition requires all tests to pass; &amp;quot;Or&amp;quot; conditions are also available.&lt;br /&gt;
	pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;::Ptr condition(new pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// First test, the point's Z value must be greater than (GT) 0.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::GT, 0.0)));&lt;br /&gt;
	// Second test, the point's Z value must be less than (LT) 2.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::LT, 2.0)));&lt;br /&gt;
	// Checks available: GT, GE, LT, LE, EQ.&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::ConditionalRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setCondition(condition);&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// If true, points that do not pass the filter will be set to a certain value (default NaN).&lt;br /&gt;
	// If false, they will be just removed, but that could break the structure of the cloud&lt;br /&gt;
	// (organized clouds are clouds taken from camera-like sensors that return a matrix-like image).&lt;br /&gt;
	filter.setKeepOrganized(true);&lt;br /&gt;
	// If keep organized was set true, points that failed the test will have their values set to this.&lt;br /&gt;
	filter.setUserFilterValue(0.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Condition, [Keep organized]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_conditional_removal.html pcl::ConditionalRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_conditional_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Outlier removal==&lt;br /&gt;
&lt;br /&gt;
Outliers are lonely points that are spread here and there in the cloud, like annoying mosquitoes. They are the product of the sensor's inaccuracy, which registers measurements where there should not be any. Outliers are considered undesired noise because they can introduce errors in calculations like, for example, normal estimation. Hence, trimming them out of the cloud will not only make computations faster (because the fewer points there are, the less time it takes, as we will see with downsampling) but also help us get more precise values.&lt;br /&gt;
&lt;br /&gt;
===Radius-based===&lt;br /&gt;
&lt;br /&gt;
The radius-based outlier removal is the simplest method of all. You must specify a search radius and the minimum number of neighbors that a point must have to avoid being labelled as an outlier. The algorithm will then iterate through all points (which can be slow if the cloud is big) and perform the check: if fewer than that many neighbors are found within the radius, the point is removed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/radius_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::RadiusOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Every point must have 10 neighbors within 15cm, or it will be removed.&lt;br /&gt;
	filter.setRadiusSearch(0.15);&lt;br /&gt;
	filter.setMinNeighborsInRadius(10);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All PCL classes that inherit from &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::FilterIndices&amp;quot;&amp;lt;/span&amp;gt; (like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::RadiusOutlierRemoval&amp;quot;&amp;lt;/span&amp;gt;) provide the function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt;, which, if set to true, makes the filter return the points that did NOT pass the test.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius, Neighbor count&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_radius_outlier_removal.html pcl::RadiusOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_radius_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Statistical===&lt;br /&gt;
&lt;br /&gt;
The statistical outlier removal process is a bit more refined. First, for every point, the mean distance to its ''K'' nearest neighbors is computed. Then, if we assume that the resulting distances follow a [https://en.wikipedia.org/wiki/Normal_distribution normal (Gaussian) distribution] with mean ''μ'' and standard deviation ''σ'', we can deem it safe to remove all points whose mean distances fall outside the interval defined by the global mean and standard deviation. Basically, it runs a statistical analysis of the distances between neighboring points, and trims all points which are not considered &amp;quot;normal&amp;quot; (you define what &amp;quot;normal&amp;quot; is with the parameters of the algorithm).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/statistical_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::StatisticalOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Set number of neighbors to consider to 50.&lt;br /&gt;
	filter.setMeanK(50);&lt;br /&gt;
	// Set standard deviation multiplier to 1.&lt;br /&gt;
	// Points with a distance larger than 1 standard deviation of the mean distance will be outliers.&lt;br /&gt;
	filter.setStddevMulThresh(1.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can make use of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; with this filter too.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Neighbor count, Deviation multiplier&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/statistical_outlier.php Removing outliers using a StatisticalOutlierRemoval filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_statistical_outlier_removal.html pcl::StatisticalOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_statistical_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Resampling==&lt;br /&gt;
&lt;br /&gt;
Resampling means changing the number of points of the cloud, either by reducing (downsampling) or increasing (upsampling). Both have a purpose, as we will see.&lt;br /&gt;
&lt;br /&gt;
===Downsampling===&lt;br /&gt;
&lt;br /&gt;
A sensor like Kinect or Xtion produces a cloud with 307200 (640x480) points. Believe it or not, that is a lot of information, and the new Kinect is already out there, working at a higher resolution. Keep in mind that, if we were to perform a simple operation on every point of a cloud, the complexity would be ''O(n)'', ''n'' being the number of points. If we had to compare every point with its ''k'' nearest neighbors, it would be ''O(nk)''. You may have noticed by now that some of the algorithms I showed you take several seconds to run on your PC; now you know why.&lt;br /&gt;
&lt;br /&gt;
Optimization techniques are being implemented as I write this to allow PCL to employ the computer's GPU (graphics card) to perform the computations using [https://en.wikipedia.org/wiki/CUDA CUDA]. Many classes already provide an alternative in the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::gpu&amp;quot;&amp;lt;/span&amp;gt; namespace. This offers dramatic improvements in speed, but it is not always a possibility.&lt;br /&gt;
&lt;br /&gt;
Another option is to reduce the complexity of the cloud, that is, to remove all points that are not needed for the purpose. There are several ways to do this. You could trim all outliers from the cloud, then perform segmentation to extract only the parts that interest you. Another common operation is downsampling, which will give you back a cloud that, for all intents and purposes, is equivalent to the original one, despite having fewer points.&lt;br /&gt;
&lt;br /&gt;
Downsampling is done via ''voxelization''. I already explained what voxels were in the [[#Octree | octree section]]. The cloud is divided into multiple cube-shaped regions with the desired resolution. Then, all points inside every voxel are processed so only one remains. The simplest way would be to randomly select one of them, but a more accurate approach is to compute the [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Computing_the_centroid|centroid]], which is the point whose coordinates are the mean values of all the points that belong to the voxel.&lt;br /&gt;
&lt;br /&gt;
If done with sensible parameters, downsampling will yield results that are precise enough to work with, while requiring far less CPU power.&lt;br /&gt;
&lt;br /&gt;
====Voxel grid====&lt;br /&gt;
&lt;br /&gt;
A straightforward way of downsampling a cloud:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/voxel_grid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::VoxelGrid&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// We set the size of every voxel to be 1x1x1cm&lt;br /&gt;
	// (only one point per every cubic centimeter will survive).&lt;br /&gt;
	filter.setLeafSize(0.01f, 0.01f, 0.01f);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeDownsampling.png | Original point cloud.&lt;br /&gt;
File:AfterDownsampling.png | Point cloud after being downsampled with a leaf size of 1cm.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Downsampled cloud&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/voxel_grid.php Downsampling a PointCloud using a VoxelGrid filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_voxel_grid.html pcl::VoxelGrid]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Uniform sampling====&lt;br /&gt;
&lt;br /&gt;
This class does essentially the same thing, but it outputs the indices of the points that survived the process. It is a common way of choosing keypoints when computing [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/uniform_sampling.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Uniform sampling object.&lt;br /&gt;
	pcl::UniformSampling&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Set the size of the 3D grid cells (the search radius) to 1cm&lt;br /&gt;
	// (only one point per every cubic centimeter will survive).&lt;br /&gt;
	filter.setRadiusSearch(0.01f);&lt;br /&gt;
	// We need an additional object to store the indices of surviving points.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt; keypointIndices;&lt;br /&gt;
&lt;br /&gt;
	filter.compute(keypointIndices);&lt;br /&gt;
	pcl::copyPointCloud(*cloud, keypointIndices.points, *filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_uniform_sampling.html pcl::UniformSampling]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample_uniform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Upsampling===&lt;br /&gt;
&lt;br /&gt;
Upsampling is a form of [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Reconstruction| surface reconstruction]], but I put it here. When you have fewer points than you would like, upsampling can help you to recover the original surface(s), by [https://en.wikipedia.org/wiki/Upsampling interpolating] the ones you currently have, which is just a sophisticated guess. The result will never be a hundred percent accurate, but sometimes it is the only choice. So, before downsampling your clouds, be sure to keep a copy of the original data!&lt;br /&gt;
&lt;br /&gt;
PCL uses the [https://en.wikipedia.org/wiki/Moving_least_squares Moving Least Squares (MLS)] algorithm for this. The code looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filtering object.&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Object for searching.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// Upsampling method. Other possibilities are DISTINCT_CLOUD, RANDOM_UNIFORM_DENSITY&lt;br /&gt;
	// and VOXEL_GRID_DILATION. NONE disables upsampling. Check the API for details.&lt;br /&gt;
	filter.setUpsamplingMethod(pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt;::SAMPLE_LOCAL_PLANE);&lt;br /&gt;
	// Radius around each point, where the local plane will be sampled.&lt;br /&gt;
	filter.setUpsamplingRadius(0.03);&lt;br /&gt;
	// Sampling step size. Bigger values will yield fewer (if any) new points.&lt;br /&gt;
	filter.setUpsamplingStepSize(0.02);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:PCL_upsampling_original.png | Original point cloud (coffee mug on a table).&lt;br /&gt;
File:PCL_upsampling_downsampled.png | Downsampled with a leaf size of 1cm.&lt;br /&gt;
File:PCL_upsampling_local_plane.png | Upsampled using SAMPLE_LOCAL_PLANE (upsampling radius 3cm, step size 2cm).&lt;br /&gt;
File:PCL_upsampling_uniform_density.png | Upsampled using RANDOM_UNIFORM_DENSITY (target point density of 300 per 3cm radius). &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html#ac29ad97b98353d64ce64e2ff924f7d20 setUpsamplingMethod()] API to see all upsampling methods available, and the parameters you need to set for them to work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, Upsampling method, [Upsampling parameters]&lt;br /&gt;
* '''Output''': Upsampled cloud&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_upsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Reconstruction=&lt;br /&gt;
&lt;br /&gt;
==Surface smoothing==&lt;br /&gt;
&lt;br /&gt;
As stated, depth sensors are not very accurate, and the resulting clouds have measurement errors, outliers, holes in surfaces, etc. Surfaces can be reconstructed by means of an algorithm that iterates through all points and interpolates the data, trying to guess what the original surface looked like. Like with upsampling, PCL uses the [https://en.wikipedia.org/wiki/Moving_least_squares MLS] algorithm and class. Performing this step is important, because the resulting cloud's normals will be more accurate.&lt;br /&gt;
&lt;br /&gt;
Surface smoothing is easily done with the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// The output will also contain the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr smoothedCloud(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Smoothing object (we choose what point types we want as input and output).&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointNormal&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// If true, the surface and normal are approximated using a polynomial estimation&lt;br /&gt;
	// (if false, only a tangent one).&lt;br /&gt;
	filter.setPolynomialFit(true);&lt;br /&gt;
	// We can tell the algorithm to also compute smoothed normals (optional).&lt;br /&gt;
	filter.setComputeNormals(true);&lt;br /&gt;
	// kd-tree object for performing searches.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*smoothedCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output cloud will also contain the improved normals of the smoothed cloud.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, [Polynomial fit], [Compute normals]&lt;br /&gt;
* '''Output''': Smoothed cloud, [Smoothed normals]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/resampling.php Smoothing and normal estimation based on polynomial reconstruction]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_surface_smoothing.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)&amp;diff=4751</id>
		<title>PCL/OpenNI tutorial 4: 3D object recognition (descriptors)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)&amp;diff=4751"/>
				<updated>2015-11-05T16:42:05Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is time to learn the basics of one of the most interesting applications of point cloud processing: 3D object recognition. Akin to 2D recognition, this technique relies on finding good keypoints (characteristic points) in the cloud, and matching them to a set of previously saved ones. But 3D has several advantages over 2D: namely, we will be able to estimate with decent accuracy the exact position and orientation of the object, relative to the sensor. Also, 3D object recognition tends to be more robust to clutter (crowded scenes where objects in the front occlude objects in the background). And finally, having information about the object's shape will help with collision avoidance or grasping operations.&lt;br /&gt;
&lt;br /&gt;
In this first tutorial we will see what descriptors are, how many types are available in PCL, and how to compute them.&lt;br /&gt;
&lt;br /&gt;
=Overview=&lt;br /&gt;
&lt;br /&gt;
The basis of 3D object recognition is to find a set of correspondences between two different clouds, one of them containing the object we are looking for. In order to do this, we need a way to compare points in an unambiguous manner. Until now, we have worked with points that store the XYZ coordinates, the RGB color... but none of those properties are unique enough. In two sequential scans, two points could share the same coordinates despite belonging to different surfaces, and using the color information takes us back to 2D recognition, with all the lighting-related problems.&lt;br /&gt;
&lt;br /&gt;
In a previous tutorial, we talked about [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Feature_estimation | features]], before introducing the normals. Normals are an example of a feature, because they encode information about the vicinity of the point. That is, the neighboring points are taken into account when computing them, giving us an idea of what the surrounding surface is like. But this is not enough. For a feature to be optimal, it must meet the following criteria:&lt;br /&gt;
&lt;br /&gt;
# '''It must be robust to transformations''': rigid transformations (the ones that do not change the distance between points) like translations and rotations must not affect the feature. Even if we play with the cloud a bit beforehand, there should be no difference.&lt;br /&gt;
# '''It must be robust to noise''': measurement errors that cause noise should not change the feature estimation much.&lt;br /&gt;
# '''It must be resolution invariant''': if sampled with different density (like after performing downsampling), the result must be identical or similar.&lt;br /&gt;
&lt;br /&gt;
This is where descriptors come in. They are more complex (and precise) signatures of a point that encode a lot of information about the surrounding geometry. The purpose is to unequivocally identify a point across multiple point clouds, no matter the noise, resolution or transformations. Also, some of them capture additional data about the object they belong to, like the viewpoint (which lets us retrieve the pose).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Correspondence.jpg|thumb|center|600px|Finding correspondences between point features of two clouds (image from http://pointclouds.org/).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are many 3D descriptors implemented in PCL. Each one has its own method for computing unique values for a point. Some use the difference between the angles of the normals of the point and its neighbors, for example. Others use the distances between the points. Because of this, some are inherently better or worse suited for certain purposes. A given descriptor may be scale invariant, and another one may be better with occlusions and partial views of objects. Which one you choose depends on what you want to do.&lt;br /&gt;
&lt;br /&gt;
After calculating the necessary values, an additional step is performed to reduce the descriptor size: the result is binned into a [https://en.wikipedia.org/wiki/Histogram histogram]. To do this, the value range of each variable that makes up the descriptor is divided into ''n'' subdivisions, and the number of occurrences in each one is counted. Try to imagine a descriptor that computes a single variable that ranges from 1 to 100. We choose to create 10 bins for it, so the first bin would gather all occurrences between 1 and 10, the second from 11 to 20, and so on. We look at the value of the variable for the first point-neighbor pair, and it is 27, so we increment the value of the third bin by 1. We keep doing this until we get a final histogram for that keypoint. The bin size must be chosen carefully, depending on how descriptive that variable is (the variables do not have to share the same number of bins, and the bins do not have to be of the same size; if, for example, most values from the previous example fell in the 50-100 range, it would be sensible to have more, smaller bins in that range).&lt;br /&gt;
&lt;br /&gt;
Descriptors can be classified in two main categories: global and local. The process for computing and using each one (recognition pipeline) is different, so each will be explained in its own section in this article.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/how_features_work.php How 3D Features work in PCL]&lt;br /&gt;
** [https://github.com/PointCloudLibrary/pcl/wiki/Overview-and-Comparison-of-Features  Overview and Comparison of Features]&lt;br /&gt;
** [http://www.pointclouds.org/assets/icra2013/pcl_features_icra13.pdf How does a good feature look like?]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [https://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf Point Cloud Library: Three-Dimensional Object Recognition and 6 DoF Pose Estimation] (Aitor Aldoma et al., 2012)&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Table==&lt;br /&gt;
&lt;br /&gt;
The following table gives you an idea of how many descriptors there are in PCL, and some of their characteristics:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot; style=&amp;quot;margin: auto; text-align:center&amp;quot; cellpadding=&amp;quot;15&amp;quot;&lt;br /&gt;
|+ 3D descriptors&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Name&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Type&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Size&amp;lt;sup&amp;gt;†&amp;lt;/sup&amp;gt;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Custom PointType&amp;lt;sup&amp;gt;††&amp;lt;/sup&amp;gt;&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#PFH|PFH]] (Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 125&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#FPFH|FPFH]] (Fast Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 33&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RSD|RSD]] (Radius-Based Surface Descriptor)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 289&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#3DSC|3DSC]] (3D Shape Context)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 1980&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#USC|USC]] (Unique Shape Context)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 1960&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#SHOT|SHOT]] (Signatures of Histograms of Orientations)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 352&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#Spin_image|Spin image]]&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 153*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RIFT|RIFT]] (Rotation-Invariant Feature Transform)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 32*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#NARF|NARF]] (Normal Aligned Radial Feature)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 36&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RoPS|RoPS]] (Rotational Projection Statistics)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 135*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#VFH|VFH]] (Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#CVFH|CVFH]] (Clustered Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#OUR-CVFH|OUR-CVFH]] (Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#ESF|ESF]] (Ensemble of Shape Functions)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 640&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#GFPFH|GFPFH]] (Global Fast Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 16&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#GRSD|GRSD]] (Global Radius-Based Surface Descriptor)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 21&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
† Values marked with an asterisk (*) indicate that the descriptor's size depends on some parameter(s), and the one given is for the default values.&lt;br /&gt;
&lt;br /&gt;
†† Descriptors without their own custom PointType use the generic &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::Histogram&amp;lt;&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; type. See [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Saving_and_loading|Saving and loading]].&lt;br /&gt;
&lt;br /&gt;
Optionally, you can download a document with a more detailed version of the table. Page format is A4, landscape:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Downloads''':&lt;br /&gt;
** [http://robotica.unileon.es/~victorm/3D_descriptors_comparison.odt Table as original ODT] (uses font embedding, requires LibreOffice 4)&lt;br /&gt;
** [http://robotica.unileon.es/~victorm/3D_descriptors_comparison.pdf Table as PDF]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Local descriptors=&lt;br /&gt;
&lt;br /&gt;
Local descriptors are computed for the individual points that we give as input. They have no notion of what an object is; they just describe the local geometry around each point. Usually, it is your task to choose which points a descriptor will be computed for: the ''keypoints''. Most of the time, you can get away with just performing a downsampling and choosing all remaining points, but dedicated keypoint detectors are available, like the one used for [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Finding_keypoints|NARF]], or [[PCL/OpenNI_tutorial_5:_3D_object_recognition_%28pipeline%29#ISS|ISS]].&lt;br /&gt;
&lt;br /&gt;
Local descriptors are used for object recognition and [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Registration | registration]]. Now we will see which ones are implemented into PCL.&lt;br /&gt;
&lt;br /&gt;
==PFH==&lt;br /&gt;
&lt;br /&gt;
PFH stands for Point Feature Histogram. It is one of the most important descriptors offered by PCL and the basis of others such as FPFH. The PFH tries to capture information of the geometry surrounding the point by analyzing the difference between the directions of the normals in the vicinity (and because of this, an imprecise normal estimation may produce low-quality descriptors).&lt;br /&gt;
&lt;br /&gt;
First, the algorithm pairs all points in the vicinity (not just the chosen keypoint with its neighbors, but also the neighbors with themselves). Then, for each pair, a [https://en.wikipedia.org/wiki/Darboux_frame fixed coordinate frame] is computed from their normals. With this frame, the difference between the normals can be encoded with 3 angular variables. These variables, together with the Euclidean distance between the points, are saved, and then binned into a histogram when all pairs have been computed. The final descriptor is the concatenation of the histograms of each variable (4 in total).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:PFH_neighbors.png | Point pairs established when computing the PFH for a point (image from http://pointclouds.org).&lt;br /&gt;
File:PFH_frame.png | Fixed coordinate frame and angular features computed for one of the pairs (image from http://pointclouds.org).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Computing descriptors in PCL is very easy, and the PFH is not an exception:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/pfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the PFH descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PFHSignature125&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::PFHSignature125&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// PFH estimation object.&lt;br /&gt;
	pcl::PFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::PFHSignature125&amp;gt; pfh;&lt;br /&gt;
	pfh.setInputCloud(cloud);&lt;br /&gt;
	pfh.setInputNormals(normals);&lt;br /&gt;
	pfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	pfh.setRadiusSearch(0.05);&lt;br /&gt;
&lt;br /&gt;
	pfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, PCL uses the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt; type to store the descriptor. This means that the descriptor's size is 125 (the dimensionality of the feature vector). Binning a feature space of ''D'' dimensions with ''B'' divisions per dimension requires a total of B&amp;lt;sup&amp;gt;D&amp;lt;/sup&amp;gt; bins. The original proposal makes use of the distance between the points, but the implementation in PCL does not, as it was not considered discriminative enough (especially in 2.5D scans, where the distance between points increases the further away they are from the sensor). Hence, the 3 remaining angular features, with 5 divisions each, produce a 125-dimensional (5&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;) vector.&lt;br /&gt;
&lt;br /&gt;
The final object that stores the computed descriptors can be handled like a normal cloud (even saved to, or read from, disk), and it has the same number of &amp;quot;points&amp;quot; as the original one. The &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt; object at position 0, for example, stores the PFH descriptor for the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; point at the same position in the cloud.&lt;br /&gt;
&lt;br /&gt;
For additional details about the descriptor, check the original publications listed below, or PCL's tutorial.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius&lt;br /&gt;
* '''Output''': PFH descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/pfh_estimation.php Point Feature Histograms (PFH) descriptors]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://ias.in.tum.de/_media/spezial/bib/rusu08ias.pdf Persistent Point Feature Histograms for 3D Point Clouds] (Radu Bogdan Rusu et al., 2008)&lt;br /&gt;
** [http://files.rbrusu.com/publications/RusuPhDThesis.pdf Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments] (Radu Bogdan Rusu, 2009, section 4.4, page 50)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_p_f_h_estimation.html pcl::PFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_PFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===FPFH===&lt;br /&gt;
&lt;br /&gt;
PFH gives accurate results, but it has a drawback: it is too computationally expensive to run in real time. For a cloud of ''n'' keypoints with ''k'' neighbors each, it has a [https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations complexity] of ''O(nk&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;)''. Because of this, a derived descriptor was created: the FPFH (Fast Point Feature Histogram).&lt;br /&gt;
&lt;br /&gt;
The FPFH considers only the direct connections between the current keypoint and its neighbors, removing the additional links between the neighbors themselves. This takes the complexity down to ''O(nk)''. The resulting histogram is therefore referred to as SPFH (Simplified Point Feature Histogram). The reference frame and the angular variables are computed just as in PFH.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FPFH_neighbors.png|thumb|center|400px|Point pairs established when computing the FPFH for a point (image from http://pointclouds.org).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To account for the loss of these extra connections, an additional step takes place after all histograms have been computed: the SPFHs of a point's neighbors are &amp;quot;merged&amp;quot; with its own, weighted according to their distance. This has the effect of giving a point surface information from points up to twice the search radius away. Finally, the 3 histograms (distance is not used) are concatenated to compose the final descriptor.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/fpfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the FPFH descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::FPFHSignature33&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::FPFHSignature33&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// FPFH estimation object.&lt;br /&gt;
	pcl::FPFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33&amp;gt; fpfh;&lt;br /&gt;
	fpfh.setInputCloud(cloud);&lt;br /&gt;
	fpfh.setInputNormals(normals);&lt;br /&gt;
	fpfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	fpfh.setRadiusSearch(0.05);&lt;br /&gt;
&lt;br /&gt;
	fpfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An additional implementation of the FPFH estimation that takes advantage of multithreading (through the [https://en.wikipedia.org/wiki/OpenMP OpenMP API]) is available in the class &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;FPFHEstimationOMP&amp;quot;&amp;lt;/span&amp;gt;. Its interface is identical to the standard single-threaded implementation, and using it will yield a big performance boost on multi-core systems. Remember to include the header &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;fpfh_omp.h&amp;quot;&amp;lt;/span&amp;gt; instead.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius&lt;br /&gt;
* '''Output''': FPFH descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/fpfh_estimation.php Fast Point Feature Histograms (FPFH) descriptors]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://files.rbrusu.com/publications/Rusu09ICRA.pdf Fast Point Feature Histograms (FPFH) for 3D Registration] (Radu Bogdan Rusu et al., 2009)&lt;br /&gt;
** [http://files.rbrusu.com/publications/RusuPhDThesis.pdf Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments] (Radu Bogdan Rusu, 2009, section 4.5, page 57)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_f_p_f_h_estimation.html pcl::FPFHEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_f_p_f_h_estimation_o_m_p.html pcl::FPFHEstimationOMP]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_FPFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RSD==&lt;br /&gt;
&lt;br /&gt;
The Radius-Based Surface Descriptor encodes the radial relationship of a point and its neighborhood. For every pair formed by the keypoint and one of its neighbors, the algorithm computes the distance between them and the difference between their normals. Then, assuming that both points lie on the surface of a sphere, that sphere is found by fitting not only the points, but also the normals (otherwise, there would be infinitely many candidate spheres). Finally, from all the point-neighbor spheres, only the ones with the maximum and minimum radii are kept and saved to the descriptor of that point.&lt;br /&gt;
&lt;br /&gt;
As you may have deduced already, when two points lie on a flat surface, the sphere radius will be infinite. If, on the other hand, they lie on the curved face of a cylinder, the radius will be more or less the same as that of the cylinder. This allows us to tell objects apart with RSD. The algorithm takes a parameter that sets the maximum radius; beyond it, the points are considered to be part of a plane.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:RSD_sphere.png | Sphere that fits two points with their normals (image from [http://ias.in.tum.de/_media/events/rgbd2011/marton.pdf original presentation]).&lt;br /&gt;
File:RSD_values.png | RSD values for a cloud; points on a flat surface have maximum values (image from [http://ias.in.tum.de/_media/events/rgbd2011/marton.pdf original presentation]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is the code for computing the RSD descriptor:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rsd.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the RSD descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PrincipalRadiiRSD&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::PrincipalRadiiRSD&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// RSD estimation object.&lt;br /&gt;
	pcl::RSDEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::PrincipalRadiiRSD&amp;gt; rsd;&lt;br /&gt;
	rsd.setInputCloud(cloud);&lt;br /&gt;
	rsd.setInputNormals(normals);&lt;br /&gt;
	rsd.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	rsd.setRadiusSearch(0.05);&lt;br /&gt;
	// Plane radius. Any radius larger than this is considered infinite (a plane).&lt;br /&gt;
	rsd.setPlaneRadius(0.1);&lt;br /&gt;
	// Do we want to save the full distance-angle histograms?&lt;br /&gt;
	rsd.setSaveHistograms(false);&lt;br /&gt;
	&lt;br /&gt;
	rsd.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Optionally, you can use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setSaveHistograms()&amp;quot;&amp;lt;/span&amp;gt; function to enable the saving of the full distance-angle histograms, and then use &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getHistograms()&amp;quot;&amp;lt;/span&amp;gt; to retrieve them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius, Maximum Radius, [Subdivisions], [Save full histograms]&lt;br /&gt;
* '''Output''': RSD descriptors&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.391.7398&amp;amp;rep=rep1&amp;amp;type=pdf General 3D Modelling of Novel Objects from a Single View] (Zoltan-Csaba Marton et al., 2010)&lt;br /&gt;
** [https://ias.cs.tum.edu/_media/spezial/bib/marton11ijrr.pdf Combined 2D-3D Categorization and Classification for Multimodal Perception Systems] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_r_s_d_estimation.html pcl::RSDEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RSD.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==3DSC==&lt;br /&gt;
&lt;br /&gt;
The 3D Shape Context is a descriptor that extends the existing 2D shape context to the third dimension. It works by creating a support structure (a sphere, to be precise) centered at the point we are computing the descriptor for, with the given search radius. The &amp;quot;north pole&amp;quot; of that sphere (the notion of &amp;quot;up&amp;quot;) is aligned with the normal at that point. Then, the sphere is divided into 3D regions or bins. The divisions are equally spaced in the first 2 coordinates (azimuth and elevation), but logarithmically spaced in the third (the radial dimension), so they get smaller towards the center. A minimum radius can be specified to prevent very small bins that would be too sensitive to small changes in the surface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:3DSC_support_structure.png|thumb|center|200px|Support structure to compute the 3DSC for a point (image from [http://www.cs.jhu.edu/~misha/Papers/Frome04.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For each bin, a weighted count is accumulated for every neighboring point that lies within. The weight depends on the volume of the bin and the local point density (number of points around the current neighbor). This gives the descriptor some degree of resolution invariance.&lt;br /&gt;
&lt;br /&gt;
We have mentioned that the sphere is aligned with the normal. This still leaves one degree of freedom (only two axes have been locked; the azimuth remains free), so the descriptor so far is not invariant to rotation. To overcome this (so the same point in two different clouds gets the same value), the support sphere is rotated about the normal ''N'' times (once per azimuth division) and the process is repeated for each rotation, giving a total of ''N'' descriptors for that point.&lt;br /&gt;
&lt;br /&gt;
You can compute the 3DSC descriptor the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/3dsc.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the 3DSC descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ShapeContext1980&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::ShapeContext1980&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// 3DSC estimation object.&lt;br /&gt;
	pcl::ShapeContext3DEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::ShapeContext1980&amp;gt; sc3d;&lt;br /&gt;
	sc3d.setInputCloud(cloud);&lt;br /&gt;
	sc3d.setInputNormals(normals);&lt;br /&gt;
	sc3d.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. It will also be the radius of the support sphere.&lt;br /&gt;
	sc3d.setRadiusSearch(0.05);&lt;br /&gt;
	// The minimal radius value for the search sphere, to avoid being too sensitive&lt;br /&gt;
	// in bins close to the center of the sphere.&lt;br /&gt;
	sc3d.setMinimalRadius(0.05 / 10.0);&lt;br /&gt;
	// Radius used to compute the local point density for the neighbors&lt;br /&gt;
	// (the density is the number of points within that radius).&lt;br /&gt;
	sc3d.setPointDensityRadius(0.05 / 5.0);&lt;br /&gt;
&lt;br /&gt;
	sc3d.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius, Minimal radius, Point density radius&lt;br /&gt;
* '''Output''': 3DSC descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.cs.jhu.edu/~misha/Papers/Frome04.pdf Recognizing Objects in Range Data Using Regional Point Descriptors] (Andrea Frome et al., 2004)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_shape_context3_d_estimation.html pcl::ShapeContext3DEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_3DSC.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===USC===&lt;br /&gt;
&lt;br /&gt;
The Unique Shape Context descriptor extends the 3DSC by defining a local reference frame (LRF), in order to provide a unique orientation for each point. This not only improves the accuracy of the descriptor, but also reduces its size, as computing multiple descriptors to account for orientation is no longer necessary.&lt;br /&gt;
&lt;br /&gt;
You can check the second publication listed below to learn more about how the LRF is computed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/usc.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the USC descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::UniqueShapeContext1960&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::UniqueShapeContext1960&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// USC estimation object.&lt;br /&gt;
	pcl::UniqueShapeContext&amp;lt;pcl::PointXYZ, pcl::UniqueShapeContext1960, pcl::ReferenceFrame&amp;gt; usc;&lt;br /&gt;
	usc.setInputCloud(cloud);&lt;br /&gt;
	// Search radius, to look for neighbors. It will also be the radius of the support sphere.&lt;br /&gt;
	usc.setRadiusSearch(0.05);&lt;br /&gt;
	// The minimal radius value for the search sphere, to avoid being too sensitive&lt;br /&gt;
	// in bins close to the center of the sphere.&lt;br /&gt;
	usc.setMinimalRadius(0.05 / 10.0);&lt;br /&gt;
	// Radius used to compute the local point density for the neighbors&lt;br /&gt;
	// (the density is the number of points within that radius).&lt;br /&gt;
	usc.setPointDensityRadius(0.05 / 5.0);&lt;br /&gt;
	// Set the radius to compute the Local Reference Frame.&lt;br /&gt;
	usc.setLocalRadius(0.05);&lt;br /&gt;
&lt;br /&gt;
	usc.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk). For 1.7 and below, change UniqueShapeContext1960 to ShapeContext1980, and edit CMakeLists.txt.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius, Minimal radius, Point density radius, Local radius&lt;br /&gt;
* '''Output''': USC descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/3dor10.pdf Unique Shape Context for 3D Data Description] (Federico Tombari et al., 2010)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_unique_shape_context.html pcl::UniqueShapeContext]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_USC.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==SHOT==&lt;br /&gt;
&lt;br /&gt;
SHOT stands for Signature of Histograms of Orientations. Like 3DSC, it encodes information about the topology (surface) within a spherical support structure. This sphere is divided into 32 bins or volumes, with 8 divisions along the azimuth, 2 along the elevation, and 2 along the radius. For every volume, a one-dimensional local histogram is computed. The variable chosen is the angle between the normal of the keypoint and the normal of each point inside the volume (to be precise, its cosine, which was found to work better).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SHOT_support_structure.png|thumb|center|200px|Support structure to compute SHOT. Only 4 azimuth divisions are shown for clarity (image from [http://vision.deis.unibo.it/fede/papers/eccv10.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When all local histograms have been computed, they are stitched together into the final descriptor. Like the USC descriptor, SHOT makes use of a local reference frame, making it rotation invariant. It is also robust to noise and clutter.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/shot.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the SHOT descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// SHOT estimation object.&lt;br /&gt;
	pcl::SHOTEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::SHOT352&amp;gt; shot;&lt;br /&gt;
	shot.setInputCloud(cloud);&lt;br /&gt;
	shot.setInputNormals(normals);&lt;br /&gt;
	// The radius that defines which of the keypoint's neighbors are described.&lt;br /&gt;
	// If too large, there may be clutter, and if too small, not enough points may be found.&lt;br /&gt;
	shot.setRadiusSearch(0.02);&lt;br /&gt;
&lt;br /&gt;
	shot.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As with FPFH, a multithreading-optimized variant is available with &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOTEstimationOMP&amp;quot;&amp;lt;/span&amp;gt;, which makes use of OpenMP. You need to include the header &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;shot_omp.h&amp;quot;&amp;lt;/span&amp;gt;. Also, another variant that additionally uses color (texture) information for matching is available, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOTColorEstimation&amp;quot;&amp;lt;/span&amp;gt;, with an optimized version too (see the second publication for more details). It outputs a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOT1344&amp;quot;&amp;lt;/span&amp;gt; descriptor.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Radius&lt;br /&gt;
* '''Output''': SHOT descriptors&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/eccv10.pdf Unique Signatures of Histograms for Local Surface Description] (Federico Tombari et al., 2010)&lt;br /&gt;
** [http://www.vision.deis.unibo.it/fede/papers/icip11.pdf A Combined Texture-Shape Descriptor for Enhanced 3D Feature Matching] (Federico Tombari et al., 2011)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_estimation.html pcl::SHOTEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_color_estimation.html pcl::SHOTColorEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_estimation_o_m_p.html pcl::SHOTEstimationOMP]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_color_estimation_o_m_p.html pcl::SHOTColorEstimationOMP]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_SHOT.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Spin image==&lt;br /&gt;
&lt;br /&gt;
The Spin Image (SI) is the oldest descriptor we are going to see here. It has been around since 1997, but it still sees some use in certain applications. It was originally designed to describe surfaces made of vertices, edges and polygons, but it has since been adapted for point clouds. The descriptor is unlike all the others in that its output resembles an image, which can be compared with another one using the usual means.&lt;br /&gt;
&lt;br /&gt;
The support structure used is a cylinder, centered at the point, with a given radius and height, and aligned with the normal. This cylinder is divided radially and vertically into volumes. For each one, the number of neighbors lying inside is added up, eventually producing the descriptor. Weighting and interpolation are used to improve the result. The final descriptor can be seen as a grayscale image where dark areas correspond to volumes with higher point density.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Spin images.png|thumb|center|400px|Spin images computed for 3 points of a model (image from [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.4190&amp;amp;rep=rep1&amp;amp;type=pdf original thesis]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/spin_image.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;153&amp;gt; SpinImage;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the spin image for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;SpinImage&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;SpinImage&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Spin image estimation object.&lt;br /&gt;
	pcl::SpinImageEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, SpinImage&amp;gt; si;&lt;br /&gt;
	si.setInputCloud(cloud);&lt;br /&gt;
	si.setInputNormals(normals);&lt;br /&gt;
	// Radius of the support cylinder.&lt;br /&gt;
	si.setRadiusSearch(0.02);&lt;br /&gt;
	// Set the resolution of the spin image (the number of bins along one dimension).&lt;br /&gt;
	// Note: you must change the output histogram size to reflect this (a width&lt;br /&gt;
	// of 8 yields the 153-bin histogram typedef'd above).&lt;br /&gt;
	si.setImageWidth(8);&lt;br /&gt;
&lt;br /&gt;
	si.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Spin Image estimation object provides more methods for tuning the estimation, so checking the API is recommended.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Radius, Image resolution&lt;br /&gt;
* '''Output''': Spin images&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.4190&amp;amp;rep=rep1&amp;amp;type=pdf Spin-Images: A Representation for 3-D Surface Matching] (Andrew Edie Johnson, 1997)&lt;br /&gt;
** [http://www.cs.jhu.edu/~misha/Papers/Johnson99.pdf Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes] (Andrew Edie Johnson and Martial Hebert, 1999)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_spin_image_estimation.html pcl::SpinImageEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_spin_image.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RIFT==&lt;br /&gt;
&lt;br /&gt;
The Rotation-Invariant Feature Transform, like the spin image, takes some concepts from 2D features, in this case from the Scale-Invariant Feature Transform ([https://en.wikipedia.org/wiki/Scale-invariant_feature_transform SIFT]). It is the only descriptor seen here that requires intensity information to be computed (it can be obtained from the RGB color values). This means, of course, that you will not be able to use RIFT with plain XYZ clouds; you also need the texture.&lt;br /&gt;
&lt;br /&gt;
In the first step, a circular patch (with the given radius) is fitted to the surface the point lies on. This patch is divided into concentric rings, according to the chosen distance bin size. Then, a histogram is populated with all of the point's neighbors lying inside a sphere centered at that point with the mentioned radius, considering the distance and the orientation of the intensity gradient at each neighbor. To make the descriptor rotation invariant, the angle between the gradient orientation and the vector pointing outward from the center of the patch is measured.&lt;br /&gt;
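The rotation-invariance trick is easy to see in 2D. Below is a small self-contained sketch of it, not PCL's implementation; `Vec2` and `riftAngle` are names made up for illustration. Instead of binning the raw gradient orientation, we bin the angle between the gradient and the outward direction from the patch center, so rotating the whole patch leaves the value unchanged.

```cpp
#include <cmath>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

// Illustrative 2D sketch of RIFT's rotation invariance: measure the gradient
// orientation relative to the outward radial direction, not a fixed axis.
struct Vec2 { double x, y; };

double riftAngle(const Vec2& center, const Vec2& point, const Vec2& gradient)
{
    double outward = std::atan2(point.y - center.y, point.x - center.x);
    double grad = std::atan2(gradient.y, gradient.x);
    double a = std::fmod(grad - outward, 2.0 * M_PI);
    if (a < 0.0)
        a += 2.0 * M_PI;
    return a; // in [0, 2*pi), ready to be discretized into orientation bins
}
```

Rotating the sample point and its gradient together by any angle around the center produces the same value, which is exactly what the histogram needs.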
&lt;br /&gt;
&lt;br /&gt;
[[Image:RIFT.png|thumb|center|600px|RIFT feature values at 3 different locations in the descriptor (image from [http://www.cs.illinois.edu/~slazebni/publications/pami05.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The authors' original implementation uses 4 rings and 8 histogram orientations, which produce a descriptor of size 32. RIFT is not robust to texture flipping, though this was never considered a big issue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types_conversion.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/intensity_gradient.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rift.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;32&amp;gt; RIFT32;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud with color information.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cloudColor(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
	// Object for storing the point cloud with intensity value.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZI&amp;gt;::Ptr cloudIntensity(new pcl::PointCloud&amp;lt;pcl::PointXYZI&amp;gt;);&lt;br /&gt;
	// Object for storing the intensity gradients.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::IntensityGradient&amp;gt;::Ptr gradients(new pcl::PointCloud&amp;lt;pcl::IntensityGradient&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the RIFT descriptor for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;RIFT32&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;RIFT32&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGB&amp;gt;(argv[1], *cloudColor) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Convert the RGB to intensity.&lt;br /&gt;
	pcl::PointCloudXYZRGBtoXYZI(*cloudColor, *cloudIntensity);&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZI, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloudIntensity);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZI&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZI&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Compute the intensity gradients.&lt;br /&gt;
	pcl::IntensityGradientEstimation&amp;lt;pcl::PointXYZI, pcl::Normal, pcl::IntensityGradient,&lt;br /&gt;
		pcl::common::IntensityFieldAccessor&amp;lt;pcl::PointXYZI&amp;gt; &amp;gt; ge;&lt;br /&gt;
	ge.setInputCloud(cloudIntensity);&lt;br /&gt;
	ge.setInputNormals(normals);&lt;br /&gt;
	ge.setRadiusSearch(0.03);&lt;br /&gt;
	ge.compute(*gradients);&lt;br /&gt;
&lt;br /&gt;
	// RIFT estimation object.&lt;br /&gt;
	pcl::RIFTEstimation&amp;lt;pcl::PointXYZI, pcl::IntensityGradient, RIFT32&amp;gt; rift;&lt;br /&gt;
	rift.setInputCloud(cloudIntensity);&lt;br /&gt;
	rift.setSearchMethod(kdtree);&lt;br /&gt;
	// Set the intensity gradients to use.&lt;br /&gt;
	rift.setInputGradient(gradients);&lt;br /&gt;
	// Radius, to get all neighbors within.&lt;br /&gt;
	rift.setRadiusSearch(0.02);&lt;br /&gt;
	// Set the number of bins to use in the distance dimension.&lt;br /&gt;
	rift.setNrDistanceBins(4);&lt;br /&gt;
	// Set the number of bins to use in the gradient orientation dimension.&lt;br /&gt;
	rift.setNrGradientBins(8);&lt;br /&gt;
	// Note: you must change the output histogram size to reflect the previous values.&lt;br /&gt;
&lt;br /&gt;
	rift.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Intensity gradients, Radius, Distance bins, Gradient bins&lt;br /&gt;
* '''Output''': RIFT descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.cs.illinois.edu/~slazebni/publications/pami05.pdf A Sparse Texture Representation Using Local Affine Regions] (Svetlana Lazebnik et al., 2005)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_r_i_f_t_estimation.html pcl::RIFTEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_intensity_gradient_estimation.html pcl::IntensityGradientEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_intensity_gradient.html pcl::IntensityGradient]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RIFT.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==NARF==&lt;br /&gt;
&lt;br /&gt;
The Normal Aligned Radial Feature is the only descriptor here that does not take a point cloud as input. Instead, it works with range images. A range image is an image in which each pixel stores the distance to the corresponding point. When visualized, this distance is usually encoded as a color in the [https://en.wikipedia.org/wiki/Visible_spectrum visible light spectrum]: points closer to the camera are violet, while points near the maximum sensor range are red.&lt;br /&gt;
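The color coding can be sketched as a simple linear mapping (PCL's visualizer uses its own palette; `rangeToWavelength` is a made-up helper just to make the idea concrete):

```cpp
// Illustrative sketch of the color coding: normalize the range and map it
// linearly onto visible wavelengths, violet (~380 nm) for the closest points,
// red (~750 nm) for points at the maximum sensor range.
double rangeToWavelength(double range, double minRange, double maxRange)
{
    double t = (range - minRange) / (maxRange - minRange);
    if (t < 0.0) t = 0.0; // clamp out-of-range values
    if (t > 1.0) t = 1.0;
    return 380.0 + t * (750.0 - 380.0); // nanometers
}
```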
&lt;br /&gt;
NARF also requires us to find suitable keypoints to compute the descriptor for. NARF keypoints are located near an object's corners, which also requires finding the borders (transitions from foreground to background), something that is trivial with a range image. Because of this lengthy pipeline, I will describe the whole process in separate sections.&lt;br /&gt;
&lt;br /&gt;
===Obtaining a range image===&lt;br /&gt;
&lt;br /&gt;
Because we always work with point clouds, I will now explain how you can convert one into a range image, in order to use it for the NARF descriptor. PCL provides a couple of handy classes to perform the conversion, provided that you fill in the camera data correctly.&lt;br /&gt;
&lt;br /&gt;
A range image can be created in two ways. First, we can use spherical projection, which gives an image similar to the ones produced by a LIDAR sensor. Second, we can use planar projection, which is better suited to camera-like sensors such as the Kinect or the Xtion, and will not show the characteristic distortion of the first one.&lt;br /&gt;
&lt;br /&gt;
====Spherical projection====&lt;br /&gt;
&lt;br /&gt;
The following code will take a point cloud and create a range image from it, using spherical projection:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Parameters needed by the range image object:&lt;br /&gt;
&lt;br /&gt;
	// Angular resolution is the angular distance between pixels.&lt;br /&gt;
	// Kinect: 57° horizontal FOV, 43° vertical FOV, 640x480 (chosen here).&lt;br /&gt;
	// Xtion: 58° horizontal FOV, 45° vertical FOV, 640x480.&lt;br /&gt;
	float angularResolutionX = (float)(57.0f / 640.0f * (M_PI / 180.0f));&lt;br /&gt;
	float angularResolutionY = (float)(43.0f / 480.0f * (M_PI / 180.0f));&lt;br /&gt;
	// Maximum horizontal and vertical angles. For example, for a full panoramic scan,&lt;br /&gt;
	// the first would be 360°. Choosing values that match the real sensor will&lt;br /&gt;
	// decrease the time it takes, but don't worry. If the values are bigger than&lt;br /&gt;
	// the real ones, the image will be automatically cropped to discard empty zones.&lt;br /&gt;
	float maxAngleX = (float)(60.0f * (M_PI / 180.0f));&lt;br /&gt;
	float maxAngleY = (float)(50.0f * (M_PI / 180.0f));&lt;br /&gt;
	// Sensor pose. Thankfully, the cloud includes the data.&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	// Noise level. If greater than 0, values of neighboring points will be averaged.&lt;br /&gt;
	// This would set the search radius (e.g., 0.03 == 3cm).&lt;br /&gt;
	float noiseLevel = 0.0f;&lt;br /&gt;
	// Minimum range. If set, any point closer to the sensor than this will be ignored.&lt;br /&gt;
	float minimumRange = 0.0f;&lt;br /&gt;
	// Border size. If greater than 0, a border of &amp;quot;unobserved&amp;quot; points will be left&lt;br /&gt;
	// in the image when it is cropped.&lt;br /&gt;
	int borderSize = 1;&lt;br /&gt;
&lt;br /&gt;
	// Range image object.&lt;br /&gt;
	pcl::RangeImage rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloud(*cloud, angularResolutionX, angularResolutionY,&lt;br /&gt;
									maxAngleX, maxAngleY, sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
									noiseLevel, minimumRange, borderSize);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the image.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;Range image&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImage);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
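The angular resolution values in the snippet are just the field of view divided by the pixel count, converted to radians. A tiny helper (a made-up name, for illustration) makes that arithmetic explicit:

```cpp
#include <cmath>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

// Angular distance between adjacent pixels: FOV / pixels, in radians.
double angularResolution(double fovDegrees, int pixels)
{
    return fovDegrees / pixels * (M_PI / 180.0);
}
```

For the Kinect values above, 57° / 640 px ≈ 0.00155 rad per pixel horizontally; note the vertical resolution (43° / 480 px) is slightly coarser.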
&lt;br /&gt;
Here you can see an example of the output range image:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_spherical.png|thumb|center|400px|Range image of a point cloud, using spherical projection.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Angular resolution, Maximum angle, Sensor pose, Coordinate frame, Noise level, Maximum range, Border size&lt;br /&gt;
* '''Output''': Range image (spherical projection)&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_creation.php How to create a range image from a point cloud]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_visualization.php How to visualize a range image]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image.html pcl::RangeImage]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_spherical.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Planar projection====&lt;br /&gt;
&lt;br /&gt;
As mentioned, planar projection will give better results with clouds taken from a depth camera:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Parameters needed by the planar range image object:&lt;br /&gt;
&lt;br /&gt;
	// Image size. Both Kinect and Xtion work at 640x480.&lt;br /&gt;
	int imageSizeX = 640;&lt;br /&gt;
	int imageSizeY = 480;&lt;br /&gt;
	// Center of projection. Here, we choose the middle of the image.&lt;br /&gt;
	float centerX = 640.0f / 2.0f;&lt;br /&gt;
	float centerY = 480.0f / 2.0f;&lt;br /&gt;
	// Focal length. 525 is the standard value (in pixels) for Kinect depth images.&lt;br /&gt;
	// It is safe to use the same value vertically and horizontally.&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	// Sensor pose. Thankfully, the cloud includes the data.&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	// Noise level. If greater than 0, values of neighboring points will be averaged.&lt;br /&gt;
	// This would set the search radius (e.g., 0.03 == 3cm).&lt;br /&gt;
	float noiseLevel = 0.0f;&lt;br /&gt;
	// Minimum range. If set, any point closer to the sensor than this will be ignored.&lt;br /&gt;
	float minimumRange = 0.0f;&lt;br /&gt;
&lt;br /&gt;
	// Planar range image object.&lt;br /&gt;
	pcl::RangeImagePlanar rangeImagePlanar;&lt;br /&gt;
	rangeImagePlanar.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the image.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;Planar range image&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImagePlanar);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_planar.png|thumb|center|400px|Range image of a point cloud, using planar projection.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Image size, Projection center, Focal length, Sensor pose, Coordinate frame, Noise level, Maximum range&lt;br /&gt;
* '''Output''': Range image (planar projection)&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_creation.php How to create a range image from a point cloud]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_visualization.php How to visualize a range image]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image_planar.html pcl::RangeImagePlanar]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_planar.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you prefer to do the conversion in real time while you inspect the cloud, PCL ships with an [https://github.com/PointCloudLibrary/pcl/tree/master/doc/tutorials/content/sources/openni_range_image_visualization example] that fetches an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;openni_wrapper::DepthImage&amp;quot;&amp;lt;/span&amp;gt; from an OpenNI device and creates the range image from it. You can adapt the code of the [[PCL/OpenNI_tutorial_1:_Installing_and_testing#Testing | example]] from tutorial 1 to save it to disk with the function [http://docs.pointclouds.org/trunk/group__io.html#ga7291a029cdcde32ca3639d07dc6491b9 pcl::io::saveRangeImagePlanarFilePNG()].&lt;br /&gt;
&lt;br /&gt;
===Extracting borders===&lt;br /&gt;
&lt;br /&gt;
NARF keypoints are located near the edges of objects in the range image, so in order to find them, we first have to extract the borders. A border is defined as an abrupt change from foreground to background. In a range image, this can be easily seen because there is a &amp;quot;jump&amp;quot; in the depth value of two adjacent pixels.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_border_detection.png|thumb|center|400px|Border detection on a range image (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are three types of borders. ''Object borders'' consist of the pixels (or points) located on the very edge of an object (the outermost points still belonging to the object). ''Shadow borders'' are points in the background on the edge of occlusions (empty areas in the background due to the objects in front covering them). Notice that, when the cloud is seen from the sensor's viewpoint, object and shadow points will seem adjacent. Finally, ''veil points'' are points interpolated between the previous two which appear in scans taken with LIDAR sensors, so we do not have to worry about them here.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:Range_image_border_type.png | Types of borders (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
File:Range_image_borders_example.png | Example of object and shadow borders on a cloud.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The algorithm basically compares a point's depth with the values of its neighbors, and if a big difference is found, we know it is due to a border. Points closer to the sensor will be marked as object borders, and the other ones as shadow borders.&lt;br /&gt;
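The comparison can be illustrated on a single row of range values. This toy sketch (made-up names and threshold, not PCL's actual scoring) labels a pair of adjacent pixels with a large depth jump as object/shadow border, nearer pixel to the object:

```cpp
#include <cmath>
#include <string>
#include <vector>

// Toy border classification on one row of a range image: a jump larger than
// the threshold marks a border; the closer point is an object border, the
// farther one a shadow border.
std::vector<std::string> classifyRow(const std::vector<double>& ranges, double jump)
{
    std::vector<std::string> labels(ranges.size(), "surface");
    for (size_t i = 0; i + 1 < ranges.size(); ++i)
    {
        double diff = ranges[i + 1] - ranges[i];
        if (std::fabs(diff) > jump)
        {
            if (diff > 0) { labels[i] = "object"; labels[i + 1] = "shadow"; }
            else          { labels[i] = "shadow"; labels[i + 1] = "object"; }
        }
    }
    return labels;
}
```

The real extractor scores borders with a probability and handles noise and unobserved pixels, but the core test is this depth discontinuity.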
&lt;br /&gt;
PCL provides a class for extracting borders of a range image:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the borders.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::BorderDescription&amp;gt;::Ptr borders(new pcl::PointCloud&amp;lt;pcl::BorderDescription&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Border extractor object.&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor(&amp;amp;rangeImage);&lt;br /&gt;
&lt;br /&gt;
	borderExtractor.compute(*borders);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the borders.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer* viewer = NULL;&lt;br /&gt;
	viewer = pcl::visualization::RangeImageVisualizer::getRangeImageBordersWidget(rangeImage,&lt;br /&gt;
			 -std::numeric_limits&amp;lt;float&amp;gt;::infinity(),&lt;br /&gt;
			 std::numeric_limits&amp;lt;float&amp;gt;::infinity(),&lt;br /&gt;
			 false, *borders, &amp;quot;Borders&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_borders.png|thumb|center|400px|Borders found on the range image.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can use the extractor's &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getParameters()&amp;quot;&amp;lt;/span&amp;gt; function to get a [http://docs.pointclouds.org/trunk/structpcl_1_1_range_image_border_extractor_1_1_parameters.html pcl::RangeImageBorderExtractor::Parameters] struct with the settings that will be used.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image&lt;br /&gt;
* '''Output''': Borders&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/range_image_border_extraction.php How to extract borders from range images]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image_border_extractor.html pcl::RangeImageBorderExtractor]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_borders.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Finding keypoints===&lt;br /&gt;
&lt;br /&gt;
Citing the [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original publication]:&lt;br /&gt;
&lt;br /&gt;
''&amp;quot;We have the following requirements for our interest point extraction procedure:&lt;br /&gt;
&lt;br /&gt;
# ''It must take information about borders and the surface structure into account.''&lt;br /&gt;
# ''It must select positions that can be reliably detected even if the object is observed from another perspective.''&lt;br /&gt;
# ''The points must be on positions that provide stable areas for normal estimation or the descriptor calculation in general.&amp;quot;''&lt;br /&gt;
&lt;br /&gt;
The procedure is the following: for every point in the range image, a score is computed that conveys how much the surface changes in its neighborhood (this is tuned with the ''support size'' σ, which is the diameter of the sphere used to find neighboring points). Also, the dominant direction of this change is computed. Then, this direction is compared with those of the neighbors, trying to find how stable the point is (if the directions are very different, the point is not stable, and the surface around it changes a lot). Points that are near the object's corners (but not exactly on the very edge) make good keypoints: distinctive, yet stable enough.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:NARF_keypoints.png | NARF keypoints (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
File:NARF_keypoints_support_sizes.png | Interest regions with a support size of 20cm (top) and 1m (bottom) (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In PCL, NARF keypoints can be found this way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/narf_keypoint.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the keypoints' indices.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt;::Ptr keypoints(new pcl::PointCloud&amp;lt;int&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor;&lt;br /&gt;
	// Keypoint detection object.&lt;br /&gt;
	pcl::NarfKeypoint detector(&amp;amp;borderExtractor);&lt;br /&gt;
	detector.setRangeImage(&amp;amp;rangeImage);&lt;br /&gt;
	// The support size influences how big the surface of interest will be,&lt;br /&gt;
	// when finding keypoints from the border information.&lt;br /&gt;
	detector.getParameters().support_size = 0.2f;&lt;br /&gt;
&lt;br /&gt;
	detector.compute(*keypoints);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the keypoints.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;NARF keypoints&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImage);&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; keypoints-&amp;gt;points.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		viewer.markPoint(keypoints-&amp;gt;points[i] % rangeImage.width,&lt;br /&gt;
						 keypoints-&amp;gt;points[i] / rangeImage.width,&lt;br /&gt;
						 // Set the color of the pixel to red (the background&lt;br /&gt;
						 // circle is already that color). All other parameters&lt;br /&gt;
						 // are left untouched, check the API for more options.&lt;br /&gt;
						 pcl::visualization::Vector3ub(255, 0, 0));&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_NARF_keypoints.png|thumb|center|400px|NARF keypoints found on the range image.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image, Border extractor, Support Size&lt;br /&gt;
* '''Output''': NARF keypoints&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/narf_keypoint_extraction.php How to extract NARF keypoint from a range image]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_narf_keypoint.html pcl::NarfKeypoint]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_NARF_keypoints.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing the descriptor===&lt;br /&gt;
&lt;br /&gt;
We have created the range image from a point cloud, and we have extracted the borders in order to find good keypoints. Now it is time to compute the NARF descriptor for each keypoint.&lt;br /&gt;
&lt;br /&gt;
The NARF descriptor encodes information about surface changes around a point. First, a local range patch is created around the point: a small range image centered at that point, aligned with its normal (as if we were looking at the point along the normal). Then, a star pattern with ''n'' beams is overlaid onto the patch, also centered at the point. For every beam, a value is computed that reflects how much the surface under it changes; the stronger the change, and the closer to the center it occurs, the higher the final value. The ''n'' resulting values compose the final descriptor.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:NARF_descriptor.png|thumb|center|600px|Computing the NARF descriptor for a keypoint (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The descriptor as computed so far is not invariant to rotations around the normal. To achieve this invariance, the full 360 degrees are binned into a histogram, with the value of each bin computed from the descriptor values according to the angle. The bin with the highest value is considered the dominant orientation, and the descriptor is shifted according to it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/narf_keypoint.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/narf_descriptor.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the keypoints' indices.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt;::Ptr keypoints(new pcl::PointCloud&amp;lt;int&amp;gt;);&lt;br /&gt;
	// Object for storing the NARF descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Narf36&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::Narf36&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Extract the keypoints.&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor;&lt;br /&gt;
	pcl::NarfKeypoint detector(&amp;amp;borderExtractor);&lt;br /&gt;
	detector.setRangeImage(&amp;amp;rangeImage);&lt;br /&gt;
	detector.getParameters().support_size = 0.2f;&lt;br /&gt;
	detector.compute(*keypoints);&lt;br /&gt;
&lt;br /&gt;
	// The NARF estimator needs the indices in a vector, not a cloud.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; keypoints2;&lt;br /&gt;
	keypoints2.resize(keypoints-&amp;gt;points.size());&lt;br /&gt;
	for (unsigned int i = 0; i &amp;lt; keypoints-&amp;gt;size(); ++i)&lt;br /&gt;
		keypoints2[i] = keypoints-&amp;gt;points[i];&lt;br /&gt;
	// NARF estimation object.&lt;br /&gt;
	pcl::NarfDescriptor narf(&amp;amp;rangeImage, &amp;amp;keypoints2);&lt;br /&gt;
	// Support size: choose the same value you used for keypoint extraction.&lt;br /&gt;
	narf.getParameters().support_size = 0.2f;&lt;br /&gt;
	// If true, the rotation invariant version of NARF will be used. The histogram&lt;br /&gt;
	// will be shifted according to the dominant orientation to provide robustness to&lt;br /&gt;
	// rotations around the normal.&lt;br /&gt;
	narf.getParameters().rotation_invariant = true;&lt;br /&gt;
&lt;br /&gt;
	narf.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image, Key points, Support Size&lt;br /&gt;
* '''Output''': NARF descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/narf_feature_extraction.php How to extract NARF Features from a range image]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_narf_descriptor.html pcl::NarfDescriptor]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_NARF_descriptor.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RoPS==&lt;br /&gt;
&lt;br /&gt;
The Rotational Projection Statistics (RoPS) feature is a bit different from the other descriptors because it works with a triangle mesh, so a previous triangulation step is needed for generating this mesh from the cloud. Apart from that, most concepts are similar.&lt;br /&gt;
&lt;br /&gt;
In order to compute RoPS for a keypoint, the local surface is cropped according to a support radius, so only points and triangles lying inside are taken into account. Then, a local reference frame (LRF) is computed, giving the descriptor its rotational invariance. A coordinate system is created with the point as the origin, and the axes aligned with the LRF. Then, for every axis, several steps are performed.&lt;br /&gt;
&lt;br /&gt;
First, the local surface is rotated around the current axis; the rotation angle is determined by a parameter that sets the number of rotations. Then, all points in the local surface are projected onto the XY, XZ and YZ planes. For each projection, statistical information about the distribution of the projected points is computed, and the results are concatenated to form the final descriptor.&lt;br /&gt;
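As an illustration of the idea, here is one sub-step sketched in isolation, in plain C++ without PCL (the real descriptor repeats this for several angles around all three axes, and then computes statistics of each projected distribution):&lt;br /&gt;

```cpp
#include <array>
#include <cmath>

// One RoPS sub-step, sketched: rotate a point around the Z axis by a given
// angle, then project it onto the XY plane by dropping the Z coordinate.
inline std::array<float, 2> rotateZProjectXY(const std::array<float, 3>& p, float angle)
{
	float c = std::cos(angle), s = std::sin(angle);
	return {c * p[0] - s * p[1],   // Rotated X.
	        s * p[0] + c * p[1]};  // Rotated Y; Z is discarded by the projection.
}
```

For example, rotating the point (1, 0, 0) by 90 degrees and projecting it yields (0, 1) on the XY plane.&lt;br /&gt;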
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types_conversion.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/gp3.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rops_estimation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;135&amp;gt; ROPS135;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing both the points and the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr cloudNormals(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
	// Object for storing the ROPS descriptor for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;ROPS135&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;ROPS135&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Perform triangulation.&lt;br /&gt;
	pcl::concatenateFields(*cloud, *normals, *cloudNormals);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointNormal&amp;gt;::Ptr kdtree2(new pcl::search::KdTree&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
	kdtree2-&amp;gt;setInputCloud(cloudNormals);&lt;br /&gt;
	pcl::GreedyProjectionTriangulation&amp;lt;pcl::PointNormal&amp;gt; triangulation;&lt;br /&gt;
	pcl::PolygonMesh triangles;&lt;br /&gt;
	triangulation.setSearchRadius(0.025);&lt;br /&gt;
	triangulation.setMu(2.5);&lt;br /&gt;
	triangulation.setMaximumNearestNeighbors(100);&lt;br /&gt;
	triangulation.setMaximumSurfaceAngle(M_PI / 4); // 45 degrees.&lt;br /&gt;
	triangulation.setNormalConsistency(false);&lt;br /&gt;
	triangulation.setMinimumAngle(M_PI / 18); // 10 degrees.&lt;br /&gt;
	triangulation.setMaximumAngle(2 * M_PI / 3); // 120 degrees.&lt;br /&gt;
	triangulation.setInputCloud(cloudNormals);&lt;br /&gt;
	triangulation.setSearchMethod(kdtree2);&lt;br /&gt;
	triangulation.reconstruct(triangles);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should only compute descriptors for chosen keypoints. It has&lt;br /&gt;
	// been omitted here for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// RoPs estimation object.&lt;br /&gt;
	pcl::ROPSEstimation&amp;lt;pcl::PointXYZ, ROPS135&amp;gt; rops;&lt;br /&gt;
	rops.setInputCloud(cloud);&lt;br /&gt;
	rops.setSearchMethod(kdtree);&lt;br /&gt;
	rops.setRadiusSearch(0.03);&lt;br /&gt;
	rops.setTriangles(triangles.polygons);&lt;br /&gt;
	// Number of partition bins that is used for distribution matrix calculation.&lt;br /&gt;
	rops.setNumberOfPartitionBins(5);&lt;br /&gt;
	// The greater the number of rotations is, the bigger the resulting descriptor.&lt;br /&gt;
	// Make sure to change the histogram size accordingly.&lt;br /&gt;
	rops.setNumberOfRotations(3);&lt;br /&gt;
	// Support radius that is used to crop the local surface of the point.&lt;br /&gt;
	rops.setSupportRadius(0.025);&lt;br /&gt;
&lt;br /&gt;
	rops.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Triangles, Search method, Support radius, Number of rotations, Number of partition bins&lt;br /&gt;
* '''Output''': RoPS descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://arxiv.org/pdf/1304.3192v1.pdf Rotational Projection Statistics for 3D Local Surface Description and Object Recognition] (Yulan Guo et al., 2013)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_r_o_p_s_estimation.html pcl::ROPSEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RoPS.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Global descriptors=&lt;br /&gt;
&lt;br /&gt;
Global descriptors encode object geometry. They are not computed for individual points, but for a whole cluster that represents an object. Because of this, a preprocessing step ([[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Segmentation|segmentation]]) is required, in order to retrieve possible candidates.&lt;br /&gt;
&lt;br /&gt;
Global descriptors are used for object recognition and classification, geometric analysis (object type, shape...), and pose estimation.&lt;br /&gt;
&lt;br /&gt;
Note that many local descriptors can also be used as global ones. This works with descriptors that use a radius to search for neighbors (as PFH does): the trick is to compute the descriptor for one single point of the object cluster, setting the radius to the maximum possible distance between any two points, so that all points in the cluster are considered neighbors.&lt;br /&gt;
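The radius for that trick can be taken as the maximum pairwise distance in the cluster. A minimal sketch in plain C++ (no PCL; the point representation and function name are made up for illustration):&lt;br /&gt;

```cpp
#include <array>
#include <cmath>
#include <vector>

// Maximum Euclidean distance between any two points of a cluster.
// Using this value as the search radius of a radius-based local descriptor
// (e.g. PFH) makes every point a neighbor of every other point, so a single
// estimation at one point effectively describes the whole cluster.
inline float maxPairwiseDistance(const std::vector<std::array<float, 3>>& cluster)
{
	float maxSq = 0.0f;
	for (std::size_t i = 0; i < cluster.size(); ++i)
		for (std::size_t j = i + 1; j < cluster.size(); ++j)
		{
			float dx = cluster[i][0] - cluster[j][0];
			float dy = cluster[i][1] - cluster[j][1];
			float dz = cluster[i][2] - cluster[j][2];
			float sq = dx * dx + dy * dy + dz * dz;
			if (sq > maxSq)
				maxSq = sq;
		}
	return std::sqrt(maxSq);
}
```

The O(n²) loop is fine for a single object cluster; for very large clusters, the diameter of the cluster's bounding box is a cheap upper bound that works just as well as a radius.&lt;br /&gt;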
&lt;br /&gt;
==VFH==&lt;br /&gt;
&lt;br /&gt;
The Viewpoint Feature Histogram is based on the FPFH. Because the latter is invariant to the object's pose, the authors decided to expand it by including information about the viewpoint. Also, the FPFH is estimated once for the whole cluster, not for every point.&lt;br /&gt;
&lt;br /&gt;
The VFH is made up of two parts: a viewpoint direction component and an extended FPFH component. To compute the first one, the object's [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Computing_the_centroid|centroid]] is found, which is the point that results from averaging the X, Y and Z coordinates of all points. Then, the vector between the viewpoint (the position of the sensor) and this centroid is computed and normalized. Finally, for every point in the cluster, the angle between this vector and the point's normal is calculated, and the result is binned into a histogram. The vector is translated to each point when computing the angle, which makes the descriptor scale invariant.&lt;br /&gt;
&lt;br /&gt;
The second component is computed like the FPFH (which results in 3 histograms for the 3 angular features, α, φ and θ), with some differences: it is only computed for the centroid, using the computed viewpoint direction vector as its normal (as the centroid, obviously, does not have a normal of its own), and setting all the cluster's points as its neighbors.&lt;br /&gt;
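The angle binning of the viewpoint component can be sketched as follows. This is a simplified illustration in plain C++, not PCL's actual implementation (which differs in details such as the exact bin boundaries):&lt;br /&gt;

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Histogram bin for the angle between the normalized centroid-to-viewpoint
// direction and a point's normal, for a histogram of 'bins' bins over [0, pi].
inline int viewpointAngleBin(const std::array<float, 3>& direction,
                             const std::array<float, 3>& normal, int bins)
{
	float dot = direction[0] * normal[0] + direction[1] * normal[1] + direction[2] * normal[2];
	dot = std::max(-1.0f, std::min(1.0f, dot)); // Clamp against rounding errors.
	float angle = std::acos(dot);               // In [0, pi].
	int bin = static_cast<int>(angle / 3.14159265358979f * bins);
	return std::min(bin, bins - 1);             // An angle of exactly pi falls in the last bin.
}
```

A normal parallel to the viewpoint direction falls in the first bin, a perpendicular one in the middle bin, and an opposing one in the last bin.&lt;br /&gt;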
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:VFH_viewpoint_component.png | Viewpoint component of the VFH (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).&lt;br /&gt;
File:VFH_extended_FPFH_component.png | Extended FPFH component of the VFH (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The resulting 4 histograms (1 for the viewpoint component, 3 for the extended FPFH component) are concatenated to build the final VFH descriptor. By default, the bins are normalized using the total number of points in the cluster. This makes the VFH descriptor invariant to scale.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:VFH_histogram.png|thumb|center|400px| VFH histogram (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The PCL implementation computes an additional fifth histogram with the distances of the cluster points to the centroid (the Shape Distribution Component, SDC), increasing the size of the output descriptor from 263 to 308. The SDC is taken from the CVFH descriptor that we will see in the next section, and makes the result more robust.&lt;br /&gt;
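As a quick sanity check of those sizes, assuming PCL's bin counts (45 bins for each of the 3 angular histograms and for the SDC, and 128 bins for the viewpoint component — these counts are an assumption drawn from the PCL defaults, not stated above):&lt;br /&gt;

```cpp
// How the 263- and 308-bin sizes of the VFH descriptor add up.
const int kAngleHistograms  = 3;    // Extended FPFH component: alpha, phi, theta.
const int kBinsPerHistogram = 45;   // Bins in each angular histogram.
const int kViewpointBins    = 128;  // Viewpoint direction component.
const int kSDCBins          = 45;   // Shape Distribution Component (distances to centroid).

// Original VFH: 3 * 45 + 128 = 263 bins.
const int kOriginalVFHSize = kAngleHistograms * kBinsPerHistogram + kViewpointBins;
// PCL's VFH with the SDC appended: 263 + 45 = 308 bins (pcl::VFHSignature308).
const int kPCLVFHSize = kOriginalVFHSize + kSDCBins;
```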
&lt;br /&gt;
The VFH of an already clustered object can be computed this way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the VFH descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// VFH estimation object.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Optionally, we can normalize the bins of the resulting histogram,&lt;br /&gt;
	// using the total number of points.&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	// Also, we can normalize the SDC with the maximum size found between&lt;br /&gt;
	// the centroid and any of the cluster's points.&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Because only one VFH descriptor is computed for the whole cluster, the size of the cloud object that stores the result will be 1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, [Normalize bins], [Normalize SDC]&lt;br /&gt;
* '''Output''': VFH descriptor&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/vfh_estimation.php Estimating VFH signatures for a set of points]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf Fast 3D Recognition and Pose Using the Viewpoint Feature Histogram] (Radu Bogdan Rusu et al., 2010)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_v_f_h_estimation.html pcl::VFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_VFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===CVFH===&lt;br /&gt;
&lt;br /&gt;
The original VFH descriptor is not robust to occlusion, other sensor artifacts, or measurement errors. If the object cluster is missing many points, the computed centroid will differ from the original one, altering the final descriptor and preventing a positive match from being found. Because of that, the Clustered Viewpoint Feature Histogram (CVFH) was introduced.&lt;br /&gt;
&lt;br /&gt;
The idea is very simple: instead of computing a single VFH histogram for the whole cluster, the object is first divided into stable, smooth regions using [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Region_growing | region-growing segmentation]], which enforces several constraints on the distances and normal differences of the points belonging to each region. Then, a VFH is computed for every region. Thanks to this, an object can be found in a scene as long as at least one of its regions is fully visible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:CVFH_occlusion.png | Typical occlusion issues in a point cloud (image from original paper).&lt;br /&gt;
File:CVFH_regions.png | Object regions computed for the CVFH (image from original paper).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additionally, a Shape Distribution Component (SDC) is also computed and included. It encodes information about the distribution of the points around the region's centroid, measuring the distances. The SDC makes it possible to differentiate objects with similar characteristics (size and normal distribution), such as two planar surfaces from each other.&lt;br /&gt;
&lt;br /&gt;
The authors proposed to discard the histogram normalization step that is performed in VFH. This makes the descriptor dependent on scale, so an object of a certain size will not match a bigger or smaller copy of itself. It also makes CVFH more robust to occlusion.&lt;br /&gt;
&lt;br /&gt;
Like most global descriptors, CVFH is invariant to the camera roll angle: rotations about that camera axis do not change the observable geometry that descriptors are computed from, which limits the pose estimation to 5 DoF. The use of a Camera Roll Histogram (CRH) has been proposed to overcome this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/cvfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the CVFH descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// CVFH estimation object.&lt;br /&gt;
	pcl::CVFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; cvfh;&lt;br /&gt;
	cvfh.setInputCloud(object);&lt;br /&gt;
	cvfh.setInputNormals(normals);&lt;br /&gt;
	cvfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Set the maximum allowable deviation of the normals,&lt;br /&gt;
	// for the region segmentation step.&lt;br /&gt;
	cvfh.setEPSAngleThreshold(5.0 / 180.0 * M_PI); // 5 degrees.&lt;br /&gt;
	// Set the curvature threshold (maximum disparity between curvatures),&lt;br /&gt;
	// for the region segmentation step.&lt;br /&gt;
	cvfh.setCurvatureThreshold(1.0);&lt;br /&gt;
	// Set to true to normalize the bins of the resulting histogram,&lt;br /&gt;
	// using the total number of points. Note: enabling it will make CVFH&lt;br /&gt;
	// invariant to scale just like VFH, but the authors encourage the opposite.&lt;br /&gt;
	cvfh.setNormalizeBins(false);&lt;br /&gt;
&lt;br /&gt;
	cvfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can further customize the segmentation step with &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setClusterTolerance()&amp;quot;&amp;lt;/span&amp;gt; (to set the maximum Euclidean distance between points in the same cluster) and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setMinPoints()&amp;quot;&amp;lt;/span&amp;gt;. The size of the output will be equal to the number of regions the object was divided into. Also, check the functions &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getCentroidClusters()&amp;quot;&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getCentroidNormalClusters()&amp;quot;&amp;lt;/span&amp;gt;; you can use them to retrieve the centroids used to compute the different CVFH descriptors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Angle threshold, Curvature threshold, [Normalize bins], [Cluster tolerance], [Minimum points]&lt;br /&gt;
* '''Output''': CVFH descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6130296 CAD-Model Recognition and 6DOF Pose Estimation Using 3D Cues] (requires IEEE Xplore subscription) (Aitor Aldoma et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_c_v_f_h_estimation.html pcl::CVFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_CVFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====OUR-CVFH====&lt;br /&gt;
&lt;br /&gt;
The Oriented, Unique and Repeatable CVFH expands the previous descriptor by adding the computation of a unique reference frame, which makes it more robust.&lt;br /&gt;
&lt;br /&gt;
OUR-CVFH relies on the use of Semi-Global Unique Reference Frames (SGURFs), which are repeatable coordinate systems computed for each region. Not only do they remove the invariance to camera roll and allow the 6DoF pose to be extracted directly without additional steps, but they also improve the spatial descriptiveness.&lt;br /&gt;
&lt;br /&gt;
The first part of the computation is akin to CVFH, but after segmentation, the points in each region are filtered once more according to the difference between their normals and the region's average normal. This results in better shaped regions, improving the estimation of the Reference Frames (RFs).&lt;br /&gt;
&lt;br /&gt;
After this, the SGURF is computed for each region. Disambiguation is performed to decide the sign of the axes, according to the points' distribution. If this is not enough and the sign remains ambiguous, multiple RFs will need to be created to account for it. Finally, the OUR-CVFH descriptor is computed. The original Shape Distribution Component (SDC) is discarded, and the surface is now described according to the RFs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:OUR-CVFH.png|thumb|center|600px| SGURF frame and resulting histogram of a region (image from [http://vision.deis.unibo.it/fede/papers/dagm12.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/our_cvfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the OUR-CVFH descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// OUR-CVFH estimation object.&lt;br /&gt;
	pcl::OURCVFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; ourcvfh;&lt;br /&gt;
	ourcvfh.setInputCloud(object);&lt;br /&gt;
	ourcvfh.setInputNormals(normals);&lt;br /&gt;
	ourcvfh.setSearchMethod(kdtree);&lt;br /&gt;
	ourcvfh.setEPSAngleThreshold(5.0 / 180.0 * M_PI); // 5 degrees.&lt;br /&gt;
	ourcvfh.setCurvatureThreshold(1.0);&lt;br /&gt;
	ourcvfh.setNormalizeBins(false);&lt;br /&gt;
	// Set the minimum axis ratio between the SGURF axes. At the disambiguation phase,&lt;br /&gt;
	// this will decide if additional Reference Frames need to be created, if ambiguous.&lt;br /&gt;
	ourcvfh.setAxisRatio(0.8);&lt;br /&gt;
&lt;br /&gt;
	ourcvfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getTransforms()&amp;quot;&amp;lt;/span&amp;gt; function to get the transformations aligning the cloud to the corresponding SGURF.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Angle threshold, Curvature threshold, [Normalize bins], [Cluster tolerance], [Minimum points], [Axis ratio]&lt;br /&gt;
* '''Output''': OUR-CVFH descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/dagm12.pdf OUR-CVFH – Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram for Object Recognition and 6DOF Pose Estimation] (Aitor Aldoma et al., 2012)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_o_u_r_c_v_f_h_estimation.html pcl::OURCVFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_OUR-CVFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==ESF==&lt;br /&gt;
&lt;br /&gt;
The Ensemble of Shape Functions (ESF) is a combination of 3 different shape functions that describe certain properties of the cloud's points: distances, angles and area. This descriptor is unusual in that it does not require normal information; in fact, it does not need any preprocessing at all, as it is robust to noise and incomplete surfaces.&lt;br /&gt;
&lt;br /&gt;
The algorithm uses a voxel grid as an approximation of the real surface. It iterates through all the points in the cloud: for every iteration, 3 random points are chosen. For these points, the shape functions are computed:&lt;br /&gt;
&lt;br /&gt;
* '''D2''': this function computes the distances between point pairs (3 overall). Then, for every pair, it checks if the line that connects both points lies entirely inside the surface, entirely outside (crossing free space), or both. Depending on this, the distance value will be binned to one of three possible histograms: IN, OUT or MIXED.&lt;br /&gt;
* '''D2 ratio''': an additional histogram for the ratio between the parts of the line inside the surface and the parts outside. This value will be 0 if the line is completely outside, 1 if completely inside, and some value in between if mixed.&lt;br /&gt;
* '''D3''': this computes the square root of the area of the triangle formed by the 3 points. Like D2, the result is also classified as IN, OUT or MIXED, each with its own histogram.&lt;br /&gt;
* '''A3''': this function computes the angle formed by the points. Then, the value is binned depending on how the line opposite to the angle is (once again, as IN, OUT or MIXED).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ESF.png|thumb|center|600px| ESF descriptor (image from [http://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf this paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After the loop is over, we are left with 10 subhistograms (IN, OUT and MIXED for D2, D3 and A3, and an additional one for the ratio). Each one has 64 bins, so the size of the final ESF descriptor is 640.&lt;br /&gt;
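To make that layout concrete, here is a minimal stand-alone sketch (not PCL code) of how a 640-value ESF descriptor can be indexed, assuming the 10 subhistograms of 64 bins each are simply concatenated; the ordering PCL uses internally may differ.&lt;br /&gt;

```cpp
#include <cassert>
#include <cstddef>

// Illustrative sketch only: flat index into a concatenated 640-bin ESF
// descriptor, given a subhistogram (0-9) and a bin within it (0-63).
// PCL's actual internal bin ordering may differ.
std::size_t esfFlatIndex(std::size_t subhistogram, std::size_t bin)
{
	const std::size_t kBinsPerSubhistogram = 64;
	const std::size_t kSubhistograms = 10;
	assert(subhistogram < kSubhistograms && bin < kBinsPerSubhistogram);
	return subhistogram * kBinsPerSubhistogram + bin;
}
```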
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/esf.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the ESF descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ESFSignature640&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::ESFSignature640&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// ESF estimation object.&lt;br /&gt;
	pcl::ESFEstimation&amp;lt;pcl::PointXYZ, pcl::ESFSignature640&amp;gt; esf;&lt;br /&gt;
	esf.setInputCloud(object);&lt;br /&gt;
&lt;br /&gt;
	esf.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster)&lt;br /&gt;
* '''Output''': ESF descriptor&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6181760 Ensemble of shape functions for 3D object classification] (requires IEEE Xplore subscription) (Walter Wohlkinger and Markus Vincze, 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_e_s_f_estimation.html pcl::ESFEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_ESF.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==GFPFH==&lt;br /&gt;
&lt;br /&gt;
As you may have guessed, GFPFH stands for Global Fast Point Feature Histogram, the global version of the FPFH descriptor. GFPFH was designed to help a robot navigate its environment by giving it some context about the objects around it.&lt;br /&gt;
&lt;br /&gt;
The first step before being able to compute the descriptor is surface categorization. A set of logical primitives (the classes, or categories) is created, which depends on the type of objects we expect the robot to find in the scene. For example, if we know there will be a coffee mug, we create three classes: one for the handle, and the other two for the outer and inner faces. Then, FPFH descriptors are computed, and everything is fed to a [https://en.wikipedia.org/wiki/Conditional_random_field Conditional Random Field] (CRF) algorithm. The CRF will label each surface with one of the previous categories, so we end up with a cloud where each point has been classified depending on the type of object (or object's region) it belongs to.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FPFH_CRF.png|thumb|center|300px| Classification of objects made with FPFH and CRF (image from [https://www.willowgarage.com/sites/default/files/iccv09.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now, the GFPFH descriptor can be computed with the result of the classification step. It will encode what the object is made of, so the robot can easily recognize it. First, an [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Octree | octree]] is created, dividing the object into voxel leaves. For every leaf, a set of probabilities is created, one for each class. Each one stores the probability of that leaf belonging to the class, computed from the number of points in that leaf that have been labelled with that class and the total number of points. Then, for every pair of leaves in the octree, a line is cast, connecting them. Every leaf in its path is checked for occupancy, storing the result in a histogram. If the leaf is empty (free space), a value of 0 is saved. Otherwise, the leaf probabilities are used.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:GFPFH.png|thumb|center|500px| Computing the GFPFH with a voxel grid (image from [https://www.willowgarage.com/sites/default/files/iccv09.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
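The per-leaf probabilities described above can be sketched as follows. This is an illustrative stand-alone snippet, not PCL's implementation: each class probability is simply the fraction of the leaf's points carrying that label.&lt;br /&gt;

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch (not PCL's implementation): computes, for one octree
// leaf, the probability of each class as the fraction of the leaf's points
// labelled with that class. An empty leaf (free space) yields all zeros.
std::vector<float> leafClassProbabilities(const std::vector<int>& labels,
                                          int numberOfClasses)
{
	std::vector<float> probabilities(numberOfClasses, 0.0f);
	if (labels.empty())
		return probabilities;
	for (int label : labels)
		probabilities[label] += 1.0f;
	for (float& p : probabilities)
		p /= static_cast<float>(labels.size());
	return probabilities;
}
```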
The following code will compute the GFPFH for a cloud with label information. The categorization step is up to you, as it depends largely on the type of scene and the use you are going to give it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/gfpfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZL&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZL&amp;gt;);&lt;br /&gt;
	// Object for storing the GFPFH descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::GFPFHSignature16&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::GFPFHSignature16&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZL&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you should now perform classification on the cloud's points. See the&lt;br /&gt;
	// original paper for more details. For this example, we will now consider 4&lt;br /&gt;
	// different classes, and randomly label each point as one of them.&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; object-&amp;gt;points.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		object-&amp;gt;points[i].label = 1 + i % 4;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// GFPFH estimation object.&lt;br /&gt;
	pcl::GFPFHEstimation&amp;lt;pcl::PointXYZL, pcl::PointXYZL, pcl::GFPFHSignature16&amp;gt; gfpfh;&lt;br /&gt;
	gfpfh.setInputCloud(object);&lt;br /&gt;
	// Set the object that contains the labels for each point. Thanks to the&lt;br /&gt;
	// PointXYZL type, we can use the same object we store the cloud in.&lt;br /&gt;
	gfpfh.setInputLabels(object);&lt;br /&gt;
	// Set the size of the octree leaves to 1cm (cubic).&lt;br /&gt;
	gfpfh.setOctreeLeafSize(0.01);&lt;br /&gt;
	// Set the number of classes the cloud has been labelled with (default is 16).&lt;br /&gt;
	gfpfh.setNumberOfClasses(4);&lt;br /&gt;
&lt;br /&gt;
	gfpfh.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Labels, Number of classes, Leaf size&lt;br /&gt;
* '''Output''': GFPFH descriptor&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [https://www.willowgarage.com/sites/default/files/iccv09.pdf Detecting and Segmenting Objects for Mobile Manipulation] (Radu Bogdan Rusu et al., 2009)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_g_f_p_f_h_estimation.html pcl::GFPFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_GFPFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==GRSD==&lt;br /&gt;
&lt;br /&gt;
The global version of the Radius-based Surface Descriptor works in a similar fashion to GFPFH. A voxelization and a surface categorization step are performed beforehand, labelling all surface patches according to their geometric category (plane, cylinder, edge, rim, sphere) using RSD. Then, the whole cluster is classified into one of these categories, and the GRSD descriptor is computed from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:GRSD.png|thumb|center|400px| Classification of objects for GRSD and resulting histogram (image from [https://ias.in.tum.de/_media/spezial/bib/grsd10humanoids.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To compute it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/grsd.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the GRSD descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::GRSDSignature21&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::GRSDSignature21&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// GRSD estimation object.&lt;br /&gt;
	pcl::GRSDEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::GRSDSignature21&amp;gt; grsd;&lt;br /&gt;
	grsd.setInputCloud(cloud);&lt;br /&gt;
	grsd.setInputNormals(normals);&lt;br /&gt;
	grsd.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	grsd.setRadiusSearch(0.05);&lt;br /&gt;
	&lt;br /&gt;
	grsd.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Radius&lt;br /&gt;
* '''Output''': GRSD descriptor&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [https://ias.in.tum.de/_media/spezial/bib/grsd10humanoids.pdf Hierarchical Object Geometric Categorization and Appearance Classification for Mobile Manipulation] (Zoltan-Csaba Marton et al., 2010)&lt;br /&gt;
** [https://ias.cs.tum.edu/_media/spezial/bib/marton11ijrr.pdf Combined 2D-3D Categorization and Classification for Multimodal Perception Systems] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
** [http://www.mi.t.u-tokyo.ac.jp/top/downloadpublication/36 Voxelized Shape and Color Histograms for RGB-D] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_g_r_s_d_estimation.html pcl::GRSDEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_GRSD.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Saving and loading=&lt;br /&gt;
&lt;br /&gt;
You can save a descriptor to a file just [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Writing_to_file|like with any other cloud type]]. One caveat, though. If you are using a descriptor that has its own custom type, like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt;, everything will be OK. But with descriptors that do not (where you have to use &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::Histogram&amp;lt;&amp;gt;&amp;quot;&amp;lt;/span&amp;gt;), you will get this error: &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;POINT_TYPE_NOT_PROPERLY_REGISTERED&amp;quot;''&amp;lt;/span&amp;gt;. In order to save or load from a file, PCL's IO functions need to know about the number, type and size of the fields. To solve this, you will have to properly register a new point type for your descriptor. For example, this will work for the [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#RoPS|RoPS]] descriptor example we saw earlier:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;POINT_CLOUD_REGISTER_POINT_STRUCT(ROPS135,&lt;br /&gt;
                                  (float[135], histogram, histogram)&lt;br /&gt;
)&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the previous snippet to your code (changing it as needed), and you will be able to save and load descriptors as usual.&lt;br /&gt;
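For reference, here is a sketch of the custom point type that such a macro could refer to. This struct is an assumption matching the fields listed in the macro (a single array of 135 floats, as in the RoPS example); adapt both the struct and the macro to your own descriptor size.&lt;br /&gt;

```cpp
#include <cassert>

// Hypothetical point type matching a registration macro for the RoPS
// example: a plain struct holding the 135-float histogram. Change the
// array size (and the macro's field list) to match your own descriptor.
struct ROPS135
{
	float histogram[135];

	static int descriptorSize() { return 135; }
};
```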
&lt;br /&gt;
=Visualization=&lt;br /&gt;
&lt;br /&gt;
Sometimes you may want to check a visual representation of a descriptor, perhaps to analyze the distribution of data over the different bins. Because descriptors are saved as histograms, this is trivial to do, and PCL offers a couple of classes for it.&lt;br /&gt;
&lt;br /&gt;
==PCLHistogramVisualizer==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLHistogramVisualizer&amp;quot;&amp;lt;/span&amp;gt; is the simplest way to plot a histogram. The class has little functionality, but it does its job: a single call, passing the histogram and its size, is all that is needed:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/histogram_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Clouds for storing everything.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Estimate VFH descriptor.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
&lt;br /&gt;
	// Plotter object.&lt;br /&gt;
	pcl::visualization::PCLHistogramVisualizer viewer;&lt;br /&gt;
	// We need to set the size of the descriptor beforehand.&lt;br /&gt;
	viewer.addFeatureHistogram(*descriptor, 308);&lt;br /&gt;
&lt;br /&gt;
	viewer.spin();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Histogram_visualizer_VFH.png|thumb|center|500px| VFH histogram seen with the Histogram Visualizer.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Descriptor, descriptor size, [Window size], [Background color], [Y axis range]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_histogram_visualizer.html pcl::visualization::PCLHistogramVisualizer]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_histogram_visualizer.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==PCLPlotter==&lt;br /&gt;
&lt;br /&gt;
This class has all the methods from &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLHistogramVisualizer&amp;quot;&amp;lt;/span&amp;gt; (which will be deprecated soon) plus a lot more features. The code is almost the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_plotter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Clouds for storing everything.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Estimate VFH descriptor.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
&lt;br /&gt;
	// Plotter object.&lt;br /&gt;
	pcl::visualization::PCLPlotter plotter;&lt;br /&gt;
	// We need to set the size of the descriptor beforehand.&lt;br /&gt;
	plotter.addFeatureHistogram(*descriptor, 308);&lt;br /&gt;
&lt;br /&gt;
	plotter.plot();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:PCL_plotter_VFH.png|thumb|center|500px| VFH histogram seen with the PCLPlotter class.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you have raw data (such as a vector of floats) you can use the [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_plotter.html#aaf23b2b1c2f91c517cfde387ee1b654e addHistogramData()] function to plot it as a histogram.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Descriptor, descriptor size, [Window size], [Background color], [Y axis range]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/pcl_plotter.php PCLPlotter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_plotter.html pcl::visualization::PCLPlotter]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_plotter.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==PCL Viewer==&lt;br /&gt;
&lt;br /&gt;
This program, included with PCL, will also let you open and visualize a saved descriptor. Internally, it uses [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#PCLPlotter|PCLPlotter]]. You can invoke the viewer from the command line, which makes it handy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_viewer &amp;lt;descriptor_file&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:PCL_viewer_VFH.png|thumb|center|500px| VFH histogram seen with ''pcl_viewer''.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_1:_Installing_and_testing&amp;diff=4750</id>
		<title>PCL/OpenNI tutorial 1: Installing and testing</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_1:_Installing_and_testing&amp;diff=4750"/>
				<updated>2015-11-04T19:10:34Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Kinect.jpg|thumb|right|200px|Microsoft Kinect device.]]&lt;br /&gt;
&lt;br /&gt;
This series of tutorials will explain the usage of a depth camera like [https://en.wikipedia.org/wiki/Kinect Kinect] for &amp;quot;serious&amp;quot; research purposes. As you may know, Kinect is in fact an affordable depth sensor, developed with technology from [https://en.wikipedia.org/wiki/PrimeSense PrimeSense], based on an infrared structured light method. It also has a common camera (which makes it an RGB-D device), a microphone and a motorized pivot. Its use is not limited to playing with an Xbox 360 console: you can plug it into a computer and use it like any other sensor. Many open-source drivers and frameworks are available.&lt;br /&gt;
&lt;br /&gt;
Since its release in November 2010, it has gained a lot of popularity, especially among the scientific community. Many researchers have acquired one because, despite the low cost (about $100), it has proven to be a powerful solution for depth sensing projects. Current research focuses on real-time surface mapping, object recognition and tracking, and localization. Impressive results (like the [http://research.microsoft.com/en-us/projects/surfacerecon/ KinectFusion] project from Microsoft) are already possible.&lt;br /&gt;
&lt;br /&gt;
The new [https://en.wikipedia.org/wiki/Xbox_One Xbox One] ships with an upgraded version, [https://en.wikipedia.org/wiki/Kinect_for_Xbox_One Kinect v2], with enhanced resolution, that is able to detect your facial expression, measure your heart rate, and track every one of your fingers. A PC development-ready version ([https://en.wikipedia.org/wiki/Kinect_for_Xbox_One#Kinect_for_Windows Kinect for Windows v2]) was released in July 2014, but it could only be used with the official Windows SDK (open source support [https://github.com/OpenKinect/libfreenect2 exists] but is still young). In October 2014 an adapter that lets you plug the standard Kinect v2 into a PC was released, so the development version of the sensor was discontinued in April 2015. Now you can just buy the [http://www.microsoftstore.com/store/msusa/en_US/pdp/Kinect-for-Xbox-One/productID.307499400 standard one] for the console plus the [http://www.microsoftstore.com/store/msusa/en_US/pdp/Kinect-Adapter-for-Windows/productID.308803600 adapter].&lt;br /&gt;
&lt;br /&gt;
I will explain the installation and usage of one of the &amp;quot;legacy&amp;quot; Kinect 1.0 devices with a common PC, and the possibilities it offers. I will do it in an easy-to-understand way, intended for students who have just acquired the device and want to start from scratch.&lt;br /&gt;
&lt;br /&gt;
Keep in mind that the software that we are going to use (Point Cloud Library and OpenNI drivers) will also let you use any other device like the [https://www.asus.com/3D-Sensor/Xtion_PRO/ Xtion PRO] or [https://www.asus.com/3D-Sensor/Xtion_PRO_LIVE/ Xtion PRO LIVE] from ASUS (the PRO only has a depth sensor, the PRO LIVE is an RGB-D camera) without changing a line of code.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: The tutorials are written for Linux platforms. Also, 64-bit versions seem to work better than 32-bit.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Requirements=&lt;br /&gt;
&lt;br /&gt;
You will need the following:&lt;br /&gt;
&lt;br /&gt;
* A common Kinect device, out of the box. You can buy it in your local electronics shop, or online. It also includes a free copy of ''[https://en.wikipedia.org/wiki/Kinect_Adventures! Kinect Adventures!]'', which is useless if you do not own the console. Microsoft has released a ''Kinect for Windows'' device, a normal-looking Kinect that is no longer compatible with the Xbox 360 and only works with their official SDK, intended for developers only. Also, as I stated earlier, you can use an ASUS Xtion interchangeably.&lt;br /&gt;
* A computer running Linux (Debian or Ubuntu preferably).&lt;br /&gt;
* A medium-sized room. Kinect has some limitations for depth measurement: 40 cm minimum, 8 m maximum (6 m in practice).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: ''Kinect for Windows'' may have problems working with open source drivers on Linux.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Connecting everything=&lt;br /&gt;
&lt;br /&gt;
Kinect does not work with a common USB port. Its power consumption is a bit higher because of the motor, so Microsoft came up with a connector that combines USB and power supply. Old Xbox 360 models needed a special adapter; the newer S models already have this new port. Luckily, Kinect comes with the official adapter out of the box (otherwise you would have to [http://www.amazon.com/Power-Supply-Cable-Kinect-Xbox-360/dp/B004S7GA46/ref=sr_1_1?ie=UTF8&amp;amp;qid=1378288327&amp;amp;sr=8-1 buy one]).&lt;br /&gt;
&lt;br /&gt;
Just plug the adapter into any power socket, and the USB connector into your computer. Let's check by typing this in a terminal:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;lsusb&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Output should list the following devices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;Bus 001 Device 005: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor&lt;br /&gt;
Bus 001 Device 006: ID 045e:02ad Microsoft Corp. Xbox NUI Audio&lt;br /&gt;
Bus 001 Device 007: ID 045e:02ae Microsoft Corp. Xbox NUI Camera&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are using a Xtion, you should see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;Bus 001 Device 004: ID 1d27:0601 ASUS&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;0601&amp;quot; identifies the new Xtion model, &amp;quot;0600&amp;quot; the older one. Both should work the same. But try to avoid [[PCL/OpenNI_troubleshooting#ASUS_Xtion_PRO|USB 3.0 ports]]!&lt;br /&gt;
&lt;br /&gt;
=Installing the software=&lt;br /&gt;
&lt;br /&gt;
There is more than one way to get your Kinect working on your PC, and start developing applications for it:&lt;br /&gt;
&lt;br /&gt;
* [https://www.microsoft.com/en-us/download/details.aspx?id=40278 Kinect for Windows SDK] and [https://www.microsoft.com/en-us/download/details.aspx?id=40276 Developer Toolkit]: released on June 16, 2011 as a non-commercial SDK intended for application development. Version 1.8, the last there will ever be now that Kinect v2 is out, was released in September 2013. Because it comes from Microsoft, it is obviously the easiest way to get everything working. Sadly, there is no Linux version.&lt;br /&gt;
* [https://github.com/OpenKinect/libfreenect libfreenect] library: from the [http://openkinect.org/wiki/Main_Page OpenKinect] project, it is intended to be a free and open source alternative to the official drivers. libfreenect is used by projects like [https://github.com/ofTheo/ofxKinect ofxKinect], an addon for the [http://www.openframeworks.cc/ openFrameworks] toolkit (and as of version 0.8, included in the core package) that runs on Linux and OS X. ofxKinect packs a nice example application to show the RGB and point cloud taken from Kinect.&lt;br /&gt;
* [https://github.com/PrimeSense/Sensor PrimeSense drivers]: they were released as open source after the [https://en.wikipedia.org/wiki/OpenNI OpenNI] project was created, along with the motion tracking middleware, ''NITE'', and the SDK. NI stands for Natural Interaction, and the project tried to enforce a common standard for human input using Kinect-like sensors. These official drivers are used by [http://wiki.ros.org/openni_kinect ROS] (the Robot Operating System, a massive collection of libraries and tools for robotic researchers) and [http://pointclouds.org/ PCL] (the Point Cloud Library, with everything needed for 3D point cloud processing). Sadly, version 2.0 of the OpenNI SDK dropped support for Kinect on Linux due to licensing issues, so for now PCL relies on legacy 1.x versions. Also, Apple bought PrimeSense in November 2013, and in April 2014 OpenNI's webpage was closed. The source is now being maintained by [http://structure.io/openni a third party].&lt;br /&gt;
* [https://github.com/avin2/SensorKinect SensorKinect]: a modified version of the official PrimeSense drivers, used for example by [https://github.com/gameoverhack/ofxOpenNI ofxOpenNI] (another openFrameworks addon). Last updated in 2012.&lt;br /&gt;
&lt;br /&gt;
For this tutorial, we are going to use PCL with the OpenNI drivers, so owners of an Xtion can also get it to work.&lt;br /&gt;
&lt;br /&gt;
==Precompiled PCL for Ubuntu==&lt;br /&gt;
&lt;br /&gt;
If you have a valid [http://wiki.ros.org/ROS/Installation installation of ROS] (through their repository), you do not have to install anything. Both the OpenNI and PrimeSense drivers, as well as PCL, should already be installed. You can check it with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install libpcl-1.7-all libpcl-1.7-all-dev libopenni-dev libopenni-sensor-primesense-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The previous command should report that all packages are already installed (change the PCL version number as needed), and will install them if not. If you get an error about overwriting some file, check [[PCL/OpenNI_troubleshooting#Trying_to_overwrite_X.2C_which_is_also_in_package_Y|this]].&lt;br /&gt;
&lt;br /&gt;
In case you do not want ROS, there is a [https://launchpad.net/~v-launchpad-jochen-sprickerhof-de/+archive/ubuntu/pcl PPA] (Personal Package Archive, a private repository) which has everything we need. Add it to your sources, and install everything:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo add-apt-repository ppa:v-launchpad-jochen-sprickerhof-de/pcl&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install build-essential cmake libpcl1.7 libpcl-dev pcl-tools&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Trying to mix ROS and PCL repositories and packages can cause some errors, so choose one of them and stick with it. Check the [[PCL/OpenNI troubleshooting]] page because your Kinect may not work out of the box on 32-bit systems.&lt;br /&gt;
&lt;br /&gt;
==Compiling PCL from source==&lt;br /&gt;
&lt;br /&gt;
For Linux distributions without a precompiled version of PCL, you will need to compile it yourself. This has an advantage, actually: you can customize the build options and choose what you want. Also, the resulting binaries and libraries should be a bit faster. And you always get the latest version! The instructions are [http://pointclouds.org/documentation/tutorials/compiling_pcl_posix.php here], but the steps are easy so I will walk you through them.&lt;br /&gt;
&lt;br /&gt;
===Installing the dependencies===&lt;br /&gt;
&lt;br /&gt;
Some of PCL's dependencies can be installed via the package manager. Others will require additional work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash enclose=&amp;quot;div&amp;quot;&amp;gt;sudo apt-get install build-essential cmake cmake-curses-gui libboost-all-dev libeigen3-dev libflann-dev libvtk5-dev libvtk5-qt4-dev libglew-dev libxmu-dev libsuitesparse-dev libqhull-dev libpcap-dev libxmu-dev libxi-dev libgtest-dev libqt4-opengl-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The trunk (1.8) version of PCL uses [https://en.wikipedia.org/wiki/VTK VTK] 6 and [https://en.wikipedia.org/wiki/Qt_%28software%29 Qt] 5, so if you intend to compile it, you must install the following packages (say yes if you are asked to remove VTK 5 and Qt 4 packages):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash enclose=&amp;quot;div&amp;quot;&amp;gt;sudo apt-get install libvtk6-dev libqt5opengl5-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====JDK====&lt;br /&gt;
&lt;br /&gt;
OpenNI requires Sun's official JDK (Java Development Kit), which is no longer available in the official apt repositories. You can use [http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html unofficial ones], or do it manually. For the latter, go to the [http://www.oracle.com/technetwork/java/javase/downloads/index.html Java SE downloads] page (SE means Standard Edition) and download the latest version (e.g., &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''jdk-8u66-linux-x64.tar.gz''&amp;lt;/span&amp;gt;). Extract it, then move the contents to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/lib/jvm/''&amp;lt;/span&amp;gt; so it is available system-wide:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo mkdir -p /usr/lib/jvm/&lt;br /&gt;
sudo cp -r jdk1.8.0_66/ /usr/lib/jvm/&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, make it the default choice to compile and run Java programs. Remember to change the version number as needed!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo update-alternatives --install &amp;quot;/usr/bin/java&amp;quot; &amp;quot;java&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/java&amp;quot; 1&lt;br /&gt;
sudo update-alternatives --install &amp;quot;/usr/bin/javac&amp;quot; &amp;quot;javac&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/javac&amp;quot; 1&lt;br /&gt;
sudo update-alternatives --install &amp;quot;/usr/bin/jar&amp;quot; &amp;quot;jar&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/jar&amp;quot; 1&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
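The three registrations follow the same pattern, so they can be generated with a small loop. This is only a sketch: the JDK_DIR value must match the version you actually extracted, and the commands are prefixed with echo so you can review them first (drop the echo to really run them).

```shell
# Register java, javac and jar in one loop instead of three commands.
# JDK_DIR is an assumption; point it at the JDK directory you extracted.
JDK_DIR=/usr/lib/jvm/jdk1.8.0_66
for tool in java javac jar; do
  # Remove the leading "echo" to actually perform the registrations.
  echo sudo update-alternatives --install "/usr/bin/$tool" "$tool" "$JDK_DIR/bin/$tool" 1
done
```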
&lt;br /&gt;
To be sure, use the following commands to manually select the correct option, in case there is more than one choice:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo update-alternatives --config java&lt;br /&gt;
sudo update-alternatives --config javac&lt;br /&gt;
sudo update-alternatives --config jar&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are still not sure, run them and display the version, making sure it is the one you installed:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;java -version&lt;br /&gt;
javac -version&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sun's JDK is now installed.&lt;br /&gt;
&lt;br /&gt;
====OpenNI====&lt;br /&gt;
&lt;br /&gt;
PCL uses OpenNI and the PrimeSense drivers to get data from devices like Kinect or Xtion. It is optional, but it would not make much sense not to install it, would it? If you are using Ubuntu, add the [[#Precompiled PCL for Ubuntu | PPA]] and install &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-dev''&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-sensor-primesense-dev''&amp;lt;/span&amp;gt;, which should be done already. Otherwise, fetch the [https://github.com/OpenNI/OpenNI OpenNI] and [https://github.com/PrimeSense/Sensor PrimeSense Sensor] sources from GitHub (download them as .zip, the link is on the right). Extract them, and install the dependencies:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install python libusb-1.0-0-dev freeglut3-dev doxygen graphviz&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Go to the directory where you extracted OpenNI (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''OpenNI-master/''&amp;lt;/span&amp;gt; for me), and open a terminal in the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/CreateRedist/''&amp;lt;/span&amp;gt; subdirectory. Issue:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./RedistMaker&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When it finishes, and if there are no errors (check the [[PCL/OpenNI troubleshooting]] page if you get any), go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/Redist/OpenNI-Bin-Dev-Linux-x64-v1.5.7.10/''&amp;lt;/span&amp;gt; (or your equivalent), and install (you must be root):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ./install.sh&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, go to the directory where you extracted the PrimeSense drivers (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Sensor-master/''&amp;lt;/span&amp;gt; for me), and repeat the exact same procedure (go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/CreateRedist/''&amp;lt;/span&amp;gt;, issue &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''./RedistMaker''&amp;lt;/span&amp;gt;, go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/Redist/Sensor-Bin-Linux-x64-v5.1.6.6/''&amp;lt;/span&amp;gt;, issue &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''sudo ./install.sh''&amp;lt;/span&amp;gt;). Congratulations, you have now installed OpenNI.&lt;br /&gt;
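Since both trees are built and installed the same way, the procedure can be wrapped in a small helper. A sketch based on the directory layout described above (a single versioned subdirectory under Redist/ is assumed; adjust the paths if yours differ):

```shell
# Build one redistributable tree and install it (install.sh needs root).
# The Platform/Linux/CreateRedist and Platform/Linux/Redist layout is the
# one described in the text.
build_redist() {
  ( cd "$1/Platform/Linux/CreateRedist"; ./RedistMaker )
  ( cd "$1"/Platform/Linux/Redist/*/; sudo ./install.sh )
}

# Usage (uncomment to run):
# build_redist OpenNI-master
# build_redist Sensor-master
```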
&lt;br /&gt;
====CUDA====&lt;br /&gt;
&lt;br /&gt;
Like OpenNI, [https://en.wikipedia.org/wiki/CUDA nVidia CUDA] is an optional dependency that allows PCL to use your GPU (Graphics Processing Unit, that is, your graphics card) for certain computations. This is mandatory for tools like KinFu (do not bother unless you have at least a series 400 card with 1.5 GB of VRAM).&lt;br /&gt;
&lt;br /&gt;
Some distributions provide packages for CUDA in the repositories. For example, in Ubuntu:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install nvidia-cuda-dev nvidia-cuda-toolkit&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to install it manually (which is incompatible with the previous method), go to the [https://developer.nvidia.com/cuda-downloads CUDA downloads] page, which is self-explanatory, and get the small .deb file or the huge .run toolkit/SDK installer file for your system (you should have installed the nVidia drivers already, but the installer will also do it for you if needed).&lt;br /&gt;
&lt;br /&gt;
If you chose the .deb file, install it using the method of your choice, such as the Gdebi package manager, or through the console:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo dpkg -i &amp;lt;package.deb&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The .deb file does not contain CUDA itself; it just adds nVidia's repository to your software sources. Now you must install everything:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get update&lt;br /&gt;
sudo apt-get install cuda&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If, on the other hand, you downloaded the .run, give it execution permissions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;chmod +x cuda_7.0.28_linux.run&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And install it. You can use the default options, although if you have a working nVidia graphics driver for your card, you may want to say &amp;quot;no&amp;quot; when the installer offers to install it for you:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ./cuda_7.0.28_linux.run&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The environment variables need to be changed so your system can find CUDA's libraries and binaries. Open &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''/etc/ld.so.conf''&amp;lt;/span&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo nano /etc/ld.so.conf&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And append one of these two lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;/usr/local/cuda/lib64 # Add this on 64-bit only.&lt;br /&gt;
/usr/local/cuda/lib # Add this on 32-bit only.&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
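If you are not sure which of the two lines applies to your machine, the architecture can be checked from the shell. A minimal sketch, assuming the default /usr/local/cuda install prefix:

```shell
# Pick the CUDA library directory matching this machine's word size.
if [ "$(uname -m)" = "x86_64" ]; then
  CUDA_LIB_DIR=/usr/local/cuda/lib64    # 64-bit
else
  CUDA_LIB_DIR=/usr/local/cuda/lib      # 32-bit
fi
echo "Append this line to /etc/ld.so.conf: $CUDA_LIB_DIR"
```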
&lt;br /&gt;
Save with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+O'''&amp;lt;/span&amp;gt; and Enter, exit with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+X'''&amp;lt;/span&amp;gt;. Reload the cache of the dynamic linker with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ldconfig&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, append CUDA's bin directory to your PATH. Do this by editing your local &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''.bashrc''&amp;lt;/span&amp;gt; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;nano ~/.bashrc&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And append this line:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;export PATH=$PATH:/usr/local/cuda/bin&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
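The plain export above appends the directory every time &lt;span style=&quot;color:#1E90FF&quot;&gt;''.bashrc''&lt;/span&gt; is read, so nested shells can end up with duplicate PATH entries. A slightly more careful sketch that only appends when the directory is missing (the prefix is the one used above):

```shell
# Append CUDA's bin directory to PATH only if it is not already there.
CUDA_BIN=/usr/local/cuda/bin
case ":$PATH:" in
  *":$CUDA_BIN:"*) ;;                       # already present, nothing to do
  *) PATH="$PATH:$CUDA_BIN"; export PATH ;;
esac
```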
&lt;br /&gt;
CUDA is now installed.&lt;br /&gt;
&lt;br /&gt;
===Getting the source===&lt;br /&gt;
&lt;br /&gt;
Every dependency is installed. Time to download PCL's source code. First, you must choose whether to install the stable or the experimental branch of PCL. The stable branch is the latest official release and it is guaranteed to work without problems. The experimental branch may occasionally fail to compile, but it offers some interesting features that stable users will have to wait some months for. Apart from that, both are built the same way.&lt;br /&gt;
&lt;br /&gt;
To get the stable version, go to the [https://github.com/PointCloudLibrary/pcl/releases downloads] page, get &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''pcl-pcl-1.7.2.tar.gz''&amp;lt;/span&amp;gt; or whatever the latest release is, and extract it somewhere. For the experimental version, use Git:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install git&lt;br /&gt;
git clone https://github.com/PointCloudLibrary/pcl PCL-trunk-Source&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The compiled trunk version of PCL will take up more than 8 GB, so make sure you put the source folder in a partition with enough free space!&lt;br /&gt;
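You can check how much space is left before cloning. A small sketch using df with POSIX options, so it should work on any Linux:

```shell
# Report free space, in whole GB, on the partition holding the current
# directory (column 4 of df -Pk is available space in KB).
FREE_KB=$(df -Pk . | awk 'NR==2 {print $4}')
FREE_GB=$((FREE_KB / 1024 / 1024))
echo "Free space here: ${FREE_GB} GB"
```

If the number is below 8, pick another location before cloning the trunk.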
&lt;br /&gt;
===Compiling===&lt;br /&gt;
&lt;br /&gt;
Go to the PCL source directory (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''pcl-pcl-1.7.2/''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''PCL-trunk-Source/''&amp;lt;/span&amp;gt;), and create a new subdirectory to keep the build files in:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;mkdir build&lt;br /&gt;
cd build&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now it is time to configure the project using CMake. We will tell it to build in Release (fully optimized, no debug capabilities) mode now, and customize the rest of the options later:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake -DCMAKE_BUILD_TYPE=Release ..&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CMake should be able to find every dependency, thus being able to build every subsystem except for the ones marked as &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Disabled by default&amp;quot;''&amp;lt;/span&amp;gt;. If you are happy with the defaults, you can build now; otherwise, let's invoke CMake's curses interface to change a couple of things (mind the final dot):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;ccmake .&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Ccmake_GUI_PCL.png|thumb|center|900px|Interface of ''ccmake''.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here you can change the build options. The program usage can be found at the bottom of the screen. Try turning all functionality &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;ON&amp;quot;&amp;lt;/span&amp;gt;. The most important thing, in case you want to use CUDA, is to enable it and give CMake the path to your SDK. Go to the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CUDA_SDK_ROOT_DIR&amp;quot;&amp;lt;/span&amp;gt; option and enter the correct path (probably &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/local/cuda/''&amp;lt;/span&amp;gt; or similar).&lt;br /&gt;
&lt;br /&gt;
When you are done, press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; to configure and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''G'''&amp;lt;/span&amp;gt; to generate and exit the tool. Sometimes, the options you change can activate previously omitted parameters, or prompt some warning text. Just press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''E'''&amp;lt;/span&amp;gt; when you are finished reading the message, and keep pressing &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; until it lets you generate (new parameters will be marked with an asterisk, so you can check them and decide whether or not you want further customization).&lt;br /&gt;
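The same cache variables that ccmake edits interactively can also be passed on the cmake command line with -D flags. CUDA_SDK_ROOT_DIR is the only option name taken from the text above; treat the rest of the invocation as a sketch and check ccmake for the exact names your PCL version exposes. The command is only printed here so nothing runs by accident:

```shell
# Build the non-interactive configure command; remove the echo to run it
# from the build/ directory.
CUDA_SDK_ROOT=/usr/local/cuda
echo cmake -DCMAKE_BUILD_TYPE=Release -DCUDA_SDK_ROOT_DIR="$CUDA_SDK_ROOT" ..
```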
&lt;br /&gt;
If you are done configuring, it is time to build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;make&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: Additionally, you can append the parameter -jX to speed up the compilation, X being the number of cores or processors of your PC, plus one.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
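The core count from the note does not have to be looked up by hand; GNU coreutils' nproc (assumed to be available) reports it. A sketch that derives the suggested value:

```shell
# Compute the suggested parallel job count: number of cores plus one.
JOBS=$(( $(nproc) + 1 ))
echo "Suggested invocation: make -j${JOBS}"
```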
&lt;br /&gt;
Remember that, at any time, you can manually force the project to be reconfigured and built from scratch by emptying the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''build/''&amp;lt;/span&amp;gt; directory with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;rm -rf ./*&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Installing===&lt;br /&gt;
&lt;br /&gt;
It will take some time to compile PCL (up to a few hours if your PC is not powerful enough). When it is finished, install it system-wide with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo make install&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, reboot and proceed to the next section to see if your computer now recognizes (and uses) your Kinect device.&lt;br /&gt;
&lt;br /&gt;
=Testing (OpenNI viewer)=&lt;br /&gt;
&lt;br /&gt;
We are going to write a simple example program that will fetch data from the Kinect or Xtion and present it to the user, using the PCL library. It will also allow you to save the current frame (as a point cloud) to disk. If you feel lazy, you can download it [http://robotica.unileon.es/~victorm/PCL_openni_viewer.tar.gz here], and skip the next two sections. Otherwise, create a new directory anywhere in your hard disk and proceed.&lt;br /&gt;
&lt;br /&gt;
==CMakeLists.txt==&lt;br /&gt;
&lt;br /&gt;
Inside that directory, create a new text file named &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''CMakeLists.txt''&amp;lt;/span&amp;gt;. PCL-based programs use the CMake build system, too. Open it with any editor and paste the following content:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CMake&amp;gt;cmake_minimum_required(VERSION 2.8 FATAL_ERROR)&lt;br /&gt;
 &lt;br /&gt;
project(PCL_openni_viewer)&lt;br /&gt;
 &lt;br /&gt;
find_package(PCL 1.7 REQUIRED)&lt;br /&gt;
 &lt;br /&gt;
include_directories(${PCL_INCLUDE_DIRS})&lt;br /&gt;
link_directories(${PCL_LIBRARY_DIRS})&lt;br /&gt;
add_definitions(${PCL_DEFINITIONS})&lt;br /&gt;
 &lt;br /&gt;
set(PCL_BUILD_TYPE Release)&lt;br /&gt;
 &lt;br /&gt;
file(GLOB PCL_openni_viewer_SRC&lt;br /&gt;
    &amp;quot;src/*.h&amp;quot;&lt;br /&gt;
    &amp;quot;src/*.cpp&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
add_executable(openniViewer ${PCL_openni_viewer_SRC})&lt;br /&gt;
 &lt;br /&gt;
target_link_libraries (openniViewer ${PCL_LIBRARIES})&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CMake syntax is quite self-explanatory. We ask for a CMake version 2.8 installation, minimum. We declare a new project named &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCL_openni_viewer&amp;quot;&amp;lt;/span&amp;gt;. We tell CMake to check for the presence of PCL library development files, version 1.7. If our system cannot meet the CMake and PCL version requirements, the process will fail.&lt;br /&gt;
&lt;br /&gt;
Next, we feed the compiler and linker the directories where PCL includes and libraries can be found, and the defined symbols. We tell CMake to use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Release&amp;quot;&amp;lt;/span&amp;gt; build type, which will activate certain optimizations depending on the compiler we use. Other build types are available, like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Debug&amp;quot;&amp;lt;/span&amp;gt;, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;MinSizeRel&amp;quot;&amp;lt;/span&amp;gt;, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;RelWithDebInfo&amp;quot;&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Finally, we create a variable, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCL_openni_viewer_SRC&amp;quot;&amp;lt;/span&amp;gt;, that will store a list of files to be compiled (though we will only have one). We create a new binary to be compiled from these source files, and we link it with the PCL library.&lt;br /&gt;
&lt;br /&gt;
Check the [https://cmake.org/Wiki/CMake_Useful_Variables CMake help] for more interesting options.&lt;br /&gt;
&lt;br /&gt;
==main.cpp==&lt;br /&gt;
&lt;br /&gt;
We told CMake it could find the source files in a &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''src/''&amp;lt;/span&amp;gt; subdirectory, so let's keep our word and create it. Then, add a new &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''main.cpp''&amp;lt;/span&amp;gt; file inside and paste the following lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Original code by Geoffrey Biggs, taken from the PCL tutorial in&lt;br /&gt;
// http://pointclouds.org/documentation/tutorials/pcl_visualizer.php&lt;br /&gt;
&lt;br /&gt;
// Simple OpenNI viewer that also allows to write the current scene to a .pcd&lt;br /&gt;
// when pressing SPACE.&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;pcl/io/openni_grabber.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/console/parse.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
using namespace std;&lt;br /&gt;
using namespace pcl;&lt;br /&gt;
&lt;br /&gt;
PointCloud&amp;lt;PointXYZRGBA&amp;gt;::Ptr cloudptr(new PointCloud&amp;lt;PointXYZRGBA&amp;gt;); // A cloud that will store color info.&lt;br /&gt;
PointCloud&amp;lt;PointXYZ&amp;gt;::Ptr fallbackCloud(new PointCloud&amp;lt;PointXYZ&amp;gt;);    // A fallback cloud with just depth data.&lt;br /&gt;
boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt; viewer;                 // Point cloud viewer object.&lt;br /&gt;
Grabber* openniGrabber;                                               // OpenNI grabber that takes data from the device.&lt;br /&gt;
unsigned int filesSaved = 0;                                          // For the numbering of the clouds saved to disk.&lt;br /&gt;
bool saveCloud(false), noColor(false);                                // Program control.&lt;br /&gt;
&lt;br /&gt;
void&lt;br /&gt;
printUsage(const char* programName)&lt;br /&gt;
{&lt;br /&gt;
	cout &amp;lt;&amp;lt; &amp;quot;Usage: &amp;quot; &amp;lt;&amp;lt; programName &amp;lt;&amp;lt; &amp;quot; [options]&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;Options:\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t&amp;lt;none&amp;gt;     start capturing from an OpenNI device.\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t-v FILE    visualize the given .pcd file.\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t-h         shows this help.\n&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// This function is called every time the device has new data.&lt;br /&gt;
void&lt;br /&gt;
grabberCallback(const PointCloud&amp;lt;PointXYZRGBA&amp;gt;::ConstPtr&amp;amp; cloud)&lt;br /&gt;
{&lt;br /&gt;
	if (! viewer-&amp;gt;wasStopped())&lt;br /&gt;
		viewer-&amp;gt;showCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	if (saveCloud)&lt;br /&gt;
	{&lt;br /&gt;
		stringstream stream;&lt;br /&gt;
		stream &amp;lt;&amp;lt; &amp;quot;inputCloud&amp;quot; &amp;lt;&amp;lt; filesSaved &amp;lt;&amp;lt; &amp;quot;.pcd&amp;quot;;&lt;br /&gt;
		string filename = stream.str();&lt;br /&gt;
		if (io::savePCDFile(filename, *cloud, true) == 0)&lt;br /&gt;
		{&lt;br /&gt;
			filesSaved++;&lt;br /&gt;
			cout &amp;lt;&amp;lt; &amp;quot;Saved &amp;quot; &amp;lt;&amp;lt; filename &amp;lt;&amp;lt; &amp;quot;.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		}&lt;br /&gt;
		else PCL_ERROR(&amp;quot;Problem saving %s.\n&amp;quot;, filename.c_str());&lt;br /&gt;
&lt;br /&gt;
		saveCloud = false;&lt;br /&gt;
	}&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// For detecting when SPACE is pressed.&lt;br /&gt;
void&lt;br /&gt;
keyboardEventOccurred(const visualization::KeyboardEvent&amp;amp; event,&lt;br /&gt;
					  void* nothing)&lt;br /&gt;
{&lt;br /&gt;
	if (event.getKeySym() == &amp;quot;space&amp;quot; &amp;amp;&amp;amp; event.keyDown())&lt;br /&gt;
		saveCloud = true;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Creates, initializes and returns a new viewer.&lt;br /&gt;
boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt;&lt;br /&gt;
createViewer()&lt;br /&gt;
{&lt;br /&gt;
	boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt; v&lt;br /&gt;
	(new visualization::CloudViewer(&amp;quot;OpenNI viewer&amp;quot;));&lt;br /&gt;
	v-&amp;gt;registerKeyboardCallback(keyboardEventOccurred);&lt;br /&gt;
&lt;br /&gt;
	return (v);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	if (console::find_argument(argc, argv, &amp;quot;-h&amp;quot;) &amp;gt;= 0)&lt;br /&gt;
	{&lt;br /&gt;
		printUsage(argv[0]);&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	bool justVisualize(false);&lt;br /&gt;
	string filename;&lt;br /&gt;
	if (console::find_argument(argc, argv, &amp;quot;-v&amp;quot;) &amp;gt;= 0)&lt;br /&gt;
	{&lt;br /&gt;
		if (argc != 3)&lt;br /&gt;
		{&lt;br /&gt;
			printUsage(argv[0]);&lt;br /&gt;
			return -1;&lt;br /&gt;
		}&lt;br /&gt;
&lt;br /&gt;
		filename = argv[2];&lt;br /&gt;
		justVisualize = true;&lt;br /&gt;
	}&lt;br /&gt;
	else if (argc != 1)&lt;br /&gt;
	{&lt;br /&gt;
		printUsage(argv[0]);&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// First mode, open and show a cloud from disk.&lt;br /&gt;
	if (justVisualize)&lt;br /&gt;
	{&lt;br /&gt;
		// Try with color information...&lt;br /&gt;
		try&lt;br /&gt;
		{&lt;br /&gt;
			io::loadPCDFile&amp;lt;PointXYZRGBA&amp;gt;(filename.c_str(), *cloudptr);&lt;br /&gt;
		}&lt;br /&gt;
		catch (const PCLException&amp;amp; e1)&lt;br /&gt;
		{&lt;br /&gt;
			try&lt;br /&gt;
			{&lt;br /&gt;
				// ...and if it fails, fall back to just depth.&lt;br /&gt;
				io::loadPCDFile&amp;lt;PointXYZ&amp;gt;(filename.c_str(), *fallbackCloud);&lt;br /&gt;
			}&lt;br /&gt;
			catch (const PCLException&amp;amp; e2)&lt;br /&gt;
			{&lt;br /&gt;
				return -1;&lt;br /&gt;
			}&lt;br /&gt;
&lt;br /&gt;
			noColor = true;&lt;br /&gt;
		}&lt;br /&gt;
&lt;br /&gt;
		cout &amp;lt;&amp;lt; &amp;quot;Loaded &amp;quot; &amp;lt;&amp;lt; filename &amp;lt;&amp;lt; &amp;quot;.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		if (noColor)&lt;br /&gt;
			cout &amp;lt;&amp;lt; &amp;quot;This cloud has no RGBA color information present.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		else cout &amp;lt;&amp;lt; &amp;quot;This cloud has RGBA color information present.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
	}&lt;br /&gt;
	// Second mode, start fetching and displaying frames from the device.&lt;br /&gt;
	else&lt;br /&gt;
	{&lt;br /&gt;
		openniGrabber = new OpenNIGrabber();&lt;br /&gt;
		if (openniGrabber == 0)&lt;br /&gt;
			return -1;&lt;br /&gt;
		boost::function&amp;lt;void (const PointCloud&amp;lt;PointXYZRGBA&amp;gt;::ConstPtr&amp;amp;)&amp;gt; f =&lt;br /&gt;
			boost::bind(&amp;amp;grabberCallback, _1);&lt;br /&gt;
		openniGrabber-&amp;gt;registerCallback(f);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	viewer = createViewer();&lt;br /&gt;
&lt;br /&gt;
	if (justVisualize)&lt;br /&gt;
	{&lt;br /&gt;
		if (noColor)&lt;br /&gt;
			viewer-&amp;gt;showCloud(fallbackCloud);&lt;br /&gt;
		else viewer-&amp;gt;showCloud(cloudptr);&lt;br /&gt;
	}&lt;br /&gt;
	else openniGrabber-&amp;gt;start();&lt;br /&gt;
&lt;br /&gt;
	// Main loop.&lt;br /&gt;
	while (! viewer-&amp;gt;wasStopped())&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::seconds(1));&lt;br /&gt;
&lt;br /&gt;
	if (! justVisualize)&lt;br /&gt;
		openniGrabber-&amp;gt;stop();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save and close.&lt;br /&gt;
&lt;br /&gt;
==Compiling==&lt;br /&gt;
&lt;br /&gt;
Follow the same steps you used to build PCL. That is, create a new &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''build/''&amp;lt;/span&amp;gt; subdirectory next to the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''src/''&amp;lt;/span&amp;gt; one. Open a terminal there and issue:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake -DCMAKE_BUILD_TYPE=Release ..&lt;br /&gt;
make&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Executing==&lt;br /&gt;
&lt;br /&gt;
Still from the same terminal, run the compiled example program:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./openniViewer&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a few seconds, the main window will appear and the application will start grabbing frames from the device. You can inspect the current point cloud using the mouse: hold the left button to rotate, the right one (or the mouse wheel) to zoom, and the middle one to pan the camera around. At first, you may see only a black screen, or some big colored axes, but no cloud; try zooming out to see the whole scene. Another useful key is &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''R'''&amp;lt;/span&amp;gt;, which resets the camera parameters when pressed. Use it whenever you notice that zooming has gotten slow after some camera movement, or if you still cannot see the cloud. See the [http://pointclouds.org/documentation/tutorials/pcl_visualizer.php PCLVisualizer] tutorial for additional controls and features.&lt;br /&gt;
&lt;br /&gt;
Whenever you feel ready, press the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''SPACE'''&amp;lt;/span&amp;gt; key. The program will pause for a fraction of a second and the output &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Saved inputCloud0.pcd.&amp;quot;''&amp;lt;/span&amp;gt; will appear on the console. Check the current folder to see that file &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''inputCloud0.pcd''&amp;lt;/span&amp;gt; has indeed been written. You can now close the program with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt;.&lt;br /&gt;
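A .pcd file begins with a plain-text header describing the stored fields, so you can check from the console whether color made it into the saved cloud. The header below is a hand-written minimal stand-in (the viewer's real output will differ in width, height and point count):

```shell
# Write a minimal PCD header plus one point, then peek at the field list.
printf '%s\n' \
  '# .PCD v0.7 - Point Cloud Data file format' \
  'VERSION 0.7' \
  'FIELDS x y z rgba' \
  'SIZE 4 4 4 4' \
  'TYPE F F F U' \
  'COUNT 1 1 1 1' \
  'WIDTH 1' \
  'HEIGHT 1' \
  'VIEWPOINT 0 0 0 1 0 0 0' \
  'POINTS 1' \
  'DATA ascii' \
  '0 0 0 0' > cloud_header_demo.pcd
head -n 3 cloud_header_demo.pcd
```

Running head on the real inputCloud0.pcd shows the same kind of header; a cloud saved without color should list only x, y and z in its FIELDS line.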
&lt;br /&gt;
Next, run it again giving the following parameter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./openniViewer -v inputCloud0.pcd&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will tell the program not to take data from the device, but from the saved point cloud file instead. After it loads, you will see the same scene you saved to disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: PCD data is saved relative to the sensor. No matter how much you have manipulated the view, it will reset to default when you load the file.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
At this point, your Kinect device should be working and getting depth data for you. There is a collection of excellent [http://pointclouds.org/documentation/tutorials/ tutorials] for PCL on the official webpage. I encourage you to finish them all before proceeding with your experiments with the Kinect sensor. You can also find a good introductory tutorial for the PCL library [http://www.pointclouds.org/assets/uploads/PCL-IAS13_Introduction.pdf here].&lt;br /&gt;
&lt;br /&gt;
If you use an ASUS Xtion PRO device instead, you should have gotten everything to work without problems or additional steps (except maybe for [[PCL/OpenNI_troubleshooting#ASUS_Xtion_PRO | this one]]).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)&amp;diff=4749</id>
		<title>PCL/OpenNI tutorial 4: 3D object recognition (descriptors)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)&amp;diff=4749"/>
				<updated>2015-11-02T11:12:00Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is time to learn the basics of one of the most interesting applications of point cloud processing: 3D object recognition. Akin to 2D recognition, this technique relies on finding good keypoints (characteristic points) in the cloud, and matching them to a set of previously saved ones. But 3D has several advantages over 2D: namely, we will be able to estimate with decent accuracy the exact position and orientation of the object, relative to the sensor. Also, 3D object recognition tends to be more robust to clutter (crowded scenes where objects in the front occlude objects in the background). And finally, having information about the object's shape will help with collision avoidance or grasping operations.&lt;br /&gt;
&lt;br /&gt;
In this first tutorial we will see what descriptors are, which types are available in PCL, and how to compute them.&lt;br /&gt;
&lt;br /&gt;
=Overview=&lt;br /&gt;
&lt;br /&gt;
The basis of 3D object recognition is to find a set of correspondences between two different clouds, one of them containing the object we are looking for. In order to do this, we need a way to compare points in an unambiguous manner. Until now, we have worked with points that store the XYZ coordinates, the RGB color... but none of those properties are unique enough. In two sequential scans, two points could share the same coordinates despite belonging to different surfaces, and using the color information takes us back to 2D recognition, with all the lighting-related problems.&lt;br /&gt;
&lt;br /&gt;
In a previous tutorial, we talked about [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Feature_estimation | features]], before introducing the normals. Normals are an example of a feature, because they encode information about the vicinity of the point. That is, the neighboring points are taken into account when computing them, giving us an idea of what the surrounding surface looks like. But this is not enough. For a feature to be optimal, it must meet the following criteria:&lt;br /&gt;
&lt;br /&gt;
# '''It must be robust to transformations''': rigid transformations (the ones that do not change the distance between points) like translations and rotations must not affect the feature. Even if we play with the cloud a bit beforehand, there should be no difference.&lt;br /&gt;
# '''It must be robust to noise''': measurement errors that cause noise should not change the feature estimation much.&lt;br /&gt;
# '''It must be resolution invariant''': if sampled with different density (like after performing downsampling), the result must be identical or similar.&lt;br /&gt;
&lt;br /&gt;
This is where descriptors come in. They are more complex (and precise) signatures of a point that encode a lot of information about the surrounding geometry. The purpose is to unequivocally identify a point across multiple point clouds, no matter the noise, resolution or transformations. Also, some of them capture additional data about the object they belong to, like the viewpoint (which lets us retrieve the pose).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Correspondence.jpg|thumb|center|600px|Finding correspondences between point features of two clouds (image from http://pointclouds.org/).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are many 3D descriptors implemented in PCL. Each one has its own method for computing unique values for a point. Some use the difference between the angles of the normals of the point and its neighbors, for example. Others use the distances between the points. Because of this, some are inherently better or worse for certain purposes. A given descriptor may be scale invariant, and another one may be better with occlusions and partial views of objects. Which one you choose depends on what you want to do.&lt;br /&gt;
&lt;br /&gt;
After calculating the necessary values, an additional step is performed to reduce the descriptor size: the result is binned into a [https://en.wikipedia.org/wiki/Histogram histogram]. To do this, the value range of each variable that makes up the descriptor is divided into ''n'' subdivisions, and the number of occurrences in each one is counted. Try to imagine a descriptor that computes a single variable, which ranges from 1 to 100. We choose to create 10 bins for it, so the first bin gathers all occurrences between 1 and 10, the second from 11 to 20, and so on. We look at the value of the variable for the first point-neighbor pair, and it is 27, so we increment the value of the third bin by 1. We keep doing this until we get a final histogram for that keypoint. The bin size must be carefully chosen depending on how descriptive that variable is. The variables do not have to share the same number of bins, and the bins do not have to be of the same size; if, for example, most values from the previous example fell in the 50-100 range, it would be sensible to have more, smaller bins in that range.&lt;br /&gt;
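&lt;br /&gt;
The binning procedure just described can be sketched with a toy example (plain C++, not part of PCL; the value range and bin count are the ones from the example above):&lt;br /&gt;
&lt;br /&gt;
```cpp
#include <array>

// Toy histogram for a single variable in the range [1, 100], as in the
// example above: 10 bins, so bin 0 covers 1-10, bin 1 covers 11-20, etc.
std::array<int, 10> binValues(const int* values, int count)
{
	std::array<int, 10> histogram = {};
	for (int i = 0; i < count; ++i)
	{
		int bin = (values[i] - 1) / 10; // A value of 27 falls into bin 2 (the third one).
		histogram[bin]++;
	}
	return histogram;
}
```
&lt;br /&gt;
A real descriptor concatenates one such histogram per variable, with the bin count tuned to how descriptive each variable is.&lt;br /&gt;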
&lt;br /&gt;
Descriptors can be classified in two main categories: global and local. The process for computing and using each one (recognition pipeline) is different, so each will be explained in its own section in this article.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/how_features_work.php How 3D Features work in PCL]&lt;br /&gt;
** [https://github.com/PointCloudLibrary/pcl/wiki/Overview-and-Comparison-of-Features  Overview and Comparison of Features]&lt;br /&gt;
** [http://www.pointclouds.org/assets/icra2013/pcl_features_icra13.pdf How does a good feature look like?]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [https://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf Point Cloud Library: Three-Dimensional Object Recognition and 6 DoF Pose Estimation] (Aitor Aldoma et al., 2012)&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Table==&lt;br /&gt;
&lt;br /&gt;
The following table will give you a hint of how many descriptors there are in PCL, and some of their characteristics:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot; style=&amp;quot;margin: auto; text-align:center&amp;quot; cellpadding=&amp;quot;15&amp;quot;&lt;br /&gt;
|+ 3D descriptors&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Name&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Type&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Size&amp;lt;sup&amp;gt;†&amp;lt;/sup&amp;gt;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Custom PointType&amp;lt;sup&amp;gt;††&amp;lt;/sup&amp;gt;&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#PFH|PFH]] (Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 125&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#FPFH|FPFH]] (Fast Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 33&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RSD|RSD]] (Radius-Based Surface Descriptor)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 289&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#3DSC|3DSC]] (3D Shape Context)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 1980&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#USC|USC]] (Unique Shape Context)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 1960&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#SHOT|SHOT]] (Signatures of Histograms of Orientations)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 352&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#Spin_image|Spin image]]&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 153*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RIFT|RIFT]] (Rotation-Invariant Feature Transform)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 32*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#NARF|NARF]] (Normal Aligned Radial Feature)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 36&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RoPS|RoPS]] (Rotational Projection Statistics)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 135*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#VFH|VFH]] (Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#CVFH|CVFH]] (Clustered Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#OUR-CVFH|OUR-CVFH]] (Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#ESF|ESF]] (Ensemble of Shape Functions)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 640&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#GFPFH|GFPFH]] (Global Fast Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 16&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#GRSD|GRSD]] (Global Radius-Based Surface Descriptor)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 21&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
† Values marked with an asterisk (*) indicate that the descriptor's size depends on some parameter(s), and the one given is for the default values.&lt;br /&gt;
&lt;br /&gt;
†† Descriptors without their own custom PointType use the generic &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::Histogram&amp;lt;&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; type. See [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Saving_and_loading|Saving and loading]].&lt;br /&gt;
&lt;br /&gt;
Optionally, you can download a document with a more detailed version of the table. Page format is A4, landscape:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Downloads''':&lt;br /&gt;
** [http://robotica.unileon.es/~victorm/3D_descriptors_comparison.odt Table as original ODT] (uses font embedding, requires LibreOffice 4)&lt;br /&gt;
** [http://robotica.unileon.es/~victorm/3D_descriptors_comparison.pdf Table as PDF]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Local descriptors=&lt;br /&gt;
&lt;br /&gt;
Local descriptors are computed for individual points that we give as input. They have no notion of what an object is; they just describe how the local geometry is around that point. Usually, it is your task to choose which points you want a descriptor to be computed for: the ''keypoints''. Most of the time, you can get away with just performing a downsampling and choosing all remaining points, but dedicated keypoint detectors are available, like the one used for [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Finding_keypoints|NARF]], or [[PCL/OpenNI_tutorial_5:_3D_object_recognition_%28pipeline%29#ISS|ISS]].&lt;br /&gt;
&lt;br /&gt;
Local descriptors are used for object recognition and [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Registration | registration]]. Now we will see which ones are implemented into PCL.&lt;br /&gt;
&lt;br /&gt;
==PFH==&lt;br /&gt;
&lt;br /&gt;
PFH stands for Point Feature Histogram. It is one of the most important descriptors offered by PCL and the basis of others such as FPFH. The PFH tries to capture information about the geometry surrounding the point by analyzing the difference between the directions of the normals in the vicinity (and because of this, an imprecise normal estimation may produce low-quality descriptors).&lt;br /&gt;
&lt;br /&gt;
First, the algorithm pairs all points in the vicinity (not just the chosen keypoint with its neighbors, but also the neighbors with each other). Then, for each pair, a [https://en.wikipedia.org/wiki/Darboux_frame fixed coordinate frame] is computed from their normals. With this frame, the difference between the normals can be encoded with 3 angular variables. These variables, together with the Euclidean distance between the points, are saved, and then binned into a histogram when all pairs have been computed. The final descriptor is the concatenation of the histograms of each variable (4 in total).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:PFH_neighbors.png | Point pairs established when computing the PFH for a point (image from http://pointclouds.org).&lt;br /&gt;
File:PFH_frame.png | Fixed coordinate frame and angular features computed for one of the pairs (image from http://pointclouds.org).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Computing descriptors in PCL is very easy, and the PFH is not an exception:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/pfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the PFH descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PFHSignature125&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::PFHSignature125&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// PFH estimation object.&lt;br /&gt;
	pcl::PFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::PFHSignature125&amp;gt; pfh;&lt;br /&gt;
	pfh.setInputCloud(cloud);&lt;br /&gt;
	pfh.setInputNormals(normals);&lt;br /&gt;
	pfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	pfh.setRadiusSearch(0.05);&lt;br /&gt;
&lt;br /&gt;
	pfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, PCL uses the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt; type to save the descriptor to. This means that the descriptor's size is 125 (the dimensionality of the feature vector). Binning a feature with ''D'' dimensions into ''B'' divisions per dimension requires a total of B&amp;lt;sup&amp;gt;D&amp;lt;/sup&amp;gt; bins. The original proposal makes use of the distance between the points, but PCL's implementation does not, as it was not considered discriminative enough (especially in 2.5D scans, where the distance between points increases the further away they are from the sensor). Hence, the 3 remaining angular features are each divided into 5 divisions, resulting in a 125-dimensional (5&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;) vector.&lt;br /&gt;
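&lt;br /&gt;
To see where the 125 comes from, here is a toy sketch (not PCL's actual internal layout) of how a triple of per-feature bin indices would map to a single position in the joint histogram:&lt;br /&gt;
&lt;br /&gt;
```cpp
// Each of the 3 angular features is divided into 5 bins, so the joint
// histogram needs 5 * 5 * 5 = 125 bins. A triple of per-feature bin
// indices (b0, b1, b2), each in [0, 5), maps to exactly one of them.
const int kBins = 5;

int jointIndex(int b0, int b1, int b2)
{
	return (b0 * kBins + b1) * kBins + b2; // Value in [0, 125).
}
```
&lt;br /&gt;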
&lt;br /&gt;
The final object that stores the computed descriptors can be handled like a normal cloud (even saved to, or read from, disk), and it has the same number of &amp;quot;points&amp;quot; as the original one. The &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt; object at position 0, for example, stores the PFH descriptor for the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; point at the same position in the cloud.&lt;br /&gt;
&lt;br /&gt;
For additional details about the descriptor, check the original publications listed below, or PCL's tutorial.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius&lt;br /&gt;
* '''Output''': PFH descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/pfh_estimation.php Point Feature Histograms (PFH) descriptors]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://ias.in.tum.de/_media/spezial/bib/rusu08ias.pdf Persistent Point Feature Histograms for 3D Point Clouds] (Radu Bogdan Rusu et al., 2008)&lt;br /&gt;
** [http://files.rbrusu.com/publications/RusuPhDThesis.pdf Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments] (Radu Bogdan Rusu, 2009, section 4.4, page 50)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_p_f_h_estimation.html pcl::PFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_PFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===FPFH===&lt;br /&gt;
&lt;br /&gt;
PFH gives accurate results, but it has a drawback: it is too computationally expensive to perform in real time. For a cloud of ''n'' keypoints with ''k'' neighbors considered, it has a [https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations complexity] of ''O(nk&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;)''. Because of this, a derived descriptor was created, named FPFH (Fast Point Feature Histogram).&lt;br /&gt;
&lt;br /&gt;
The FPFH considers only the direct connections between the current keypoint and its neighbors, removing the additional links between neighbors. This takes the complexity down to ''O(nk)''. Because of this, the resulting histogram is referred to as SPFH (Simplified Point Feature Histogram). The reference frame and the angular variables are computed as before.&lt;br /&gt;
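&lt;br /&gt;
The difference in cost is easy to see by counting the point pairs each method evaluates per keypoint (a back-of-the-envelope sketch, not PCL code):&lt;br /&gt;
&lt;br /&gt;
```cpp
#include <cstddef>

// For a keypoint with k neighbors, PFH pairs every point in the vicinity
// with every other point (keypoint included), while the SPFH step of FPFH
// only pairs the keypoint with each of its neighbors.
std::size_t pfhPairs(std::size_t k)  { return (k + 1) * k / 2; }
std::size_t fpfhPairs(std::size_t k) { return k; }
```
&lt;br /&gt;
With k = 50 neighbors, that is 1275 pairs for PFH versus 50 for FPFH, which is where the ''O(nk&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;)'' versus ''O(nk)'' difference comes from.&lt;br /&gt;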
&lt;br /&gt;
&lt;br /&gt;
[[Image:FPFH_neighbors.png|thumb|center|400px|Point pairs established when computing the FPFH for a point (image from http://pointclouds.org).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To account for the loss of these extra connections, an additional step takes place after all histograms have been computed: the SPFHs of a point's neighbors are &amp;quot;merged&amp;quot; with its own, weighted according to the distance. This has the effect of giving a point surface information from points as far away as 2 times the radius used. Finally, the 3 histograms (distance is not used) are concatenated to compose the final descriptor.&lt;br /&gt;
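&lt;br /&gt;
The merging step can be sketched on plain histograms (a simplified illustration of the distance-weighted scheme from the FPFH publications; PCL performs this internally, and the function below is hypothetical):&lt;br /&gt;
&lt;br /&gt;
```cpp
#include <vector>

// Merge a keypoint's own SPFH with its neighbors' SPFHs, each weighted by
// the inverse of the distance to that neighbor:
//   FPFH(p) = SPFH(p) + (1/k) * sum_i (1/d_i) * SPFH(p_i)
std::vector<float> mergeSPFH(const std::vector<float>& own,
                             const std::vector<std::vector<float> >& neighborSPFH,
                             const std::vector<float>& distances)
{
	std::vector<float> merged(own);
	std::size_t k = neighborSPFH.size();
	for (std::size_t i = 0; i < k; ++i)
		for (std::size_t bin = 0; bin < merged.size(); ++bin)
			merged[bin] += neighborSPFH[i][bin] / (distances[i] * k);
	return merged;
}
```
&lt;br /&gt;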
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/fpfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the FPFH descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::FPFHSignature33&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::FPFHSignature33&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// FPFH estimation object.&lt;br /&gt;
	pcl::FPFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33&amp;gt; fpfh;&lt;br /&gt;
	fpfh.setInputCloud(cloud);&lt;br /&gt;
	fpfh.setInputNormals(normals);&lt;br /&gt;
	fpfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	fpfh.setRadiusSearch(0.05);&lt;br /&gt;
&lt;br /&gt;
	fpfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An additional implementation of the FPFH estimation that takes advantage of multithreaded optimizations (with the [https://en.wikipedia.org/wiki/OpenMP OpenMP API]) is available in the class &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;FPFHEstimationOMP&amp;quot;&amp;lt;/span&amp;gt;. Its interface is identical to that of the standard unoptimized implementation. Using it will result in a big performance boost on multi-core systems. Remember to include the header &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;fpfh_omp.h&amp;quot;&amp;lt;/span&amp;gt; instead.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius&lt;br /&gt;
* '''Output''': FPFH descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/fpfh_estimation.php Fast Point Feature Histograms (FPFH) descriptors]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://files.rbrusu.com/publications/Rusu09ICRA.pdf Fast Point Feature Histograms (FPFH) for 3D Registration] (Radu Bogdan Rusu et al., 2009)&lt;br /&gt;
** [http://files.rbrusu.com/publications/RusuPhDThesis.pdf Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments] (Radu Bogdan Rusu, 2009, section 4.5, page 57)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_f_p_f_h_estimation.html pcl::FPFHEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_f_p_f_h_estimation_o_m_p.html pcl::FPFHEstimationOMP]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_FPFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RSD==&lt;br /&gt;
&lt;br /&gt;
The Radius-Based Surface Descriptor encodes the radial relationship between a point and its neighborhood. For every pair formed by the keypoint and one of its neighbors, the algorithm computes the distance between them, and the difference between their normals. Then, by assuming that both points lie on the surface of a sphere, said sphere is found by fitting not only the points, but also the normals (otherwise, there would be infinitely many possible spheres). Finally, from all the point-neighbor spheres, only the ones with the maximum and minimum radii are kept and saved to the descriptor of that point.&lt;br /&gt;
&lt;br /&gt;
As you may have deduced already, when two points lie on a flat surface, the sphere radius will be infinite. If, on the other hand, they lie on the curved face of a cylinder, the radius will be more or less the same as that of the cylinder. This allows us to tell objects apart with RSD. The algorithm takes a parameter that sets the maximum radius at which the points will be considered to be part of a plane.&lt;br /&gt;
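&lt;br /&gt;
As a toy illustration of how the two radii could be used afterwards (the struct mirrors the fields of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::PrincipalRadiiRSD&amp;quot;&amp;lt;/span&amp;gt;, but the classification logic and the thresholds are hypothetical):&lt;br /&gt;
&lt;br /&gt;
```cpp
#include <string>

// Minimal stand-in for a computed RSD descriptor: the minimum and maximum
// sphere radii found for the point's neighborhood, in meters.
struct RadiiRSD
{
	float r_min;
	float r_max;
};

// Rough classification based on the intuition above: on a flat surface even
// the smallest fitted sphere is huge, so r_min gets clamped near the plane
// radius (here 0.1 m, matching the setPlaneRadius() value in the code below).
// A small r_min indicates a curved surface with roughly that radius.
std::string classify(const RadiiRSD& d)
{
	const float planeRadius = 0.1f;
	return (d.r_min >= planeRadius) ? "flat" : "curved";
}
```
&lt;br /&gt;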
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:RSD_sphere.png | Sphere that fits two points with their normals (image from [http://ias.in.tum.de/_media/events/rgbd2011/marton.pdf original presentation]).&lt;br /&gt;
File:RSD_values.png | RSD values for a cloud; points on a flat surface have maximum values (image from [http://ias.in.tum.de/_media/events/rgbd2011/marton.pdf original presentation]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is the code for computing the RSD descriptor:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rsd.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the RSD descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PrincipalRadiiRSD&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::PrincipalRadiiRSD&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// RSD estimation object.&lt;br /&gt;
	pcl::RSDEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::PrincipalRadiiRSD&amp;gt; rsd;&lt;br /&gt;
	rsd.setInputCloud(cloud);&lt;br /&gt;
	rsd.setInputNormals(normals);&lt;br /&gt;
	rsd.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	rsd.setRadiusSearch(0.05);&lt;br /&gt;
	// Plane radius. Any radius larger than this is considered infinite (a plane).&lt;br /&gt;
	rsd.setPlaneRadius(0.1);&lt;br /&gt;
	// Do we want to save the full distance-angle histograms?&lt;br /&gt;
	rsd.setSaveHistograms(false);&lt;br /&gt;
	&lt;br /&gt;
	rsd.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Optionally, you can use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setSaveHistograms()&amp;quot;&amp;lt;/span&amp;gt; function to enable the saving of the full distance-angle histograms, and then use &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getHistograms()&amp;quot;&amp;lt;/span&amp;gt; to retrieve them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius, Maximum Radius, [Subdivisions], [Save full histograms]&lt;br /&gt;
* '''Output''': RSD descriptors&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.391.7398&amp;amp;rep=rep1&amp;amp;type=pdf General 3D Modelling of Novel Objects from a Single View] (Zoltan-Csaba Marton et al., 2010)&lt;br /&gt;
** [https://ias.cs.tum.edu/_media/spezial/bib/marton11ijrr.pdf Combined 2D-3D Categorization and Classification for Multimodal Perception Systems] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_r_s_d_estimation.html pcl::RSDEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RSD.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==3DSC==&lt;br /&gt;
&lt;br /&gt;
The 3D Shape Context is a descriptor that extends its existing 2D counterpart to the third dimension. It works by creating a support structure (a sphere, to be precise) centered at the point we are computing the descriptor for, with the given search radius. The &amp;quot;north pole&amp;quot; of that sphere (the notion of &amp;quot;up&amp;quot;) is aligned with the normal at that point. The sphere is then divided into 3D regions or bins. The divisions are equally spaced in the first two coordinates (azimuth and elevation), but logarithmically spaced in the third (the radial dimension), so bins become smaller towards the center. A minimum radius can be specified to prevent very small bins near the center, which would be too sensitive to small changes in the surface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:3DSC_support_structure.png|thumb|center|200px|Support structure to compute the 3DSC for a point (image from [http://www.cs.jhu.edu/~misha/Papers/Frome04.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For each bin, a weighted count is accumulated for every neighboring point that lies within. The weight depends on the volume of the bin and the local point density (number of points around the current neighbor). This gives the descriptor some degree of resolution invariance.&lt;br /&gt;
&lt;br /&gt;
We have mentioned that the sphere is aligned with the normal. This still leaves one degree of freedom: only two axes have been locked, and the azimuth remains free. Because of this, the descriptor computed so far is not invariant to rotation. To overcome this (so that the same point in two different clouds gets the same value), the support sphere is rotated about the normal ''N'' times, once per azimuth division, and the process is repeated for each rotation, giving a total of ''N'' descriptors for that point.&lt;br /&gt;
&lt;br /&gt;
You can compute the 3DSC descriptor the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/3dsc.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the 3DSC descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ShapeContext1980&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::ShapeContext1980&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// 3DSC estimation object.&lt;br /&gt;
	pcl::ShapeContext3DEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::ShapeContext1980&amp;gt; sc3d;&lt;br /&gt;
	sc3d.setInputCloud(cloud);&lt;br /&gt;
	sc3d.setInputNormals(normals);&lt;br /&gt;
	sc3d.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. It will also be the radius of the support sphere.&lt;br /&gt;
	sc3d.setRadiusSearch(0.05);&lt;br /&gt;
	// The minimal radius value for the search sphere, to avoid being too sensitive&lt;br /&gt;
	// in bins close to the center of the sphere.&lt;br /&gt;
	sc3d.setMinimalRadius(0.05 / 10.0);&lt;br /&gt;
	// Radius used to compute the local point density for the neighbors&lt;br /&gt;
	// (the density is the number of points within that radius).&lt;br /&gt;
	sc3d.setPointDensityRadius(0.05 / 5.0);&lt;br /&gt;
&lt;br /&gt;
	sc3d.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius, Minimal radius, Point density radius&lt;br /&gt;
* '''Output''': 3DSC descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.cs.jhu.edu/~misha/Papers/Frome04.pdf Recognizing Objects in Range Data Using Regional Point Descriptors] (Andrea Frome et al., 2004)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_shape_context3_d_estimation.html pcl::ShapeContext3DEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_3DSC.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===USC===&lt;br /&gt;
&lt;br /&gt;
The Unique Shape Context descriptor extends the 3DSC by defining a local reference frame (LRF), in order to provide a unique orientation for each point. This not only improves the accuracy of the descriptor, but also reduces its size, as computing multiple descriptors to account for orientation is no longer necessary.&lt;br /&gt;
&lt;br /&gt;
You can check the second publication listed below to learn more about how the LRF is computed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/usc.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the USC descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::UniqueShapeContext1960&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::UniqueShapeContext1960&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// USC estimation object.&lt;br /&gt;
	pcl::UniqueShapeContext&amp;lt;pcl::PointXYZ, pcl::UniqueShapeContext1960, pcl::ReferenceFrame&amp;gt; usc;&lt;br /&gt;
	usc.setInputCloud(cloud);&lt;br /&gt;
	// Search radius, to look for neighbors. It will also be the radius of the support sphere.&lt;br /&gt;
	usc.setRadiusSearch(0.05);&lt;br /&gt;
	// The minimal radius value for the search sphere, to avoid being too sensitive&lt;br /&gt;
	// in bins close to the center of the sphere.&lt;br /&gt;
	usc.setMinimalRadius(0.05 / 10.0);&lt;br /&gt;
	// Radius used to compute the local point density for the neighbors&lt;br /&gt;
	// (the density is the number of points within that radius).&lt;br /&gt;
	usc.setPointDensityRadius(0.05 / 5.0);&lt;br /&gt;
	// Set the radius to compute the Local Reference Frame.&lt;br /&gt;
	usc.setLocalRadius(0.05);&lt;br /&gt;
&lt;br /&gt;
	usc.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk). For 1.7 and below, change UniqueShapeContext1960 to ShapeContext1980, and edit CMakeLists.txt.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius, Minimal radius, Point density radius, Local radius&lt;br /&gt;
* '''Output''': USC descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/3dor10.pdf Unique Shape Context for 3D Data Description] (Federico Tombari et al., 2010)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_unique_shape_context.html pcl::UniqueShapeContext]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_USC.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==SHOT==&lt;br /&gt;
&lt;br /&gt;
SHOT stands for Signature of Histograms of Orientations. Like 3DSC, it encodes information about the topology (surface) within a spherical support structure. This sphere is divided into 32 bins or volumes, with 8 divisions along the azimuth, 2 along the elevation, and 2 along the radius. For every volume, a one-dimensional local histogram is computed. The variable chosen is the angle between the normal of the keypoint and the normal of each point within that volume (to be precise, its cosine, which was found to work better).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SHOT_support_structure.png|thumb|center|200px|Support structure to compute SHOT. Only 4 azimuth divisions are shown for clarity (image from [http://vision.deis.unibo.it/fede/papers/eccv10.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When all local histograms have been computed, they are stitched together into the final descriptor. Like the USC descriptor, SHOT makes use of a local reference frame, making it rotation invariant. It is also robust to noise and clutter.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/shot.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the SHOT descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// SHOT estimation object.&lt;br /&gt;
	pcl::SHOTEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::SHOT352&amp;gt; shot;&lt;br /&gt;
	shot.setInputCloud(cloud);&lt;br /&gt;
	shot.setInputNormals(normals);&lt;br /&gt;
	// The radius that defines which of the keypoint's neighbors are described.&lt;br /&gt;
	// If too large, there may be clutter, and if too small, not enough points may be found.&lt;br /&gt;
	shot.setRadiusSearch(0.02);&lt;br /&gt;
&lt;br /&gt;
	shot.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As with FPFH, a multithreading-optimized variant that makes use of OpenMP is available as &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOTEstimationOMP&amp;quot;&amp;lt;/span&amp;gt;; you need to include the header &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;shot_omp.h&amp;quot;&amp;lt;/span&amp;gt;. There is also a variant that uses texture for matching, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOTColorEstimation&amp;quot;&amp;lt;/span&amp;gt;, with an OpenMP-optimized version as well (see the second publication for more details). It outputs a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOT1344&amp;quot;&amp;lt;/span&amp;gt; descriptor.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Radius&lt;br /&gt;
* '''Output''': SHOT descriptors&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/eccv10.pdf Unique Signatures of Histograms for Local Surface Description] (Federico Tombari et al., 2010)&lt;br /&gt;
** [http://www.vision.deis.unibo.it/fede/papers/icip11.pdf A Combined Texture-Shape Descriptor for Enhanced 3D Feature Matching] (Federico Tombari et al., 2011)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_estimation.html pcl::SHOTEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_color_estimation.html pcl::SHOTColorEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_estimation_o_m_p.html pcl::SHOTEstimationOMP]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_color_estimation_o_m_p.html pcl::SHOTColorEstimationOMP]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_SHOT.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Spin image==&lt;br /&gt;
&lt;br /&gt;
The Spin Image (SI) is the oldest descriptor we are going to see here. It has been around since 1997, but it still sees some use in certain applications. It was originally designed to describe surfaces made of vertices, edges and polygons, but it has since been adapted for point clouds. The descriptor is unlike all the others in that its output resembles an image, which can be compared with another using the usual means.&lt;br /&gt;
&lt;br /&gt;
The support structure used is a cylinder, centered at the point, with a given radius and height, and aligned with the normal. This cylinder is divided radially and vertically into volumes. For each one, the number of neighbors lying inside is added up, eventually producing a descriptor. Weighting and interpolation are used to improve the result. The final descriptor can be seen as a grayscale image where dark areas correspond to volumes with higher point density.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Spin images.png|thumb|center|400px|Spin images computed for 3 points of a model (image from [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.4190&amp;amp;rep=rep1&amp;amp;type=pdf original thesis]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/spin_image.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;153&amp;gt; SpinImage;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the spin image for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;SpinImage&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;SpinImage&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Spin image estimation object.&lt;br /&gt;
	pcl::SpinImageEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, SpinImage&amp;gt; si;&lt;br /&gt;
	si.setInputCloud(cloud);&lt;br /&gt;
	si.setInputNormals(normals);&lt;br /&gt;
	// Radius of the support cylinder.&lt;br /&gt;
	si.setRadiusSearch(0.02);&lt;br /&gt;
	// Set the resolution of the spin image (the number of bins along one dimension).&lt;br /&gt;
	// Note: you must change the output histogram size to reflect this.&lt;br /&gt;
	si.setImageWidth(8);&lt;br /&gt;
&lt;br /&gt;
	si.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Spin Image estimation object provides more methods for tuning the estimation, so checking the API is recommended.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Radius, Image resolution&lt;br /&gt;
* '''Output''': Spin images&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.4190&amp;amp;rep=rep1&amp;amp;type=pdf Spin-Images: A Representation for 3-D Surface Matching] (Andrew Edie Johnson, 1997)&lt;br /&gt;
** [http://www.cs.jhu.edu/~misha/Papers/Johnson99.pdf Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes] (Andrew Edie Johnson and Martial Hebert, 1999)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_spin_image_estimation.html pcl::SpinImageEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_spin_image.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RIFT==&lt;br /&gt;
&lt;br /&gt;
The Rotation-Invariant Feature Transform, like the spin image, takes some concepts from 2D features, in this case from the Scale-Invariant Feature Transform ([https://en.wikipedia.org/wiki/Scale-invariant_feature_transform SIFT]). It is the only descriptor seen here that requires intensity information (which can be obtained from the RGB color values) in order to be computed. This means, of course, that you will not be able to use RIFT with standard XYZ clouds; you also need the texture.&lt;br /&gt;
&lt;br /&gt;
In the first step, a circular patch (with the given radius) is fitted on the surface the point lies on. The patch is divided into concentric rings, according to the chosen distance bin size. Then, a histogram is populated with all of the point's neighbors lying inside a sphere centered at that point, with the mentioned radius. Both the distance and the orientation of the intensity gradient at each point are considered. To make the descriptor rotation invariant, the angle is measured between the gradient orientation and the vector pointing outward from the center of the patch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:RIFT.png|thumb|center|600px|RIFT feature values at 3 different locations in the descriptor (image from [http://www.cs.illinois.edu/~slazebni/publications/pami05.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The authors' original implementation uses 4 rings and 8 histogram orientations, which produce a descriptor of size 32. RIFT is not robust to texture flipping, though this was never considered a big issue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types_conversion.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/intensity_gradient.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rift.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;32&amp;gt; RIFT32;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud with color information.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cloudColor(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
	// Object for storing the point cloud with intensity value.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZI&amp;gt;::Ptr cloudIntensity(new pcl::PointCloud&amp;lt;pcl::PointXYZI&amp;gt;);&lt;br /&gt;
	// Object for storing the intensity gradients.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::IntensityGradient&amp;gt;::Ptr gradients(new pcl::PointCloud&amp;lt;pcl::IntensityGradient&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the RIFT descriptor for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;RIFT32&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;RIFT32&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGB&amp;gt;(argv[1], *cloudColor) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Convert the RGB to intensity.&lt;br /&gt;
	pcl::PointCloudXYZRGBtoXYZI(*cloudColor, *cloudIntensity);&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZI, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloudIntensity);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZI&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZI&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Compute the intensity gradients.&lt;br /&gt;
	pcl::IntensityGradientEstimation&amp;lt;pcl::PointXYZI, pcl::Normal, pcl::IntensityGradient,&lt;br /&gt;
		pcl::common::IntensityFieldAccessor&amp;lt;pcl::PointXYZI&amp;gt; &amp;gt; ge;&lt;br /&gt;
	ge.setInputCloud(cloudIntensity);&lt;br /&gt;
	ge.setInputNormals(normals);&lt;br /&gt;
	ge.setRadiusSearch(0.03);&lt;br /&gt;
	ge.compute(*gradients);&lt;br /&gt;
&lt;br /&gt;
	// RIFT estimation object.&lt;br /&gt;
	pcl::RIFTEstimation&amp;lt;pcl::PointXYZI, pcl::IntensityGradient, RIFT32&amp;gt; rift;&lt;br /&gt;
	rift.setInputCloud(cloudIntensity);&lt;br /&gt;
	rift.setSearchMethod(kdtree);&lt;br /&gt;
	// Set the intensity gradients to use.&lt;br /&gt;
	rift.setInputGradient(gradients);&lt;br /&gt;
	// Radius, to get all neighbors within.&lt;br /&gt;
	rift.setRadiusSearch(0.02);&lt;br /&gt;
	// Set the number of bins to use in the distance dimension.&lt;br /&gt;
	rift.setNrDistanceBins(4);&lt;br /&gt;
	// Set the number of bins to use in the gradient orientation dimension.&lt;br /&gt;
	rift.setNrGradientBins(8);&lt;br /&gt;
	// Note: you must change the output histogram size to reflect the previous values.&lt;br /&gt;
&lt;br /&gt;
	rift.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Intensity gradients, Radius, Distance bins, Gradient bins&lt;br /&gt;
* '''Output''': RIFT descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.cs.illinois.edu/~slazebni/publications/pami05.pdf A Sparse Texture Representation Using Local Affine Regions] (Svetlana Lazebnik et al., 2005)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_r_i_f_t_estimation.html pcl::RIFTEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_intensity_gradient_estimation.html pcl::IntensityGradientEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_intensity_gradient.html pcl::IntensityGradient]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RIFT.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==NARF==&lt;br /&gt;
&lt;br /&gt;
The Normal Aligned Radial Feature is the only descriptor here that does not take a point cloud as input. Instead, it works with range images. A range image is an image in which each pixel encodes the distance to the corresponding point in the scene; when visualized, that distance is mapped to a color in the [https://en.wikipedia.org/wiki/Visible_spectrum visible light spectrum]: points that are closer to the camera appear violet, while points near the maximum sensor range appear red.&lt;br /&gt;
&lt;br /&gt;
NARF also requires us to find suitable keypoints to compute the descriptor for. NARF keypoints are located near an object's corners, which also requires finding the borders (transitions from foreground to background); these are trivial to find in a range image. Because of this lengthy pipeline, I will describe the whole process in separate sections.&lt;br /&gt;
&lt;br /&gt;
===Obtaining a range image===&lt;br /&gt;
&lt;br /&gt;
Because we always work with point clouds, I will now explain how you can convert one into a range image, in order to use it with the NARF descriptor. PCL provides a couple of handy classes to perform the conversion, provided that you fill in the camera data correctly.&lt;br /&gt;
&lt;br /&gt;
A range image can be created in two ways. First, we can use spherical projection, which gives an image similar to those produced by a LIDAR sensor. Second, we can use planar projection, which is better suited for camera-like sensors such as the Kinect or the Xtion, and will not show the characteristic distortion of the former.&lt;br /&gt;
&lt;br /&gt;
====Spherical projection====&lt;br /&gt;
&lt;br /&gt;
The following code will take a point cloud and create a range image from it, using spherical projection:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Parameters needed by the range image object:&lt;br /&gt;
&lt;br /&gt;
	// Angular resolution is the angular distance between pixels.&lt;br /&gt;
	// Kinect: 57° horizontal FOV, 43° vertical FOV, 640x480 (chosen here).&lt;br /&gt;
	// Xtion: 58° horizontal FOV, 45° vertical FOV, 640x480.&lt;br /&gt;
	float angularResolutionX = (float)(57.0f / 640.0f * (M_PI / 180.0f));&lt;br /&gt;
	float angularResolutionY = (float)(43.0f / 480.0f * (M_PI / 180.0f));&lt;br /&gt;
	// Maximum horizontal and vertical angles. For example, for a full panoramic scan,&lt;br /&gt;
	// the first would be 360°. Choosing values that match the real sensor will&lt;br /&gt;
	// decrease computation time, but don't worry if the values are bigger than&lt;br /&gt;
	// the real ones: the image will be automatically cropped to discard empty zones.&lt;br /&gt;
	float maxAngleX = (float)(60.0f * (M_PI / 180.0f));&lt;br /&gt;
	float maxAngleY = (float)(50.0f * (M_PI / 180.0f));&lt;br /&gt;
	// Sensor pose. Thankfully, the cloud includes the data.&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	// Noise level. If greater than 0, values of neighboring points will be averaged.&lt;br /&gt;
	// This would set the search radius (e.g., 0.03 == 3cm).&lt;br /&gt;
	float noiseLevel = 0.0f;&lt;br /&gt;
	// Minimum range. If set, any point closer to the sensor than this will be ignored.&lt;br /&gt;
	float minimumRange = 0.0f;&lt;br /&gt;
	// Border size. If greater than 0, a border of &amp;quot;unobserved&amp;quot; points will be left&lt;br /&gt;
	// in the image when it is cropped.&lt;br /&gt;
	int borderSize = 1;&lt;br /&gt;
&lt;br /&gt;
	// Range image object.&lt;br /&gt;
	pcl::RangeImage rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloud(*cloud, angularResolutionX, angularResolutionY,&lt;br /&gt;
									maxAngleX, maxAngleY, sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
									noiseLevel, minimumRange, borderSize);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the image.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;Range image&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImage);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here you can see an example of the output range image:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_spherical.png|thumb|center|400px|Range image of a point cloud, using spherical projection.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Angular resolution, Maximum angle, Sensor pose, Coordinate frame, Noise level, Minimum range, Border size&lt;br /&gt;
* '''Output''': Range image (spherical projection)&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_creation.php How to create a range image from a point cloud]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_visualization.php How to visualize a range image]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image.html pcl::RangeImage]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_spherical.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Planar projection====&lt;br /&gt;
&lt;br /&gt;
As mentioned, planar projection will give better results with clouds taken from a depth camera:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Parameters needed by the planar range image object:&lt;br /&gt;
&lt;br /&gt;
	// Image size. Both Kinect and Xtion work at 640x480.&lt;br /&gt;
	int imageSizeX = 640;&lt;br /&gt;
	int imageSizeY = 480;&lt;br /&gt;
	// Center of projection. Here, we choose the middle of the image.&lt;br /&gt;
	float centerX = 640.0f / 2.0f;&lt;br /&gt;
	float centerY = 480.0f / 2.0f;&lt;br /&gt;
	// Focal length. The value seen here has been taken from the original depth images.&lt;br /&gt;
	// It is safe to use the same value vertically and horizontally.&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	// Sensor pose. Thankfully, the cloud includes the data.&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	// Noise level. If greater than 0, values of neighboring points will be averaged.&lt;br /&gt;
	// This would set the search radius (e.g., 0.03 == 3cm).&lt;br /&gt;
	float noiseLevel = 0.0f;&lt;br /&gt;
	// Minimum range. If set, any point closer to the sensor than this will be ignored.&lt;br /&gt;
	float minimumRange = 0.0f;&lt;br /&gt;
&lt;br /&gt;
	// Planar range image object.&lt;br /&gt;
	pcl::RangeImagePlanar rangeImagePlanar;&lt;br /&gt;
	rangeImagePlanar.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the image.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;Planar range image&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImagePlanar);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_planar.png|thumb|center|400px|Range image of a point cloud, using planar projection.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Image size, Projection center, Focal length, Sensor pose, Coordinate frame, Noise level, Minimum range&lt;br /&gt;
* '''Output''': Range image (planar projection)&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_creation.php How to create a range image from a point cloud]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_visualization.php How to visualize a range image]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image_planar.html pcl::RangeImagePlanar]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_planar.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you prefer to do the conversion in real time while you inspect the cloud, PCL ships with an [https://github.com/PointCloudLibrary/pcl/tree/master/doc/tutorials/content/sources/openni_range_image_visualization example] that fetches an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;openni_wrapper::DepthImage&amp;quot;&amp;lt;/span&amp;gt; from an OpenNI device and creates the range image from it. You can adapt the code of the [[PCL/OpenNI_tutorial_1:_Installing_and_testing#Testing | example]] from tutorial 1 to save it to disk with the function [http://docs.pointclouds.org/trunk/group__io.html#ga7291a029cdcde32ca3639d07dc6491b9 pcl::io::saveRangeImagePlanarFilePNG()].&lt;br /&gt;
&lt;br /&gt;
===Extracting borders===&lt;br /&gt;
&lt;br /&gt;
NARF keypoints are located near the edges of objects in the range image, so in order to find them, we first have to extract the borders. A border is defined as an abrupt change from foreground to background. In a range image, this can be easily seen because there is a &amp;quot;jump&amp;quot; in the depth value of two adjacent pixels.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_border_detection.png|thumb|center|400px|Border detection on a range image (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are three types of borders. ''Object borders'' consist of the pixels (or points) located on the very edge of an object (the outermost points still belonging to the object). ''Shadow borders'' are points in the background on the edge of occlusions (empty areas in the background due to the objects in front covering them). Notice that, when the cloud is seen from the sensor's viewpoint, object and shadow points will seem adjacent. Finally, ''veil points'' are points interpolated between the previous two which appear in scans taken with LIDAR sensors, so we do not have to worry about them here.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:Range_image_border_type.png | Types of borders (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
File:Range_image_borders_example.png | Example of object and shadow borders on a cloud.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The algorithm compares each point's depth with the values of its neighbors; if a big difference is found, a border has been detected. Points closer to the sensor are marked as object borders, and the other ones as shadow borders.&lt;br /&gt;
&lt;br /&gt;
PCL provides a class for extracting borders of a range image:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the borders.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::BorderDescription&amp;gt;::Ptr borders(new pcl::PointCloud&amp;lt;pcl::BorderDescription&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthX,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Border extractor object.&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor(&amp;amp;rangeImage);&lt;br /&gt;
&lt;br /&gt;
	borderExtractor.compute(*borders);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the borders.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer* viewer =&lt;br /&gt;
		pcl::visualization::RangeImageVisualizer::getRangeImageBordersWidget(rangeImage,&lt;br /&gt;
			 -std::numeric_limits&amp;lt;float&amp;gt;::infinity(),&lt;br /&gt;
			 std::numeric_limits&amp;lt;float&amp;gt;::infinity(),&lt;br /&gt;
			 false, *borders, &amp;quot;Borders&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_borders.png|thumb|center|400px|Borders found on the range image.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can use the extractor's &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getParameters()&amp;quot;&amp;lt;/span&amp;gt; function to get a [http://docs.pointclouds.org/trunk/structpcl_1_1_range_image_border_extractor_1_1_parameters.html pcl::RangeImageBorderExtractor::Parameters] struct with the settings that will be used.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image&lt;br /&gt;
* '''Output''': Borders&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/range_image_border_extraction.php How to extract borders from range images]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image_border_extractor.html pcl::RangeImageBorderExtractor]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_borders.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Finding keypoints===&lt;br /&gt;
&lt;br /&gt;
Citing the [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original publication]:&lt;br /&gt;
&lt;br /&gt;
''&amp;quot;We have the following requirements for our interest point extraction procedure:&lt;br /&gt;
&lt;br /&gt;
# ''It must take information about borders and the surface structure into account.''&lt;br /&gt;
# ''It must select positions that can be reliably detected even if the object is observed from another perspective.''&lt;br /&gt;
# ''The points must be on positions that provide stable areas for normal estimation or the descriptor calculation in general.&amp;quot;''&lt;br /&gt;
&lt;br /&gt;
The procedure is the following: for every point in the range image, a score is computed that conveys how much the surface changes in its neighborhood (this is tuned with the ''support size'' σ, which is the diameter of the sphere used to find neighboring points). Also, the dominant direction of this change is computed. Then, this direction is compared with those of the neighbors, trying to find how stable the point is (if the directions are very different, the point is not stable, and the surface around it changes a lot). Points near the object's corners (but not exactly on the very edge) are distinctive, yet stable enough to make good keypoints.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:NARF_keypoints.png | NARF keypoints (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
File:NARF_keypoints_support_sizes.png | Interest regions with a support size of 20cm (up) and 1m (down) (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In PCL, NARF keypoints can be found this way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/narf_keypoint.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the keypoints' indices.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt;::Ptr keypoints(new pcl::PointCloud&amp;lt;int&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthX,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor;&lt;br /&gt;
	// Keypoint detection object.&lt;br /&gt;
	pcl::NarfKeypoint detector(&amp;amp;borderExtractor);&lt;br /&gt;
	detector.setRangeImage(&amp;amp;rangeImage);&lt;br /&gt;
	// The support size influences how big the surface of interest will be,&lt;br /&gt;
	// when finding keypoints from the border information.&lt;br /&gt;
	detector.getParameters().support_size = 0.2f;&lt;br /&gt;
&lt;br /&gt;
	detector.compute(*keypoints);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the keypoints.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;NARF keypoints&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImage);&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; keypoints-&amp;gt;points.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		viewer.markPoint(keypoints-&amp;gt;points[i] % rangeImage.width,&lt;br /&gt;
						 keypoints-&amp;gt;points[i] / rangeImage.width,&lt;br /&gt;
						 // Set the color of the pixel to red (the background&lt;br /&gt;
						 // circle is already that color). All other parameters&lt;br /&gt;
						 // are left untouched, check the API for more options.&lt;br /&gt;
						 pcl::visualization::Vector3ub(255, 0, 0));&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_NARF_keypoints.png|thumb|center|400px|NARF keypoints found on the range image.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image, Border extractor, Support size&lt;br /&gt;
* '''Output''': NARF keypoints&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/narf_keypoint_extraction.php How to extract NARF keypoints from a range image]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_narf_keypoint.html pcl::NarfKeypoint]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_NARF_keypoints.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing the descriptor===&lt;br /&gt;
&lt;br /&gt;
We have created the range image from a point cloud, and we have extracted the borders in order to find good keypoints. Now it is time to compute the NARF descriptor for each keypoint.&lt;br /&gt;
&lt;br /&gt;
The NARF descriptor encodes information about surface changes around a point. First, a local range patch is created around the point. It is like a small range image centered at that point, aligned with the normal (it would seem as if we were looking at the point along the normal). Then, a star pattern with ''n'' beams is overlaid onto the patch, also centered at the point. For every beam, a value is computed, that reflects how much the surface under it changes. The stronger the change is, and the closer to the center it is, the higher the final value will be. The ''n'' resulting values compose the final descriptor.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:NARF_descriptor.png|thumb|center|600px|Computing the NARF descriptor for a keypoint (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As described so far, the descriptor is not invariant to rotations around the normal. To achieve invariance, the whole possible 360 degrees are binned into a histogram. The value of each bin is computed from the descriptor values according to the angle. Then, the bin with the highest value is considered the dominant orientation, and the descriptor is shifted according to it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/narf_keypoint.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/narf_descriptor.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the keypoints' indices.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt;::Ptr keypoints(new pcl::PointCloud&amp;lt;int&amp;gt;);&lt;br /&gt;
	// Object for storing the NARF descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Narf36&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::Narf36&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Extract the keypoints.&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor;&lt;br /&gt;
	pcl::NarfKeypoint detector(&amp;amp;borderExtractor);&lt;br /&gt;
	detector.setRangeImage(&amp;amp;rangeImage);&lt;br /&gt;
	detector.getParameters().support_size = 0.2f;&lt;br /&gt;
	detector.compute(*keypoints);&lt;br /&gt;
&lt;br /&gt;
	// The NARF estimator needs the indices in a vector, not a cloud.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; keypoints2;&lt;br /&gt;
	keypoints2.resize(keypoints-&amp;gt;points.size());&lt;br /&gt;
	for (unsigned int i = 0; i &amp;lt; keypoints-&amp;gt;size(); ++i)&lt;br /&gt;
		keypoints2[i] = keypoints-&amp;gt;points[i];&lt;br /&gt;
	// NARF estimation object.&lt;br /&gt;
	pcl::NarfDescriptor narf(&amp;amp;rangeImage, &amp;amp;keypoints2);&lt;br /&gt;
	// Support size: choose the same value you used for keypoint extraction.&lt;br /&gt;
	narf.getParameters().support_size = 0.2f;&lt;br /&gt;
	// If true, the rotation invariant version of NARF will be used. The histogram&lt;br /&gt;
	// will be shifted according to the dominant orientation to provide robustness to&lt;br /&gt;
	// rotations around the normal.&lt;br /&gt;
	narf.getParameters().rotation_invariant = true;&lt;br /&gt;
&lt;br /&gt;
	narf.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image, Keypoints, Support size&lt;br /&gt;
* '''Output''': NARF descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/narf_feature_extraction.php How to extract NARF Features from a range image]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_narf_descriptor.html pcl::NarfDescriptor]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_NARF_descriptor.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RoPS==&lt;br /&gt;
&lt;br /&gt;
The Rotational Projection Statistics (RoPS) feature is a bit different from the other descriptors because it works with a triangle mesh, so a previous triangulation step is needed for generating this mesh from the cloud. Apart from that, most concepts are similar.&lt;br /&gt;
&lt;br /&gt;
In order to compute RoPS for a keypoint, the local surface is cropped according to a support radius, so only points and triangles lying inside are taken into account. Then, a local reference frame (LRF) is computed, giving the descriptor its rotational invariance. A coordinate system is created with the point as the origin, and the axes aligned with the LRF. Then, for every axis, several steps are performed.&lt;br /&gt;
&lt;br /&gt;
First, the local surface is rotated around the current axis by an angle determined by the number-of-rotations parameter. Then, all points in the local surface are projected onto the XY, XZ and YZ planes. For each projection, statistical information about the distribution of the projected points is computed, and everything is concatenated to form the final descriptor.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types_conversion.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/gp3.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rops_estimation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;135&amp;gt; ROPS135;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing both the points and the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr cloudNormals(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
	// Object for storing the ROPS descriptor for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;ROPS135&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;ROPS135&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Perform triangulation.&lt;br /&gt;
	pcl::concatenateFields(*cloud, *normals, *cloudNormals);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointNormal&amp;gt;::Ptr kdtree2(new pcl::search::KdTree&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
	kdtree2-&amp;gt;setInputCloud(cloudNormals);&lt;br /&gt;
	pcl::GreedyProjectionTriangulation&amp;lt;pcl::PointNormal&amp;gt; triangulation;&lt;br /&gt;
	pcl::PolygonMesh triangles;&lt;br /&gt;
	triangulation.setSearchRadius(0.025);&lt;br /&gt;
	triangulation.setMu(2.5);&lt;br /&gt;
	triangulation.setMaximumNearestNeighbors(100);&lt;br /&gt;
	triangulation.setMaximumSurfaceAngle(M_PI / 4); // 45 degrees.&lt;br /&gt;
	triangulation.setNormalConsistency(false);&lt;br /&gt;
	triangulation.setMinimumAngle(M_PI / 18); // 10 degrees.&lt;br /&gt;
	triangulation.setMaximumAngle(2 * M_PI / 3); // 120 degrees.&lt;br /&gt;
	triangulation.setInputCloud(cloudNormals);&lt;br /&gt;
	triangulation.setSearchMethod(kdtree2);&lt;br /&gt;
	triangulation.reconstruct(triangles);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should only compute descriptors for chosen keypoints. It has&lt;br /&gt;
	// been omitted here for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// RoPs estimation object.&lt;br /&gt;
	pcl::ROPSEstimation&amp;lt;pcl::PointXYZ, ROPS135&amp;gt; rops;&lt;br /&gt;
	rops.setInputCloud(cloud);&lt;br /&gt;
	rops.setSearchMethod(kdtree);&lt;br /&gt;
	rops.setRadiusSearch(0.03);&lt;br /&gt;
	rops.setTriangles(triangles.polygons);&lt;br /&gt;
	// Number of partition bins that is used for distribution matrix calculation.&lt;br /&gt;
	rops.setNumberOfPartitionBins(5);&lt;br /&gt;
	// The greater the number of rotations is, the bigger the resulting descriptor.&lt;br /&gt;
	// Make sure to change the histogram size accordingly.&lt;br /&gt;
	rops.setNumberOfRotations(3);&lt;br /&gt;
	// Support radius that is used to crop the local surface of the point.&lt;br /&gt;
	rops.setSupportRadius(0.025);&lt;br /&gt;
&lt;br /&gt;
	rops.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Triangles, Search method, Support radius, Number of rotations, Number of partition bins&lt;br /&gt;
* '''Output''': RoPS descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://arxiv.org/pdf/1304.3192v1.pdf Rotational Projection Statistics for 3D Local Surface Description and Object Recognition] (Yulan Guo et al., 2013)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_r_o_p_s_estimation.html pcl::ROPSEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RoPS.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Global descriptors=&lt;br /&gt;
&lt;br /&gt;
Global descriptors encode object geometry. They are not computed for individual points, but for a whole cluster that represents an object. Because of this, a preprocessing step ([[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Segmentation|segmentation]]) is required, in order to retrieve possible candidates.&lt;br /&gt;
&lt;br /&gt;
Global descriptors are used for object recognition and classification, geometric analysis (object type, shape...), and pose estimation.&lt;br /&gt;
&lt;br /&gt;
You should also know that many local descriptors can also be used as global ones. This can be done with descriptors that use a radius to search for neighbors (as PFH does). The trick is to compute it for one single point in the object cluster, and set the radius to the maximum possible distance between any two points (so all points in the cluster are considered as neighbors).&lt;br /&gt;
&lt;br /&gt;
==VFH==&lt;br /&gt;
&lt;br /&gt;
The Viewpoint Feature Histogram is based on the FPFH. Because the latter is invariant to the object's pose, the authors decided to expand it by including information about the viewpoint. Also, the FPFH is estimated once for the whole cluster, not for every point.&lt;br /&gt;
&lt;br /&gt;
The VFH is made up of two parts: a viewpoint direction component and an extended FPFH component. To compute the first one, the object's [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Computing_the_centroid|centroid]] is found, which is the point that results from averaging the X, Y and Z coordinates of all points. Then, the vector between the viewpoint (the position of the sensor) and this centroid is computed and normalized. Finally, for every point in the cluster, the angle between this vector and the point's normal is calculated, and the result is binned into a histogram. The vector is translated to each point when computing the angle, which makes the descriptor scale invariant.&lt;br /&gt;
&lt;br /&gt;
The second component is computed like the FPFH (which results in 3 histograms for the 3 angular features, α, φ and θ), with some differences: it is only computed for the centroid, using the computed viewpoint direction vector as its normal (since the centroid, obviously, does not have one), and setting all the cluster's points as neighbors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:VFH_viewpoint_component.png | Viewpoint component of the VFH (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).&lt;br /&gt;
File:VFH_extended_FPFH_component.png | Extended FPFH component of the VFH (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The resulting 4 histograms (1 for the viewpoint component, 3 for the extended FPFH component) are concatenated to build the final VFH descriptor. By default, the bins are normalized using the total number of points in the cluster. This makes the VFH descriptor invariant to scale.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:VFH_histogram.png|thumb|center|400px| VFH histogram (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The PCL implementation computes an additional fifth histogram with the distances of the cluster points to the centroid (the Shape Distribution Component, SDC), increasing the size of the output descriptor from 263 to 308. The SDC is taken from the CVFH descriptor that we will see in the next section, and makes the result more robust.&lt;br /&gt;
&lt;br /&gt;
The VFH of an already clustered object can be computed this way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the VFH descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// VFH estimation object.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Optionally, we can normalize the bins of the resulting histogram,&lt;br /&gt;
	// using the total number of points.&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	// Also, we can normalize the SDC with the maximum size found between&lt;br /&gt;
	// the centroid and any of the cluster's points.&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Because only one VFH descriptor is computed for the whole cluster, the size of the cloud object that stores the result will be 1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, [Normalize bins], [Normalize SDC]&lt;br /&gt;
* '''Output''': VFH descriptor&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/vfh_estimation.php Estimating VFH signatures for a set of points]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf Fast 3D Recognition and Pose Using the Viewpoint Feature Histogram] (Radu Bogdan Rusu et al., 2010)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_v_f_h_estimation.html pcl::VFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_VFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===CVFH===&lt;br /&gt;
&lt;br /&gt;
The original VFH descriptor is not robust to occlusion, other sensor artifacts, or measurement errors. If the object cluster is missing many points, the computed centroid will differ from the original one, altering the final descriptor and preventing a positive match from being found. Because of that, the Clustered Viewpoint Feature Histogram (CVFH) was introduced.&lt;br /&gt;
&lt;br /&gt;
The idea is very simple: instead of computing a single VFH histogram for the whole cluster, the object is first divided into stable, smooth regions using [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Region_growing | region-growing segmentation]], which enforces several constraints on the distances and normal differences of the points belonging to each region. Then, a VFH is computed for every region. Thanks to this, an object can be found in a scene as long as at least one of its regions is fully visible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:CVFH_occlusion.png | Typical occlusion issues in a point cloud (image from original paper).&lt;br /&gt;
File:CVFH_regions.png | Object regions computed for the CVFH (image from original paper).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additionally, a Shape Distribution Component (SDC) is also computed and included. It encodes information about the distribution of the points around the region's centroid, measuring the distances. The SDC makes it possible to differentiate between objects with similar characteristics (size and normal distribution), like two planar surfaces.&lt;br /&gt;
&lt;br /&gt;
The authors proposed to discard the histogram normalization step that is performed in VFH. This makes the descriptor dependent on scale, so an object of a certain size will not match a bigger or smaller copy of itself. It also makes CVFH more robust to occlusion.&lt;br /&gt;
&lt;br /&gt;
CVFH is invariant to the camera roll angle, like most global descriptors. This is so because rotations about that camera axis do not change the observable geometry that descriptors are computed from, limiting the pose estimation to 5 DoF. The use of a Camera Roll Histogram (CRH) has been proposed to overcome this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/cvfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the CVFH descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// CVFH estimation object.&lt;br /&gt;
	pcl::CVFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; cvfh;&lt;br /&gt;
	cvfh.setInputCloud(object);&lt;br /&gt;
	cvfh.setInputNormals(normals);&lt;br /&gt;
	cvfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Set the maximum allowable deviation of the normals,&lt;br /&gt;
	// for the region segmentation step.&lt;br /&gt;
	cvfh.setEPSAngleThreshold(5.0 / 180.0 * M_PI); // 5 degrees.&lt;br /&gt;
	// Set the curvature threshold (maximum disparity between curvatures),&lt;br /&gt;
	// for the region segmentation step.&lt;br /&gt;
	cvfh.setCurvatureThreshold(1.0);&lt;br /&gt;
	// Set to true to normalize the bins of the resulting histogram,&lt;br /&gt;
	// using the total number of points. Note: enabling it will make CVFH&lt;br /&gt;
	// invariant to scale just like VFH, but the authors encourage the opposite.&lt;br /&gt;
	cvfh.setNormalizeBins(false);&lt;br /&gt;
&lt;br /&gt;
	cvfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can further customize the segmentation step with &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setClusterTolerance()&amp;quot;&amp;lt;/span&amp;gt; (to set the maximum Euclidean distance between points in the same cluster) and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setMinPoints()&amp;quot;&amp;lt;/span&amp;gt;. The size of the output will be equal to the number of regions the object was divided into. Also, check the functions &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getCentroidClusters()&amp;quot;&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getCentroidNormalClusters()&amp;quot;&amp;lt;/span&amp;gt;; you can use them to get information about the centroids used to compute the different CVFH descriptors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Angle threshold, Curvature threshold, [Normalize bins], [Cluster tolerance], [Minimum points]&lt;br /&gt;
* '''Output''': CVFH descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6130296 CAD-Model Recognition and 6DOF Pose Estimation Using 3D Cues] (requires IEEE Xplore subscription) (Aitor Aldoma et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_c_v_f_h_estimation.html pcl::CVFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_CVFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====OUR-CVFH====&lt;br /&gt;
&lt;br /&gt;
The Oriented, Unique and Repeatable CVFH expands the previous descriptor, adding the computation of a unique reference frame to make it more robust.&lt;br /&gt;
&lt;br /&gt;
OUR-CVFH relies on the use of Semi-Global Unique Reference Frames (SGURFs), which are repeatable coordinate systems computed for each region. Not only do they remove the invariance to camera roll and allow extracting the 6DoF pose directly without additional steps, but they also improve the spatial descriptiveness.&lt;br /&gt;
&lt;br /&gt;
The first part of the computation is akin to CVFH, but after segmentation, the points in each region are filtered once more according to the difference between their normals and the region's average normal. This results in better shaped regions, improving the estimation of the Reference Frames (RFs).&lt;br /&gt;
&lt;br /&gt;
After this, the SGURF is computed for each region. Disambiguation is performed to decide the sign of the axes, according to the points' distribution. If this is not enough and the sign remains ambiguous, multiple RFs will need to be created to account for it. Finally, the OUR-CVFH descriptor is computed. The original Shape Distribution Component (SDC) is discarded, and the surface is now described according to the RFs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:OUR-CVFH.png|thumb|center|600px| SGURF frame and resulting histogram of a region (image from [http://vision.deis.unibo.it/fede/papers/dagm12.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/our_cvfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the OUR-CVFH descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// OUR-CVFH estimation object.&lt;br /&gt;
	pcl::OURCVFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; ourcvfh;&lt;br /&gt;
	ourcvfh.setInputCloud(object);&lt;br /&gt;
	ourcvfh.setInputNormals(normals);&lt;br /&gt;
	ourcvfh.setSearchMethod(kdtree);&lt;br /&gt;
	ourcvfh.setEPSAngleThreshold(5.0 / 180.0 * M_PI); // 5 degrees.&lt;br /&gt;
	ourcvfh.setCurvatureThreshold(1.0);&lt;br /&gt;
	ourcvfh.setNormalizeBins(false);&lt;br /&gt;
	// Set the minimum axis ratio between the SGURF axes. At the disambiguation phase,&lt;br /&gt;
	// this will decide if additional Reference Frames need to be created, if ambiguous.&lt;br /&gt;
	ourcvfh.setAxisRatio(0.8);&lt;br /&gt;
&lt;br /&gt;
	ourcvfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getTransforms()&amp;quot;&amp;lt;/span&amp;gt; function to get the transformations aligning the cloud to the corresponding SGURF.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Angle threshold, Curvature threshold, [Normalize bins], [Cluster tolerance], [Minimum points], [Axis ratio]&lt;br /&gt;
* '''Output''': OUR-CVFH descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/dagm12.pdf OUR-CVFH – Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram for Object Recognition and 6DOF Pose Estimation] (Aitor Aldoma et al., 2012)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_o_u_r_c_v_f_h_estimation.html pcl::OURCVFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_OUR-CVFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==ESF==&lt;br /&gt;
&lt;br /&gt;
The Ensemble of Shape Functions (ESF) is a combination of 3 different shape functions that describe certain properties of the cloud's points: distances, angles and area. The descriptor is unusual in that it does not require normal information. In fact, it does not need any preprocessing at all, as it is robust to noise and incomplete surfaces.&lt;br /&gt;
&lt;br /&gt;
The algorithm uses a voxel grid as an approximation of the real surface. It iterates through all the points in the cloud: for every iteration, 3 random points are chosen. For these points, the shape functions are computed:&lt;br /&gt;
&lt;br /&gt;
* '''D2''': this function computes the distances between point pairs (3 overall). Then, for every pair, it checks if the line that connects both points lies entirely inside the surface, entirely outside (crossing free space), or both. Depending on this, the distance value will be binned to one of three possible histograms: IN, OUT or MIXED.&lt;br /&gt;
* '''D2 ratio''': an additional histogram for the ratio between parts of the line inside the surface, and parts outside. This value will be 0 if the line is completely outside, 1 if completely inside, and some value in between if mixed.&lt;br /&gt;
* '''D3''': this computes the square root of the area of the triangle formed by the 3 points. Like D2, the result is also classified as IN, OUT or MIXED, each with its own histogram.&lt;br /&gt;
* '''A3''': this function computes the angle formed by the points. Then, the value is binned depending on how the line opposite to the angle is (once again, as IN, OUT or MIXED).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ESF.png|thumb|center|600px| ESF descriptor (image from [http://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf this paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After the loop is over, we are left with 10 subhistograms (IN, OUT and MIXED for D2, D3 and A3, and an additional one for the ratio). Each one has 64 bins, so the size of the final ESF descriptor is 640.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/esf.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the ESF descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ESFSignature640&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::ESFSignature640&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// ESF estimation object.&lt;br /&gt;
	pcl::ESFEstimation&amp;lt;pcl::PointXYZ, pcl::ESFSignature640&amp;gt; esf;&lt;br /&gt;
	esf.setInputCloud(object);&lt;br /&gt;
&lt;br /&gt;
	esf.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster)&lt;br /&gt;
* '''Output''': ESF descriptor&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6181760 Ensemble of shape functions for 3D object classification] (requires IEEE Xplore subscription) (Walter Wohlkinger and Markus Vincze, 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_e_s_f_estimation.html pcl::ESFEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_ESF.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==GFPFH==&lt;br /&gt;
&lt;br /&gt;
As you may have guessed, GFPFH stands for Global Fast Point Feature Histogram, the global version of the FPFH descriptor. GFPFH was designed for the task of helping a robot navigate its environment, having some context of the objects around it.&lt;br /&gt;
&lt;br /&gt;
The first step before being able to compute the descriptor is surface categorization. A set of logical primitives (the classes, or categories) is created, which depends on the type of objects we expect the robot to find in the scene. For example, if we know there will be a coffee mug, we create three: one for the handle, and the other two for the outer and inner faces. Then, FPFH descriptors are computed, and everything is fed to a [https://en.wikipedia.org/wiki/Conditional_random_field Conditional Random Field] (CRF) algorithm. The CRF will label each surface with one of the previous categories, so we end up with a cloud where each point has been classified depending on the type of object (or object region) it belongs to.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FPFH_CRF.png|thumb|center|300px| Classification of objects made with FPFH and CRF (image from [https://www.willowgarage.com/sites/default/files/iccv09.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now, the GFPFH descriptor can be computed from the result of the classification step. It will encode what the object is made of, so the robot can easily recognize it. First, an [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Octree | octree]] is created, dividing the object into voxel leaves. For every leaf, a set of probabilities is created, one for each class. Each one stores the probability of that leaf belonging to the class, and is computed as the ratio between the number of points in that leaf that have been labelled with that class and the total number of points. Then, for every pair of leaves in the octree, a line is cast, connecting them. Every leaf in its path is checked for occupancy, and the result is stored in a histogram. If the leaf is empty (free space), a value of 0 is saved. Otherwise, the leaf probabilities are used.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:GFPFH.png|thumb|center|500px| Computing the GFPFH with a voxel grid (image from [https://www.willowgarage.com/sites/default/files/iccv09.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following code will compute the GFPFH for a cloud with label information. The categorization step is up to you, as it depends largely on the type of scene and how you intend to use it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/gfpfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZL&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZL&amp;gt;);&lt;br /&gt;
	// Object for storing the GFPFH descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::GFPFHSignature16&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::GFPFHSignature16&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZL&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you should now perform classification on the cloud's points. See the&lt;br /&gt;
	// original paper for more details. For this example, we will now consider 4&lt;br /&gt;
	// different classes, and randomly label each point as one of them.&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; object-&amp;gt;points.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		object-&amp;gt;points[i].label = 1 + i % 4;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// GFPFH estimation object.&lt;br /&gt;
	pcl::GFPFHEstimation&amp;lt;pcl::PointXYZL, pcl::PointXYZL, pcl::GFPFHSignature16&amp;gt; gfpfh;&lt;br /&gt;
	gfpfh.setInputCloud(object);&lt;br /&gt;
	// Set the object that contains the labels for each point. Thanks to the&lt;br /&gt;
	// PointXYZL type, we can use the same object we store the cloud in.&lt;br /&gt;
	gfpfh.setInputLabels(object);&lt;br /&gt;
	// Set the size of the octree leaves to 1cm (cubic).&lt;br /&gt;
	gfpfh.setOctreeLeafSize(0.01);&lt;br /&gt;
	// Set the number of classes the cloud has been labelled with (default is 16).&lt;br /&gt;
	gfpfh.setNumberOfClasses(4);&lt;br /&gt;
&lt;br /&gt;
	gfpfh.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Labels, Number of classes, Leaf size&lt;br /&gt;
* '''Output''': GFPFH descriptor&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [https://www.willowgarage.com/sites/default/files/iccv09.pdf Detecting and Segmenting Objects for Mobile Manipulation] (Radu Bogdan Rusu et al., 2009)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_g_f_p_f_h_estimation.html pcl::GFPFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_GFPFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==GRSD==&lt;br /&gt;
&lt;br /&gt;
The global version of the Radius-based Surface Descriptor works in a similar fashion to GFPFH. A voxelization and a surface categorization step are performed beforehand, labelling all surface patches according to the geometric category (plane, cylinder, edge, rim, sphere), using RSD. Then, the whole cluster is classified into one of these categories, and the GRSD descriptor is computed from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:GRSD.png|thumb|center|400px| Classification of objects for GRSD and resulting histogram (image from [https://ias.in.tum.de/_media/spezial/bib/grsd10humanoids.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To compute it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/grsd.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the GRSD descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::GRSDSignature21&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::GRSDSignature21&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// GRSD estimation object.&lt;br /&gt;
	pcl::GRSDEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::GRSDSignature21&amp;gt; grsd;&lt;br /&gt;
	grsd.setInputCloud(cloud);&lt;br /&gt;
	grsd.setInputNormals(normals);&lt;br /&gt;
	grsd.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	grsd.setRadiusSearch(0.05);&lt;br /&gt;
	&lt;br /&gt;
	grsd.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Radius&lt;br /&gt;
* '''Output''': GRSD descriptor&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [https://ias.in.tum.de/_media/spezial/bib/grsd10humanoids.pdf Hierarchical Object Geometric Categorization and Appearance Classification for Mobile Manipulation] (Zoltan-Csaba Marton et al., 2010)&lt;br /&gt;
** [https://ias.cs.tum.edu/_media/spezial/bib/marton11ijrr.pdf Combined 2D-3D Categorization and Classification for Multimodal Perception Systems] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
** [http://www.mi.t.u-tokyo.ac.jp/top/downloadpublication/36 Voxelized Shape and Color Histograms for RGB-D] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_g_r_s_d_estimation.html pcl::GRSDEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_GRSD.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Saving and loading=&lt;br /&gt;
&lt;br /&gt;
You can save a descriptor to a file just [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Writing_to_file|like with any other cloud type]]. One caveat, though. If you are using a descriptor that has its own custom type, like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt;, everything will be OK. But with descriptors that do not (where you have to use &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::Histogram&amp;lt;&amp;gt;&amp;quot;&amp;lt;/span&amp;gt;), you will get this error: &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;POINT_TYPE_NOT_PROPERLY_REGISTERED&amp;quot;''&amp;lt;/span&amp;gt;. In order to save or load from a file, PCL's IO functions need to know about the number, type and size of the fields. To solve this, you will have to properly register a new point type for your descriptor. For example, this will work for the [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#RoPS|RoPS]] descriptor example we saw earlier:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;POINT_CLOUD_REGISTER_POINT_STRUCT(ROPS135,&lt;br /&gt;
                                  (float[135], histogram, histogram)&lt;br /&gt;
)&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the previous snippet to your code (adapting it to your descriptor), and you will be able to save and load descriptors as usual.&lt;br /&gt;
&lt;br /&gt;
=Visualization=&lt;br /&gt;
&lt;br /&gt;
Sometimes it is useful to check a visual representation of a descriptor, for example to analyze how the data is distributed over the bins. Because descriptors are stored as histograms, this is trivial to do. PCL offers a couple of classes for it.&lt;br /&gt;
&lt;br /&gt;
==PCLHistogramVisualizer==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLHistogramVisualizer&amp;quot;&amp;lt;/span&amp;gt; is the simplest way to plot a histogram. The class offers little functionality, but it does its job: a single call is enough, passing the histogram and its size:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/histogram_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Clouds for storing everything.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Estimate VFH descriptor.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
&lt;br /&gt;
	// Plotter object.&lt;br /&gt;
	pcl::visualization::PCLHistogramVisualizer viewer;&lt;br /&gt;
	// We need to set the size of the descriptor beforehand.&lt;br /&gt;
	viewer.addFeatureHistogram(*descriptor, 308);&lt;br /&gt;
&lt;br /&gt;
	viewer.spin();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Histogram_visualizer_VFH.png|thumb|center|500px| VFH histogram seen with the Histogram Visualizer.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Descriptor, descriptor size, [Window size], [Background color], [Y axis range]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_histogram_visualizer.html pcl::visualization::PCLHistogramVisualizer]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_histogram_visualizer.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==PCLPlotter==&lt;br /&gt;
&lt;br /&gt;
This class has all the methods from &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLHistogramVisualizer&amp;quot;&amp;lt;/span&amp;gt; (which will be deprecated soon) plus a lot more features. The code is almost the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_plotter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Clouds for storing everything.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Estimate VFH descriptor.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
&lt;br /&gt;
	// Plotter object.&lt;br /&gt;
	pcl::visualization::PCLPlotter plotter;&lt;br /&gt;
	// We need to set the size of the descriptor beforehand.&lt;br /&gt;
	plotter.addFeatureHistogram(*descriptor, 308);&lt;br /&gt;
&lt;br /&gt;
	plotter.plot();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:PCL_plotter_VFH.png|thumb|center|500px| VFH histogram seen with the PCLPlotter class.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you have raw data (such as a vector of floats) you can use the [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_plotter.html#aaf23b2b1c2f91c517cfde387ee1b654e addHistogramData()] function to plot it as a histogram.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Descriptor, descriptor size, [Window size], [Background color], [Y axis range]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/pcl_plotter.php PCLPlotter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_plotter.html pcl::visualization::PCLPlotter]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_plotter.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==PCL Viewer==&lt;br /&gt;
&lt;br /&gt;
This program, included with PCL, will also let you open and visualize a saved descriptor. Internally, it uses [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#PCLPlotter|PCLPlotter]]. You can invoke the viewer from the command line like this, which makes it very handy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_viewer &amp;lt;descriptor_file&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:PCL_viewer_VFH.png|thumb|center|500px| VFH histogram seen with ''pcl_viewer''.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)&amp;diff=4748</id>
		<title>PCL/OpenNI tutorial 4: 3D object recognition (descriptors)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)&amp;diff=4748"/>
				<updated>2015-11-02T11:08:04Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is time to learn the basics of one of the most interesting applications of point cloud processing: 3D object recognition. Akin to 2D recognition, this technique relies on finding good keypoints (characteristic points) in the cloud, and matching them to a set of previously saved ones. But 3D has several advantages over 2D: namely, we will be able to estimate, with decent accuracy, the exact position and orientation of the object relative to the sensor. Also, 3D object recognition tends to be more robust to clutter (crowded scenes where objects in the front occlude objects in the background). And finally, having information about the object's shape will help with collision avoidance or grasping operations.&lt;br /&gt;
&lt;br /&gt;
In this first tutorial we will see what descriptors are, how many types are available in PCL, and how to compute them.&lt;br /&gt;
&lt;br /&gt;
=Overview=&lt;br /&gt;
&lt;br /&gt;
The basis of 3D object recognition is to find a set of correspondences between two different clouds, one of them containing the object we are looking for. In order to do this, we need a way to compare points in an unambiguous manner. Until now, we have worked with points that store the XYZ coordinates, the RGB color... but none of those properties are unique enough. In two sequential scans, two points could share the same coordinates despite belonging to different surfaces, and using the color information takes us back to 2D recognition, with all the lighting-related problems.&lt;br /&gt;
&lt;br /&gt;
In a previous tutorial, we talked about [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Feature_estimation | features]], before introducing the normals. Normals are an example of a feature, because they encode information about the vicinity of the point: the neighboring points are taken into account when computing them, giving us an idea of what the surrounding surface looks like. But this is not enough. For a feature to be optimal, it must meet the following criteria:&lt;br /&gt;
&lt;br /&gt;
# '''It must be robust to transformations''': rigid transformations (the ones that do not change the distance between points) like translations and rotations must not affect the feature. Even if we play with the cloud a bit beforehand, there should be no difference.&lt;br /&gt;
# '''It must be robust to noise''': measurement errors that cause noise should not change the feature estimation much.&lt;br /&gt;
# '''It must be resolution invariant''': if sampled with different density (like after performing downsampling), the result must be identical or similar.&lt;br /&gt;
&lt;br /&gt;
This is where descriptors come in. They are more complex (and precise) signatures of a point that encode a lot of information about the surrounding geometry. The purpose is to unequivocally identify a point across multiple point clouds, no matter the noise, resolution or transformations. Also, some of them capture additional data about the object they belong to, like the viewpoint (which lets us retrieve the pose).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Correspondence.jpg|thumb|center|600px|Finding correspondences between point features of two clouds (image from http://pointclouds.org/).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are many 3D descriptors implemented in PCL. Each one has its own method for computing unique values for a point. Some use the difference between the angles of the normals of the point and its neighbors, for example; others use the distances between the points. Because of this, some are inherently better or worse for certain purposes: a given descriptor may be scale invariant, while another one may cope better with occlusions and partial views of objects. Which one you choose depends on what you want to do.&lt;br /&gt;
&lt;br /&gt;
After calculating the necessary values, an additional step is performed to reduce the descriptor size: the result is binned into a [https://en.wikipedia.org/wiki/Histogram histogram]. To do this, the value range of each variable that makes up the descriptor is divided into ''n'' subdivisions, and the number of occurrences in each one is counted. Try to imagine a descriptor that computes a single variable, ranging from 1 to 100. We choose to create 10 bins for it, so the first bin gathers all occurrences between 1 and 10, the second from 11 to 20, and so on. We look at the value of the variable for the first point-neighbor pair; it is 27, so we increment the value of the third bin by 1. We keep doing this until we get a final histogram for that keypoint. The bin size must be carefully chosen depending on how descriptive that variable is. The variables do not have to share the same number of bins, nor do the bins have to be of the same size: if, for example, most values in the previous example fell in the 50-100 range, it would be sensible to have more, smaller bins in that range.&lt;br /&gt;
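&lt;br /&gt;
As an illustration, the binning step from the single-variable example above can be sketched in plain C++ (a hypothetical, stand-alone function; PCL's descriptor classes perform the binning internally):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Bin raw values from the range [1, 100] into a histogram of 10 bins&lt;br /&gt;
// of equal size (1-10, 11-20, and so on).&lt;br /&gt;
std::vector&amp;lt;int&amp;gt;&lt;br /&gt;
binValues(const std::vector&amp;lt;float&amp;gt;&amp;amp; values)&lt;br /&gt;
{&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; histogram(10, 0);&lt;br /&gt;
	for (float value : values)&lt;br /&gt;
	{&lt;br /&gt;
		// A value of 27 increments the third bin (index 2).&lt;br /&gt;
		int bin = static_cast&amp;lt;int&amp;gt;((value - 1.0f) / 10.0f);&lt;br /&gt;
		if (bin &amp;lt; 0) bin = 0;&lt;br /&gt;
		if (bin &amp;gt; 9) bin = 9;&lt;br /&gt;
		histogram[bin]++;&lt;br /&gt;
	}&lt;br /&gt;
	return histogram;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;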
&lt;br /&gt;
Descriptors can be classified in two main categories: global and local. The process for computing and using each one (recognition pipeline) is different, so each will be explained in its own section in this article.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/how_features_work.php How 3D Features work in PCL]&lt;br /&gt;
** [https://github.com/PointCloudLibrary/pcl/wiki/Overview-and-Comparison-of-Features  Overview and Comparison of Features]&lt;br /&gt;
** [http://www.pointclouds.org/assets/icra2013/pcl_features_icra13.pdf How does a good feature look like?]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [https://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf Point Cloud Library: Three-Dimensional Object Recognition and 6 DoF Pose Estimation] (Aitor Aldoma et al., 2012)&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Table==&lt;br /&gt;
&lt;br /&gt;
The following table will give you a hint of how many descriptors there are in PCL, and some of their characteristics:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot; style=&amp;quot;margin: auto; text-align:center&amp;quot; cellpadding=&amp;quot;15&amp;quot;&lt;br /&gt;
|+ 3D descriptors&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Name&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Type&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Size&amp;lt;sup&amp;gt;†&amp;lt;/sup&amp;gt;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Custom PointType&amp;lt;sup&amp;gt;††&amp;lt;/sup&amp;gt;&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#PFH|PFH]] (Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 125&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#FPFH|FPFH]] (Fast Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 33&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RSD|RSD]] (Radius-Based Surface Descriptor)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 289&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#3DSC|3DSC]] (3D Shape Context)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 1980&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#USC|USC]] (Unique Shape Context)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 1960&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#SHOT|SHOT]] (Signatures of Histograms of Orientations)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 352&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#Spin_image|Spin image]]&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 153*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RIFT|RIFT]] (Rotation-Invariant Feature Transform)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 32*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#NARF|NARF]] (Normal Aligned Radial Feature)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 36&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RoPS|RoPS]] (Rotational Projection Statistics)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 135*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#VFH|VFH]] (Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#CVFH|CVFH]] (Clustered Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#OUR-CVFH|OUR-CVFH]] (Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#ESF|ESF]] (Ensemble of Shape Functions)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 640&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#GFPFH|GFPFH]] (Global Fast Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 16&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#GRSD|GRSD]] (Global Radius-Based Surface Descriptor)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 21&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
† Values marked with an asterisk (*) indicate that the descriptor's size depends on some parameter(s), and the one given is for the default values.&lt;br /&gt;
&lt;br /&gt;
†† Descriptors without their own custom PointType use the generic &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::Histogram&amp;lt;&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; type. See [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Saving_and_loading|Saving and loading]].&lt;br /&gt;
&lt;br /&gt;
Optionally, you can download a document with a more detailed version of the table. Page format is A4, landscape:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Downloads''':&lt;br /&gt;
** [http://robotica.unileon.es/~victorm/3D_descriptors_comparison.odt Table as original ODT] (uses font embedding, requires LibreOffice 4)&lt;br /&gt;
** [http://robotica.unileon.es/~victorm/3D_descriptors_comparison.pdf Table as PDF]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Local descriptors=&lt;br /&gt;
&lt;br /&gt;
Local descriptors are computed for individual points that we give as input. They have no notion of what an object is; they just describe the local geometry around that point. Usually, it is your task to choose which points you want descriptors computed for: the ''keypoints''. Most of the time, you can get away with just downsampling the cloud and choosing all remaining points, but keypoint detectors are available, like the one used for [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Finding_keypoints|NARF]], or [[PCL/OpenNI_tutorial_5:_3D_object_recognition_%28pipeline%29#ISS|ISS]].&lt;br /&gt;
&lt;br /&gt;
Local descriptors are used for object recognition and [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Registration | registration]]. Now we will see which ones are implemented into PCL.&lt;br /&gt;
&lt;br /&gt;
==PFH==&lt;br /&gt;
&lt;br /&gt;
PFH stands for Point Feature Histogram. It is one of the most important descriptors offered by PCL, and the basis of others such as FPFH. The PFH tries to capture information about the geometry surrounding the point by analyzing the difference between the directions of the normals in the vicinity (and because of this, an imprecise normal estimation may produce low-quality descriptors).&lt;br /&gt;
&lt;br /&gt;
First, the algorithm pairs all points in the vicinity (not just the chosen keypoint with its neighbors, but also the neighbors with each other). Then, for each pair, a [https://en.wikipedia.org/wiki/Darboux_frame fixed coordinate frame] is computed from their normals. With this frame, the difference between the normals can be encoded with 3 angular variables. These variables, together with the Euclidean distance between the points, are saved, and then binned into a histogram when all pairs have been computed. The final descriptor is the concatenation of the histograms of each variable (4 in total).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:PFH_neighbors.png | Point pairs established when computing the PFH for a point (image from http://pointclouds.org).&lt;br /&gt;
File:PFH_frame.png | Fixed coordinate frame and angular features computed for one of the pairs (image from http://pointclouds.org).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Computing descriptors in PCL is very easy, and the PFH is not an exception:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/pfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the PFH descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PFHSignature125&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::PFHSignature125&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// PFH estimation object.&lt;br /&gt;
	pcl::PFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::PFHSignature125&amp;gt; pfh;&lt;br /&gt;
	pfh.setInputCloud(cloud);&lt;br /&gt;
	pfh.setInputNormals(normals);&lt;br /&gt;
	pfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	pfh.setRadiusSearch(0.05);&lt;br /&gt;
&lt;br /&gt;
	pfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, PCL uses the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt; type to store the computed descriptor. This means that the descriptor's size is 125 (the dimensionality of the feature vector): dividing a ''D''-dimensional feature space into ''B'' divisions per dimension requires a total of B&amp;lt;sup&amp;gt;D&amp;lt;/sup&amp;gt; bins. The original proposal makes use of the distance between the points, but PCL's implementation does not, as it was not considered discriminative enough (especially in 2.5D scans, where the distance between points increases the further away they are from the sensor). Hence, the 3 remaining features, with one dimension and 5 divisions each, result in a 125-dimensional (5&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;) vector.&lt;br /&gt;
&lt;br /&gt;
The final object that stores the computed descriptors can be handled like a normal cloud (even saved to, or read from, disk), and it has the same number of &amp;quot;points&amp;quot; as the original one. The &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt; object at position 0, for example, stores the PFH descriptor for the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; point at the same position in the cloud.&lt;br /&gt;
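&lt;br /&gt;
For example (continuing from the program above, and assuming &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;iostream&amp;quot;&amp;lt;/span&amp;gt; has been included), the bins of the first descriptor can be read directly, since the type exposes a plain float array:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Print the 125 bin values of the descriptor computed for the first point.&lt;br /&gt;
for (int i = 0; i &amp;lt; 125; i++)&lt;br /&gt;
{&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; descriptors-&amp;gt;points[0].histogram[i] &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;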
&lt;br /&gt;
For additional details about the descriptor, check the original publications listed below, or PCL's tutorial.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius&lt;br /&gt;
* '''Output''': PFH descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/pfh_estimation.php Point Feature Histograms (PFH) descriptors]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://ias.in.tum.de/_media/spezial/bib/rusu08ias.pdf Persistent Point Feature Histograms for 3D Point Clouds] (Radu Bogdan Rusu et al., 2008)&lt;br /&gt;
** [http://files.rbrusu.com/publications/RusuPhDThesis.pdf Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments] (Radu Bogdan Rusu, 2009, section 4.4, page 50)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_p_f_h_estimation.html pcl::PFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_PFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===FPFH===&lt;br /&gt;
&lt;br /&gt;
PFH gives accurate results, but it has a drawback: it is too computationally expensive to run in real time. For a cloud of ''n'' keypoints, each with ''k'' neighbors considered, it has a [https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations complexity] of ''O(nk&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;)''. Because of this, a derived descriptor was created, named FPFH (Fast Point Feature Histogram).&lt;br /&gt;
&lt;br /&gt;
The FPFH considers only the direct connections between the current keypoint and its neighbors, removing the additional links between neighbors. This takes the complexity down to ''O(nk)''. Because of this simplification, the resulting histogram is referred to as SPFH (Simplified Point Feature Histogram). The reference frame and the angular variables are computed as before.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FPFH_neighbors.png|thumb|center|400px|Point pairs established when computing the FPFH for a point (image from http://pointclouds.org).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To account for the loss of these extra connections, an additional step takes place after all histograms have been computed: the SPFHs of a point's neighbors are &amp;quot;merged&amp;quot; with its own, weighted according to the distance. This has the effect of incorporating surface information from points as far away as 2 times the radius used. Finally, the 3 histograms (the distance is not used) are concatenated to compose the final descriptor.&lt;br /&gt;
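&lt;br /&gt;
According to the original publication, this merging step corresponds to the following formula, where ''p'' is the keypoint, ''p&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;'' are its ''k'' neighbors, and ''&amp;omega;&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;'' is the distance between ''p'' and ''p&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;'':&lt;br /&gt;
&lt;br /&gt;
FPFH(p) = SPFH(p) + (1 / k) &amp;middot; &amp;sum;&amp;lt;sub&amp;gt;i=1..k&amp;lt;/sub&amp;gt; (1 / &amp;omega;&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;) &amp;middot; SPFH(p&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;)&lt;br /&gt;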
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/fpfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the FPFH descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::FPFHSignature33&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::FPFHSignature33&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// FPFH estimation object.&lt;br /&gt;
	pcl::FPFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33&amp;gt; fpfh;&lt;br /&gt;
	fpfh.setInputCloud(cloud);&lt;br /&gt;
	fpfh.setInputNormals(normals);&lt;br /&gt;
	fpfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	fpfh.setRadiusSearch(0.05);&lt;br /&gt;
&lt;br /&gt;
	fpfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An additional implementation of the FPFH estimation that takes advantage of multithreading (with the [https://en.wikipedia.org/wiki/OpenMP OpenMP API]) is available in the class &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;FPFHEstimationOMP&amp;quot;&amp;lt;/span&amp;gt;. Its interface is identical to the standard unoptimized implementation, and using it results in a significant speedup on multi-core systems. Remember to include the header &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;fpfh_omp.h&amp;quot;&amp;lt;/span&amp;gt; instead.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius&lt;br /&gt;
* '''Output''': FPFH descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/fpfh_estimation.php Fast Point Feature Histograms (FPFH) descriptors]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://files.rbrusu.com/publications/Rusu09ICRA.pdf Fast Point Feature Histograms (FPFH) for 3D Registration] (Radu Bogdan Rusu et al., 2009)&lt;br /&gt;
** [http://files.rbrusu.com/publications/RusuPhDThesis.pdf Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments] (Radu Bogdan Rusu, 2009, section 4.5, page 57)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_f_p_f_h_estimation.html pcl::FPFHEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_f_p_f_h_estimation_o_m_p.html pcl::FPFHEstimationOMP]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_FPFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RSD==&lt;br /&gt;
&lt;br /&gt;
The Radius-Based Surface Descriptor encodes the radial relationship of a point and its neighborhood. For every pair formed by the keypoint and one of its neighbors, the algorithm computes the distance between them and the difference between their normals. Then, assuming that both points lie on the surface of a sphere, that sphere is found by fitting not only the points but also the normals (otherwise, there would be infinitely many candidate spheres). Finally, of all the point-neighbor spheres, only the ones with the maximum and minimum radii are kept and saved to the descriptor of that point.&lt;br /&gt;
&lt;br /&gt;
As you may have deduced already, when two points lie on a flat surface, the sphere radius becomes infinite. If, on the other hand, they lie on the curved face of a cylinder, the radius will be roughly that of the cylinder. This is what allows RSD to tell objects apart. The algorithm takes a parameter that sets the maximum radius beyond which the points are considered to be part of a plane.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:RSD_sphere.png | Sphere that fits two points with their normals (image from [http://ias.in.tum.de/_media/events/rgbd2011/marton.pdf original presentation]).&lt;br /&gt;
File:RSD_values.png | RSD values for a cloud; points on a flat surface have maximum values (image from [http://ias.in.tum.de/_media/events/rgbd2011/marton.pdf original presentation]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is the code for computing the RSD descriptor:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rsd.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the RSD descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PrincipalRadiiRSD&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::PrincipalRadiiRSD&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// RSD estimation object.&lt;br /&gt;
	pcl::RSDEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::PrincipalRadiiRSD&amp;gt; rsd;&lt;br /&gt;
	rsd.setInputCloud(cloud);&lt;br /&gt;
	rsd.setInputNormals(normals);&lt;br /&gt;
	rsd.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	rsd.setRadiusSearch(0.05);&lt;br /&gt;
	// Plane radius. Any radius larger than this is considered infinite (a plane).&lt;br /&gt;
	rsd.setPlaneRadius(0.1);&lt;br /&gt;
	// Do we want to save the full distance-angle histograms?&lt;br /&gt;
	rsd.setSaveHistograms(false);&lt;br /&gt;
	&lt;br /&gt;
	rsd.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Optionally, you can use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setSaveHistograms()&amp;quot;&amp;lt;/span&amp;gt; function to enable the saving of the full distance-angle histograms, and then use &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getHistograms()&amp;quot;&amp;lt;/span&amp;gt; to retrieve them.&lt;br /&gt;
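&lt;br /&gt;
For example (a sketch, not a complete program: it reuses the &amp;quot;rsd&amp;quot; and &amp;quot;descriptors&amp;quot; objects from the example above, and assumes that &amp;quot;getHistograms()&amp;quot; returns a shared pointer to a vector with one histogram per point):&lt;br /&gt;
&lt;br /&gt;

```cpp
// Sketch: enabling and retrieving the full distance-angle histograms.
// "rsd" and "descriptors" are assumed to be the objects from the
// example above.
rsd.setSaveHistograms(true);
rsd.compute(*descriptors);

// Assumed to return a shared pointer to a vector holding one
// histogram matrix per point.
auto histograms = rsd.getHistograms();
std::cout << "Saved " << histograms->size() << " histograms." << std::endl;
```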
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius, Maximum Radius, [Subdivisions], [Save full histograms]&lt;br /&gt;
* '''Output''': RSD descriptors&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.391.7398&amp;amp;rep=rep1&amp;amp;type=pdf General 3D Modelling of Novel Objects from a Single View] (Zoltan-Csaba Marton et al., 2010)&lt;br /&gt;
** [https://ias.cs.tum.edu/_media/spezial/bib/marton11ijrr.pdf Combined 2D-3D Categorization and Classification for Multimodal Perception Systems] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_r_s_d_estimation.html pcl::RSDEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RSD.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==3DSC==&lt;br /&gt;
&lt;br /&gt;
The 3D Shape Context is a descriptor that extends its existing 2D counterpart to the third dimension. It works by creating a support structure (a sphere, to be precise) centered at the point we are computing the descriptor for, with the given search radius. The &amp;quot;north pole&amp;quot; of that sphere (the notion of &amp;quot;up&amp;quot;) is aligned with the normal at that point. Then, the sphere is divided into 3D regions or bins. The divisions are equally spaced in the first two coordinates (azimuth and elevation), but logarithmically spaced in the third (the radial dimension), so the bins are smaller towards the center. A minimum radius can be specified to prevent very small bins, which would be too sensitive to small changes in the surface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:3DSC_support_structure.png|thumb|center|200px|Support structure to compute the 3DSC for a point (image from [http://www.cs.jhu.edu/~misha/Papers/Frome04.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For each bin, a weighted count is accumulated for every neighboring point that lies within. The weight depends on the volume of the bin and the local point density (number of points around the current neighbor). This gives the descriptor some degree of resolution invariance.&lt;br /&gt;
&lt;br /&gt;
We have mentioned that the sphere is aligned with the normal. This still leaves one degree of freedom (only two axes have been locked; the azimuth remains free), so the descriptor as described so far is not rotation invariant. To overcome this (so the same point in two different clouds gets the same value), the support sphere is rotated around the normal ''N'' times (where ''N'' matches the number of azimuth divisions) and the process is repeated for each rotation, giving a total of ''N'' descriptors for that point.&lt;br /&gt;
&lt;br /&gt;
You can compute the 3DSC descriptor the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/3dsc.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the 3DSC descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ShapeContext1980&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::ShapeContext1980&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// 3DSC estimation object.&lt;br /&gt;
	pcl::ShapeContext3DEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::ShapeContext1980&amp;gt; sc3d;&lt;br /&gt;
	sc3d.setInputCloud(cloud);&lt;br /&gt;
	sc3d.setInputNormals(normals);&lt;br /&gt;
	sc3d.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. It will also be the radius of the support sphere.&lt;br /&gt;
	sc3d.setRadiusSearch(0.05);&lt;br /&gt;
	// The minimal radius value for the search sphere, to avoid being too sensitive&lt;br /&gt;
	// in bins close to the center of the sphere.&lt;br /&gt;
	sc3d.setMinimalRadius(0.05 / 10.0);&lt;br /&gt;
	// Radius used to compute the local point density for the neighbors&lt;br /&gt;
	// (the density is the number of points within that radius).&lt;br /&gt;
	sc3d.setPointDensityRadius(0.05 / 5.0);&lt;br /&gt;
&lt;br /&gt;
	sc3d.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius, Minimal radius, Point density radius&lt;br /&gt;
* '''Output''': 3DSC descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.cs.jhu.edu/~misha/Papers/Frome04.pdf Recognizing Objects in Range Data Using Regional Point Descriptors] (Andrea Frome et al., 2004)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_shape_context3_d_estimation.html pcl::ShapeContext3DEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_3DSC.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===USC===&lt;br /&gt;
&lt;br /&gt;
The Unique Shape Context descriptor extends the 3DSC by defining a local reference frame that provides a unique orientation for each point. This not only improves the accuracy of the descriptor, but also reduces its size, as computing multiple descriptors to account for orientation is no longer necessary.&lt;br /&gt;
&lt;br /&gt;
You can check the second publication listed below to learn more about how the LRF is computed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/usc.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the USC descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::UniqueShapeContext1960&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::UniqueShapeContext1960&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// USC estimation object.&lt;br /&gt;
	pcl::UniqueShapeContext&amp;lt;pcl::PointXYZ, pcl::UniqueShapeContext1960, pcl::ReferenceFrame&amp;gt; usc;&lt;br /&gt;
	usc.setInputCloud(cloud);&lt;br /&gt;
	// Search radius, to look for neighbors. It will also be the radius of the support sphere.&lt;br /&gt;
	usc.setRadiusSearch(0.05);&lt;br /&gt;
	// The minimal radius value for the search sphere, to avoid being too sensitive&lt;br /&gt;
	// in bins close to the center of the sphere.&lt;br /&gt;
	usc.setMinimalRadius(0.05 / 10.0);&lt;br /&gt;
	// Radius used to compute the local point density for the neighbors&lt;br /&gt;
	// (the density is the number of points within that radius).&lt;br /&gt;
	usc.setPointDensityRadius(0.05 / 5.0);&lt;br /&gt;
	// Set the radius to compute the Local Reference Frame.&lt;br /&gt;
	usc.setLocalRadius(0.05);&lt;br /&gt;
&lt;br /&gt;
	usc.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk). For 1.7 and below, change UniqueShapeContext1960 to ShapeContext1980, and edit CMakeLists.txt.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius, Minimal radius, Point density radius, Local radius&lt;br /&gt;
* '''Output''': USC descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/3dor10.pdf Unique Shape Context for 3D Data Description] (Federico Tombari et al., 2010)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_unique_shape_context.html pcl::UniqueShapeContext]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_USC.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==SHOT==&lt;br /&gt;
&lt;br /&gt;
SHOT stands for Signature of Histograms of Orientations. Like 3DSC, it encodes information about the topology (surface) within a spherical support structure. This sphere is divided into 32 bins or volumes, with 8 divisions along the azimuth, 2 along the elevation, and 2 along the radius. For every volume, a one-dimensional local histogram is computed. The variable chosen is the angle between the normal of the keypoint and the normal of each point within that volume (to be precise, its cosine, which was found to be more suitable).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SHOT_support_structure.png|thumb|center|200px|Support structure to compute SHOT. Only 4 azimuth divisions are shown for clarity (image from [http://vision.deis.unibo.it/fede/papers/eccv10.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When all the local histograms have been computed, they are stitched together into the final descriptor. Like the USC descriptor, SHOT makes use of a local reference frame, making it rotation invariant. It is also robust to noise and clutter.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/shot.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the SHOT descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// SHOT estimation object.&lt;br /&gt;
	pcl::SHOTEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::SHOT352&amp;gt; shot;&lt;br /&gt;
	shot.setInputCloud(cloud);&lt;br /&gt;
	shot.setInputNormals(normals);&lt;br /&gt;
	// The radius that defines which of the keypoint's neighbors are described.&lt;br /&gt;
	// If too large, there may be clutter, and if too small, not enough points may be found.&lt;br /&gt;
	shot.setRadiusSearch(0.02);&lt;br /&gt;
&lt;br /&gt;
	shot.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Like with FPFH, a multithreading-optimized variant is available as &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOTEstimationOMP&amp;quot;&amp;lt;/span&amp;gt;, which makes use of OpenMP; you need to include the header &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;shot_omp.h&amp;quot;&amp;lt;/span&amp;gt; instead. Also, another variant that uses texture for matching is available, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOTColorEstimation&amp;quot;&amp;lt;/span&amp;gt;, with an optimized version too (see the second publication for more details). It outputs a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOT1344&amp;quot;&amp;lt;/span&amp;gt; descriptor.&lt;br /&gt;
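&lt;br /&gt;
As a sketch, swapping in the color-aware OpenMP variant only requires changing a few lines of the example above (here, &amp;quot;cloudRGB&amp;quot; is a hypothetical PointXYZRGB cloud loaded beforehand, and &amp;quot;normals&amp;quot; its estimated normals):&lt;br /&gt;
&lt;br /&gt;

```cpp
#include <pcl/features/shot_omp.h>

// Sketch: color-aware, OpenMP-accelerated SHOT. The input cloud must
// contain RGB data, and the output type becomes SHOT1344.
// "cloudRGB" and "normals" are hypothetical, assumed computed earlier.
pcl::PointCloud<pcl::SHOT1344>::Ptr colorDescriptors(new pcl::PointCloud<pcl::SHOT1344>());
pcl::SHOTColorEstimationOMP<pcl::PointXYZRGB, pcl::Normal, pcl::SHOT1344> shot;
shot.setInputCloud(cloudRGB);
shot.setInputNormals(normals);
shot.setRadiusSearch(0.02);
shot.compute(*colorDescriptors);
```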
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Radius&lt;br /&gt;
* '''Output''': SHOT descriptors&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/eccv10.pdf Unique Signatures of Histograms for Local Surface Description] (Federico Tombari et al., 2010)&lt;br /&gt;
** [http://www.vision.deis.unibo.it/fede/papers/icip11.pdf A Combined Texture-Shape Descriptor for Enhanced 3D Feature Matching] (Federico Tombari et al., 2011)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_estimation.html pcl::SHOTEstimation], &lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_color_estimation.html pcl::SHOTColorEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_estimation_o_m_p.html pcl::SHOTEstimationOMP]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_color_estimation_o_m_p.html pcl::SHOTColorEstimationOMP]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_SHOT.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Spin image==&lt;br /&gt;
&lt;br /&gt;
The Spin Image (SI) is the oldest descriptor covered here: it has been around since 1997, but it still sees use in certain applications. It was originally designed to describe surfaces made of vertices, edges and polygons, and has since been adapted for point clouds. The descriptor is unlike the others in that its output resembles an image, which can be compared with another using standard image-matching techniques.&lt;br /&gt;
&lt;br /&gt;
The support structure used is a cylinder, centered at the point, with a given radius and height, and aligned with the normal. This cylinder is divided radially and vertically into volumes. For each one, the number of neighbors lying inside is added up, eventually producing a descriptor. Weighting and interpolation are used to improve the result. The final descriptor can be seen as a grayscale image where dark areas correspond to volumes with higher point density.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Spin images.png|thumb|center|400px|Spin images computed for 3 points of a model (image from [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.4190&amp;amp;rep=rep1&amp;amp;type=pdf original thesis]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/spin_image.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;153&amp;gt; SpinImage;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the spin image for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;SpinImage&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;SpinImage&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Spin image estimation object.&lt;br /&gt;
	pcl::SpinImageEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, SpinImage&amp;gt; si;&lt;br /&gt;
	si.setInputCloud(cloud);&lt;br /&gt;
	si.setInputNormals(normals);&lt;br /&gt;
	// Radius of the support cylinder.&lt;br /&gt;
	si.setRadiusSearch(0.02);&lt;br /&gt;
	// Set the resolution of the spin image (the number of bins along one dimension).&lt;br /&gt;
	// Note: you must change the output histogram size to reflect this.&lt;br /&gt;
	si.setImageWidth(8);&lt;br /&gt;
&lt;br /&gt;
	si.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Spin Image estimation object provides more methods for tuning the estimation, so checking the API is recommended.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Radius, Image resolution&lt;br /&gt;
* '''Output''': Spin images&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.4190&amp;amp;rep=rep1&amp;amp;type=pdf Spin-Images: A Representation for 3-D Surface Matching] (Andrew Edie Johnson, 1997)&lt;br /&gt;
** [http://www.cs.jhu.edu/~misha/Papers/Johnson99.pdf Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes] (Andrew Edie Johnson and Martial Hebert, 1999)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_spin_image_estimation.html pcl::SpinImageEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_spin_image.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RIFT==&lt;br /&gt;
&lt;br /&gt;
The Rotation-Invariant Feature Transform, like the spin image, borrows some concepts from 2D features, in this case from the Scale-Invariant Feature Transform ([https://en.wikipedia.org/wiki/Scale-invariant_feature_transform SIFT]). It is the only descriptor covered here that requires intensity information, which can be obtained from the RGB color values. This means, of course, that you will not be able to use RIFT with plain XYZ clouds; you also need the texture.&lt;br /&gt;
&lt;br /&gt;
In the first step, a circular patch (with the given radius) is fitted to the surface the point lies on. The patch is divided into concentric rings, according to the chosen distance bin size. Then, a histogram is populated with all of the point's neighbors lying inside a sphere centered at that point with the mentioned radius, considering both the distance and the orientation of the intensity gradient at each neighbor. To make the descriptor rotation invariant, the angle is measured between the gradient orientation and the vector pointing outward from the center of the patch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:RIFT.png|thumb|center|600px|RIFT feature values at 3 different locations in the descriptor (image from [http://www.cs.illinois.edu/~slazebni/publications/pami05.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The authors' original implementation uses 4 rings and 8 histogram orientations, which produces a descriptor of size 32. RIFT is not robust to texture flipping, though this was never considered a big issue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types_conversion.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/intensity_gradient.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rift.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;32&amp;gt; RIFT32;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud with color information.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cloudColor(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
	// Object for storing the point cloud with intensity value.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZI&amp;gt;::Ptr cloudIntensity(new pcl::PointCloud&amp;lt;pcl::PointXYZI&amp;gt;);&lt;br /&gt;
	// Object for storing the intensity gradients.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::IntensityGradient&amp;gt;::Ptr gradients(new pcl::PointCloud&amp;lt;pcl::IntensityGradient&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the RIFT descriptor for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;RIFT32&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;RIFT32&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGB&amp;gt;(argv[1], *cloudColor) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Convert the RGB to intensity.&lt;br /&gt;
	pcl::PointCloudXYZRGBtoXYZI(*cloudColor, *cloudIntensity);&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZI, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloudIntensity);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZI&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZI&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Compute the intensity gradients.&lt;br /&gt;
	pcl::IntensityGradientEstimation &amp;lt; pcl::PointXYZI, pcl::Normal, pcl::IntensityGradient,&lt;br /&gt;
		pcl::common::IntensityFieldAccessor&amp;lt;pcl::PointXYZI&amp;gt; &amp;gt; ge;&lt;br /&gt;
	ge.setInputCloud(cloudIntensity);&lt;br /&gt;
	ge.setInputNormals(normals);&lt;br /&gt;
	ge.setRadiusSearch(0.03);&lt;br /&gt;
	ge.compute(*gradients);&lt;br /&gt;
&lt;br /&gt;
	// RIFT estimation object.&lt;br /&gt;
	pcl::RIFTEstimation&amp;lt;pcl::PointXYZI, pcl::IntensityGradient, RIFT32&amp;gt; rift;&lt;br /&gt;
	rift.setInputCloud(cloudIntensity);&lt;br /&gt;
	rift.setSearchMethod(kdtree);&lt;br /&gt;
	// Set the intensity gradients to use.&lt;br /&gt;
	rift.setInputGradient(gradients);&lt;br /&gt;
	// Radius, to get all neighbors within.&lt;br /&gt;
	rift.setRadiusSearch(0.02);&lt;br /&gt;
	// Set the number of bins to use in the distance dimension.&lt;br /&gt;
	rift.setNrDistanceBins(4);&lt;br /&gt;
	// Set the number of bins to use in the gradient orientation dimension.&lt;br /&gt;
	rift.setNrGradientBins(8);&lt;br /&gt;
	// Note: you must change the output histogram size to reflect the previous values.&lt;br /&gt;
&lt;br /&gt;
	rift.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Intensity gradients, Radius, Distance bins, Gradient bins&lt;br /&gt;
* '''Output''': RIFT descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.cs.illinois.edu/~slazebni/publications/pami05.pdf A Sparse Texture Representation Using Local Affine Regions] (Svetlana Lazebnik et al., 2005)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_r_i_f_t_estimation.html pcl::RIFTEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_intensity_gradient_estimation.html pcl::IntensityGradientEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_intensity_gradient.html pcl::IntensityGradient]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RIFT.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==NARF==&lt;br /&gt;
&lt;br /&gt;
The Normal Aligned Radial Feature is the only descriptor covered here that does not take a point cloud as input: it works with range images instead. A range image is an ordinary RGB image in which each pixel encodes the distance to the corresponding point as a color in the [https://en.wikipedia.org/wiki/Visible_spectrum visible light spectrum]: points closer to the camera appear violet, while points near the maximum sensor range appear red.&lt;br /&gt;
&lt;br /&gt;
NARF also requires us to find suitable keypoints to compute the descriptor for. NARF keypoints are located near an object's corners, which in turn requires finding the borders (transitions from foreground to background), trivial to find in a range image. Because of this lengthy pipeline, I will describe the whole process in separate sections.&lt;br /&gt;
&lt;br /&gt;
===Obtaining a range image===&lt;br /&gt;
&lt;br /&gt;
Because we always work with point clouds, I will now explain how to convert one into a range image, so it can be used for the NARF descriptor. PCL provides a couple of handy classes to perform the conversion, provided you fill in the camera parameters correctly.&lt;br /&gt;
&lt;br /&gt;
A range image can be created in two ways. First, we can use spherical projection, which gives an image similar to the ones produced by a LIDAR sensor. Second, we can use planar projection, which is better suited to camera-like sensors such as the Kinect or the Xtion, and does not suffer from the characteristic distortion of the former.&lt;br /&gt;
&lt;br /&gt;
====Spherical projection====&lt;br /&gt;
&lt;br /&gt;
The following code will take a point cloud and create a range image from it, using spherical projection:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Parameters needed by the range image object:&lt;br /&gt;
&lt;br /&gt;
	// Angular resolution is the angular distance between pixels.&lt;br /&gt;
	// Kinect: 57° horizontal FOV, 43° vertical FOV, 640x480 (chosen here).&lt;br /&gt;
	// Xtion: 58° horizontal FOV, 45° vertical FOV, 640x480.&lt;br /&gt;
	float angularResolutionX = (float)(57.0f / 640.0f * (M_PI / 180.0f));&lt;br /&gt;
	float angularResolutionY = (float)(43.0f / 480.0f * (M_PI / 180.0f));&lt;br /&gt;
	// Maximum horizontal and vertical angles. For example, for a full panoramic scan,&lt;br /&gt;
	// the first would be 360°. Choosing values that match the real sensor will&lt;br /&gt;
	// decrease computation time. If the values are bigger than the real ones,&lt;br /&gt;
	// the image will be automatically cropped to discard empty zones.&lt;br /&gt;
	float maxAngleX = (float)(60.0f * (M_PI / 180.0f));&lt;br /&gt;
	float maxAngleY = (float)(50.0f * (M_PI / 180.0f));&lt;br /&gt;
	// Sensor pose. Thankfully, the cloud includes the data.&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	// Noise level. If greater than 0, values of neighboring points will be averaged.&lt;br /&gt;
	// This would set the search radius (e.g., 0.03 == 3cm).&lt;br /&gt;
	float noiseLevel = 0.0f;&lt;br /&gt;
	// Minimum range. If set, any point closer to the sensor than this will be ignored.&lt;br /&gt;
	float minimumRange = 0.0f;&lt;br /&gt;
	// Border size. If greater than 0, a border of &amp;quot;unobserved&amp;quot; points will be left&lt;br /&gt;
	// in the image when it is cropped.&lt;br /&gt;
	int borderSize = 1;&lt;br /&gt;
&lt;br /&gt;
	// Range image object.&lt;br /&gt;
	pcl::RangeImage rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloud(*cloud, angularResolutionX, angularResolutionY,&lt;br /&gt;
									maxAngleX, maxAngleY, sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
									noiseLevel, minimumRange, borderSize);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the image.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;Range image&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImage);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here you can see an example of the output range image:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_spherical.png|thumb|center|400px|Range image of a point cloud, using spherical projection.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Angular resolution, Maximum angle, Sensor pose, Coordinate frame, Noise level, Maximum range, Border size&lt;br /&gt;
* '''Output''': Range image (spherical projection)&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_creation.php How to create a range image from a point cloud]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_visualization.php How to visualize a range image]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image.html pcl::RangeImage]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_spherical.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Planar projection====&lt;br /&gt;
&lt;br /&gt;
As mentioned, planar projection will give better results with clouds taken from a depth camera:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Parameters needed by the planar range image object:&lt;br /&gt;
&lt;br /&gt;
	// Image size. Both Kinect and Xtion work at 640x480.&lt;br /&gt;
	int imageSizeX = 640;&lt;br /&gt;
	int imageSizeY = 480;&lt;br /&gt;
	// Center of projection. Here, we choose the middle of the image.&lt;br /&gt;
	float centerX = 640.0f / 2.0f;&lt;br /&gt;
	float centerY = 480.0f / 2.0f;&lt;br /&gt;
	// Focal length. The value seen here has been taken from the original depth images.&lt;br /&gt;
	// It is safe to use the same value vertically and horizontally.&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	// Sensor pose. Thankfully, the cloud includes the data.&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	// Noise level. If greater than 0, values of neighboring points will be averaged.&lt;br /&gt;
	// This would set the search radius (e.g., 0.03 == 3cm).&lt;br /&gt;
	float noiseLevel = 0.0f;&lt;br /&gt;
	// Minimum range. If set, any point closer to the sensor than this will be ignored.&lt;br /&gt;
	float minimumRange = 0.0f;&lt;br /&gt;
&lt;br /&gt;
	// Planar range image object.&lt;br /&gt;
	pcl::RangeImagePlanar rangeImagePlanar;&lt;br /&gt;
	rangeImagePlanar.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the image.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;Planar range image&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImagePlanar);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_planar.png|thumb|center|400px|Range image of a point cloud, using planar projection.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Image size, Projection center, Focal length, Sensor pose, Coordinate frame, Noise level, Maximum range&lt;br /&gt;
* '''Output''': Range image (planar projection)&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_creation.php How to create a range image from a point cloud]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_visualization.php How to visualize a range image]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image_planar.html pcl::RangeImagePlanar]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_planar.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you prefer to do the conversion in real time while you inspect the cloud, PCL ships with an [https://github.com/PointCloudLibrary/pcl/tree/master/doc/tutorials/content/sources/openni_range_image_visualization example] that fetches an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;openni_wrapper::DepthImage&amp;quot;&amp;lt;/span&amp;gt; from an OpenNI device and creates the range image from it. You can adapt the code of the [[PCL/OpenNI_tutorial_1:_Installing_and_testing#Testing | example]] from tutorial 1 to save it to disk with the function [http://docs.pointclouds.org/trunk/group__io.html#ga7291a029cdcde32ca3639d07dc6491b9 pcl::io::saveRangeImagePlanarFilePNG()].&lt;br /&gt;
&lt;br /&gt;
===Extracting borders===&lt;br /&gt;
&lt;br /&gt;
NARF keypoints are located near the edges of objects in the range image, so in order to find them, we first have to extract the borders. A border is defined as an abrupt change from foreground to background. In a range image, this can be easily seen because there is a &amp;quot;jump&amp;quot; in the depth value of two adjacent pixels.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_border_detection.png|thumb|center|400px|Border detection on a range image (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are three types of borders. ''Object borders'' consist of the pixels (or points) located on the very edge of an object (the outermost points still belonging to the object). ''Shadow borders'' are points in the background on the edge of occlusions (empty areas in the background due to the objects in front covering them). Notice that, when the cloud is seen from the sensor's viewpoint, object and shadow points will seem adjacent. Finally, ''veil points'' are points interpolated between the previous two which appear in scans taken with LIDAR sensors, so we do not have to worry about them here.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:Range_image_border_type.png | Types of borders (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
File:Range_image_borders_example.png | Example of object and shadow borders on a cloud.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The algorithm compares each point's depth with that of its neighbors; if a big difference is found, it must be due to a border. The points closer to the sensor are marked as object borders, and the farther ones as shadow borders.&lt;br /&gt;
&lt;br /&gt;
PCL provides a class for extracting borders of a range image:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the borders.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::BorderDescription&amp;gt;::Ptr borders(new pcl::PointCloud&amp;lt;pcl::BorderDescription&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Border extractor object.&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor(&amp;amp;rangeImage);&lt;br /&gt;
&lt;br /&gt;
	borderExtractor.compute(*borders);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the borders.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer* viewer = NULL;&lt;br /&gt;
	viewer = pcl::visualization::RangeImageVisualizer::getRangeImageBordersWidget(rangeImage,&lt;br /&gt;
			 -std::numeric_limits&amp;lt;float&amp;gt;::infinity(),&lt;br /&gt;
			 std::numeric_limits&amp;lt;float&amp;gt;::infinity(),&lt;br /&gt;
			 false, *borders, &amp;quot;Borders&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_borders.png|thumb|center|400px|Borders found on the range image.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can use the extractor's &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getParameters()&amp;quot;&amp;lt;/span&amp;gt; function to get a [http://docs.pointclouds.org/trunk/structpcl_1_1_range_image_border_extractor_1_1_parameters.html pcl::RangeImageBorderExtractor::Parameters] struct with the settings that will be used.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image&lt;br /&gt;
* '''Output''': Borders&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/range_image_border_extraction.php How to extract borders from range images]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image_border_extractor.html pcl::RangeImageBorderExtractor]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_borders.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Finding keypoints===&lt;br /&gt;
&lt;br /&gt;
Citing the [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original publication]:&lt;br /&gt;
&lt;br /&gt;
''&amp;quot;We have the following requirements for our interest point extraction procedure:&lt;br /&gt;
&lt;br /&gt;
# ''It must take information about borders and the surface structure into account.''&lt;br /&gt;
# ''It must select positions that can be reliably detected even if the object is observed from another perspective.''&lt;br /&gt;
# ''The points must be on positions that provide stable areas for normal estimation or the descriptor calculation in general.&amp;quot;''&lt;br /&gt;
&lt;br /&gt;
The procedure is the following: for every point in the range image, a score is computed that conveys how much the surface changes in its neighborhood (this is tuned with the ''support size'' σ, the diameter of the sphere used to find neighboring points). The dominant direction of this change is also computed, and compared with those of the neighbors to determine how stable the point is (if the directions are very different, the point is not stable, because the surrounding surface changes a lot). Points near an object's corners (but not exactly on the very edge) make good keypoints: distinctive, yet stable enough.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:NARF_keypoints.png | NARF keypoints (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
File:NARF_keypoints_support_sizes.png | Interest regions with a support size of 20cm (up) and 1m (down) (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In PCL, NARF keypoints can be found this way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/narf_keypoint.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the keypoints' indices.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt;::Ptr keypoints(new pcl::PointCloud&amp;lt;int&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor;&lt;br /&gt;
	// Keypoint detection object.&lt;br /&gt;
	pcl::NarfKeypoint detector(&amp;amp;borderExtractor);&lt;br /&gt;
	detector.setRangeImage(&amp;amp;rangeImage);&lt;br /&gt;
	// The support size influences how big the surface of interest will be,&lt;br /&gt;
	// when finding keypoints from the border information.&lt;br /&gt;
	detector.getParameters().support_size = 0.2f;&lt;br /&gt;
&lt;br /&gt;
	detector.compute(*keypoints);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the keypoints.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;NARF keypoints&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImage);&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; keypoints-&amp;gt;points.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		viewer.markPoint(keypoints-&amp;gt;points[i] % rangeImage.width,&lt;br /&gt;
						 keypoints-&amp;gt;points[i] / rangeImage.width,&lt;br /&gt;
						 // Set the color of the pixel to red (the background&lt;br /&gt;
						 // circle is already that color). All other parameters&lt;br /&gt;
						 // are left untouched, check the API for more options.&lt;br /&gt;
						 pcl::visualization::Vector3ub(255, 0, 0));&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_NARF_keypoints.png|thumb|center|400px|NARF keypoints found on the range image.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image, Border extractor, Support Size&lt;br /&gt;
* '''Output''': NARF keypoints&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/narf_keypoint_extraction.php How to extract NARF keypoint from a range image]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_narf_keypoint.html pcl::NarfKeypoint]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_NARF_keypoints.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing the descriptor===&lt;br /&gt;
&lt;br /&gt;
We have created the range image from a point cloud, and we have extracted the borders in order to find good keypoints. Now it is time to compute the NARF descriptor for each keypoint.&lt;br /&gt;
&lt;br /&gt;
The NARF descriptor encodes information about surface changes around a point. First, a local range patch is created around the point: it is like a small range image centered at that point, aligned with the normal (as if we were looking at the point along the normal direction). Then, a star pattern with ''n'' beams is overlaid onto the patch, also centered at the point. For every beam, a value is computed that reflects how much the surface under it changes. The stronger the change, and the closer to the center it occurs, the higher the final value. The ''n'' resulting values compose the final descriptor.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:NARF_descriptor.png|thumb|center|600px|Computing the NARF descriptor for a keypoint (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As described so far, the descriptor is not invariant to rotations around the normal. To achieve this, the full 360 degrees are binned into a histogram. The value of each bin is computed from the descriptor values according to the angle. Then, the bin with the highest value is considered the dominant orientation, and the descriptor is shifted accordingly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/narf_keypoint.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/narf_descriptor.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the keypoints' indices.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt;::Ptr keypoints(new pcl::PointCloud&amp;lt;int&amp;gt;);&lt;br /&gt;
	// Object for storing the NARF descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Narf36&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::Narf36&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Extract the keypoints.&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor;&lt;br /&gt;
	pcl::NarfKeypoint detector(&amp;amp;borderExtractor);&lt;br /&gt;
	detector.setRangeImage(&amp;amp;rangeImage);&lt;br /&gt;
	detector.getParameters().support_size = 0.2f;&lt;br /&gt;
	detector.compute(*keypoints);&lt;br /&gt;
&lt;br /&gt;
	// The NARF estimator needs the indices in a vector, not a cloud.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; keypoints2;&lt;br /&gt;
	keypoints2.resize(keypoints-&amp;gt;points.size());&lt;br /&gt;
	for (unsigned int i = 0; i &amp;lt; keypoints-&amp;gt;size(); ++i)&lt;br /&gt;
		keypoints2[i] = keypoints-&amp;gt;points[i];&lt;br /&gt;
	// NARF estimation object.&lt;br /&gt;
	pcl::NarfDescriptor narf(&amp;amp;rangeImage, &amp;amp;keypoints2);&lt;br /&gt;
	// Support size: choose the same value you used for keypoint extraction.&lt;br /&gt;
	narf.getParameters().support_size = 0.2f;&lt;br /&gt;
	// If true, the rotation invariant version of NARF will be used. The histogram&lt;br /&gt;
	// will be shifted according to the dominant orientation to provide robustness to&lt;br /&gt;
	// rotations around the normal.&lt;br /&gt;
	narf.getParameters().rotation_invariant = true;&lt;br /&gt;
&lt;br /&gt;
	narf.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image, Key points, Support Size&lt;br /&gt;
* '''Output''': NARF descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/narf_feature_extraction.php How to extract NARF Features from a range image]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_narf_descriptor.html pcl::NarfDescriptor]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_NARF_descriptor.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RoPS==&lt;br /&gt;
&lt;br /&gt;
The Rotational Projection Statistics (RoPS) feature is a bit different from the other descriptors because it works with a triangle mesh, so a prior triangulation step is needed to generate this mesh from the cloud. Apart from that, most concepts are similar.&lt;br /&gt;
&lt;br /&gt;
In order to compute RoPS for a keypoint, the local surface is cropped according to a support radius, so only points and triangles lying inside are taken into account. Then, a local reference frame (LRF) is computed, giving the descriptor its rotational invariance. A coordinate system is created with the point as the origin, and the axes aligned with the LRF. Then, for every axis, several steps are performed.&lt;br /&gt;
&lt;br /&gt;
First, the local surface is rotated around the current axis. The angle is determined by one of the parameters, which sets the number of rotations. Then, all points of the local surface are projected onto the XY, XZ and YZ planes. For each projection, statistical information about the distribution of the projected points is computed, and the results are concatenated to form the final descriptor.&lt;br /&gt;
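&lt;br /&gt;
The length of the final descriptor follows directly from these parameters. As a hedged, self-contained sketch (the breakdown into 5 statistical values per distribution matrix is our assumption, based on the original paper, and the function name is ours):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Sketch: expected length of the RoPS descriptor for a given number of rotations.&lt;br /&gt;
// Assumed: 5 statistical values per distribution matrix (4 central moments&lt;br /&gt;
// plus the Shannon entropy, as in the original paper). The number of&lt;br /&gt;
// partition bins affects the matrix size, not the descriptor length.&lt;br /&gt;
int ropsDescriptorLength(int numberOfRotations)&lt;br /&gt;
{&lt;br /&gt;
	const int axes = 3;             // X, Y and Z of the local reference frame.&lt;br /&gt;
	const int projectionPlanes = 3; // XY, XZ and YZ.&lt;br /&gt;
	const int statsPerMatrix = 5;   // Statistical values per matrix.&lt;br /&gt;
	return axes * numberOfRotations * projectionPlanes * statsPerMatrix;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// ropsDescriptorLength(3) yields 135, which matches the ROPS135 typedef below.&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;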
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types_conversion.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/gp3.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rops_estimation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;135&amp;gt; ROPS135;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing both the points and the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr cloudNormals(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
	// Object for storing the ROPS descriptor for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;ROPS135&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;ROPS135&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Perform triangulation.&lt;br /&gt;
	pcl::concatenateFields(*cloud, *normals, *cloudNormals);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointNormal&amp;gt;::Ptr kdtree2(new pcl::search::KdTree&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
	kdtree2-&amp;gt;setInputCloud(cloudNormals);&lt;br /&gt;
	pcl::GreedyProjectionTriangulation&amp;lt;pcl::PointNormal&amp;gt; triangulation;&lt;br /&gt;
	pcl::PolygonMesh triangles;&lt;br /&gt;
	triangulation.setSearchRadius(0.025);&lt;br /&gt;
	triangulation.setMu(2.5);&lt;br /&gt;
	triangulation.setMaximumNearestNeighbors(100);&lt;br /&gt;
	triangulation.setMaximumSurfaceAngle(M_PI / 4); // 45 degrees.&lt;br /&gt;
	triangulation.setNormalConsistency(false);&lt;br /&gt;
	triangulation.setMinimumAngle(M_PI / 18); // 10 degrees.&lt;br /&gt;
	triangulation.setMaximumAngle(2 * M_PI / 3); // 120 degrees.&lt;br /&gt;
	triangulation.setInputCloud(cloudNormals);&lt;br /&gt;
	triangulation.setSearchMethod(kdtree2);&lt;br /&gt;
	triangulation.reconstruct(triangles);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should only compute descriptors for chosen keypoints. It has&lt;br /&gt;
	// been omitted here for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// RoPs estimation object.&lt;br /&gt;
	pcl::ROPSEstimation&amp;lt;pcl::PointXYZ, ROPS135&amp;gt; rops;&lt;br /&gt;
	rops.setInputCloud(cloud);&lt;br /&gt;
	rops.setSearchMethod(kdtree);&lt;br /&gt;
	rops.setRadiusSearch(0.03);&lt;br /&gt;
	rops.setTriangles(triangles.polygons);&lt;br /&gt;
	// Number of partition bins that is used for distribution matrix calculation.&lt;br /&gt;
	rops.setNumberOfPartitionBins(5);&lt;br /&gt;
	// The greater the number of rotations is, the bigger the resulting descriptor.&lt;br /&gt;
	// Make sure to change the histogram size accordingly.&lt;br /&gt;
	rops.setNumberOfRotations(3);&lt;br /&gt;
	// Support radius that is used to crop the local surface of the point.&lt;br /&gt;
	rops.setSupportRadius(0.025);&lt;br /&gt;
&lt;br /&gt;
	rops.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Triangles, Search method, Support radius, Number of rotations, Number of partition bins&lt;br /&gt;
* '''Output''': RoPS descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://arxiv.org/pdf/1304.3192v1.pdf Rotational Projection Statistics for 3D Local Surface Description and Object Recognition] (Yulan Guo et al., 2013)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_r_o_p_s_estimation.html pcl::ROPSEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RoPS.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Global descriptors=&lt;br /&gt;
&lt;br /&gt;
Global descriptors encode object geometry. They are not computed for individual points, but for a whole cluster that represents an object. Because of this, a preprocessing step ([[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Segmentation|segmentation]]) is required, in order to retrieve possible candidates.&lt;br /&gt;
&lt;br /&gt;
Global descriptors are used for object recognition and classification, geometric analysis (object type, shape...), and pose estimation.&lt;br /&gt;
&lt;br /&gt;
Note that many local descriptors can also be used as global ones. This works with descriptors that use a radius to search for neighbors (as PFH does): the trick is to compute the descriptor for one single point of the object cluster, setting the radius to the maximum possible distance between any two points, so that all points in the cluster are considered as neighbors.&lt;br /&gt;
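&lt;br /&gt;
As a minimal sketch of that trick (using a plain struct and function names of our own, instead of the PCL types), the radius could be obtained like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;cmath&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Plain stand-in for pcl::PointXYZ, to keep the sketch self-contained.&lt;br /&gt;
struct SimplePoint { float x, y, z; };&lt;br /&gt;
&lt;br /&gt;
// Maximum distance between any two points of the cluster. Using it as the&lt;br /&gt;
// search radius guarantees that every point sees all others as neighbors.&lt;br /&gt;
float maxPairwiseDistance(const std::vector&amp;lt;SimplePoint&amp;gt;&amp;amp; cluster)&lt;br /&gt;
{&lt;br /&gt;
	float maxSquared = 0.0f;&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; cluster.size(); ++i)&lt;br /&gt;
		for (size_t j = i + 1; j &amp;lt; cluster.size(); ++j)&lt;br /&gt;
		{&lt;br /&gt;
			float dx = cluster[i].x - cluster[j].x;&lt;br /&gt;
			float dy = cluster[i].y - cluster[j].y;&lt;br /&gt;
			float dz = cluster[i].z - cluster[j].z;&lt;br /&gt;
			float squared = dx * dx + dy * dy + dz * dz;&lt;br /&gt;
			if (squared &amp;gt; maxSquared)&lt;br /&gt;
				maxSquared = squared;&lt;br /&gt;
		}&lt;br /&gt;
	return std::sqrt(maxSquared);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;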
&lt;br /&gt;
==VFH==&lt;br /&gt;
&lt;br /&gt;
The Viewpoint Feature Histogram is based on the FPFH. Because the latter is invariant to the object's pose, the authors decided to expand it by including information about the viewpoint. Also, the FPFH is estimated once for the whole cluster, not for every point.&lt;br /&gt;
&lt;br /&gt;
The VFH is made up of two parts: a viewpoint direction component, and an extended FPFH component. To compute the first one, the object's [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Computing_the_centroid|centroid]] is found, which is the point that results from averaging the X, Y and Z coordinates of all points. Then, the vector between the viewpoint (the position of the sensor) and this centroid is computed, and normalized. Finally, for all points in the cluster, the angle between this vector and their normal is calculated, and the result is binned into a histogram. The vector is translated to each point when computing the angle, as this makes the descriptor scale invariant.&lt;br /&gt;
&lt;br /&gt;
The second component is computed like the FPFH (which results in 3 histograms, one for each of the 3 angular features, α, φ and θ), with some differences: it is only computed for the centroid, using the previously computed viewpoint direction vector as its normal (since the centroid, obviously, does not have one), and setting all the cluster's points as its neighbors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:VFH_viewpoint_component.png | Viewpoint component of the VFH (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).&lt;br /&gt;
File:VFH_extended_FPFH_component.png | Extended FPFH component of the VFH (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The resulting 4 histograms (1 for the viewpoint component, 3 for the extended FPFH component) are concatenated to build the final VFH descriptor. By default, the bins are normalized using the total number of points in the cluster. This makes the VFH descriptor invariant to scale.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:VFH_histogram.png|thumb|center|400px| VFH histogram (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The PCL implementation computes an additional fifth histogram with the distances of the cluster points to the centroid (the Shape Distribution Component, SDC), increasing the size of the output descriptor from 263 to 308. The SDC is taken from the CVFH descriptor that we will see in the next section, and makes the result more robust.&lt;br /&gt;
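&lt;br /&gt;
As a hedged sketch of that arithmetic (the individual bin counts come from our reading of the PCL defaults, not from this article):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Sketch: size of the VFH descriptor, assuming PCL's default bin counts.&lt;br /&gt;
int vfhDescriptorSize()&lt;br /&gt;
{&lt;br /&gt;
	const int viewpointBins = 128;       // Viewpoint direction component.&lt;br /&gt;
	const int extendedFPFHBins = 3 * 45; // Three angular histograms of 45 bins each.&lt;br /&gt;
	const int sdcBins = 45;              // Shape Distribution Component.&lt;br /&gt;
	// 128 + 135 = 263 (original descriptor); adding the SDC gives 308.&lt;br /&gt;
	return viewpointBins + extendedFPFHBins + sdcBins;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;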
&lt;br /&gt;
The VFH of an already clustered object can be computed this way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the VFH descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// VFH estimation object.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Optionally, we can normalize the bins of the resulting histogram,&lt;br /&gt;
	// using the total number of points.&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	// Also, we can normalize the SDC with the maximum size found between&lt;br /&gt;
	// the centroid and any of the cluster's points.&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Because only one VFH descriptor is computed for the whole cluster, the size of the cloud object that stores the result will be 1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, [Normalize bins], [Normalize SDC]&lt;br /&gt;
* '''Output''': VFH descriptor&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/vfh_estimation.php Estimating VFH signatures for a set of points]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf Fast 3D Recognition and Pose Using the Viewpoint Feature Histogram] (Radu Bogdan Rusu et al., 2010)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_v_f_h_estimation.html pcl::VFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_VFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===CVFH===&lt;br /&gt;
&lt;br /&gt;
The original VFH descriptor is not robust to occlusion, other sensor artifacts, or measurement errors. If the object cluster is missing many points, the computed centroid will differ from the original one, altering the final descriptor and preventing a positive match from being found. Because of this, the Clustered Viewpoint Feature Histogram (CVFH) was introduced.&lt;br /&gt;
&lt;br /&gt;
The idea is very simple: instead of computing a single VFH histogram for the whole cluster, the object is first divided into stable, smooth regions using [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Region_growing | region-growing segmentation]], which enforces several constraints on the distances and normal differences of the points belonging to each region. Then, a VFH is computed for every region. Thanks to this, an object can be found in a scene as long as at least one of its regions is fully visible.&lt;br /&gt;
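&lt;br /&gt;
A hypothetical sketch of one such constraint check (the predicate and all its names are ours; PCL's actual region-growing code is more involved):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;cmath&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Plain stand-in for pcl::Normal, to keep the sketch self-contained.&lt;br /&gt;
struct SimpleNormal { float nx, ny, nz, curvature; };&lt;br /&gt;
&lt;br /&gt;
// Accept a candidate into the seed's region if its unit normal deviates less&lt;br /&gt;
// than epsAngle radians from the seed's, and their curvatures are close enough.&lt;br /&gt;
// This mirrors the thresholds set with setEPSAngleThreshold() and&lt;br /&gt;
// setCurvatureThreshold() in the example below.&lt;br /&gt;
bool belongsToRegion(const SimpleNormal&amp;amp; seed, const SimpleNormal&amp;amp; candidate,&lt;br /&gt;
                     float epsAngle, float curvatureThreshold)&lt;br /&gt;
{&lt;br /&gt;
	float dot = seed.nx * candidate.nx + seed.ny * candidate.ny + seed.nz * candidate.nz;&lt;br /&gt;
	if (dot &amp;gt; 1.0f) dot = 1.0f;&lt;br /&gt;
	if (dot &amp;lt; -1.0f) dot = -1.0f;&lt;br /&gt;
	return std::acos(dot) &amp;lt; epsAngle&lt;br /&gt;
	       &amp;amp;&amp;amp; std::fabs(seed.curvature - candidate.curvature) &amp;lt; curvatureThreshold;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;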
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:CVFH_occlusion.png | Typical occlusion issues in a point cloud (image from original paper).&lt;br /&gt;
File:CVFH_regions.png | Object regions computed for the CVFH (image from original paper).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additionally, a Shape Distribution Component (SDC) is also computed and included. It encodes information about the distribution of the points around the region's centroid, measuring their distances to it. The SDC makes it possible to differentiate objects with similar characteristics (size and normal distribution), such as two planar surfaces.&lt;br /&gt;
&lt;br /&gt;
The authors proposed discarding the histogram normalization step that is performed in VFH. This makes the descriptor dependent on scale, so an object of a certain size will not match a bigger or smaller copy of itself. It also makes CVFH more robust to occlusion.&lt;br /&gt;
&lt;br /&gt;
Like most global descriptors, CVFH is invariant to the camera roll angle: rotations around that camera axis do not change the observable geometry that descriptors are computed from, which limits pose estimation to 5 DoF. The use of a Camera Roll Histogram (CRH) has been proposed to overcome this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/cvfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the CVFH descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// CVFH estimation object.&lt;br /&gt;
	pcl::CVFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; cvfh;&lt;br /&gt;
	cvfh.setInputCloud(object);&lt;br /&gt;
	cvfh.setInputNormals(normals);&lt;br /&gt;
	cvfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Set the maximum allowable deviation of the normals,&lt;br /&gt;
	// for the region segmentation step.&lt;br /&gt;
	cvfh.setEPSAngleThreshold(5.0 / 180.0 * M_PI); // 5 degrees.&lt;br /&gt;
	// Set the curvature threshold (maximum disparity between curvatures),&lt;br /&gt;
	// for the region segmentation step.&lt;br /&gt;
	cvfh.setCurvatureThreshold(1.0);&lt;br /&gt;
	// Set to true to normalize the bins of the resulting histogram,&lt;br /&gt;
	// using the total number of points. Note: enabling it will make CVFH&lt;br /&gt;
	// invariant to scale just like VFH, but the authors encourage the opposite.&lt;br /&gt;
	cvfh.setNormalizeBins(false);&lt;br /&gt;
&lt;br /&gt;
	cvfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can further customize the segmentation step with &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setClusterTolerance()&amp;quot;&amp;lt;/span&amp;gt; (to set the maximum Euclidean distance between points in the same cluster) and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setMinPoints()&amp;quot;&amp;lt;/span&amp;gt;. The size of the output will be equal to the number of regions the object was divided into. Also, check the functions &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getCentroidClusters()&amp;quot;&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getCentroidNormalClusters()&amp;quot;&amp;lt;/span&amp;gt;; you can use them to get information about the centroids used to compute the different CVFH descriptors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Angle threshold, Curvature threshold, [Normalize bins], [Cluster tolerance], [Minimum points]&lt;br /&gt;
* '''Output''': CVFH descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6130296 CAD-Model Recognition and 6DOF Pose Estimation Using 3D Cues] (requires IEEE Xplore subscription) (Aitor Aldoma et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_c_v_f_h_estimation.html pcl::CVFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_CVFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====OUR-CVFH====&lt;br /&gt;
&lt;br /&gt;
The Oriented, Unique and Repeatable CVFH expands the previous descriptor by adding the computation of a unique reference frame, which makes it more robust.&lt;br /&gt;
&lt;br /&gt;
OUR-CVFH relies on the use of Semi-Global Unique Reference Frames (SGURFs), which are repeatable coordinate systems computed for each region. Not only do they remove the invariance to camera roll and allow the 6DoF pose to be extracted directly without additional steps, but they also improve the spatial descriptiveness.&lt;br /&gt;
&lt;br /&gt;
The first part of the computation is akin to CVFH, but after segmentation, the points in each region are filtered once more according to the difference between their normals and the region's average normal. This results in better shaped regions, improving the estimation of the Reference Frames (RFs).&lt;br /&gt;
&lt;br /&gt;
After this, the SGURF is computed for each region. Disambiguation is performed to decide the sign of the axes, according to the points' distribution. If this is not enough and the sign remains ambiguous, multiple RFs will need to be created to account for it. Finally, the OUR-CVFH descriptor is computed. The original Shape Distribution Component (SDC) is discarded, and the surface is now described according to the RFs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:OUR-CVFH.png|thumb|center|600px| SGURF frame and resulting histogram of a region (image from [http://vision.deis.unibo.it/fede/papers/dagm12.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/our_cvfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the OUR-CVFH descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// OUR-CVFH estimation object.&lt;br /&gt;
	pcl::OURCVFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; ourcvfh;&lt;br /&gt;
	ourcvfh.setInputCloud(object);&lt;br /&gt;
	ourcvfh.setInputNormals(normals);&lt;br /&gt;
	ourcvfh.setSearchMethod(kdtree);&lt;br /&gt;
	ourcvfh.setEPSAngleThreshold(5.0 / 180.0 * M_PI); // 5 degrees.&lt;br /&gt;
	ourcvfh.setCurvatureThreshold(1.0);&lt;br /&gt;
	ourcvfh.setNormalizeBins(false);&lt;br /&gt;
	// Set the minimum axis ratio between the SGURF axes. At the disambiguation phase,&lt;br /&gt;
	// this will decide if additional Reference Frames need to be created, if ambiguous.&lt;br /&gt;
	ourcvfh.setAxisRatio(0.8);&lt;br /&gt;
&lt;br /&gt;
	ourcvfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getTransforms()&amp;quot;&amp;lt;/span&amp;gt; function to get the transformations aligning the cloud to the corresponding SGURF.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Angle threshold, Curvature threshold, [Normalize bins], [Cluster tolerance], [Minimum points], [Axis ratio]&lt;br /&gt;
* '''Output''': OUR-CVFH descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/dagm12.pdf OUR-CVFH – Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram for Object Recognition and 6DOF Pose Estimation] (Aitor Aldoma et al., 2012)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_o_u_r_c_v_f_h_estimation.html pcl::OURCVFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_OUR-CVFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==ESF==&lt;br /&gt;
&lt;br /&gt;
The Ensemble of Shape Functions (ESF) is a combination of 3 different shape functions that describe certain properties of the cloud's points: distances, angles and areas. It is unusual among descriptors in that it does not require normal information. In fact, it does not need any preprocessing at all, as it is robust to noise and incomplete surfaces.&lt;br /&gt;
&lt;br /&gt;
The algorithm uses a voxel grid as an approximation of the real surface. It iterates through all the points in the cloud: for every iteration, 3 random points are chosen. For these points, the shape functions are computed:&lt;br /&gt;
&lt;br /&gt;
* '''D2''': this function computes the distances between point pairs (3 overall). Then, for every pair, it checks if the line that connects both points lies entirely inside the surface, entirely outside (crossing free space), or both. Depending on this, the distance value will be binned to one of three possible histograms: IN, OUT or MIXED.&lt;br /&gt;
* '''D2 ratio''': an additional histogram for the ratio between parts of the line inside the surface, and parts outside. This value will be 0 if the line is completely outside, 1 if completely inside, and some value in between if mixed.&lt;br /&gt;
* '''D3''': this computes the square root of the area of the triangle formed by the 3 points. Like D2, the result is also classified as IN, OUT or MIXED, each with its own histogram.&lt;br /&gt;
* '''A3''': this function computes the angle formed by the 3 points. The value is then binned depending on how the line opposite to the angle is classified (once again, as IN, OUT or MIXED).&lt;br /&gt;
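&lt;br /&gt;
As a tiny sketch of the D2 ratio (the tracing of the line through the voxel grid is omitted, and the function name is ours):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Given the total length of the line connecting a point pair, and the portion&lt;br /&gt;
// of it found to lie inside the surface, the D2 ratio is simply:&lt;br /&gt;
float d2Ratio(float insideLength, float totalLength)&lt;br /&gt;
{&lt;br /&gt;
	return totalLength &amp;gt; 0.0f ? insideLength / totalLength : 0.0f;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// d2Ratio(0.0f, 4.0f) yields 0 (line completely outside the surface),&lt;br /&gt;
// d2Ratio(4.0f, 4.0f) yields 1 (completely inside),&lt;br /&gt;
// d2Ratio(2.0f, 4.0f) yields 0.5 (mixed).&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;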
&lt;br /&gt;
&lt;br /&gt;
[[Image:ESF.png|thumb|center|600px| ESF descriptor (image from [http://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf this paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After the loop is over, we are left with 10 subhistograms (IN, OUT and MIXED for D2, D3 and A3, and an additional one for the ratio). Each one has 64 bins, so the size of the final ESF descriptor is 640.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/esf.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the ESF descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ESFSignature640&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::ESFSignature640&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// ESF estimation object.&lt;br /&gt;
	pcl::ESFEstimation&amp;lt;pcl::PointXYZ, pcl::ESFSignature640&amp;gt; esf;&lt;br /&gt;
	esf.setInputCloud(object);&lt;br /&gt;
&lt;br /&gt;
	esf.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster)&lt;br /&gt;
* '''Output''': ESF descriptor&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6181760 Ensemble of shape functions for 3D object classification] (requires IEEE Xplore subscription) (Walter Wohlkinger and Markus Vincze, 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_e_s_f_estimation.html pcl::ESFEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_ESF.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==GFPFH==&lt;br /&gt;
&lt;br /&gt;
As you may have guessed, GFPFH stands for Global Fast Point Feature Histogram, the global version of the FPFH descriptor. GFPFH was designed for the task of helping a robot navigate its environment, having some context of the objects around it.&lt;br /&gt;
&lt;br /&gt;
The first step before being able to compute the descriptor is surface categorization. A set of logical primitives (the classes, or categories) is created, which depends on the type of objects we expect the robot to find in the scene. For example, if we know there will be a coffee mug, we create three: one for the handle, and the other two for the outer and inner faces. Then, FPFH descriptors are computed, and everything is fed to a [https://en.wikipedia.org/wiki/Conditional_random_field Conditional Random Field] (CRF) algorithm. The CRF will label each surface with one of the previous categories, so we end up with a cloud where each point has been classified depending on the type of object (or object region) it belongs to.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FPFH_CRF.png|thumb|center|300px| Classification of objects made with FPFH and CRF (image from [https://www.willowgarage.com/sites/default/files/iccv09.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now, the GFPFH descriptor can be computed from the result of the classification step. It will encode what the object is made of, so the robot can easily recognize it. First, an [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Octree | octree]] is created, dividing the object into voxel leaves. For every leaf, a set of probabilities is created, one for each class. Each one stores the probability of that leaf belonging to the class, computed from the number of points in the leaf labelled with that class and the total number of points. Then, for every pair of leaves in the octree, a line is cast connecting them. Every leaf in its path is checked for occupancy, and the result is stored in a histogram: if the leaf is empty (free space), a value of 0 is saved; otherwise, the leaf probabilities are used.&lt;br /&gt;
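&lt;br /&gt;
A minimal sketch of the per-leaf probabilities (our own simplification, using 0-based labels and names of our choosing; the real implementation works directly on the octree):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Probability of a leaf belonging to each class: the fraction of the leaf's&lt;br /&gt;
// points labelled with that class. An empty leaf represents free space.&lt;br /&gt;
std::vector&amp;lt;float&amp;gt; leafClassProbabilities(const std::vector&amp;lt;int&amp;gt;&amp;amp; labels,&lt;br /&gt;
                                          int numberOfClasses)&lt;br /&gt;
{&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; probabilities(numberOfClasses, 0.0f);&lt;br /&gt;
	if (labels.empty())&lt;br /&gt;
		return probabilities;&lt;br /&gt;
	for (int label : labels)&lt;br /&gt;
		probabilities[label] += 1.0f;&lt;br /&gt;
	for (float&amp;amp; p : probabilities)&lt;br /&gt;
		p /= labels.size();&lt;br /&gt;
	return probabilities;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;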
&lt;br /&gt;
&lt;br /&gt;
[[Image:GFPFH.png|thumb|center|500px| Computing the GFPFH with a voxel grid (image from [https://www.willowgarage.com/sites/default/files/iccv09.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following code will compute the GFPFH for a cloud with label information. The categorization step is up to you, as it depends largely on the type of scene and how you intend to use it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/gfpfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZL&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZL&amp;gt;);&lt;br /&gt;
	// Object for storing the GFPFH descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::GFPFHSignature16&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::GFPFHSignature16&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZL&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you should now perform classification on the cloud's points. See the&lt;br /&gt;
	// original paper for more details. For this example, we will now consider 4&lt;br /&gt;
	// different classes, and randomly label each point as one of them.&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; object-&amp;gt;points.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		object-&amp;gt;points[i].label = 1 + i % 4;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// GFPFH estimation object.&lt;br /&gt;
	pcl::GFPFHEstimation&amp;lt;pcl::PointXYZL, pcl::PointXYZL, pcl::GFPFHSignature16&amp;gt; gfpfh;&lt;br /&gt;
	gfpfh.setInputCloud(object);&lt;br /&gt;
	// Set the object that contains the labels for each point. Thanks to the&lt;br /&gt;
	// PointXYZL type, we can use the same object we store the cloud in.&lt;br /&gt;
	gfpfh.setInputLabels(object);&lt;br /&gt;
	// Set the size of the octree leaves to 1cm (cubic).&lt;br /&gt;
	gfpfh.setOctreeLeafSize(0.01);&lt;br /&gt;
	// Set the number of classes the cloud has been labelled with (default is 16).&lt;br /&gt;
	gfpfh.setNumberOfClasses(4);&lt;br /&gt;
&lt;br /&gt;
	gfpfh.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Labels, Number of classes, Leaf size&lt;br /&gt;
* '''Output''': GFPFH descriptor&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [https://www.willowgarage.com/sites/default/files/iccv09.pdf Detecting and Segmenting Objects for Mobile Manipulation] (Radu Bogdan Rusu et al., 2009)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_g_f_p_f_h_estimation.html pcl::GFPFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_GFPFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==GRSD==&lt;br /&gt;
&lt;br /&gt;
The global version of the Radius-based Surface Descriptor works in a similar fashion to GFPFH. A voxelization and a surface categorization step are performed beforehand, labelling all surface patches according to their geometric category (plane, cylinder, edge, rim, sphere) using RSD. Then, the whole cluster is classified into one of these categories, and the GRSD descriptor is computed from the result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:GRSD.png|thumb|center|400px| Classification of objects for GRSD and resulting histogram (image from [https://ias.in.tum.de/_media/spezial/bib/grsd10humanoids.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To compute it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/grsd.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the GRSD descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::GRSDSignature21&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::GRSDSignature21&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// GRSD estimation object.&lt;br /&gt;
	pcl::GRSDEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::GRSDSignature21&amp;gt; grsd;&lt;br /&gt;
	grsd.setInputCloud(cloud);&lt;br /&gt;
	grsd.setInputNormals(normals);&lt;br /&gt;
	grsd.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	grsd.setRadiusSearch(0.05);&lt;br /&gt;
	&lt;br /&gt;
	grsd.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Radius&lt;br /&gt;
* '''Output''': GRSD descriptor&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [https://ias.in.tum.de/_media/spezial/bib/grsd10humanoids.pdf Hierarchical Object Geometric Categorization and Appearance Classification for Mobile Manipulation] (Zoltan-Csaba Marton et al., 2010)&lt;br /&gt;
** [https://ias.cs.tum.edu/_media/spezial/bib/marton11ijrr.pdf Combined 2D-3D Categorization and Classification for Multimodal Perception Systems] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
** [http://www.mi.t.u-tokyo.ac.jp/top/downloadpublication/36 Voxelized Shape and Color Histograms for RGB-D] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_g_r_s_d_estimation.html pcl::GRSDEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_GRSD.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Saving and loading=&lt;br /&gt;
&lt;br /&gt;
You can save a descriptor to a file just [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Writing_to_file|like with any other cloud type]]. One caveat, though. If you are using a descriptor that has its own custom type, like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt;, everything will be OK. But with descriptors that do not (where you have to use &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::Histogram&amp;lt;&amp;gt;&amp;quot;&amp;lt;/span&amp;gt;), you will get this error: &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;POINT_TYPE_NOT_PROPERLY_REGISTERED&amp;quot;''&amp;lt;/span&amp;gt;. In order to save or load from a file, PCL's IO functions need to know about the number, type and size of the fields. To solve this, you will have to properly register a new point type for your descriptor. For example, this will work for the [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#RoPS|RoPS]] descriptor example we saw earlier:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;POINT_CLOUD_REGISTER_POINT_STRUCT(ROPS135,&lt;br /&gt;
                                  (float[135], histogram, histogram)&lt;br /&gt;
)&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the previous to your code (change it accordingly), and you will be able to save and load descriptors as usual.&lt;br /&gt;
&lt;br /&gt;
=Visualization=&lt;br /&gt;
&lt;br /&gt;
Sometimes you may want to inspect a visual representation of a descriptor, perhaps to analyze how the values are distributed over the bins. Because descriptors are stored as histograms, this is trivial to do, and PCL offers a couple of classes for it.&lt;br /&gt;
&lt;br /&gt;
==PCLHistogramVisualizer==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLHistogramVisualizer&amp;quot;&amp;lt;/span&amp;gt; is the simplest way to plot a histogram. The class has little functionality, but it does its job: a single call with the histogram and its size is all that is needed:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/histogram_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Clouds for storing everything.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Estimate VFH descriptor.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
&lt;br /&gt;
	// Plotter object.&lt;br /&gt;
	pcl::visualization::PCLHistogramVisualizer viewer;&lt;br /&gt;
	// We need to set the size of the descriptor beforehand.&lt;br /&gt;
	viewer.addFeatureHistogram(*descriptor, 308);&lt;br /&gt;
&lt;br /&gt;
	viewer.spin();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Histogram_visualizer_VFH.png|thumb|center|500px| VFH histogram seen with the Histogram Visualizer.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Descriptor, descriptor size, [Window size], [Background color], [Y axis range]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_histogram_visualizer.html pcl::visualization::PCLHistogramVisualizer]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_histogram_visualizer.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==PCLPlotter==&lt;br /&gt;
&lt;br /&gt;
This class has all the methods from &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLHistogramVisualizer&amp;quot;&amp;lt;/span&amp;gt; (which will be deprecated soon) plus a lot more features. The code is almost the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_plotter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Clouds for storing everything.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Estimate VFH descriptor.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
&lt;br /&gt;
	// Plotter object.&lt;br /&gt;
	pcl::visualization::PCLPlotter plotter;&lt;br /&gt;
	// We need to set the size of the descriptor beforehand.&lt;br /&gt;
	plotter.addFeatureHistogram(*descriptor, 308);&lt;br /&gt;
&lt;br /&gt;
	plotter.plot();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:PCL_plotter_VFH.png|thumb|center|500px| VFH histogram seen with the PCLPlotter class.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you have raw data (such as a vector of floats), you can use the [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_plotter.html#aaf23b2b1c2f91c517cfde387ee1b654e addHistogramData()] function to plot it as a histogram.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Descriptor, descriptor size, [Window size], [Background color], [Y axis range]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/pcl_plotter.php PCLPlotter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_plotter.html pcl::visualization::PCLPlotter]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_plotter.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==PCL Viewer==&lt;br /&gt;
&lt;br /&gt;
This program, included with PCL, will also let you open and visualize a saved descriptor. Internally, it uses [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#PCLPlotter|PCLPlotter]]. Since it can be invoked from the command line, it comes in handy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_viewer &amp;lt;descriptor_file&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:PCL_viewer_VFH.png|thumb|center|500px| VFH histogram seen with ''pcl_viewer''.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4747</id>
		<title>PCL/OpenNI tutorial 2: Cloud processing (basic)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4747"/>
				<updated>2015-11-02T11:03:09Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Point_cloud.png|thumb|right|200px|Rendering of a point cloud, showing a desktop.]]&lt;br /&gt;
&lt;br /&gt;
All right: in the [[PCL/OpenNI tutorial 1: Installing and testing| previous tutorial]] you installed OpenNI and PCL. You managed to get your device working. You compiled and ran the example program, watched as it fetched one frame after another, and even saved a couple of point clouds to disk. Now you may be wondering: &amp;quot;What now? What are depth sensors useful for? What can I do with point clouds?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Well, depth cameras have a lot of useful applications. For example, you could use them as [http://tech.integrate.biz/kinect_mocap.htm motion tracking] hardware and save yourself thousands of dollars in professional gear. The original Kinect was actually designed as an alternative to physical controllers (e.g., gamepads) in games. [https://frantracerkinectft.codeplex.com/ Hand gestures] can also be recognized, so people have managed to implement control interfaces similar to what could be seen in ''Minority Report''.&lt;br /&gt;
&lt;br /&gt;
Working low-level with point clouds offers interesting possibilities too. As they are essentially a 3D &amp;quot;scan&amp;quot; of the scene, you could capture your whole room as a [http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php continuous 3D mesh], then import it into your modelling software of choice and use it as a reference for a better reconstruction. You could use it to measure distances or detect obstacles for a robot navigation system. But the most exciting alternative by far, and the one most research is focused on, is performing 3D object recognition and pose estimation, even in [https://www.youtube.com/watch?feature=player_detailpage&amp;amp;v=quGhaggn3cQ#t=349 real time]. This allows a robot to fulfill tasks like going to the kitchen to get you a cup of coffee.&lt;br /&gt;
&lt;br /&gt;
Most of the advanced stuff that can be done with a point cloud requires some previous steps, like filtering, reconstruction or normal estimation. In this tutorial, I will introduce you to the basics of point cloud processing, and leave the complicated methods for the [[PCL/OpenNI tutorial 3: Cloud processing (advanced) | next tutorial]]. I will explain what every technique does and what it should be used for, and include PCL code snippets so you can check how to implement it in your programs.&lt;br /&gt;
&lt;br /&gt;
=Point clouds=&lt;br /&gt;
&lt;br /&gt;
First, a little explanation. A point cloud as taken from a depth sensor consists of a series of points in 3D space, as simple as that. A &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::PointCloud&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; object stores the points inside a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;std::vector&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; structure. The cloud object exposes some functions that let you get information about them, like the point count. As you can see, the class is templated so you can instantiate it for many types of points. For example, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, which stores three floats for the X, Y and Z coordinates, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGB&amp;quot;&amp;lt;/span&amp;gt;, which also stores color (texture) information for each point, the kind we would get with Kinect or Xtion RGB-D cameras. A list of [http://pointclouds.org/documentation/tutorials/adding_custom_ptype.php#what-pointt-types-are-available-in-pcl point types] and a basic description of the [http://pointclouds.org/documentation/tutorials/basic_structures.php cloud object] are available on PCL's website.&lt;br /&gt;
&lt;br /&gt;
Clouds will usually be handled with pointers, as they tend to be large. PCL uses the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;boost::shared_ptr&amp;quot;&amp;lt;/span&amp;gt; class from the [https://en.wikipedia.org/wiki/Boost_%28C%2B%2B_libraries%29 Boost C++ libraries], which keeps track of the references to a certain object. Every time a copy of a shared pointer is created, a reference counter is automatically increased; when a pointer is destroyed, it is decreased. If the counter ever reaches 0, it means that no one is holding an active reference to the object, and its destructor is called to free memory. This can take a while to get used to, and you will be mixing cloud objects and cloud object pointers until then, but do not worry.&lt;br /&gt;
&lt;br /&gt;
==PCD files==&lt;br /&gt;
&lt;br /&gt;
When clouds are saved to disk, a [http://pointclouds.org/documentation/tutorials/pcd_file_format.php PCD] (Point Cloud Data) file is produced. You can choose to do so in binary or plain text format. The first one is faster to work with, whereas the second one lets you inspect the cloud by opening it with a common text editor. If you do so, you may see something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;# .PCD v.7 - Point Cloud Data file format&lt;br /&gt;
VERSION .7&lt;br /&gt;
FIELDS x y z&lt;br /&gt;
SIZE 4 4 4&lt;br /&gt;
TYPE F F F&lt;br /&gt;
WIDTH 2&lt;br /&gt;
HEIGHT 1&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 2&lt;br /&gt;
DATA ascii&lt;br /&gt;
0.73412 -1.12643 0.82218&lt;br /&gt;
0.44739 -0.34735 -0.04624&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells us that the cloud consists of 2 points of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; (because each point has only 3 fields: the X, Y and Z coordinates, each stored in a 4-byte float), and lists their coordinates. It also stores the viewpoint information (the position of the sensor relative to the points) as a translation plus an orientation quaternion. Knowing the position of the camera can be useful for certain procedures like normal estimation.&lt;br /&gt;
&lt;br /&gt;
===Reading from file===&lt;br /&gt;
&lt;br /&gt;
Reading a point cloud from a PCD file and storing it in memory can be done with a single function call. You must specify the correct point type. For example, for .pcd files saved with the OpenNI viewer application that we saw in the [[PCL/OpenNI tutorial 1: Installing and testing | previous tutorial]], you could use either &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGBA&amp;quot;&amp;lt;/span&amp;gt; or, if you are not going to use the color information, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I have not bothered to include proper error/argument checking, but you should do it in your programs. Be sure to check the [http://docs.pointclouds.org PCL API] every time I introduce you to one of its classes or methods.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/reading_pcd.php Reading Point Cloud data from PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_read.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Writing to file===&lt;br /&gt;
&lt;br /&gt;
Saving a point cloud to a file is as easy as reading it. You can choose between plain readable (ASCII) and binary format: the first one can be read and altered with a text editor, while the second one is faster.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Write it back to disk under a different name.&lt;br /&gt;
	// Another possibility would be &amp;quot;savePCDFileBinary()&amp;quot;.&lt;br /&gt;
	pcl::io::savePCDFileASCII(&amp;quot;output.pcd&amp;quot;, *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/writing_pcd.php Writing Point Cloud data to PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_write.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you want a better alternative for loading or saving PCD files (some of the functions we saw are a bit outdated, but they do the job), take a look at the [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_reader.html PCDReader] and [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_writer.html PCDWriter] classes.&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
Of course, PCL ships with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''pcl_viewer''&amp;lt;/span&amp;gt;, a dedicated program to load and visualize point clouds (and a lot of other stuff). You can run it from a terminal with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_viewer &amp;lt;file 1&amp;gt; [file 2] ... [file N]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you load several files, each cloud will be rendered with a different color to help you tell them apart. Internally, the program uses the [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_visualizer.html pcl::visualization::PCLVisualizer] class. Press the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''H'''&amp;lt;/span&amp;gt; key while it is running to print the help on the terminal. Keys I tend to find useful are &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''1'''&amp;lt;/span&amp;gt;, which chooses a different color for the cloud (useful if the current one is too hard to see), &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''R'''&amp;lt;/span&amp;gt;, which resets the camera, and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''J'''&amp;lt;/span&amp;gt;, which saves a PNG snapshot.&lt;br /&gt;
&lt;br /&gt;
In your own code, it is possible to iterate through the cloud and print the information of each point to console, but this is cumbersome. Sometimes you just want to visualize the cloud to check something. PCL can create a window for you and display it, letting users pan and zoom their way around. The simplest method is the following one:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Cloud viewer object. You can set the window title.&lt;br /&gt;
	pcl::visualization::CloudViewer viewer(argv[1]);&lt;br /&gt;
	viewer.showCloud(cloud);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		// Do nothing but wait.&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CloudViewer&amp;quot;&amp;lt;/span&amp;gt; has fewer features than &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLVisualizer&amp;quot;&amp;lt;/span&amp;gt;, but it is easier to set up. The program will remain active until you close the window with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt; or press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/cloud_viewer.php The CloudViewer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/pcl_visualizer.php PCLVisualizer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/walkthrough.php Visualization library overview]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_cloud_viewer.html pcl::visualization::CloudViewer]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_visualize_cloud.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Concatenating two clouds==&lt;br /&gt;
&lt;br /&gt;
There are two types of concatenation with point clouds:&lt;br /&gt;
&lt;br /&gt;
* '''Points''':  You initially have two clouds, &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;, with &amp;quot;N&amp;quot; and &amp;quot;M&amp;quot; points each. A third cloud is created (&amp;quot;C&amp;quot;), that has &amp;quot;N + M&amp;quot; points (all points of the first two clouds). For this to work, the point type must be the same for all clouds (&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt;, or whatever).&lt;br /&gt;
* '''Fields''': With this, you can join the fields of two clouds with different types. The only constraint is that both must have the same number of points. For example, if you have a cloud of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; type and another of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type with &amp;quot;N&amp;quot; points each, you can concatenate them to create a cloud of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointNormal&amp;quot;&amp;lt;/span&amp;gt; (that stores X, Y and Z coordinates, plus the normal) that also has &amp;quot;N&amp;quot; points.&lt;br /&gt;
&lt;br /&gt;
Point concatenation is the easier of the two because you can use the cloud's addition operator (+), which is overloaded for this purpose.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudC(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Create cloud &amp;quot;C&amp;quot;, with the points of both &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;.&lt;br /&gt;
	*cloudC = (*cloudA) + (*cloudB);&lt;br /&gt;
	// Now cloudC-&amp;gt;points.size() equals cloudA-&amp;gt;points.size() + cloudB-&amp;gt;points.size().&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_clouds.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Field concatenation is done with a function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudPoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr cloudNormals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudPoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Compute the normals of the cloud (do not worry, we will see this later).&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloudPoints);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.05);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*cloudNormals);&lt;br /&gt;
&lt;br /&gt;
	// Concatenate the fields (PointXYZ + Normal = PointNormal).&lt;br /&gt;
	pcl::concatenateFields(*cloudPoints, *cloudNormals, *cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Print the data to standard output.&lt;br /&gt;
	for (size_t currentPoint = 0; currentPoint &amp;lt; cloudAll-&amp;gt;points.size(); currentPoint++)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tXYZ:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].x &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].y &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tNormal:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[0] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[1] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[2] &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__common.html#gac6add803f86fd16a998dce541e9ef402 pcl::concatenateFields()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_fields.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Matrix transformations==&lt;br /&gt;
&lt;br /&gt;
Clouds can be directly manipulated with [https://en.wikipedia.org/wiki/Transformation_matrix transformations] (translation, rotation, scaling). 3D transformations use a 4x4 matrix, which can be a little complicated to understand (especially when it comes to rotations), so consult a book or [http://web.iitd.ac.in/~hegde/cad/lecture/L6_3dtrans.pdf tutorial] before trying to use one. The transformed cloud is saved to a separate object, so you keep both versions and can use each as you wish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/transforms.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr transformed(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Transformation matrix object, initialized to the identity matrix&lt;br /&gt;
	// (a null transformation).&lt;br /&gt;
	Eigen::Matrix4f transformation = Eigen::Matrix4f::Identity();&lt;br /&gt;
&lt;br /&gt;
	// Set a rotation around the Z axis.&lt;br /&gt;
	float theta = M_PI / 2; // 90 degrees.&lt;br /&gt;
	transformation(0, 0) = cos(theta);&lt;br /&gt;
	transformation(0, 1) = -sin(theta);&lt;br /&gt;
	transformation(1, 0) = sin(theta);&lt;br /&gt;
	transformation(1, 1) = cos(theta);&lt;br /&gt;
&lt;br /&gt;
	// Set a translation on the X axis.&lt;br /&gt;
	transformation(0, 3) = 1.0f; // 1 meter (positive direction).&lt;br /&gt;
&lt;br /&gt;
	pcl::transformPointCloud(*cloud, *transformed, transformation);&lt;br /&gt;
&lt;br /&gt;
	// Visualize both the original and the result.&lt;br /&gt;
	pcl::visualization::PCLVisualizer viewer(argv[1]);&lt;br /&gt;
	viewer.addPointCloud(cloud, &amp;quot;original&amp;quot;);&lt;br /&gt;
	// The transformed one's points will be red in color.&lt;br /&gt;
	pcl::visualization::PointCloudColorHandlerCustom&amp;lt;pcl::PointXYZ&amp;gt; colorHandler(transformed, 255, 0, 0);&lt;br /&gt;
	viewer.addPointCloud(transformed, colorHandler, &amp;quot;transformed&amp;quot;);&lt;br /&gt;
	// Add 3D colored axes to help see the transformation.&lt;br /&gt;
	viewer.addCoordinateSystem(1.0, 0);&lt;br /&gt;
&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Cloud_transformation.png|thumb|center|600px|Rotation and translation applied to a cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the rest of the overloads, because you can also use an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Affine3&amp;quot;&amp;lt;/span&amp;gt; or an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Quaternion&amp;quot;&amp;lt;/span&amp;gt; instead of an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Matrix4&amp;quot;&amp;lt;/span&amp;gt; object.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Transformation&lt;br /&gt;
* '''Output''': Transformed points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/matrix_transform.php Using a matrix to transform a point cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#aefbe4956d1c8fb785a97df6708d57c56 pcl::transformPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_matrix_transform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Using indices==&lt;br /&gt;
&lt;br /&gt;
Many of PCL's algorithms return indices. Indices are just a list of positions of some of a cloud's points (not the points themselves with all their data, but their index within the cloud). For example, a cylinder segmentation algorithm gives you as output the list of points that were considered to fit the cylinder model. You can then use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::ExtractIndices&amp;quot;&amp;lt;/span&amp;gt; class to copy the indexed points to a second cloud for further work. Its &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; function instructs it to extract the points NOT indexed instead (so you could, for example, remove all cylinder-shaped clusters from the cloud).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/extract_indices.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudExtracted(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudAll) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Plane segmentation (do not worry, we will see this later).&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	segmentation.setInputCloud(cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the indices.&lt;br /&gt;
	pcl::PointIndices::Ptr pointIndices(new pcl::PointIndices);&lt;br /&gt;
&lt;br /&gt;
	segmentation.segment(*pointIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	// Object for extracting points from a list of indices.&lt;br /&gt;
	pcl::ExtractIndices&amp;lt;pcl::PointXYZ&amp;gt; extract;&lt;br /&gt;
	extract.setInputCloud(cloudAll);&lt;br /&gt;
	extract.setIndices(pointIndices);&lt;br /&gt;
	// We will extract the points that are NOT indexed (the ones that are not in a plane).&lt;br /&gt;
	extract.setNegative(true);&lt;br /&gt;
	extract.filter(*cloudExtracted);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Indices, [Negative mode]&lt;br /&gt;
* '''Output''': Points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/extract_indices.php Extracting indices from a PointCloud]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_point_indices.html pcl::PointIndices]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_extract_indices.html pcl::ExtractIndices]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_extract_indices.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Removing NaNs==&lt;br /&gt;
&lt;br /&gt;
The cloud retrieved from the sensor may contain several kinds of measurement errors and/or inaccuracies. One of them is the presence of [https://en.wikipedia.org/wiki/NaN NaN (Not a Number)] values in the coordinates of some points, as you can see in the following file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=text&amp;gt;# .PCD v0.7 - Point Cloud Data file format&lt;br /&gt;
VERSION 0.7&lt;br /&gt;
FIELDS x y z rgba&lt;br /&gt;
SIZE 4 4 4 4&lt;br /&gt;
TYPE F F F U&lt;br /&gt;
COUNT 1 1 1 1&lt;br /&gt;
WIDTH 640&lt;br /&gt;
HEIGHT 480&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 307200&lt;br /&gt;
DATA ascii&lt;br /&gt;
nan nan nan 10135463&lt;br /&gt;
nan nan nan 10398635&lt;br /&gt;
nan nan nan 10070692&lt;br /&gt;
nan nan nan 10268071&lt;br /&gt;
...&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cloud objects have a member variable called &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;is_dense&amp;quot;&amp;lt;/span&amp;gt; that is true if all points have valid, finite values. A NaN indicates that there was a problem measuring the distance to that point, maybe because the sensor was too close or too far, or because the surface was reflective.&lt;br /&gt;
&lt;br /&gt;
Sadly, many of PCL's algorithms will fail if even a single NaN is present in the cloud you feed them as input, with a message like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Assertion `point_representation_-&amp;gt;isValid (point) &amp;amp;&amp;amp; &amp;quot;Invalid (NaN, Inf) point coordinates given to radiusSearch!&amp;quot;' failed.&amp;quot;''&amp;lt;/span&amp;gt;. If this happens, you will have to remove such values from the cloud. You can use a program like the following to &amp;quot;clean&amp;quot; a cloud from all NaN points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/filter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	if (argc != 3)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tUsage: &amp;quot; &amp;lt;&amp;lt; argv[0] &amp;lt;&amp;lt; &amp;quot; &amp;lt;input cloud&amp;gt; &amp;lt;output cloud&amp;gt;&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// The mapping tells you to what points of the old cloud the new ones correspond,&lt;br /&gt;
	// but we will not use it.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; mapping;&lt;br /&gt;
	pcl::removeNaNFromPointCloud(*cloud, *cloud, mapping);&lt;br /&gt;
&lt;br /&gt;
	// Save it back.&lt;br /&gt;
	pcl::io::savePCDFileASCII(argv[2], *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem with this method is that it will not keep the cloud organized. All clouds store a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;width&amp;quot;&amp;lt;/span&amp;gt; and a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;height&amp;quot;&amp;lt;/span&amp;gt; variable. In ''unorganized'' clouds, the width equals the total number of points, and the height is set to 1. In ''organized'' clouds (like the ones taken from camera-like sensors such as Kinect or Xtion), the width and the height match the resolution of the image the sensor works with. Points are distributed in rows, like in the depth image, and each of them corresponds to a pixel. The member function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;isOrganized()&amp;quot;&amp;lt;/span&amp;gt; will return true if the height is greater than 1.&lt;br /&gt;
&lt;br /&gt;
Because removing NaNs will change the number of points of the cloud, it can no longer be kept organized with the original width/height ratio, so the function will set the height to 1. This is not a big issue, as only a few of PCL's algorithms work explicitly with organized clouds (and they mostly use it for optimization), but you must take it into account.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Cloud&lt;br /&gt;
* '''Output''': Cloud without NaNs&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__filters.html#gac463283a9e9c18a66d3d29b28a575064 pcl::removeNaNFromPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_remove_NaNs.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Saving a PNG==&lt;br /&gt;
&lt;br /&gt;
PCL offers the possibility of saving the values of a point cloud to a PNG image file. Obviously, this can only be done with organized clouds, because the rows and columns of the resulting image will correspond exactly to the ones of the cloud. For example, if you have a point cloud taken from a sensor like Kinect or Xtion, you can use this to retrieve the 640x480 RGB image that matches the cloud.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/png_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGB&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Save the image (cloud must be organized).&lt;br /&gt;
	pcl::io::savePNGFile(&amp;quot;output.png&amp;quot;, *cloud, &amp;quot;rgb&amp;quot;);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;savePNGFile()&amp;quot;&amp;lt;/span&amp;gt; is overloaded; in this case, we have chosen the overload that lets you specify which fields to save. If you omit that parameter, the function will save the RGB fields by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__io.html#gabd03e43fac3048635512f447c8bc50b3 pcl::io::savePNGFile()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_save_PNG.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computing the centroid==&lt;br /&gt;
&lt;br /&gt;
The [https://en.wikipedia.org/wiki/Centroid centroid] of a cloud is a point with coordinates that result from computing the mean of the values of all points in the cloud. You could say it is the &amp;quot;center of mass&amp;quot;, and it has several uses for certain algorithms. If you intend to compute the actual center of gravity of a [[PCL/OpenNI_tutorial_3:_Cloud_processing_%28advanced%29#Segmentation|clustered]] object though, remember that the sensor does not retrieve the part of the object that is hidden from the camera, like the back faces that are occluded by the front ones, or the inside. You only have the part of the surface that faces the camera.&lt;br /&gt;
&lt;br /&gt;
The centroid of a cloud can be computed with a single function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/centroid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object to store the centroid coordinates.&lt;br /&gt;
	Eigen::Vector4f centroid;&lt;br /&gt;
	&lt;br /&gt;
	pcl::compute3DCentroid(*cloud, centroid);&lt;br /&gt;
&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;The XYZ coordinates of the centroid are: (&amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[0] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[1] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[2] &amp;lt;&amp;lt; &amp;quot;).&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#a23daec3829d2d4100a2f185372b3753a pcl::compute3DCentroid()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_compute_centroid.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Feature estimation=&lt;br /&gt;
&lt;br /&gt;
A ''feature'' of a point is some characteristic that helps to tell it apart from other points. You could argue that the XYZ coordinates are characteristic enough, but when matching points between two different, yet corresponding, clouds, they become useless. We need something better: a value that is similar when computed for the same spot of a surface in both sets.&lt;br /&gt;
&lt;br /&gt;
Normals are an example of a very simple feature. 3D [http://pointclouds.org/documentation/tutorials/how_features_work.php features] will be important later in our tutorials, when we talk about [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]] (more detailed &amp;quot;signatures&amp;quot; of points used for close matching).&lt;br /&gt;
&lt;br /&gt;
==Normal estimation==&lt;br /&gt;
&lt;br /&gt;
As you may remember from geometry class, the [https://en.wikipedia.org/wiki/Surface_normal normal] of a plane is a unit vector that is perpendicular to it. The normal of a surface at a point is defined as the vector perpendicular to the plane that is tangent to the surface at that point. Surface normals can be calculated for the points of a cloud, too. The normal is considered a feature, albeit not a very discriminative one.&lt;br /&gt;
&lt;br /&gt;
I will not go into detail about the math of the estimation method; you just have to know that it uses the nearest neighbors (the points closest to the one whose normal is being calculated) to find the tangent plane and the normal vector. You can customize the method with the search radius (think of a sphere of that radius, centered on the point; all neighboring points that lie within it will be used for the computation) and the viewpoint (by default, the output normals are directionless; by assuming that all vectors must point towards the camera - because otherwise they would belong to surfaces that are not visible from the sensor - they can all be re-oriented consistently).&lt;br /&gt;
&lt;br /&gt;
Normals are also important because they give us information about the ''curvature'' of the surface at some point, which can be used to our advantage. Many of PCL's algorithms will require us to provide the normals of the input cloud. To estimate them, you can use the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// For every point, use all neighbors in a radius of 3cm.&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	// A kd-tree is a data structure that makes searches efficient. More about it later.&lt;br /&gt;
	// The normal estimation object will use it to find nearest neighbors.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, normals are stored in &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointCloud&amp;quot;&amp;lt;/span&amp;gt; objects too, instantiated to the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type. If you ever extract some indices from a cloud before feeding them to an algorithm, be sure to do the same with the normals. I have not made use of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setViewPoint()&amp;quot;&amp;lt;/span&amp;gt; function, but you get the idea.&lt;br /&gt;
&lt;br /&gt;
Instead of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setRadiusSearch()&amp;quot;&amp;lt;/span&amp;gt;, you can make use of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setKSearch()&amp;quot;&amp;lt;/span&amp;gt;, which takes an integer, ''K''. This will take into account a point's ''K'' nearest neighbors to compute the normal. Bear in mind that this can produce odd results for points in sparse areas, because some of the &amp;quot;neighbors&amp;quot; will be too far away from them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Normal_estimation.png|thumb|center|600px|Visualization of the normals computed for a point cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Radius | Neighbor count, [Indices], [Viewpoint]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation.php Estimating Surface Normals in a Point Cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_normal_estimation.html pcl::NormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Integral images===&lt;br /&gt;
&lt;br /&gt;
Integral images are a method for normal estimation on organized clouds. The algorithm treats the cloud as a depth image and computes the normals over rectangular areas, exploiting the relationship between neighboring &amp;quot;pixels&amp;quot; (points) without the need to run lookups on a search structure like a tree. Because of this, it is very efficient, and the normals are usually computed in a split second. If you are performing real-time computations on clouds taken from an RGB-D sensor, this is the approach you should use, because the previous method would probably take several seconds to finish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/integral_image_normal.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::IntegralImageNormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// Other estimation methods: COVARIANCE_MATRIX, AVERAGE_DEPTH_CHANGE, SIMPLE_3D_GRADIENT.&lt;br /&gt;
	// They determine the smoothness of the result, and the running time.&lt;br /&gt;
	normalEstimation.setNormalEstimationMethod(normalEstimation.AVERAGE_3D_GRADIENT);&lt;br /&gt;
	// Depth threshold for computing object borders based on depth changes, in meters.&lt;br /&gt;
	normalEstimation.setMaxDepthChangeFactor(0.02f);&lt;br /&gt;
	// Factor that influences the size of the area used to smooth the normals.&lt;br /&gt;
	normalEstimation.setNormalSmoothingSize(10.0f);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Estimation method, [Smoothing factor], [Depth threshold], [Rectangle size], [Border policy], [Depth dependent smoothing]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation_using_integral_images.php Normal Estimation Using Integral Images]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://islab.ulsan.ac.kr/files/announcement/411/Adaptive%20Neighborhood%20Selection%20for%20Real-Time%20Surface%20Normal.pdf Adaptive Neighborhood Selection for Real-Time Surface Normal Estimation from Organized Point Cloud Data Using Integral Images] (S. Holzer et al., 2012)&lt;br /&gt;
** [https://www.willowgarage.com/sites/default/files/holz_rgbd_seg.pdf Real-Time Plane Segmentation using RGB-D Cameras] (Dirk Holz et al., 2012)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_integral_image_normal_estimation.html pcl::IntegralImageNormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals_integral.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Decomposition=&lt;br /&gt;
&lt;br /&gt;
When I talk about ''decomposition'', I mean the process of creating an ordered, organized data structure from the points of the cloud, with some purpose in mind, like making further analysis easier or applying filters.&lt;br /&gt;
&lt;br /&gt;
==''k''-d tree==&lt;br /&gt;
&lt;br /&gt;
A [https://en.wikipedia.org/wiki/K-d_tree ''k''-d tree] (''k''-dimensional tree) is a data structure that organizes a set of points in a ''k''-dimensional space, in a way that makes range search operations very efficient (for example, finding the nearest neighbor of a point, which is the point that is closest to it in space; or finding all neighbors within a radius).&lt;br /&gt;
&lt;br /&gt;
It is a binary tree, that is, every non-leaf node has two child nodes, the &amp;quot;left&amp;quot; and &amp;quot;right&amp;quot; ones. Each level splits space along a specific dimension. For example, in 3-dimensional space, the root node (first level) splits the points on the first dimension, X: points with greater X values go to the right subtree, points with lesser values to the left one. At the second level (the nodes we just created), the split is done on the Y axis, following the same criterion. At the third level (the grandchildren), we use the Z axis. At the fourth level, we get back to the X axis, and so on. Usually, the median point is chosen as the splitting node at every level.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Kdtree.png|thumb|center|700px|Example of a 2D ''k''d-tree (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In PCL, creating a ''k''-d tree from a cloud is a straightforward and fast operation.&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
The following code snippet shows you how to use a ''k''-d tree to perform efficient searches:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/search/kdtree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// kd-tree object.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt; kdtree;&lt;br /&gt;
	kdtree.setInputCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of this point&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices(5);&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances(5);&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (kdtree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	if (kdtree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another alternative would be &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;KdTreeFLANN&amp;quot;&amp;lt;/span&amp;gt;, which makes use of [http://www.cs.ubc.ca/research/flann/ FLANN] (Fast Library for Approximate Nearest Neighbors).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/kdtree_search.php How to use a KdTree to search]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_kd_tree.html pcl::search::KdTree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_kd_tree_f_l_a_n_n.html pcl::KdTreeFLANN]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_kdtree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Octree==&lt;br /&gt;
&lt;br /&gt;
Like the ''k''-d tree, the [https://en.wikipedia.org/wiki/Octree octree] is a hierarchical tree data structure useful for searches, but also for compression or downsampling. Each octree node (called a ''voxel'', a term that will sound familiar to users of 3D engines, and which could be described as a &amp;quot;3D pixel&amp;quot;, that is, a cube) has either eight children or no children. The root node describes a cubic bounding box that encapsulates all points. At every level, it is subdivided by a factor of 2, which results in an increased voxel resolution. That way, we can selectively give more resolution to certain zones.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Octree.png|thumb|center|700px|Depiction of an octree (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
Downsampling will be covered in a different section, so now I will show you how to perform searches with an octree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree object, with a resolution of 128 (the resolution parameter&lt;br /&gt;
	// is the side length of the voxels at the lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudSearch&amp;lt;pcl::PointXYZ&amp;gt; octree(128);&lt;br /&gt;
	octree.setInputCloud(cloud);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	// We will find all neighbors within the same voxel as this point.&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices;&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (octree.voxelSearch(point, pointIndices))&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors in the same voxel:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of the point.&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances;&lt;br /&gt;
	if (octree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	// The point DOES have to belong in the cloud.&lt;br /&gt;
	if (octree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, the code is mostly interchangeable with that of the ''k''-d tree.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree.php Spatial Partitioning and Search Operations with Octrees]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_octree.html pcl::search::Octree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_search.html pcl::octree::OctreePointCloudSearch]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Compressing/Decompressing===&lt;br /&gt;
&lt;br /&gt;
Octrees can also be used for compression. This is the code needed to compress and then decompress a point cloud with PCL:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/compression/octree_pointcloud_compression.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr decompressedCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree compressor object.&lt;br /&gt;
	// Check /usr/include/pcl-&amp;lt;version&amp;gt;/pcl/compression/compression_profiles.h for more profiles.&lt;br /&gt;
	// The second parameter enables the output of information.&lt;br /&gt;
	pcl::io::OctreePointCloudCompression&amp;lt;pcl::PointXYZ&amp;gt; octreeCompression(pcl::io::MED_RES_ONLINE_COMPRESSION_WITHOUT_COLOR, true);&lt;br /&gt;
	// Stringstream that will hold the compressed cloud.&lt;br /&gt;
	std::stringstream compressedData;&lt;br /&gt;
&lt;br /&gt;
	// Compress the cloud (you would save the stream to disk).&lt;br /&gt;
	octreeCompression.encodePointCloud(cloud, compressedData);&lt;br /&gt;
&lt;br /&gt;
	// Decompress the cloud.&lt;br /&gt;
	octreeCompression.decodePointCloud(compressedData, decompressedCloud);&lt;br /&gt;
&lt;br /&gt;
	// Display the decompressed cloud.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Octree compression&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(decompressedCloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeCompression.png | Original point cloud.&lt;br /&gt;
File:AfterCompression.png | Point cloud retrieved after octree compression.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the process has reduced the number of points in the cloud. The default parameters of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;OctreePointCloudCompression&amp;quot;&amp;lt;/span&amp;gt; object establish an octree resolution (at the lowest level) of 1cm, and a coordinate precision of 1mm. These values can of course be changed to suit your needs; simplifying the cloud in this way is a useful operation that we will cover later.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Resolution, Compression profile&lt;br /&gt;
* '''Output''': Compressed stream&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/compression.php Point Cloud Compression]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1io_1_1_octree_point_cloud_compression.html pcl::io::OctreePointCloudCompression]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_compress.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Detecting changes===&lt;br /&gt;
&lt;br /&gt;
PCL has an octree-based object that allows you to detect changes between two point clouds (for example, two clouds taken one after another with an OpenNI device). This object stores the trees of both clouds at the same time and compares them, returning a list of indices of the points that are new in the second cloud (that is, the ones that did not exist in the first):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Change detector object, with a resolution of 128 (the resolution parameter&lt;br /&gt;
	// is the side length of the voxels at the lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudChangeDetector&amp;lt;pcl::PointXYZ&amp;gt; octree(128.0f);&lt;br /&gt;
&lt;br /&gt;
	// Add cloudA to the octree.&lt;br /&gt;
	octree.setInputCloud(cloudA);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
	// The change detector object is able to store two clouds at the same time;&lt;br /&gt;
	// with this we can reset the buffer but keep the previous tree saved.&lt;br /&gt;
	octree.switchBuffers();&lt;br /&gt;
	// Now, add cloudB to the octree like we did before.&lt;br /&gt;
	octree.setInputCloud(cloudB);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; newPoints;&lt;br /&gt;
	// Get a vector with the indices of all points that are new in cloud B,&lt;br /&gt;
	// when compared with the ones that existed in cloud A.&lt;br /&gt;
	octree.getPointIndicesFromNewVoxels(newPoints);&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; newPoints.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point (&amp;quot; &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].x &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].y &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].z&lt;br /&gt;
				  &amp;lt;&amp;lt; &amp;quot;) was not in cloud A but is in cloud B&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; newPoints.size() &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
} &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': First cloud, Second cloud, Resolution&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree_change.php Spatial change detection on unorganized point cloud data]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_change_detector.html pcl::octree::OctreePointCloudChangeDetector]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_change_detection.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Filtering=&lt;br /&gt;
&lt;br /&gt;
The raw point cloud, as returned from the sensor, usually needs some kind of polishing before being handed to processing algorithms. Normally, this is due to the low precision of the sensor, which leads to noise, incorrect measurements, holes in surfaces... Sometimes it is just the opposite: the cloud has so many points that processing it would take forever. Luckily, several filtering methods exist to &amp;quot;fix&amp;quot; many recurring errors in point clouds.&lt;br /&gt;
&lt;br /&gt;
==Passthrough filter==&lt;br /&gt;
&lt;br /&gt;
A passthrough filter removes from the cloud any point whose value does not fall within a certain user-given range. For example, if you wanted to discard all points farther away than, say, 3 meters, you would run the filter on the Z coordinate with a range of [0, 3]. This filter can be useful to discard unneeded objects from the cloud, but you may have to adopt a different reference frame if the default one (relative to the sensor) is not suitable. For example, filtering on the Y value to remove all points not sitting on a table will yield unwanted results if the camera was at an odd angle.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/passthrough.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::PassThrough&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Filter out all points with Z values not in the [0-2] range.&lt;br /&gt;
	filter.setFilterFieldName(&amp;quot;z&amp;quot;);&lt;br /&gt;
	filter.setFilterLimits(0.0, 2.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforePassthrough.png | Original point cloud.&lt;br /&gt;
File:AfterPassthrough.png | Point cloud after applying a passthrough filter with Z in [0,2].&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setFilterLimitsNegative()&amp;quot;&amp;lt;/span&amp;gt; function (not seen in the example) will tell the filter object whether to return the points inside the range (&amp;quot;false&amp;quot;, the default value) or outside it (&amp;quot;true&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Filter field, Filter limits, [Negative mode]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/passthrough.php Filtering a PointCloud using a PassThrough filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_pass_through.html pcl::PassThrough]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_passthrough_filter.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conditional removal==&lt;br /&gt;
&lt;br /&gt;
Another way of getting the exact same result as in the previous example is to use a conditional removal filter. We can build any kind of condition we want over the values of the points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/conditional_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We must build a condition.&lt;br /&gt;
	// An &amp;quot;and&amp;quot; condition requires all tests to pass; &amp;quot;or&amp;quot; conditions are also available.&lt;br /&gt;
	pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;::Ptr condition(new pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// First test, the point's Z value must be greater than (GT) 0.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::GT, 0.0)));&lt;br /&gt;
	// Second test, the point's Z value must be less than (LT) 2.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::LT, 2.0)));&lt;br /&gt;
	// Checks available: GT, GE, LT, LE, EQ.&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::ConditionalRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setCondition(condition);&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// If true, points that do not pass the filter will be set to a certain value (default NaN).&lt;br /&gt;
	// If false, they will be just removed, but that could break the structure of the cloud&lt;br /&gt;
	// (organized clouds are clouds taken from camera-like sensors that return a matrix-like image).&lt;br /&gt;
	filter.setKeepOrganized(true);&lt;br /&gt;
	// If keep-organized was set to true, points that fail the test will have their coordinates set to this value.&lt;br /&gt;
	filter.setUserFilterValue(0.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Condition, [Keep organized]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_conditional_removal.html pcl::ConditionalRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_conditional_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Outlier removal==&lt;br /&gt;
&lt;br /&gt;
Outliers are lonely points spread here and there in the cloud, like annoying mosquitoes. They are a product of the sensor's inaccuracy, which registers measurements where there should not be any. Outliers are considered undesired noise because they can introduce errors in calculations such as normal estimation. Hence, trimming them out of the cloud will not only make computations faster (because the fewer points there are, the less time they take, as we will see with downsampling) but also help us get more precise values.&lt;br /&gt;
&lt;br /&gt;
===Radius-based===&lt;br /&gt;
&lt;br /&gt;
The radius-based outlier removal is the simplest method of all. You must specify a search radius and the minimum number of neighbors that a point must have to avoid being labelled as an outlier. The algorithm will then iterate through all points (which can be slow if the cloud is big) and perform the check: if fewer than that number of neighbors are found within the radius, the point is removed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/radius_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::RadiusOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Every point must have 10 neighbors within 15cm, or it will be removed.&lt;br /&gt;
	filter.setRadiusSearch(0.15);&lt;br /&gt;
	filter.setMinNeighborsInRadius(10);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All PCL classes that inherit from &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::FilterIndices&amp;quot;&amp;lt;/span&amp;gt; (like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::RadiusOutlierRemoval&amp;quot;&amp;lt;/span&amp;gt;) provide the function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt;, which, if set to true, makes the filter return the points that did NOT pass the test instead.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius, Neighbor count&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_radius_outlier_removal.html pcl::RadiusOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_radius_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Statistical===&lt;br /&gt;
&lt;br /&gt;
The statistical outlier removal process is a bit more refined. First, for every point, the mean distance to its ''K'' nearest neighbors is computed. Then, if we assume that the result is a [https://en.wikipedia.org/wiki/Normal_distribution normal (Gaussian) distribution] with a mean ''μ'' and a standard deviation ''σ'', we can deem it safe to remove all points whose mean distance falls outside the interval defined by the global mean and standard deviation. Basically, it runs a statistical analysis of the distances between neighboring points, and trims all that are not considered &amp;quot;normal&amp;quot; (you define what &amp;quot;normal&amp;quot; is with the parameters of the algorithm).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/statistical_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::StatisticalOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Set number of neighbors to consider to 50.&lt;br /&gt;
	filter.setMeanK(50);&lt;br /&gt;
	// Set standard deviation multiplier to 1.&lt;br /&gt;
	// Points with a distance larger than 1 standard deviation of the mean distance will be outliers.&lt;br /&gt;
	filter.setStddevMulThresh(1.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can make use of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; with this filter too.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Neighbor count, Deviation multiplier&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/statistical_outlier.php Removing outliers using a StatisticalOutlierRemoval filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_statistical_outlier_removal.html pcl::StatisticalOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_statistical_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Resampling==&lt;br /&gt;
&lt;br /&gt;
Resampling means changing the number of points of the cloud, either by reducing (downsampling) or increasing (upsampling). Both have a purpose, as we will see.&lt;br /&gt;
&lt;br /&gt;
===Downsampling===&lt;br /&gt;
&lt;br /&gt;
A sensor like Kinect or Xtion produces a cloud with 307200 (640x480) points. Believe it or not, that is a lot of information, and the new Kinect is already out there, working at a higher resolution. Keep in mind that, if we were to perform a simple operation on every point of a cloud, it would be ''O(n)'', n being the number of points. If we had to compare every point with its ''k'' nearest neighbors, it would be ''O(nk)''. You may have noticed by now that some of the algorithms I showed you take several seconds to run on your PC; now you know why.&lt;br /&gt;
&lt;br /&gt;
Optimization techniques are being implemented as I write this to allow PCL to employ the computer's GPU (graphics card) to perform the computations using [https://en.wikipedia.org/wiki/CUDA CUDA]. Many classes already provide an alternative in the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::gpu&amp;quot;&amp;lt;/span&amp;gt; namespace. This offers dramatic improvements in speed, but it is not always a possibility.&lt;br /&gt;
&lt;br /&gt;
Another option is to reduce the complexity of the cloud, that is, to remove all points that are not needed for the purpose. There are several ways to do this. You could trim all outliers from the cloud, then perform segmentation to extract only the parts that interest you. Another common operation is downsampling, which will give you back a cloud that is, for all intents and purposes, equivalent to the original one, despite having fewer points.&lt;br /&gt;
&lt;br /&gt;
Downsampling is done via ''voxelization''. I already explained what voxels were in the [[#Octree | octree section]]. The cloud is divided in multiple cube-shaped regions with the desired resolution. Then, all points inside every voxel are processed so only one remains. The simplest way would be to randomly select one of them, but a more accurate approach would be to compute the [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Computing_the_centroid|centroid]], which is the point whose coordinates are the mean values of all the points that belong to the voxel.&lt;br /&gt;
&lt;br /&gt;
If done with sensible parameters, downsampling will yield results that are precise enough to work with, at the cost of less CPU power.&lt;br /&gt;
&lt;br /&gt;
====Voxel grid====&lt;br /&gt;
&lt;br /&gt;
A straightforward way of downsampling a cloud:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/voxel_grid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::VoxelGrid&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// We set the size of every voxel to be 1x1x1cm&lt;br /&gt;
	// (only one point per every cubic centimeter will survive).&lt;br /&gt;
	filter.setLeafSize(0.01f, 0.01f, 0.01f);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeDownsampling.png | Original point cloud.&lt;br /&gt;
File:AfterDownsampling.png | Point cloud after being downsampled with a leaf size of 1cm.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Downsampled cloud&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/voxel_grid.php Downsampling a PointCloud using a VoxelGrid filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_voxel_grid.html pcl::VoxelGrid]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Uniform sampling====&lt;br /&gt;
&lt;br /&gt;
This class does basically the same, but it outputs the indices of the points that survived the process. It is a common way of choosing keypoints when computing [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/uniform_sampling.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Uniform sampling object.&lt;br /&gt;
	pcl::UniformSampling&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// We set the size of every voxel to be 1x1x1cm&lt;br /&gt;
	// (only one point per every cubic centimeter will survive).&lt;br /&gt;
	filter.setRadiusSearch(0.01f);&lt;br /&gt;
	// We need an additional object to store the indices of surviving points.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt; keypointIndices;&lt;br /&gt;
&lt;br /&gt;
	filter.compute(keypointIndices);&lt;br /&gt;
	pcl::copyPointCloud(*cloud, keypointIndices.points, *filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_uniform_sampling.html pcl::UniformSampling]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample_uniform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Upsampling===&lt;br /&gt;
&lt;br /&gt;
Upsampling is a form of [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Reconstruction| surface reconstruction]], but I cover it here. When you have fewer points than you would like, upsampling can help you recover the original surface(s) by [https://en.wikipedia.org/wiki/Upsampling interpolating] the ones you currently have, which is just a sophisticated guess. The result will never be a hundred percent accurate, but sometimes it is the only choice. So, before downsampling your clouds, be sure to keep a copy of the original data!&lt;br /&gt;
&lt;br /&gt;
PCL uses the [https://en.wikipedia.org/wiki/Moving_least_squares Moving Least Squares (MLS)] algorithm for this. The code looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filtering object.&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Object for searching (the shared pointer must be initialized, or the filter will crash).&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// Upsampling method. Other possibilities are DISTINCT_CLOUD, RANDOM_UNIFORM_DENSITY&lt;br /&gt;
	// and VOXEL_GRID_DILATION. NONE disables upsampling. Check the API for details.&lt;br /&gt;
	filter.setUpsamplingMethod(pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt;::SAMPLE_LOCAL_PLANE);&lt;br /&gt;
	// Radius around each point, where the local plane will be sampled.&lt;br /&gt;
	filter.setUpsamplingRadius(0.03);&lt;br /&gt;
	// Sampling step size. Bigger values will yield less (if any) new points.&lt;br /&gt;
	filter.setUpsamplingStepSize(0.02);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:PCL_upsampling_original.png | Original point cloud (coffee mug on a table).&lt;br /&gt;
File:PCL_upsampling_downsampled.png | Downsampled with a leaf size of 1cm.&lt;br /&gt;
File:PCL_upsampling_local_plane.png | Upsampled using SAMPLE_LOCAL_PLANE (upsampling radius 3cm, step size 2cm).&lt;br /&gt;
File:PCL_upsampling_uniform_density.png | Upsampled using RANDOM_UNIFORM_DENSITY (target point density of 300 per 3cm radius). &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html#ac29ad97b98353d64ce64e2ff924f7d20 setUpsamplingMethod()] API to see all upsampling methods available, and the parameters you need to set for them to work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, Upsampling method, [Upsampling parameters]&lt;br /&gt;
* '''Output''': Upsampled cloud&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_upsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Reconstruction=&lt;br /&gt;
&lt;br /&gt;
==Surface smoothing==&lt;br /&gt;
&lt;br /&gt;
As stated, depth sensors are not very accurate, and the resulting clouds have measurement errors, outliers, holes in surfaces, etc. Surfaces can be reconstructed by means of an algorithm that iterates through all points and interpolates the data, trying to guess what the original surface looked like. Like with upsampling, PCL uses the [https://en.wikipedia.org/wiki/Moving_least_squares MLS] algorithm and class. Performing this step is important, because the resulting cloud's normals will be more accurate.&lt;br /&gt;
&lt;br /&gt;
Surface smoothing is easily done with the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// The output will also contain the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr smoothedCloud(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Smoothing object (we choose what point types we want as input and output).&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointNormal&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// If true, the surface and normal are approximated using a polynomial estimation&lt;br /&gt;
	// (if false, only a tangent one).&lt;br /&gt;
	filter.setPolynomialFit(true);&lt;br /&gt;
	// We can tell the algorithm to also compute smoothed normals (optional).&lt;br /&gt;
	filter.setComputeNormals(true);&lt;br /&gt;
	// kd-tree object for performing searches (the shared pointer must be initialized).&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*smoothedCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output cloud will also contain the improved normals of the smoothed cloud.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, [Polynomial fit], [Compute normals]&lt;br /&gt;
* '''Output''': Smoothed cloud, [Smoothed normals]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/resampling.php Smoothing and normal estimation based on polynomial reconstruction]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_surface_smoothing.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_troubleshooting&amp;diff=4746</id>
		<title>PCL/OpenNI troubleshooting</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_troubleshooting&amp;diff=4746"/>
				<updated>2015-11-01T23:29:07Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This page tries to cover all of the issues I ran into while trying to get my Kinect device ready for developing with PCL. If you have any questions regarding this subject or know of a different bug, feel free to send me an email describing it, and I will add it to this section. Just remember to try Google first!&lt;br /&gt;
&lt;br /&gt;
=CUDA=&lt;br /&gt;
&lt;br /&gt;
==Unsupported configuration==&lt;br /&gt;
&lt;br /&gt;
When you run the CUDA installer on a fairly modern Linux, you may get a message saying that you are trying to install it on an unsupported configuration. What this means is that CUDA can not work with the newest versions of the GCC compiler. As of version 7, the highest compatible GCC is 4.8.2 for Ubuntu 14.04 LTS and 4.9.1 for 14.10 (12.04 LTS has been deprecated). Check the compatibility table [http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-linux/#system-requirements here].&lt;br /&gt;
&lt;br /&gt;
In order to solve this, you must install an older version, like 4.6 for Ubuntu 12.04, and set it as the &amp;quot;official&amp;quot; alternative for compiling (change 4.7 for your current version):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install gcc-4.6 g++-4.6&lt;br /&gt;
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.6 60 --slave /usr/bin/g++ g++ /usr/bin/g++-4.6 &lt;br /&gt;
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.7 40 --slave /usr/bin/g++ g++ /usr/bin/g++-4.7&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, run the following command and choose GCC 4.6 as alternative:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo update-alternatives --config gcc&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whenever you want to switch back to GCC 4.7 or newer, re-run the previous command. Please note that both the C and C++ compilers are changed, not just C.&lt;br /&gt;
&lt;br /&gt;
==Unsupported GPU architecture==&lt;br /&gt;
&lt;br /&gt;
When compiling PCL with CUDA support enabled, you may get a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;nvcc fatal : Unsupported gpu architecture 'compute_52'&amp;quot;''&amp;lt;/span&amp;gt; error or similar. This means that your current graphic card does not support some of the capabilities of the most modern versions of CUDA, and the SDK can not compile the binaries. You can see the compatibility info for your card model [https://en.wikipedia.org/wiki/CUDA#Supported_GPUs here].&lt;br /&gt;
&lt;br /&gt;
In order to solve this, you must clean your PCL build folder by erasing all content:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;rm -rf ./*&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Configure again with CMake:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake ..&lt;br /&gt;
ccmake .&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And look for the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CUDA_ARCH_BIN&amp;quot;&amp;lt;/span&amp;gt; option. Edit it and remove any architectures that are not compatible with your GPU, like for example 5.2 (in the above case). Then, configure and generate as usual with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''G'''&amp;lt;/span&amp;gt;, and compile.&lt;br /&gt;
&lt;br /&gt;
=Apt=&lt;br /&gt;
&lt;br /&gt;
==Unmet dependencies==&lt;br /&gt;
&lt;br /&gt;
If you get an error like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Depends: X but it is not going to be installed&amp;quot;''&amp;lt;/span&amp;gt; while installing a package through apt (''X'' being the name of another package), it probably means that there is a conflict between two repositories offering different versions of the same package (e.g. PCL and ROS ones). To solve this, you will have to completely remove one of the repositories and stick with the other.&lt;br /&gt;
&lt;br /&gt;
Another possibility is that the package in question is just broken, maybe because it is deprecated and pending removal, or because you are using a very recent release of Ubuntu and there is no version for it. I used to get this error a lot when trying to install OpenNI, but I solved it by following the advice above. Also, you can try compiling everything from source.&lt;br /&gt;
&lt;br /&gt;
==Trying to overwrite X, which is also in package Y==&lt;br /&gt;
&lt;br /&gt;
During the installation of the PrimeSense drivers (using the ROS approach), you may get an error that says &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;trying to overwrite '/etc/openni/GlobalDefaults.ini', which is also in package libopenni-sensor-pointclouds0&amp;quot;''&amp;lt;/span&amp;gt;. This is actually a common problem that can happen with other packages. It means that a file included in one of the new packages is also provided by another package that is already installed. In this case, it is the &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''/etc/openni/GlobalDefaults.ini''&amp;lt;/span&amp;gt; file, which is already included in the package &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-sensor-pointclouds0''&amp;lt;/span&amp;gt;, making the installation of &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-sensor-primesense0''&amp;lt;/span&amp;gt; fail.&lt;br /&gt;
&lt;br /&gt;
In order to fix it, run the following command to manually retry the installation (change the file name as necessary):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo dpkg -i --force-overwrite /var/cache/apt/archives/libopenni-sensor-primesense0_5.1.0.41-3+trusty1_amd64.deb&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This time, you will be prompted whether or not you want to overwrite the file. Choose yes, and then use the following command to make sure there are no packages left pending:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get -f install&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=CMake=&lt;br /&gt;
&lt;br /&gt;
==VTK warnings==&lt;br /&gt;
&lt;br /&gt;
When running CMake to configure your PCL project before building it, you may get a lot of messages that say something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=make&amp;gt;-- The imported target &amp;quot;vtkPythonCore&amp;quot; references the file&lt;br /&gt;
    &amp;quot;/usr/lib/libvtkPythonCore.so.5.8.0&amp;quot;&lt;br /&gt;
but this file does not exist.  Possible reasons include:&lt;br /&gt;
* The file was deleted, renamed, or moved to another location.&lt;br /&gt;
* An install or uninstall procedure did not complete successfully.&lt;br /&gt;
* The installation package was faulty and contained&lt;br /&gt;
    &amp;quot;/usr/lib/vtk-5.8/VTKTargets-release.cmake&amp;quot;&lt;br /&gt;
but not all the files it references.&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The program will build anyway, so this is not a big issue. But if you find the messages annoying, you can get rid of them by installing two packages:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install python-vtk libvtk-java&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==QVTK_LIBRARY_DEBUG or QVTK_LIBRARY_RELEASE not found==&lt;br /&gt;
&lt;br /&gt;
If this error comes up when trying to configure the CMake build of PCL trunk, then you must install these two packages:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install libvtk6-dev libqt5opengl5-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Doing this will replace VTK 5 with version 6 and Qt 4 with version 5, which are the ones now officially supported by PCL 1.8.&lt;br /&gt;
&lt;br /&gt;
=GCC=&lt;br /&gt;
&lt;br /&gt;
==No such instruction==&lt;br /&gt;
&lt;br /&gt;
When building PCL, it is possible that compilation will fail with a message like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;no such instruction: `vfmadd312ss 32(%rsp),%xmm15,%xmm7'&amp;quot;''&amp;lt;/span&amp;gt;. What this means is that you have a combination of a brand new i7 CPU and an outdated compiler (such as GCC 4.6, which comes with Ubuntu 12.04). You could try updating to GCC 4.8, which is a bit complicated, but there is an easier solution. Configure the project again with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;ccmake .&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, you must press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''T'''&amp;lt;/span&amp;gt; to toggle advanced mode. Then, look for the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CMAKE_C_FLAGS&amp;quot;&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CMAKE_CXX_FLAGS&amp;quot;&amp;lt;/span&amp;gt; options. Edit them and add &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;-march=corei7&amp;quot;&amp;lt;/span&amp;gt; (by default, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;-march=native&amp;quot;&amp;lt;/span&amp;gt; is set in the Makefiles). Configure and generate (&amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''G'''&amp;lt;/span&amp;gt;), and resume the compilation.&lt;br /&gt;
&lt;br /&gt;
=PCL=&lt;br /&gt;
&lt;br /&gt;
==Point type not properly registered==&lt;br /&gt;
&lt;br /&gt;
You will get this error if you try to save or load a descriptor that does not have its own defined custom PointType (and uses &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::Histogram&amp;lt;&amp;gt;&amp;quot;&amp;lt;/span&amp;gt;). See [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Saving_and_loading|this]] for the solution.&lt;br /&gt;
&lt;br /&gt;
=OpenNI=&lt;br /&gt;
&lt;br /&gt;
==No device found / No devices connected==&lt;br /&gt;
&lt;br /&gt;
===Microsoft Kinect===&lt;br /&gt;
&lt;br /&gt;
If your system does not seem to be able to use Kinect, despite it being correctly plugged in and listed by &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''lsusb''&amp;lt;/span&amp;gt;, then you may have a problem with the installed drivers. I eventually discovered that, in 32-bit Ubuntu, it was necessary to downgrade &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''openni-dev''&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''ps-engine''&amp;lt;/span&amp;gt; to versions 1.3.2.1 and 5.0.3.3 respectively, using your package manager.&lt;br /&gt;
&lt;br /&gt;
If you use &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''Synaptic''&amp;lt;/span&amp;gt;, the instructions are simple. Select both packages, press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+E'''&amp;lt;/span&amp;gt; to choose the version you want installed, and then apply the changes, agreeing to overwrite any files. Then, to prevent the packages from being restored in the next upgrade, select each one and click the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''&amp;quot;Lock Version&amp;quot;''&amp;lt;/span&amp;gt; option in the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''&amp;quot;Package&amp;quot;''&amp;lt;/span&amp;gt; menu. The packages will then be picked up by the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Status: Pinned&amp;quot;&amp;lt;/span&amp;gt; filter on the left. Reboot, and everything should work.&lt;br /&gt;
&lt;br /&gt;
[[Image:Synaptic_Kinect_lock.png|thumb|center|700px|Downgraded and locked ''openni-dev'' and ''ps-engine'' in Synaptic package manager.]]&lt;br /&gt;
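&lt;br /&gt;
If you prefer the command line, the same downgrade and version lock can be done with ''apt-get'' and ''apt-mark''. This is only a sketch: the exact version strings available in your repositories may differ, so check them first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;# See which versions are available (the version strings below are examples):&lt;br /&gt;
apt-cache policy openni-dev ps-engine&lt;br /&gt;
# Install the specific older versions:&lt;br /&gt;
sudo apt-get install openni-dev=1.3.2.1 ps-engine=5.0.3.3&lt;br /&gt;
# Prevent them from being upgraded again:&lt;br /&gt;
sudo apt-mark hold openni-dev ps-engine&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;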
&lt;br /&gt;
===ASUS Xtion PRO===&lt;br /&gt;
&lt;br /&gt;
In order to get our Xtion device working with OpenNI, we have to modify a configuration file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo nano /etc/openni/GlobalDefaults.ini&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you installed OpenNI manually instead of using the repositories, the file is located in a different path:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo nano /usr/etc/primesense/GlobalDefaults.ini&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find (&amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+W'''&amp;lt;/span&amp;gt;) the line that says &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;;UsbInterface=2&amp;quot;&amp;lt;/span&amp;gt; and uncomment it (remove the semicolon at the start). Save the file (&amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+O'''&amp;lt;/span&amp;gt;) and close (&amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+X'''&amp;lt;/span&amp;gt;). Reboot your computer, and you should be able to use your Xtion device with OpenNI/PCL. Whenever you want to switch back to a Kinect, you must comment the line, or else you will get the same &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;No devices connected&amp;quot;''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;USB interface is not supported!&amp;quot;''&amp;lt;/span&amp;gt; error.&lt;br /&gt;
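&lt;br /&gt;
If you prefer not to edit the file by hand, the same change can be made with a ''sed'' one-liner (adjust the path if you installed OpenNI manually):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;# Uncomment the UsbInterface line for the Xtion:&lt;br /&gt;
sudo sed -i 's/^;UsbInterface=2/UsbInterface=2/' /etc/openni/GlobalDefaults.ini&lt;br /&gt;
# Comment it again when switching back to a Kinect:&lt;br /&gt;
sudo sed -i 's/^UsbInterface=2/;UsbInterface=2/' /etc/openni/GlobalDefaults.ini&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;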
&lt;br /&gt;
If you still get this error, or have another problem like an inconsistent framerate or freeze-ups, check that your Xtion is not plugged into a USB 3.0 port. Sadly, the firmware has a bug that prevents the device from working with 3.0 ports. Either switch to a 2.0 port (if your computer does not have one, you can use a [https://en.wikipedia.org/wiki/USB_hub hub]) or try applying [https://github.com/nh2/asus-xtion-fix this fix].&lt;br /&gt;
&lt;br /&gt;
==glInit() not found / OpenGL issues==&lt;br /&gt;
&lt;br /&gt;
If you get this error while compiling OpenNI, and you are sure you have installed all dependencies, then this is due to faulty Makefiles that do not link against GL libraries where they should.&lt;br /&gt;
&lt;br /&gt;
Go to the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/Build/Samples/''&amp;lt;/span&amp;gt; subdirectory and find the samples that are giving errors (look at the compiler output to find them). For each one, enter its subdirectory and open the &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''Makefile''&amp;lt;/span&amp;gt; you will find there. Go to the line that says:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;USED_LIBS += glut&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And change it to the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;USED_LIBS += glut GL&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The samples should now compile.&lt;br /&gt;
&lt;br /&gt;
==Permission denied==&lt;br /&gt;
&lt;br /&gt;
If you get this error related to some header (.h) in &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/include/''&amp;lt;/span&amp;gt;, it means that OpenNI's installer set the wrong permissions when moving the files there, so the compiler cannot access them. Locate the faulty file and change its permissions to allow everyone to read it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo chmod +r -R /usr/include/&amp;lt;file&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, a very drastic (and not recommended) way of fixing this for OpenNI's headers would be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo chmod 0777 -R /usr/include/ni/&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_troubleshooting&amp;diff=4745</id>
		<title>PCL/OpenNI troubleshooting</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_troubleshooting&amp;diff=4745"/>
				<updated>2015-11-01T23:28:23Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This page tries to cover all of the issues I ran into while trying to get my Kinect device ready for developing with PCL. If you have any questions regarding this subject or know of a different bug, feel free to send me an email describing it, and I will add it to this section. Just remember to try Google first!&lt;br /&gt;
&lt;br /&gt;
=CUDA=&lt;br /&gt;
&lt;br /&gt;
==Unsupported configuration==&lt;br /&gt;
&lt;br /&gt;
When you run the CUDA installer on a fairly modern Linux, you may get a message saying that you are trying to install it on an unsupported configuration. What this means is that CUDA can not work with the newest versions of the GCC compiler. As of version 7, the highest compatible GCC is 4.8.2 for Ubuntu 14.04 LTS and 4.9.1 for 14.10 (12.04 LTS has been deprecated). Check the compatibility table [http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-linux/#system-requirements here].&lt;br /&gt;
&lt;br /&gt;
In order to solve this, you must install an older version, like 4.6 for Ubuntu 12.04, and set it as the &amp;quot;official&amp;quot; alternative for compiling (replace 4.7 with your current version):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install gcc-4.6 g++-4.6&lt;br /&gt;
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.6 60 --slave /usr/bin/g++ g++ /usr/bin/g++-4.6 &lt;br /&gt;
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.7 40 --slave /usr/bin/g++ g++ /usr/bin/g++-4.7&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, run the following command and choose GCC 4.6 as alternative:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo update-alternatives --config gcc&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whenever you want to switch back to GCC 4.7 or newer, re-run the previous command. Please mind that both the C and C++ compilers are changed, not just C.&lt;br /&gt;
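&lt;br /&gt;
You can check which compiler is currently active at any time with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;gcc --version&lt;br /&gt;
g++ --version&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;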
&lt;br /&gt;
==Unsupported GPU architecture==&lt;br /&gt;
&lt;br /&gt;
When compiling PCL with CUDA support enabled, you may get a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;nvcc fatal : Unsupported gpu architecture 'compute_52'&amp;quot;''&amp;lt;/span&amp;gt; error or similar. This means that your graphics card does not support some of the capabilities of the most modern versions of CUDA, so the SDK cannot compile the binaries. You can see the compatibility info for your card model [https://en.wikipedia.org/wiki/CUDA#Supported_GPUs here].&lt;br /&gt;
&lt;br /&gt;
In order to solve this, you must clean your PCL build folder by erasing all content:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;rm -rf ./*&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Configure again with CMake:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake ..&lt;br /&gt;
ccmake .&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And look for the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CUDA_ARCH_BIN&amp;quot;&amp;lt;/span&amp;gt; option. Edit it and remove any architectures that are not compatible with your GPU, like for example 5.2 (in the above case). Then, configure and generate as usual with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''G'''&amp;lt;/span&amp;gt;, and compile.&lt;br /&gt;
&lt;br /&gt;
=Apt=&lt;br /&gt;
&lt;br /&gt;
==Unmet dependencies==&lt;br /&gt;
&lt;br /&gt;
If you get an error like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Depends: X but it is not going to be installed&amp;quot;''&amp;lt;/span&amp;gt; while installing a package through apt (''X'' being the name of another package), it probably means that there is a conflict between two repositories offering different versions of the same package (e.g. PCL and ROS ones). To solve this, you will have to completely remove one of the repositories and stick with the other.&lt;br /&gt;
&lt;br /&gt;
Another possibility is that the package in question is just broken, maybe because it is deprecated and pending removal, or because you are using a very recent release of Ubuntu and there is no version for it. I used to get this error a lot when trying to install OpenNI, but I solved it by following the advice above. Also, you can try compiling everything from source.&lt;br /&gt;
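&lt;br /&gt;
As a sketch of how you might locate and remove a conflicting repository (the package and file names below are examples, yours will differ):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;# See which repositories offer the package and at what versions:&lt;br /&gt;
apt-cache policy libpcl-dev&lt;br /&gt;
# List the configured extra repositories:&lt;br /&gt;
ls /etc/apt/sources.list.d/&lt;br /&gt;
# Remove the offending one (example file name) and refresh the index:&lt;br /&gt;
sudo rm /etc/apt/sources.list.d/example-repo.list&lt;br /&gt;
sudo apt-get update&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;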
&lt;br /&gt;
==Trying to overwrite X, which is also in package Y==&lt;br /&gt;
&lt;br /&gt;
During the installation of the PrimeSense drivers (using the ROS approach), you may get an error that says &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;trying to overwrite '/etc/openni/GlobalDefaults.ini', which is also in package libopenni-sensor-pointclouds0&amp;quot;''&amp;lt;/span&amp;gt;. This is actually a common problem that can happen with other packages as well. It means that a file included in one of the new packages is also provided by another package that is already installed. In this case, it is the &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''/etc/openni/GlobalDefaults.ini''&amp;lt;/span&amp;gt; file, which is already included in the package &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-sensor-pointclouds0''&amp;lt;/span&amp;gt;, making the installation of &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-sensor-primesense0''&amp;lt;/span&amp;gt; fail.&lt;br /&gt;
&lt;br /&gt;
In order to fix it, run the following command to manually retry the installation (change the file name as necessary):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo dpkg -i --force-overwrite /var/cache/apt/archives/libopenni-sensor-primesense0_5.1.0.41-3+trusty1_amd64.deb&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This time, you will be prompted whether or not you want to overwrite the file. Choose yes, and then use the following command to make sure there are no packages left pending:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get -f install&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=CMake=&lt;br /&gt;
&lt;br /&gt;
==VTK warnings==&lt;br /&gt;
&lt;br /&gt;
When running CMake to configure your PCL project before building it, you may get a lot of messages that say something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=make&amp;gt;-- The imported target &amp;quot;vtkPythonCore&amp;quot; references the file&lt;br /&gt;
    &amp;quot;/usr/lib/libvtkPythonCore.so.5.8.0&amp;quot;&lt;br /&gt;
but this file does not exist.  Possible reasons include:&lt;br /&gt;
* The file was deleted, renamed, or moved to another location.&lt;br /&gt;
* An install or uninstall procedure did not complete successfully.&lt;br /&gt;
* The installation package was faulty and contained&lt;br /&gt;
    &amp;quot;/usr/lib/vtk-5.8/VTKTargets-release.cmake&amp;quot;&lt;br /&gt;
but not all the files it references.&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The program will build anyway, so this is not a big issue. But if you find the messages annoying, you can fix it by installing two packages:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install python-vtk libvtk-java&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==QVTK_LIBRARY_DEBUG or QVTK_LIBRARY_RELEASE not found==&lt;br /&gt;
&lt;br /&gt;
If this error comes up when trying to configure the CMake build of PCL trunk, then you must install these two packages:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install libvtk6-dev libqt5opengl5-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Doing this will replace VTK 5 with version 6 and Qt 4 with version 5, which are the ones now officially supported by PCL 1.8.&lt;br /&gt;
&lt;br /&gt;
=GCC=&lt;br /&gt;
&lt;br /&gt;
==No such instruction==&lt;br /&gt;
&lt;br /&gt;
When building PCL, it is possible that compilation will fail with a message like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;no such instruction: `vfmadd312ss 32(%rsp),%xmm15,%xmm7'&amp;quot;''&amp;lt;/span&amp;gt;. What this means is that you have a combination of a brand new i7 CPU and an outdated compiler (such as GCC 4.6, which comes with Ubuntu 12.04). You could try updating to GCC 4.8, which is a bit complicated, but there is an easier solution. Configure the project again with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;ccmake .&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, you must press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''T'''&amp;lt;/span&amp;gt; to toggle advanced mode. Then, look for the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CMAKE_C_FLAGS&amp;quot;&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CMAKE_CXX_FLAGS&amp;quot;&amp;lt;/span&amp;gt; options. Edit them and add &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;-march=corei7&amp;quot;&amp;lt;/span&amp;gt; (by default, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;-march=native&amp;quot;&amp;lt;/span&amp;gt; is set in the Makefiles). Configure and generate (&amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''G'''&amp;lt;/span&amp;gt;), and resume the compilation.&lt;br /&gt;
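&lt;br /&gt;
Alternatively, you can set the same flags non-interactively from the command line. This is just a sketch; run it from your build directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake -DCMAKE_C_FLAGS=&amp;quot;-march=corei7&amp;quot; -DCMAKE_CXX_FLAGS=&amp;quot;-march=corei7&amp;quot; ..&lt;br /&gt;
make&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;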
&lt;br /&gt;
=PCL=&lt;br /&gt;
&lt;br /&gt;
==Point type not properly registered==&lt;br /&gt;
&lt;br /&gt;
You will get this error if you try to save or load a descriptor that does not have its own defined custom PointType (and uses &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::Histogram&amp;lt;&amp;gt;&amp;quot;&amp;lt;/span&amp;gt;). See [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Saving_and_loading|this]] for the solution.&lt;br /&gt;
&lt;br /&gt;
=OpenNI=&lt;br /&gt;
&lt;br /&gt;
==No device found / No devices connected==&lt;br /&gt;
&lt;br /&gt;
===Microsoft Kinect===&lt;br /&gt;
&lt;br /&gt;
If your system does not seem to be able to use the Kinect, despite it being correctly plugged in and listed by &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''lsusb''&amp;lt;/span&amp;gt;, you may have a problem with the installed drivers. I eventually discovered that, in 32-bit Ubuntu, it was necessary to use the package manager to downgrade &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''openni-dev''&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''ps-engine''&amp;lt;/span&amp;gt; to versions 1.3.2.1 and 5.0.3.3, respectively.&lt;br /&gt;
&lt;br /&gt;
If you use &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''Synaptic''&amp;lt;/span&amp;gt;, the instructions are simple. Select both packages, press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+E'''&amp;lt;/span&amp;gt; to choose the version you want installed, and then apply the changes, agreeing to overwrite any files. Then, to prevent the packages from being restored in the next upgrade, select each one and click the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''&amp;quot;Lock Version&amp;quot;''&amp;lt;/span&amp;gt; option in the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''&amp;quot;Package&amp;quot;''&amp;lt;/span&amp;gt; menu. The packages will then be picked up by the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Status: Pinned&amp;quot;&amp;lt;/span&amp;gt; filter on the left. Reboot, and everything should work.&lt;br /&gt;
&lt;br /&gt;
[[Image:Synaptic_Kinect_lock.png|thumb|center|700px|Downgraded and locked ''openni-dev'' and ''ps-engine'' in Synaptic package manager.]]&lt;br /&gt;
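&lt;br /&gt;
If you prefer the command line, the same downgrade and version lock can be done with ''apt-get'' and ''apt-mark''. This is only a sketch: the exact version strings available in your repositories may differ, so check them first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;# See which versions are available (the version strings below are examples):&lt;br /&gt;
apt-cache policy openni-dev ps-engine&lt;br /&gt;
# Install the specific older versions:&lt;br /&gt;
sudo apt-get install openni-dev=1.3.2.1 ps-engine=5.0.3.3&lt;br /&gt;
# Prevent them from being upgraded again:&lt;br /&gt;
sudo apt-mark hold openni-dev ps-engine&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;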
&lt;br /&gt;
===ASUS Xtion PRO===&lt;br /&gt;
&lt;br /&gt;
In order to get our Xtion device working with OpenNI, we have to modify a configuration file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo nano /etc/openni/GlobalDefaults.ini&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you installed OpenNI manually instead of using the repositories, the file is located in a different path:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo nano /usr/etc/primesense/GlobalDefaults.ini&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find (&amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+W'''&amp;lt;/span&amp;gt;) the line that says &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;;UsbInterface=2&amp;quot;&amp;lt;/span&amp;gt; and uncomment it (remove the semicolon at the start). Save the file (&amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+O'''&amp;lt;/span&amp;gt;) and close (&amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+X'''&amp;lt;/span&amp;gt;). Reboot your computer, and you should be able to use your Xtion device with OpenNI/PCL. Whenever you want to switch back to a Kinect, you must comment the line, or else you will get the same &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;No devices connected&amp;quot;''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;USB interface is not supported!&amp;quot;''&amp;lt;/span&amp;gt; error.&lt;br /&gt;
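&lt;br /&gt;
If you prefer not to edit the file by hand, the same change can be made with a ''sed'' one-liner (adjust the path if you installed OpenNI manually):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;# Uncomment the UsbInterface line for the Xtion:&lt;br /&gt;
sudo sed -i 's/^;UsbInterface=2/UsbInterface=2/' /etc/openni/GlobalDefaults.ini&lt;br /&gt;
# Comment it again when switching back to a Kinect:&lt;br /&gt;
sudo sed -i 's/^UsbInterface=2/;UsbInterface=2/' /etc/openni/GlobalDefaults.ini&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;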
&lt;br /&gt;
If you still get this error, or have another problem like an inconsistent framerate or freeze-ups, check that your Xtion is not plugged into a USB 3.0 port. Sadly, the firmware has a bug that prevents the device from working with 3.0 ports. Either switch to a 2.0 port (if your computer does not have one, you can use a [http://en.wikipedia.org/wiki/USB_hub hub]) or try applying [https://github.com/nh2/asus-xtion-fix this fix].&lt;br /&gt;
&lt;br /&gt;
==glInit() not found / OpenGL issues==&lt;br /&gt;
&lt;br /&gt;
If you get this error while compiling OpenNI, and you are sure you have installed all dependencies, then this is due to faulty Makefiles that do not link against GL libraries where they should.&lt;br /&gt;
&lt;br /&gt;
Go to the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/Build/Samples/''&amp;lt;/span&amp;gt; subdirectory and find the samples that are giving errors (look at the compiler output to find them). For each one, enter its subdirectory and open the &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''Makefile''&amp;lt;/span&amp;gt; you will find there. Go to the line that says:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;USED_LIBS += glut&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And change it to the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;USED_LIBS += glut GL&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The samples should now compile.&lt;br /&gt;
&lt;br /&gt;
==Permission denied==&lt;br /&gt;
&lt;br /&gt;
If you get this error related to some header (.h) in &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/include/''&amp;lt;/span&amp;gt;, it means that OpenNI's installer set the wrong permissions when moving the files there, so the compiler cannot access them. Locate the faulty file and change its permissions to allow everyone to read it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo chmod +r -R /usr/include/&amp;lt;file&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, a very drastic (and not recommended) way of fixing this for OpenNI's headers would be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo chmod 0777 -R /usr/include/ni/&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_5:_3D_object_recognition_(pipeline)&amp;diff=4744</id>
		<title>PCL/OpenNI tutorial 5: 3D object recognition (pipeline)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_5:_3D_object_recognition_(pipeline)&amp;diff=4744"/>
				<updated>2015-11-01T23:26:02Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In our [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)|previous article]] we saw how 3D descriptors could be used to identify points or clusters. But in order to have a working object recognition system, many more things are necessary. The sequence of steps that we have to implement to make such a system is known as the ''pipeline''. This final article will explain how to do it. The scope of this topic is very wide and much has been written about it, so you should consider this a simple introductory tutorial that builds the basic knowledge you need to experiment further.&lt;br /&gt;
&lt;br /&gt;
=Overview=&lt;br /&gt;
&lt;br /&gt;
Ideally, a 3D object recognition system should be able to grab clouds from the device, preprocess them, compute descriptors, compare them with the ones stored in our object database, and output all matches with their position and orientation in the scene, in real time. Several components must then be implemented to perform these sequential steps, each one taking as input the output of the previous. This pipeline will be different depending on what type of descriptor we are using: local or global.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:3D_recognition_pipeline.png|thumb|center|800px|Example of local and global 3D recognition pipelines in PCL (image from [http://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf this paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
First of all, we have to ''train'' the system. Training means, in this case, creating a database with all the objects we want to be able to recognize, and the descriptors for every associated pose. Only after that can we implement the recognition pipeline. Also, there are some postprocessing steps that are not mandatory but will yield better results if performed, like pose refinement and hypothesis verification.&lt;br /&gt;
&lt;br /&gt;
In the next sections we will go over every step, with some PCL code snippets, as always.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://www.pointclouds.org/assets/uploads/cglibs13_recognition.pdf 3D Object Recognition and 6DOF Pose Estimation]&lt;br /&gt;
** [http://www.pointclouds.org/assets/icra2013/ensembles.pdf Ensemble Learning for Object Recognition and Pose Estimation]&lt;br /&gt;
** [http://www.pointclouds.org/assets/uploads/PCL-IAS13_Recognition.pdf 3D Object Recognition in Clutter with the Point Cloud Library]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [https://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf Point Cloud Library: Three-Dimensional Object Recognition and 6 DoF Pose Estimation] (Aitor Aldoma et al., 2012)&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Training=&lt;br /&gt;
&lt;br /&gt;
As stated, we require a database with all the objects we intend to recognize later (it is possible to build a recognition system that works with object categories and high level descriptions, for example to find all things that &amp;quot;look&amp;quot; like a chair in the scene, but that is an entirely different matter). The most common way to do this is to take dozens of snapshots of every object from different angles. This can be done with a device such as a pan-tilt unit (which consists of a mechanical platform that can be rotated on two axes in known, precise amounts), or with a common table and a printed checkerboard pattern on it, to be used as ground truth to get the camera position and orientation. Of course, you need a physical copy of the object first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Pan_tilt_unit.jpg|thumb|center|350px|Pan-tilt unit (image from http://pointclouds.org/).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another possibility is to do it virtually, rendering 3D models of the objects (modelled beforehand with CAD software like [https://en.wikipedia.org/wiki/Blender_%28software%29 Blender]) and using the Z buffer, which contains the depth information, to simulate the input a depth sensor would give. The perks of doing this include: no segmentation step is necessary to get the object cluster, and the ground truth pose information will be 100% exact.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Virtual_scanner_tesselated_sphere.png|thumb|center|600px|Visualization of the virtual scanning process (image taken from a presentation by [http://users.acin.tuwien.ac.at/mzillich/?site=0 Michael Zillich]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Many 3D object [http://pointclouds.org/media/ datasets] that are available for download have been created with one of these methods. Whichever you choose, be sure to be consistent and take all snapshots uniformly, with the same sampling level (one every X degrees). When choosing this amount, try to make a compromise between the complexity of the database (size is usually not a problem, but the more snapshots you have, the longer it will take to finish the matching step) and the precision of pose estimation.&lt;br /&gt;
&lt;br /&gt;
After this, the desired descriptor(s) must be computed for every snapshot of each object, and saved to disk. If global descriptors are being used, a [[PCL/OpenNI_tutorial_5:_3D_object_recognition_%28pipeline%29#CRH|Camera Roll Histogram]] (CRH) should be included in order to retrieve the full 6 DoF pose, as many descriptors are invariant to the camera roll angle, which would limit the pose estimation to 5 DoF. Finally, ground truth information about the camera position and orientation will make it possible to compare it against the result given back by the recognition system. Most datasets include such information in text files next to the corresponding snapshot, usually in the form of a 3x3 rotation matrix and a 3D translation vector.&lt;br /&gt;
&lt;br /&gt;
If you are using local descriptors, it is possible to work with a database of full objects, instead of partial snapshots (this cannot be done with global descriptors because they are computed from the whole cluster, and a full object is something we would never get in practice; plus, they also encode viewpoint information). To do this, you would have to capture the object from all angles and then use registration or another technique to stitch the clouds together in order to create a complete, continuous model. Resampling may also be needed, because overlapping areas would have a higher point density. Also, as normals are oriented according to the viewpoint when they are computed, you would have to merge them properly so they all point outwards, instead of recomputing them after composing the object cluster. The perk of using full objects is that the matching step will be shorter, as the system will not have to check against the descriptors of multiple snapshots.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/vfh_recognition.php Cluster Recognition and 6DOF Pose Estimation using VFH descriptors]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Getting a full model==&lt;br /&gt;
&lt;br /&gt;
As I explained above, getting a full .pcd model for our object with a sensor requires some careful setup, good segmentation, normal computation and possibly resampling. On the other hand, if we use a 3D CAD model, it becomes much easier. A program like [https://en.wikipedia.org/wiki/Blender_%28software%29 Blender] (free and open source) can be used to design a model of the object, after taking some measurements. Programs like [https://en.wikipedia.org/wiki/MeshLab MeshLab] can convert to and from many types of 3D formats. In this case, the one favored by PCL is binary [https://en.wikipedia.org/wiki/PLY_%28file_format%29 PLY] with normals (Blender can export to ASCII PLY, but I have sometimes encountered a processing error with those files).&lt;br /&gt;
&lt;br /&gt;
===Raytracing===&lt;br /&gt;
&lt;br /&gt;
PCL offers a couple of command line tools to render a .pcd cloud file from a CAD model. The first one does this by raytracing. It uses a 3D sphere or icosahedron that is tessellated (divided into polygonal regions). Then, the virtual camera is placed at each of the vertices (or the polygons' centers), pointing to the origin, where the object model is. Multiple snapshots are rendered, and the final cloud is composed from this information.&lt;br /&gt;
&lt;br /&gt;
You can invoke it like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_mesh2pcd &amp;lt;input.ply&amp;gt; &amp;lt;output.pcd&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here you can see the result of converting a mesh exported from Blender, the program's mascot, [https://en.wikipedia.org/wiki/Blender_%28software%29#Suzanne Suzanne]:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:Pcl_mesh2pcd_example_original.png | Original 3D mesh in VTK format.&lt;br /&gt;
File:Pcl_mesh2pcd_example_result.png | Resulting point cloud.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The tool has some parameters available; you can check their usage with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_mesh2pcd -h&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, one of the parameters controls the size of the voxel grid used to downsample the cloud after the raytracing operation. With this you can regulate the point density of the resulting .pcd file.&lt;br /&gt;
&lt;br /&gt;
===Sampling===&lt;br /&gt;
&lt;br /&gt;
The second tool we are going to see employs sampling with a voxel grid. It produces results similar to the previous tool's. You can invoke it in an identical way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_mesh_sampling &amp;lt;input.ply&amp;gt; &amp;lt;output.pcd&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It has the same parameters as the previous tool. Check the usage with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_mesh_sampling -h&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Getting partial views==&lt;br /&gt;
&lt;br /&gt;
===Tool===&lt;br /&gt;
&lt;br /&gt;
If you need to get a collection of partial snapshots of the model, PCL also has a tool for that. It is the virtual equivalent of using one of those rotating tables I told you about. It works just like &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''pcl_mesh2pcd''&amp;lt;/span&amp;gt;, but instead of composing all renders into a single file, it saves each one to its own file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_virtual_scanner &amp;lt;input.ply&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Inside a directory, 42 .pcd files will be generated. You can check the usage by invoking the tool with no parameters; there are a couple of interesting options to add noise to the clouds, in order to better simulate the input of sensors like the Kinect. If you need something more flexible (like a custom number of snapshots, which the program does not let you change), see the next section.&lt;br /&gt;
&lt;br /&gt;
===Code===&lt;br /&gt;
&lt;br /&gt;
You can customize the snapshot generation process by making direct use of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;RenderViewsTesselatedSphere&amp;quot;&amp;lt;/span&amp;gt; class, which is similar to what the previous tool uses internally. The code is the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/vtk_lib_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;vtkPolyDataMapper.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/apps/render_views_tesselated_sphere.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Load the PLY model from a file.&lt;br /&gt;
	vtkSmartPointer&amp;lt;vtkPLYReader&amp;gt; reader = vtkSmartPointer&amp;lt;vtkPLYReader&amp;gt;::New();&lt;br /&gt;
	reader-&amp;gt;SetFileName(argv[1]);&lt;br /&gt;
	reader-&amp;gt;Update();&lt;br /&gt;
&lt;br /&gt;
	// VTK is not exactly straightforward...&lt;br /&gt;
	vtkSmartPointer &amp;lt; vtkPolyDataMapper &amp;gt; mapper = vtkSmartPointer&amp;lt;vtkPolyDataMapper&amp;gt;::New();&lt;br /&gt;
	mapper-&amp;gt;SetInputConnection(reader-&amp;gt;GetOutputPort());&lt;br /&gt;
	mapper-&amp;gt;Update();&lt;br /&gt;
&lt;br /&gt;
	vtkSmartPointer&amp;lt;vtkPolyData&amp;gt; object = mapper-&amp;gt;GetInput();&lt;br /&gt;
&lt;br /&gt;
	// Virtual scanner object.&lt;br /&gt;
	pcl::apps::RenderViewsTesselatedSphere render_views;&lt;br /&gt;
	render_views.addModelFromPolyData(object);&lt;br /&gt;
	// Pixel width of the rendering window, it directly affects the snapshot file size.&lt;br /&gt;
	render_views.setResolution(150);&lt;br /&gt;
	// Horizontal FoV of the virtual camera.&lt;br /&gt;
	render_views.setViewAngle(57.0f);&lt;br /&gt;
	// If true, the resulting clouds of the snapshots will be organized.&lt;br /&gt;
	render_views.setGenOrganized(true);&lt;br /&gt;
	// How much to subdivide the icosahedron. Increasing this will result in a lot more snapshots.&lt;br /&gt;
	render_views.setTesselationLevel(1);&lt;br /&gt;
	// If true, the camera will be placed at the vertices of the triangles. If false, at the centers.&lt;br /&gt;
	// This will affect the number of snapshots produced (if true, less will be made).&lt;br /&gt;
	// True: 42 for level 1, 162 for level 2, 642 for level 3...&lt;br /&gt;
	// False: 80 for level 1, 320 for level 2, 1280 for level 3...&lt;br /&gt;
	render_views.setUseVertices(true);&lt;br /&gt;
	// If true, the entropies (the amount of occlusions) will be computed for each snapshot (optional).&lt;br /&gt;
	render_views.setComputeEntropies(true);&lt;br /&gt;
&lt;br /&gt;
	render_views.generateViews();&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the rendered views.&lt;br /&gt;
	std::vector&amp;lt;pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr&amp;gt; views;&lt;br /&gt;
	// Object for storing the poses, as 4x4 transformation matrices.&lt;br /&gt;
	std::vector&amp;lt;Eigen::Matrix4f, Eigen::aligned_allocator&amp;lt;Eigen::Matrix4f&amp;gt; &amp;gt; poses;&lt;br /&gt;
	// Object for storing the entropies (optional).&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; entropies;&lt;br /&gt;
	render_views.getViews(views);&lt;br /&gt;
	render_views.getPoses(poses);&lt;br /&gt;
	render_views.getEntropies(entropies);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the end of the program, the generated views (as point clouds) and the corresponding poses (transformation matrices) are saved to those vectors, so you can use them as you like (e.g., saving the snapshots to .pcd files and the poses to text files). There is no API documentation available; if you want to know more, check the &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''render_views_tesselated_sphere.h''&amp;lt;/span&amp;gt; file inside PCL's source folder.&lt;br /&gt;
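&lt;br /&gt;
For example, the snapshots could be saved to .pcd files like this (a sketch that would go at the end of the previous program; it additionally needs ''pcl/io/pcd_io.h'' and ''sstream'', and the file name pattern is arbitrary):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;	// Save every rendered view to its own .pcd file&lt;br /&gt;
	// (&amp;quot;views&amp;quot; is the vector filled by getViews()).&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; views.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		std::stringstream name;&lt;br /&gt;
		name &amp;lt;&amp;lt; &amp;quot;view_&amp;quot; &amp;lt;&amp;lt; i &amp;lt;&amp;lt; &amp;quot;.pcd&amp;quot;;&lt;br /&gt;
		pcl::io::savePCDFileBinary(name.str(), *views[i]);&lt;br /&gt;
	}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;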
&lt;br /&gt;
If you get a segmentation fault, check if the .ply file is in ASCII format (you should be able to open and read it with a text editor). If it is, convert it to binary .ply using MeshLab. This usually solves it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': 3D model, [Resolution], [View angle], [Camera constraints], [Organized], [Sphere size], [Tesselation level], [Use vertices], [Compute entropies]&lt;br /&gt;
* '''Output''': Snapshot clouds, Snapshot poses, [Snapshot entropies]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_snapshot_creator.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Local pipeline=&lt;br /&gt;
&lt;br /&gt;
The following sections enumerate and explain the steps that compose the object recognition pipeline when using local descriptors.&lt;br /&gt;
&lt;br /&gt;
==Keypoint extraction==&lt;br /&gt;
&lt;br /&gt;
In this step, we have to decide which points from the current cloud will be considered keypoints (and have the local descriptor computed for them). The main characteristics of a good keypoint detector are:&lt;br /&gt;
&lt;br /&gt;
* '''Repeatability''': there should be a good chance of the same points being chosen over several iterations, even when the scene is captured from a different angle.&lt;br /&gt;
* '''Distinctiveness''': the chosen keypoints should be highly characterizing and descriptive. It should be easy to describe and match them.&lt;br /&gt;
&lt;br /&gt;
Because the second one depends on the local descriptor being used (and how the feature is computed), a different keypoint detection technique would have to be implemented for each. A simpler alternative is to downsample the cloud, and use all remaining points as keypoints. This can be done easily with [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Downsampling|downsampling]]; follow the link to find a code snippet that you can adapt to use in your system. You can also use the [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Finding_keypoints|NARF keypoint detector]].&lt;br /&gt;
&lt;br /&gt;
===ISS===&lt;br /&gt;
&lt;br /&gt;
Another keypoint detector available in PCL is ISS (Intrinsic Shape Signatures). ISS is a local descriptor presented in 2009, which creates a view-independent signature of the local surface patch. An algorithm for choosing keypoints was also included, to better fit the descriptor. The algorithm scans the surface and chooses only points with large variations in the [https://en.wikipedia.org/wiki/Principal_curvature principal direction] (the shape of the surface), which makes them ideal keypoints.&lt;br /&gt;
&lt;br /&gt;
You can use the associated PCL class this way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/iss_3d.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// This function by Tommaso Cavallari and Federico Tombari, taken from the tutorial&lt;br /&gt;
// http://pointclouds.org/documentation/tutorials/correspondence_grouping.php&lt;br /&gt;
double&lt;br /&gt;
computeCloudResolution(const pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr&amp;amp; cloud)&lt;br /&gt;
{&lt;br /&gt;
	double resolution = 0.0;&lt;br /&gt;
	int numberOfPoints = 0;&lt;br /&gt;
	int nres;&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; indices(2);&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances(2);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt; tree;&lt;br /&gt;
	tree.setInputCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; cloud-&amp;gt;size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		if (! pcl_isfinite((*cloud)[i].x))&lt;br /&gt;
			continue;&lt;br /&gt;
&lt;br /&gt;
		// Considering the second neighbor since the first is the point itself.&lt;br /&gt;
		nres = tree.nearestKSearch(i, 2, indices, squaredDistances);&lt;br /&gt;
		if (nres == 2)&lt;br /&gt;
		{&lt;br /&gt;
			resolution += sqrt(squaredDistances[1]);&lt;br /&gt;
			++numberOfPoints;&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
	if (numberOfPoints != 0)&lt;br /&gt;
		resolution /= numberOfPoints;&lt;br /&gt;
&lt;br /&gt;
	return resolution;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point cloud and the keypoints.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr keypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// ISS keypoint detector object.&lt;br /&gt;
	pcl::ISSKeypoint3D&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt; detector;&lt;br /&gt;
	detector.setInputCloud(cloud);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	detector.setSearchMethod(kdtree);&lt;br /&gt;
	double resolution = computeCloudResolution(cloud);&lt;br /&gt;
	// Set the radius of the spherical neighborhood used to compute the scatter matrix.&lt;br /&gt;
	detector.setSalientRadius(6 * resolution);&lt;br /&gt;
	// Set the radius for the application of the non-maxima suppression algorithm.&lt;br /&gt;
	detector.setNonMaxRadius(4 * resolution);&lt;br /&gt;
	// Set the minimum number of neighbors that have to be found while applying the non-maxima suppression algorithm.&lt;br /&gt;
	detector.setMinNeighbors(5);&lt;br /&gt;
	// Set the upper bound on the ratio between the second and the first eigenvalue.&lt;br /&gt;
	detector.setThreshold21(0.975);&lt;br /&gt;
	// Set the upper bound on the ratio between the third and the second eigenvalue.&lt;br /&gt;
	detector.setThreshold32(0.975);&lt;br /&gt;
	// Set the number of processing threads to use. 0 sets it to automatic.&lt;br /&gt;
	detector.setNumberOfThreads(4);&lt;br /&gt;
	&lt;br /&gt;
	detector.compute(*keypoints);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Salient radius, Non maxima radius, Minimum neighbors, Eigenvalues thresholds, [Number of threads]&lt;br /&gt;
* '''Output''': Keypoints&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&amp;amp;arnumber=5457637 Intrinsic shape signatures: A shape descriptor for 3D object recognition] (requires IEEE Xplore subscription) (Yu Zhong, 2009)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_i_s_s_keypoint3_d.html pcl::ISSKeypoint3D]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_ISS_keypoints.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computing descriptors==&lt;br /&gt;
&lt;br /&gt;
The next step is to compute the desired local descriptor(s) for every keypoint. The input parameters vary from one descriptor to another, and their effectiveness depends on the scene we are capturing and the objects we are working with, so I cannot give you an ideal solution. Check the list of [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#Table|available descriptors]], choose the one you deem best, and study the original publication to understand how it works and how the computation can be tuned to your needs with its parameters (and try to understand how changing them affects the output). Refer to the code snippets given for each descriptor.&lt;br /&gt;
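&lt;br /&gt;
As an illustration, this is roughly what the step looks like with SHOT as the chosen descriptor (a sketch only; the search radius is an assumption you would have to tune, and the normal and keypoint computations are omitted):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/shot.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the cloud, the normals, the keypoints and the descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr keypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: here you would estimate the normals and choose the keypoints,&lt;br /&gt;
	// as shown in the previous sections.&lt;br /&gt;
&lt;br /&gt;
	// SHOT estimation object.&lt;br /&gt;
	pcl::SHOTEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::SHOT352&amp;gt; shot;&lt;br /&gt;
	// Compute the descriptors only at the keypoints, using the full cloud as surface.&lt;br /&gt;
	shot.setInputCloud(keypoints);&lt;br /&gt;
	shot.setSearchSurface(cloud);&lt;br /&gt;
	shot.setInputNormals(normals);&lt;br /&gt;
	// Search radius, in meters (an assumption, tune it to your data).&lt;br /&gt;
	shot.setRadiusSearch(0.02);&lt;br /&gt;
&lt;br /&gt;
	shot.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;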
&lt;br /&gt;
==Matching==&lt;br /&gt;
&lt;br /&gt;
After the local descriptor has been computed for every keypoint in the cloud, we have to match them, finding correspondences with the ones we have stored in our object database. For this, a search structure like a [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#k-d_tree|''k''-d tree]] can be used to perform a nearest neighbor search, retrieving the Euclidean distances between descriptors (and optionally, enforcing a maximum distance value as a threshold). Every descriptor in the scene should be matched against the descriptors of every model in the database, because this accounts for the presence of multiple instances of the model, which would not be recognized if we did it the other way around.&lt;br /&gt;
&lt;br /&gt;
The following code sample shows how to find all point correspondences between the cloud and a model:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/shot.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the SHOT descriptors for the scene.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;::Ptr sceneDescriptors(new pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;());&lt;br /&gt;
	// Object for storing the SHOT descriptors for the model.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;::Ptr modelDescriptors(new pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read the already computed descriptors from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::SHOT352&amp;gt;(argv[1], *sceneDescriptors) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::SHOT352&amp;gt;(argv[2], *modelDescriptors) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// A kd-tree object that uses the FLANN library for fast search of nearest neighbors.&lt;br /&gt;
	pcl::KdTreeFLANN&amp;lt;pcl::SHOT352&amp;gt; matching;&lt;br /&gt;
	matching.setInputCloud(modelDescriptors);&lt;br /&gt;
	// A Correspondence object stores the indices of the query and the match,&lt;br /&gt;
	// and the distance/weight.&lt;br /&gt;
	pcl::CorrespondencesPtr correspondences(new pcl::Correspondences());&lt;br /&gt;
&lt;br /&gt;
	// Check every descriptor computed for the scene.&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; sceneDescriptors-&amp;gt;size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		std::vector&amp;lt;int&amp;gt; neighbors(1);&lt;br /&gt;
		std::vector&amp;lt;float&amp;gt; squaredDistances(1);&lt;br /&gt;
		// Ignore NaNs.&lt;br /&gt;
		if (pcl_isfinite(sceneDescriptors-&amp;gt;at(i).descriptor[0]))&lt;br /&gt;
		{&lt;br /&gt;
			// Find the nearest neighbor (in descriptor space)...&lt;br /&gt;
			int neighborCount = matching.nearestKSearch(sceneDescriptors-&amp;gt;at(i), 1, neighbors, squaredDistances);&lt;br /&gt;
			// ...and add a new correspondence if the distance is less than a threshold&lt;br /&gt;
			// (SHOT distances are between 0 and 1, other descriptors use different metrics).&lt;br /&gt;
			if (neighborCount == 1 &amp;amp;&amp;amp; squaredDistances[0] &amp;lt; 0.25f)&lt;br /&gt;
			{&lt;br /&gt;
				pcl::Correspondence correspondence(neighbors[0], static_cast&amp;lt;int&amp;gt;(i), squaredDistances[0]);&lt;br /&gt;
				correspondences-&amp;gt;push_back(correspondence);&lt;br /&gt;
			}&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;Found &amp;quot; &amp;lt;&amp;lt; correspondences-&amp;gt;size() &amp;lt;&amp;lt; &amp;quot; correspondences.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/correspondence_grouping.php 3D Object Recognition based on Correspondence Grouping]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_kd_tree_f_l_a_n_n.html pcl::KdTreeFLANN]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_correspondence.html pcl::Correspondence]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_local_pipeline_matching.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Correspondence grouping==&lt;br /&gt;
&lt;br /&gt;
Right now, all we have is a list of correspondences between keypoints in the scene and keypoints from some object(s) in the database. This does not necessarily mean that a given object is present in the scene. Consider this example: a box with two keypoints located on opposite corners. The distance between the points is known. The matching stage has found two keypoints in the scene with very similar descriptors, but as it turns out, the distance between them is very different. Unless we are taking non-rigid transformations (such as scaling) into account, we have to discard the idea that those points belong to the object.&lt;br /&gt;
&lt;br /&gt;
This check can be implemented with an additional step called ''correspondence grouping''. As the name states, it groups correspondences that are geometrically consistent (for the given object model) into clusters, and discards the ones that are not. Rotations and translations are permitted, but anything other than that will not meet the criteria. Also, as a minimum of 3 correspondences is required for retrieving the 6 DoF pose, groups with fewer than that can be ignored.&lt;br /&gt;
&lt;br /&gt;
PCL offers a couple of classes to perform correspondence grouping.&lt;br /&gt;
&lt;br /&gt;
===Geometric consistency===&lt;br /&gt;
&lt;br /&gt;
This is the simplest method. It just iterates over all feature correspondences not yet grouped, and adds them to the current subset if they are geometrically consistent. The resolution of the consensus set can be tuned to make the algorithm more or less forgiving when enforcing the consistency. For every subset (which should correspond to an instance of the model), this PCL class also computes the transformation (rotation and translation), making the next step ([[PCL/OpenNI_tutorial_5:_3D_object_recognition_(pipeline)#Pose_estimation|pose estimation]]) unnecessary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/correspondence.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/recognition/cg/geometric_consistency.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the keypoints of the scene and the model.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr sceneKeypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr modelKeypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Objects for storing the unclustered and clustered correspondences.&lt;br /&gt;
	pcl::CorrespondencesPtr correspondences(new pcl::Correspondences());&lt;br /&gt;
	std::vector&amp;lt;pcl::Correspondences&amp;gt; clusteredCorrespondences;&lt;br /&gt;
	// Object for storing the transformations (rotation plus translation).&lt;br /&gt;
	std::vector&amp;lt;Eigen::Matrix4f, Eigen::aligned_allocator&amp;lt;Eigen::Matrix4f&amp;gt; &amp;gt; transformations;&lt;br /&gt;
&lt;br /&gt;
	// Read the keypoints from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *sceneKeypoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *modelKeypoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: here you would compute the correspondences.&lt;br /&gt;
	// It has been omitted here for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// Object for correspondence grouping.&lt;br /&gt;
	pcl::GeometricConsistencyGrouping&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt; grouping;&lt;br /&gt;
	grouping.setSceneCloud(sceneKeypoints);&lt;br /&gt;
	grouping.setInputCloud(modelKeypoints);&lt;br /&gt;
	grouping.setModelSceneCorrespondences(correspondences);&lt;br /&gt;
	// Minimum cluster size. Default is 3 (as at least 3 correspondences&lt;br /&gt;
	// are needed to compute the 6 DoF pose).&lt;br /&gt;
	grouping.setGCThreshold(3);&lt;br /&gt;
	// Resolution of the consensus set used to cluster correspondences together,&lt;br /&gt;
	// in metric units. Default is 1.0.&lt;br /&gt;
	grouping.setGCSize(0.01);&lt;br /&gt;
&lt;br /&gt;
	grouping.recognize(transformations, clusteredCorrespondences);&lt;br /&gt;
&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;Model instances found: &amp;quot; &amp;lt;&amp;lt; transformations.size() &amp;lt;&amp;lt; std::endl &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; transformations.size(); i++)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Instance &amp;quot; &amp;lt;&amp;lt; (i + 1) &amp;lt;&amp;lt; &amp;quot;:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tHas &amp;quot; &amp;lt;&amp;lt; clusteredCorrespondences[i].size() &amp;lt;&amp;lt; &amp;quot; correspondences.&amp;quot; &amp;lt;&amp;lt; std::endl &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		Eigen::Matrix3f rotation = transformations[i].block&amp;lt;3, 3&amp;gt;(0, 0);&lt;br /&gt;
		Eigen::Vector3f translation = transformations[i].block&amp;lt;3, 1&amp;gt;(0, 3);&lt;br /&gt;
		printf(&amp;quot;\t\t    | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(0, 0), rotation(0, 1), rotation(0, 2));&lt;br /&gt;
		printf(&amp;quot;\t\tR = | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(1, 0), rotation(1, 1), rotation(1, 2));&lt;br /&gt;
		printf(&amp;quot;\t\t    | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(2, 0), rotation(2, 1), rotation(2, 2));&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		printf(&amp;quot;\t\tt = &amp;lt; %0.3f, %0.3f, %0.3f &amp;gt;\n&amp;quot;, translation(0), translation(1), translation(2));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not need to compute the transformations (or prefer to do it later in its own step), you can use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;cluster()&amp;quot;&amp;lt;/span&amp;gt; function instead of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;recognize()&amp;quot;&amp;lt;/span&amp;gt;; it only takes one parameter (the object for the clustered correspondences).&lt;br /&gt;
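&lt;br /&gt;
In that case, the end of the previous program would look like this (using the same &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''grouping''&amp;lt;/span&amp;gt; object):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;	// Cluster the correspondences, without computing the transformations.&lt;br /&gt;
	std::vector&amp;lt;pcl::Correspondences&amp;gt; clusteredCorrespondences;&lt;br /&gt;
	grouping.cluster(clusteredCorrespondences);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;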
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (model), Points (scene), Correspondences, Minimum cluster size, Consensus set resolution&lt;br /&gt;
* '''Output''': Clustered correspondences, [Transformation]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/correspondence_grouping.php 3D Object Recognition based on Correspondence Grouping]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_geometric_consistency_grouping.html pcl::GeometricConsistencyGrouping]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_local_pipeline_GC_grouping.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Hough voting===&lt;br /&gt;
&lt;br /&gt;
This method is a bit more complex. It uses a technique known as the [https://en.wikipedia.org/wiki/Hough_transform Hough transform], which was originally devised to perform line detection on 2D images, though it was later expanded to work with arbitrary shapes or higher dimensions. The method works with a voting procedure, with votes being cast on the candidates that are found to be more suitable. If enough votes are accumulated for a given position in the space, then the shape is considered &amp;quot;found&amp;quot; there, and its parameters are retrieved.&lt;br /&gt;
&lt;br /&gt;
The voting procedure is carried out in a parameter space, which would have 6 dimensions if a full pose (rotation plus translation) needed to be estimated. Because that would be computationally expensive, the method implemented in PCL uses only a 3D Hough space, and employs local Reference Frames (RFs) to account for the remaining 3 DoF. Its computation can then be divided into two steps.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Hough_voting_pipeline.png|thumb|center|600px|Use of the Hough voting scheme (red) in a 3D object recognition pipeline (image from [http://vision.deis.unibo.it/fede/papers/psivt10.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Computing Reference Frames====&lt;br /&gt;
&lt;br /&gt;
For every keypoint (in both the model and the scene), a local Reference Frame must be computed, in order to retrieve the pose later. PCL offers a class that implements the BOrder Aware Repeatable Directions (BOARD) algorithm.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/board.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr sceneCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr modelCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Objects for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr sceneNormals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr modelNormals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Objects for storing the keypoints.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr sceneKeypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr modelKeypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Objects for storing the Reference Frames.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;::Ptr sceneRF(new pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;::Ptr modelRF(new pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read the scene and model clouds from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *sceneCloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *modelCloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: here you would estimate the normals for the whole cloud, and choose&lt;br /&gt;
	// the keypoints. It has been omitted here for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// BOARD RF estimation object.&lt;br /&gt;
	pcl::BOARDLocalReferenceFrameEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::ReferenceFrame&amp;gt; rf;&lt;br /&gt;
	// Search radius (maximum distance of the points used to estimate the X and Y axes&lt;br /&gt;
	// of the BOARD Reference Frame for a given point).&lt;br /&gt;
	rf.setRadiusSearch(0.02f);&lt;br /&gt;
	// Check if the support is complete, or has missing regions because it is too close to mesh borders.&lt;br /&gt;
	rf.setFindHoles(true);&lt;br /&gt;
&lt;br /&gt;
	rf.setInputCloud(sceneKeypoints);&lt;br /&gt;
	rf.setInputNormals(sceneNormals);&lt;br /&gt;
	rf.setSearchSurface(sceneCloud);&lt;br /&gt;
	rf.compute(*sceneRF);&lt;br /&gt;
&lt;br /&gt;
	rf.setInputCloud(modelKeypoints);&lt;br /&gt;
	rf.setInputNormals(modelNormals);&lt;br /&gt;
	rf.setSearchSurface(modelCloud);&lt;br /&gt;
	rf.compute(*modelRF);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Be sure to check the API reference because the BOARD estimation object has many more parameters to tune.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search radius, [Tangent radius], [Margin array size], [Margin threshold], [Find holes], [Hole size threshold], [Steepness threshold]&lt;br /&gt;
* '''Output''': Reference Frames&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/correspondence_grouping.php 3D Object Recognition based on Correspondence Grouping]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://vision.deis.unibo.it/LRF/LRF_repeatability_ICCV2011.pdf On the repeatability of the local reference frame for partial shape matching] (Alioscia Petrelli and Luigi Di Stefano, 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_b_o_a_r_d_local_reference_frame_estimation.html pcl::BOARDLocalReferenceFrameEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_local_pipeline_Hough_RFs.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Clustering====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/recognition/cg/hough_3d.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the keypoints of the scene and the model.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr sceneKeypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr modelKeypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Objects for storing the Reference Frames.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;::Ptr sceneRF(new pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;::Ptr modelRF(new pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;);&lt;br /&gt;
	// Objects for storing the unclustered and clustered correspondences.&lt;br /&gt;
	pcl::CorrespondencesPtr correspondences(new pcl::Correspondences());&lt;br /&gt;
	std::vector&amp;lt;pcl::Correspondences&amp;gt; clusteredCorrespondences;&lt;br /&gt;
	// Object for storing the transformations (rotation plus translation).&lt;br /&gt;
	std::vector&amp;lt;Eigen::Matrix4f, Eigen::aligned_allocator&amp;lt;Eigen::Matrix4f&amp;gt; &amp;gt; transformations;&lt;br /&gt;
&lt;br /&gt;
	// Read the keypoints from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *sceneKeypoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *modelKeypoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: here you would compute the correspondences and the Reference Frames.&lt;br /&gt;
	// This step has been omitted for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// Object for correspondence grouping.&lt;br /&gt;
	pcl::Hough3DGrouping&amp;lt;pcl::PointXYZ, pcl::PointXYZ, pcl::ReferenceFrame, pcl::ReferenceFrame&amp;gt; grouping;&lt;br /&gt;
	grouping.setInputCloud(modelKeypoints);&lt;br /&gt;
	grouping.setInputRf(modelRF);&lt;br /&gt;
	grouping.setSceneCloud(sceneKeypoints);&lt;br /&gt;
	grouping.setSceneRf(sceneRF);&lt;br /&gt;
	grouping.setModelSceneCorrespondences(correspondences);&lt;br /&gt;
	// Minimum cluster size. Default is 3 (as at least 3 correspondences&lt;br /&gt;
	// are needed to compute the 6 DoF pose).&lt;br /&gt;
	grouping.setHoughThreshold(3);&lt;br /&gt;
	// Size of each bin in the Hough space.&lt;br /&gt;
	grouping.setHoughBinSize(3);&lt;br /&gt;
	// If true, the vote casting procedure will interpolate the score&lt;br /&gt;
	// between neighboring bins in the Hough space.&lt;br /&gt;
	grouping.setUseInterpolation(true);&lt;br /&gt;
	// If true, the vote casting procedure will use the correspondence's&lt;br /&gt;
	// weighted distance to compute the Hough voting score.&lt;br /&gt;
	grouping.setUseDistanceWeight(false);&lt;br /&gt;
&lt;br /&gt;
	grouping.recognize(transformations, clusteredCorrespondences);&lt;br /&gt;
&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;Model instances found: &amp;quot; &amp;lt;&amp;lt; transformations.size() &amp;lt;&amp;lt; std::endl &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; transformations.size(); i++)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Instance &amp;quot; &amp;lt;&amp;lt; (i + 1) &amp;lt;&amp;lt; &amp;quot;:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tHas &amp;quot; &amp;lt;&amp;lt; clusteredCorrespondences[i].size() &amp;lt;&amp;lt; &amp;quot; correspondences.&amp;quot; &amp;lt;&amp;lt; std::endl &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		Eigen::Matrix3f rotation = transformations[i].block&amp;lt;3, 3&amp;gt;(0, 0);&lt;br /&gt;
		Eigen::Vector3f translation = transformations[i].block&amp;lt;3, 1&amp;gt;(0, 3);&lt;br /&gt;
		printf(&amp;quot;\t\t    | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(0, 0), rotation(0, 1), rotation(0, 2));&lt;br /&gt;
		printf(&amp;quot;\t\tR = | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(1, 0), rotation(1, 1), rotation(1, 2));&lt;br /&gt;
		printf(&amp;quot;\t\t    | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(2, 0), rotation(2, 1), rotation(2, 2));&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		printf(&amp;quot;\t\tt = &amp;lt; %0.3f, %0.3f, %0.3f &amp;gt;\n&amp;quot;, translation(0), translation(1), translation(2));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not provide the Reference Frames to the Hough grouping object, it will compute them itself, but then you need to specify additional parameters; see the API for more details. Also, you can call &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;cluster()&amp;quot;&amp;lt;/span&amp;gt; instead of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;recognize()&amp;quot;&amp;lt;/span&amp;gt; if you want to skip the pose estimation step.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (model), Points (scene), Correspondences, Reference Frames (model), Reference Frames (scene), Hough bin size, Hough threshold, [Use distance weight], [Use interpolation]&lt;br /&gt;
* '''Output''': Clustered correspondences, [Transformation]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/correspondence_grouping.php 3D Object Recognition based on Correspondence Grouping]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/psivt10.pdf Object recognition in 3D scenes with occlusions and clutter by Hough voting] (Federico Tombari and Luigi Di Stefano, 2010)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_hough3_d_grouping.html pcl::Hough3DGrouping]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_local_pipeline_Hough_grouping.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Pose estimation==&lt;br /&gt;
&lt;br /&gt;
The two correspondence grouping classes included in PCL already compute the full pose, but you can also do it manually, or use it to refine the results of the previous step (removing correspondences that are not compatible with a single 6 DoF pose). You can use a method like RANSAC (RANdom SAmple Consensus) to get the rotation and translation of the rigid transformation that best fits the correspondences of a model instance. In a previous tutorial we mentioned a technique called [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Registration|feature-based registration]] as an alternative to ICP; this is it.&lt;br /&gt;
&lt;br /&gt;
Of course, PCL has some classes for this. The one we are going to see adds a prerejection step to the RANSAC loop in order to discard hypotheses that are probably wrong. To do this, the algorithm forms polygons with the points of the input sets, and then checks pose-invariant geometrical constraints; to be precise, it makes sure that the polygon edges are of similar length. You can let it run for a specified number of iterations (RANSAC, as you already know, never actually converges), or set an error threshold.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/shot.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/registration/sample_consensus_prerejective.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the scene and the model.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr scene(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr model(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr alignedModel(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Objects for storing the SHOT descriptors for the scene and model.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;::Ptr sceneDescriptors(new pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;());&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;::Ptr modelDescriptors(new pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read the clouds from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *scene) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *model) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: here you would compute or load the descriptors for both&lt;br /&gt;
	// the scene and the model. This step has been omitted for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// Object for pose estimation.&lt;br /&gt;
	pcl::SampleConsensusPrerejective&amp;lt;pcl::PointXYZ, pcl::PointXYZ, pcl::SHOT352&amp;gt; pose;&lt;br /&gt;
	pose.setInputSource(model);&lt;br /&gt;
	pose.setInputTarget(scene);&lt;br /&gt;
	pose.setSourceFeatures(modelDescriptors);&lt;br /&gt;
	pose.setTargetFeatures(sceneDescriptors);&lt;br /&gt;
	// Instead of matching a descriptor with its nearest neighbor, choose randomly between&lt;br /&gt;
	// the N closest ones, making it more robust to outliers, but increasing time.&lt;br /&gt;
	pose.setCorrespondenceRandomness(2);&lt;br /&gt;
	// Set the fraction (0-1) of inlier points required for accepting a transformation.&lt;br /&gt;
	// At least this fraction of points will need to be aligned to accept a pose.&lt;br /&gt;
	pose.setInlierFraction(0.25f);&lt;br /&gt;
	// Set the number of samples to use during each iteration (minimum for 6 DoF is 3).&lt;br /&gt;
	pose.setNumberOfSamples(3);&lt;br /&gt;
	// Set the similarity threshold (0-1) between edge lengths of the polygons. The&lt;br /&gt;
	// closer to 1, the stricter the rejector will be, possibly discarding acceptable poses.&lt;br /&gt;
	pose.setSimilarityThreshold(0.6f);&lt;br /&gt;
	// Set the maximum distance threshold between two correspondent points in source and target.&lt;br /&gt;
	// If the distance is larger, the points will be ignored in the alignment process.&lt;br /&gt;
	pose.setMaxCorrespondenceDistance(0.01f);&lt;br /&gt;
&lt;br /&gt;
	pose.align(*alignedModel);&lt;br /&gt;
&lt;br /&gt;
	if (pose.hasConverged())&lt;br /&gt;
	{&lt;br /&gt;
		Eigen::Matrix4f transformation = pose.getFinalTransformation();&lt;br /&gt;
		Eigen::Matrix3f rotation = transformation.block&amp;lt;3, 3&amp;gt;(0, 0);&lt;br /&gt;
		Eigen::Vector3f translation = transformation.block&amp;lt;3, 1&amp;gt;(0, 3);&lt;br /&gt;
&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Transformation matrix:&amp;quot; &amp;lt;&amp;lt; std::endl &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		printf(&amp;quot;\t\t    | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(0, 0), rotation(0, 1), rotation(0, 2));&lt;br /&gt;
		printf(&amp;quot;\t\tR = | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(1, 0), rotation(1, 1), rotation(1, 2));&lt;br /&gt;
		printf(&amp;quot;\t\t    | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(2, 0), rotation(2, 1), rotation(2, 2));&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		printf(&amp;quot;\t\tt = &amp;lt; %0.3f, %0.3f, %0.3f &amp;gt;\n&amp;quot;, translation(0), translation(1), translation(2));&lt;br /&gt;
	}&lt;br /&gt;
	else std::cout &amp;lt;&amp;lt; &amp;quot;Did not converge.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For an alternative, check the [http://docs.pointclouds.org/trunk/classpcl_1_1_sample_consensus_model_registration.html pcl::SampleConsensusModelRegistration] or the [http://docs.pointclouds.org/trunk/classpcl_1_1_sample_consensus_initial_alignment.html pcl::SampleConsensusInitialAlignment] classes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (model), Points (scene), Descriptors (model), Descriptors (scene), [Correspondence randomness], [Inlier fraction], [Number of samples], [Similarity threshold]&lt;br /&gt;
* '''Output''': Aligned model, [Transformation]&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/alignment_prerejective.php Robust pose estimation of rigid objects]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/template_alignment.php Aligning object templates to a point cloud]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://personal.lut.fi/users/joni.kamarainen/downloads/publications/icra2013.pdf Pose Estimation using Local Structure-Specific Shape and Appearance Context] (Anders Glent Buch et al., 2013)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_sample_consensus_prerejective.html pcl::SampleConsensusPrerejective]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1registration_1_1_correspondence_rejector_poly.html pcl::registration::CorrespondenceRejectorPoly]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_local_pipeline_pose.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Global pipeline=&lt;br /&gt;
&lt;br /&gt;
Because of the way global descriptors work, the global pipeline for object recognition differs in some steps.&lt;br /&gt;
&lt;br /&gt;
==Segmentation==&lt;br /&gt;
&lt;br /&gt;
Unlike local descriptors, global ones understand the notion of ''object''. They are not computed for single points, but for whole clusters. Because of this, the first step is to perform [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Segmentation|segmentation]] on the cloud in order to retrieve all objects. You can use any of the methods available in PCL, as long as it works for you. Some kind of preprocessing is also a good idea, like removing all clusters that are smaller or bigger than certain thresholds, set according to the objects in the database. Yet another possibility is to perform [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Plane_model|plane segmentation]] to find a table in the scene, and consider only the clusters sitting on it (if we can assume there is a table, of course).&lt;br /&gt;
&lt;br /&gt;
PCL provides the [http://docs.pointclouds.org/trunk/classpcl_1_1_extract_polygonal_prism_data.html pcl::ExtractPolygonalPrismData] class for the latter task. Given a [[PCL/OpenNI_tutorial_3:_Cloud_processing_%28advanced%29#Convex_hull|convex hull]] computed from the plane coefficients, it extrudes the hull a certain height to create a prism, and then returns all points that lie inside it. This is the code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/method_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/model_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/extract_indices.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/convex_hull.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/extract_polygonal_prism_data.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr plane(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr convexHull(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr objects(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Get the plane model, if present.&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setInputCloud(cloud);&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
	pcl::PointIndices::Ptr planeIndices(new pcl::PointIndices);&lt;br /&gt;
	segmentation.segment(*planeIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	if (planeIndices-&amp;gt;indices.size() == 0)&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Could not find a plane in the scene.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	else&lt;br /&gt;
	{&lt;br /&gt;
		// Copy the points of the plane to a new cloud.&lt;br /&gt;
		pcl::ExtractIndices&amp;lt;pcl::PointXYZ&amp;gt; extract;&lt;br /&gt;
		extract.setInputCloud(cloud);&lt;br /&gt;
		extract.setIndices(planeIndices);&lt;br /&gt;
		extract.filter(*plane);&lt;br /&gt;
&lt;br /&gt;
		// Retrieve the convex hull.&lt;br /&gt;
		pcl::ConvexHull&amp;lt;pcl::PointXYZ&amp;gt; hull;&lt;br /&gt;
		hull.setInputCloud(plane);&lt;br /&gt;
		// Make sure that the resulting hull is bidimensional.&lt;br /&gt;
		hull.setDimension(2);&lt;br /&gt;
		hull.reconstruct(*convexHull);&lt;br /&gt;
&lt;br /&gt;
		// Redundant check.&lt;br /&gt;
		if (hull.getDimension() == 2)&lt;br /&gt;
		{&lt;br /&gt;
			// Prism object.&lt;br /&gt;
			pcl::ExtractPolygonalPrismData&amp;lt;pcl::PointXYZ&amp;gt; prism;&lt;br /&gt;
			prism.setInputCloud(cloud);&lt;br /&gt;
			prism.setInputPlanarHull(convexHull);&lt;br /&gt;
			// First parameter: minimum Z value. Setting it to 0 segments objects lying on the plane (it can be negative).&lt;br /&gt;
			// Second parameter: maximum Z value, set to 10cm. Tune it according to the height of the objects you expect.&lt;br /&gt;
			prism.setHeightLimits(0.0f, 0.1f);&lt;br /&gt;
			pcl::PointIndices::Ptr objectIndices(new pcl::PointIndices);&lt;br /&gt;
&lt;br /&gt;
			prism.segment(*objectIndices);&lt;br /&gt;
&lt;br /&gt;
			// Get and show all points retrieved by the prism.&lt;br /&gt;
			extract.setIndices(objectIndices);&lt;br /&gt;
			extract.filter(*objects);&lt;br /&gt;
			pcl::visualization::CloudViewer viewerObjects(&amp;quot;Objects on table&amp;quot;);&lt;br /&gt;
			viewerObjects.showCloud(objects);&lt;br /&gt;
			while (!viewerObjects.wasStopped())&lt;br /&gt;
			{&lt;br /&gt;
				// Do nothing but wait.&lt;br /&gt;
			}&lt;br /&gt;
		}&lt;br /&gt;
		else std::cout &amp;lt;&amp;lt; &amp;quot;The chosen hull is not planar.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:Prism_scene_before.png | Original scene, with a mug on a table.&lt;br /&gt;
File:Prism_scene_after.png | Result after using the polygonal prism class.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you get some residual points of the table in the output, try increasing the minimum Z value a bit.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Convex hull, Height limits, [Dimensionality]&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_extract_polygonal_prism_data.html pcl::ExtractPolygonalPrismData]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_polygonal_prism.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computing descriptors==&lt;br /&gt;
&lt;br /&gt;
For every cluster that has survived the previous step, a global descriptor must be computed. Most only output one histogram (except for CVFH and derivatives), so the database will be smaller than with local descriptors. Check the [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#Table|list]] and choose the one that fits you best.&lt;br /&gt;
&lt;br /&gt;
===CRH===&lt;br /&gt;
&lt;br /&gt;
In a previous article about global descriptors we talked about how they were invariant to the camera roll angle. When [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#CVFH|CVFH]] was presented, the computation of a Camera Roll Histogram (CRH) was proposed to overcome this. The CRH should be computed and stored next to the global descriptor (VFH, CVFH...) for its use in a later phase, the pose estimation.&lt;br /&gt;
&lt;br /&gt;
The CRH is computed as follows: for every point, its normal is projected onto a plane orthogonal to the vector from the camera center to the centroid of the cluster (VFH) or region (CVFH). Then, the angle between the projected normal and the camera's up vector is computed and added to the histogram, which has 90 bins (for a resolution of 4º). The projected normals are weighted by their magnitudes to reduce the effect of input noise. Check the [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6130296 paper] for details.&lt;br /&gt;
&lt;br /&gt;
You can compute it using PCL, of course:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/centroid.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/crh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;90&amp;gt; CRH90;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the CRH.&lt;br /&gt;
	pcl::PointCloud&amp;lt;CRH90&amp;gt;::Ptr histogram(new pcl::PointCloud&amp;lt;CRH90&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: this cloud file should contain a snapshot of the object. Remember&lt;br /&gt;
	// that you need to compute a CRH for every VFH or CVFH descriptor that you&lt;br /&gt;
	// are going to have (that is, one for every snapshot).&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// CRH estimation object.&lt;br /&gt;
	pcl::CRHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, CRH90&amp;gt; crh;&lt;br /&gt;
	crh.setInputCloud(object);&lt;br /&gt;
	crh.setInputNormals(normals);&lt;br /&gt;
	Eigen::Vector4f centroid;&lt;br /&gt;
	pcl::compute3DCentroid(*object, centroid);&lt;br /&gt;
	crh.setCentroid(centroid);&lt;br /&gt;
&lt;br /&gt;
	// Compute the CRH.&lt;br /&gt;
	crh.compute(*histogram);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Centroid&lt;br /&gt;
* '''Output''': Camera roll histogram&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6130296 CAD-Model Recognition and 6DOF Pose Estimation Using 3D Cues] (requires IEEE Xplore subscription) (Aitor Aldoma et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_c_r_h_estimation.html pcl::CRHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_CRH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Matching==&lt;br /&gt;
&lt;br /&gt;
As with the local pipeline, once all the descriptors have been computed, we have to search for their nearest neighbors in the database. The [https://en.wikipedia.org/wiki/Taxicab_geometry Manhattan or L1 distance] is recommended over the [https://en.wikipedia.org/wiki/Euclidean_distance Euclidean or L2] one because it is more robust to occlusions.&lt;br /&gt;
&lt;br /&gt;
Usually, more than one neighbor is retrieved, which means that the object's pose lies somewhere among those of the matched candidates. The next steps can help improve the accuracy of the estimation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/vfh_recognition.php Cluster Recognition and 6DOF Pose Estimation using VFH descriptors]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Pose estimation==&lt;br /&gt;
&lt;br /&gt;
The object's position can be determined by computing and aligning the [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Computing_the_centroid|centroids]] of the clusters (the one in the scene, and the one in the snapshot). For the rotation, we must use the viewpoint information encoded in the descriptor (you can check the ground truth pose information that was saved with the matched snapshot). This still leaves the roll angle unresolved, as global descriptors are invariant to it. Now is when the Camera Roll Histogram ([[PCL/OpenNI_tutorial_5:_3D_object_recognition_%28pipeline%29#CRH|CRH]]) that we should have stored next to the descriptor comes in handy. The angle can be obtained by aligning the current CRH with the stored one, completing the 6 DoF pose estimation.&lt;br /&gt;
&lt;br /&gt;
===CRH===&lt;br /&gt;
&lt;br /&gt;
This is the code for retrieving the roll angle using CRH:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/centroid.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/crh.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/recognition/crh_alignment.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;90&amp;gt; CRH90;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Clouds for storing the object's cluster and view.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr viewCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr clusterCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Objects for storing the CRHs of both.&lt;br /&gt;
	pcl::PointCloud&amp;lt;CRH90&amp;gt;::Ptr viewCRH(new pcl::PointCloud&amp;lt;CRH90&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;CRH90&amp;gt;::Ptr clusterCRH(new pcl::PointCloud&amp;lt;CRH90&amp;gt;);&lt;br /&gt;
	// Objects for storing the centroids.&lt;br /&gt;
	Eigen::Vector4f viewCentroid;&lt;br /&gt;
	Eigen::Vector4f clusterCentroid;&lt;br /&gt;
&lt;br /&gt;
	// Read the trained view and the scene cluster from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *viewCloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *clusterCloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: here you would compute the CRHs and centroids of both clusters.&lt;br /&gt;
	// This step has been omitted for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// CRH alignment object.&lt;br /&gt;
	pcl::CRHAlignment&amp;lt;pcl::PointXYZ, 90&amp;gt; alignment;&lt;br /&gt;
	alignment.setInputAndTargetView(clusterCloud, viewCloud);&lt;br /&gt;
	// CRHAlignment works with Vector3f, not Vector4f.&lt;br /&gt;
	Eigen::Vector3f viewCentroid3f(viewCentroid[0], viewCentroid[1], viewCentroid[2]);&lt;br /&gt;
	Eigen::Vector3f clusterCentroid3f(clusterCentroid[0], clusterCentroid[1], clusterCentroid[2]);&lt;br /&gt;
	alignment.setInputAndTargetCentroids(clusterCentroid3f, viewCentroid3f);&lt;br /&gt;
&lt;br /&gt;
	// Compute the roll angle(s).&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; angles;&lt;br /&gt;
	alignment.computeRollAngle(*clusterCRH, *viewCRH, angles);&lt;br /&gt;
&lt;br /&gt;
	if (angles.size() &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;List of angles where the histograms correlate:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		for (int i = 0; i &amp;lt; angles.size(); i++)&lt;br /&gt;
		{&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; angles.at(i) &amp;lt;&amp;lt; &amp;quot; degrees.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will return a list of the most probable roll angles. If you want, you can instead use the [http://docs.pointclouds.org/trunk/classpcl_1_1_c_r_h_alignment.html#a32c40356d56e22d48eccce5e27e436f2 align()] function, which calls &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;computeRollAngle()&amp;quot;&amp;lt;/span&amp;gt; internally. It will return a list of transformations that include the translation (computed from the difference between the centroids' coordinates) and the roll angles, but not the yaw or pitch (you have to get those from the pose estimation step).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (input and target), Centroids (input and target), CRHs (input and target)&lt;br /&gt;
* '''Output''': Roll angles, [Transformations]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6130296 CAD-Model Recognition and 6DOF Pose Estimation Using 3D Cues] (requires IEEE Xplore subscription) (Aitor Aldoma et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_c_r_h_alignment.html pcl::CRHAlignment]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_roll_estimation.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Postprocessing=&lt;br /&gt;
&lt;br /&gt;
The pipelines, as they are, should already return decent results, but we can implement optional steps to improve the accuracy of the pose estimation and to decrease the probability of false positives.&lt;br /&gt;
&lt;br /&gt;
==Pose refinement==&lt;br /&gt;
&lt;br /&gt;
The 6 DoF pose estimation can be refined with the [[PCL/OpenNI_tutorial_3:_Cloud_processing_%28advanced%29#Iterative_Closest_Point_.28ICP.29|Iterative Closest Point]] (ICP) algorithm. It will keep realigning the clouds to improve the transformation until a termination condition is met (a maximum number of iterations, or a distance or error threshold). This is identical to performing [[PCL/OpenNI_tutorial_3:_Cloud_processing_%28advanced%29#Registration|registration]], only between the input cluster and the model.&lt;br /&gt;
&lt;br /&gt;
==Hypothesis verification==&lt;br /&gt;
&lt;br /&gt;
''(Work in progress)''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_5:_3D_object_recognition_(pipeline)&amp;diff=4743</id>
		<title>PCL/OpenNI tutorial 5: 3D object recognition (pipeline)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_5:_3D_object_recognition_(pipeline)&amp;diff=4743"/>
				<updated>2015-11-01T23:23:58Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In our [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)|previous article]] we saw how 3D descriptors could be used to identify points or clusters. But a working object recognition system requires many more things. The sequence of steps that must be implemented to create such a system is known as the ''pipeline''. This final article will explain how to build one. The scope of this topic is very wide and much has been written about it, so consider this a simple introductory tutorial, meant to give you the basic knowledge to experiment further.&lt;br /&gt;
&lt;br /&gt;
=Overview=&lt;br /&gt;
&lt;br /&gt;
Ideally, a 3D object recognition system should be able to grab clouds from the device, preprocess them, compute descriptors, compare them with the ones stored in our object database, and output all matches with their position and orientation in the scene, in real time. Several components must then be implemented to perform these sequential steps, each one taking as input the output of the previous. This pipeline will be different depending on what type of descriptor we are using: local or global.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:3D_recognition_pipeline.png|thumb|center|800px|Example of local and global 3D recognition pipelines in PCL (image from [http://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf this paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
First of all, we have to ''train'' the system. Training means, in this case, creating a database with all the objects we want to be able to recognize, and the descriptors for every associated pose. Only then can we implement the recognition pipeline. There are also some postprocessing steps that are not mandatory, but will yield better results if performed, like pose refinement and hypothesis verification.&lt;br /&gt;
&lt;br /&gt;
In the next sections we will go over every step, with some PCL code snippets, as always.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://www.pointclouds.org/assets/uploads/cglibs13_recognition.pdf 3D Object Recognition and 6DOF Pose Estimation]&lt;br /&gt;
** [http://www.pointclouds.org/assets/icra2013/ensembles.pdf Ensemble Learning for Object Recognition and Pose Estimation]&lt;br /&gt;
** [http://www.pointclouds.org/assets/uploads/PCL-IAS13_Recognition.pdf 3D Object Recognition in Clutter with the Point Cloud Library]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [https://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf Point Cloud Library: Three-Dimensional Object Recognition and 6 DoF Pose Estimation] (Aitor Aldoma et al., 2012)&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Training=&lt;br /&gt;
&lt;br /&gt;
As stated, we require a database with all the objects we intend to recognize later (it is possible to build a recognition system that works with object categories and high level descriptions, for example to find all things that &amp;quot;look&amp;quot; like a chair in the scene, but that is an entirely different matter). The most common way to do this is to take dozens of snapshots of every object from different angles. This can be done with a device such as a pan-tilt unit (which consists of a mechanical platform that can be rotated on two axes in known, precise amounts), or with a common table and a printed checkerboard pattern on it, to be used as ground truth to get the camera position and orientation. Of course, you need a physical copy of the object first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Pan_tilt_unit.jpg|thumb|center|350px|Pan-tilt unit (image from http://pointclouds.org/).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another possibility is to do it virtually, rendering 3D models of the objects (modelled beforehand with CAD software like [http://www.blender.org/ Blender]) and using the Z buffer, which contains the depth information, to simulate the input a depth sensor would give. The perks of doing this include: no segmentation step is necessary to get the object cluster, and the ground truth pose information will be 100% exact.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Virtual_scanner_tesselated_sphere.png|thumb|center|600px|Visualization of the virtual scanning process (image taken from a presentation by [http://users.acin.tuwien.ac.at/mzillich/?site=0 Michael Zillich]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Many 3D object [http://pointclouds.org/media/ datasets] that are available for download have been created with one of these methods. Whichever you choose, be sure to be consistent and take all snapshots uniformly, with the same sampling level (one every X degrees). When choosing this amount, try to make a compromise between the complexity of the database (size is usually not a problem, but the more snapshots you have, the longer it will take to finish the matching step) and the precision of pose estimation.&lt;br /&gt;
&lt;br /&gt;
After this, the desired descriptor(s) must be computed for every snapshot of each object, and saved to disk. If global descriptors are being used, a [[PCL/OpenNI_tutorial_5:_3D_object_recognition_%28pipeline%29#CRH|Camera Roll Histogram]] (CRH) should be included in order to retrieve the full 6 DoF pose, as many descriptors are invariant to the camera roll angle, which would limit the pose estimation to 5 DoF. Finally, ground truth information about the camera position and orientation will make it possible to compare it against the result given back by the recognition system. Most datasets include such information in text files next to the corresponding snapshot, usually in the form of a 3x3 rotation matrix and a 3D translation vector.&lt;br /&gt;
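&lt;br /&gt;
As a reference for working with that ground truth, the 3x3 rotation matrix and the translation vector can be composed into a single 4x4 homogeneous transformation, which is the same format the recognition pipeline will output (a minimal sketch with Eigen; reading and parsing the text file is omitted, and the values are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;Eigen/Core&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// 3x3 rotation matrix and 3D translation vector, as they would be&lt;br /&gt;
	// read from the dataset's ground truth file (placeholder values).&lt;br /&gt;
	Eigen::Matrix3f rotation = Eigen::Matrix3f::Identity();&lt;br /&gt;
	Eigen::Vector3f translation(0.1f, 0.0f, 0.5f);&lt;br /&gt;
&lt;br /&gt;
	// Compose both into a single 4x4 homogeneous transformation.&lt;br /&gt;
	Eigen::Matrix4f groundTruthPose = Eigen::Matrix4f::Identity();&lt;br /&gt;
	groundTruthPose.block&amp;lt;3, 3&amp;gt;(0, 0) = rotation;&lt;br /&gt;
	groundTruthPose.block&amp;lt;3, 1&amp;gt;(0, 3) = translation;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;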
&lt;br /&gt;
If you are using local descriptors, it is possible to work with a database of full objects instead of partial snapshots (this cannot be done with global descriptors because they are computed from the whole cluster, and a full object is something we would never get in practice; plus, they also encode viewpoint information). To do this, you would have to capture the object from all angles and then use registration or another technique to stitch the clouds together into a complete, continuous model. Resampling may also be needed, because overlapping areas would have a higher point density. Also, as normals are oriented according to the viewpoint when they are computed, you would have to merge them properly so they all point outwards, instead of recomputing them after composing the object cluster. The perk of using full objects is that the matching step will be faster, as the system will not have to check against the descriptors of multiple snapshots.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/vfh_recognition.php Cluster Recognition and 6DOF Pose Estimation using VFH descriptors]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Getting a full model==&lt;br /&gt;
&lt;br /&gt;
As explained above, getting a full .pcd model for our object with a sensor requires some careful setup, good segmentation, normal computation and possibly resampling. On the other hand, if we use a 3D CAD model, it becomes much easier. A program like [http://www.blender.org/ Blender] (free and open source) can be used to design a model of the object, after taking some measurements. Programs like [http://meshlab.sourceforge.net/ MeshLab] can convert to and from many types of 3D formats. In this case, the one favored by PCL is binary [https://en.wikipedia.org/wiki/PLY_%28file_format%29 PLY] with normals (Blender can export to ASCII PLY, but I have sometimes encountered processing errors with those files).&lt;br /&gt;
&lt;br /&gt;
===Raytracing===&lt;br /&gt;
&lt;br /&gt;
PCL offers a couple of command line tools to render a .pcd cloud file from a CAD model. The first one does this by raytracing: it uses a 3D sphere or icosahedron that is tessellated (divided into polygonal regions). The virtual camera is then placed at each of the vertices (or at the polygons' centers), pointing to the origin, where the object model is. Multiple snapshots are rendered, and the final cloud is composed from this information.&lt;br /&gt;
&lt;br /&gt;
You can invoke it like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_mesh2pcd &amp;lt;input.ply&amp;gt; &amp;lt;output.pcd&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here you can see the result of converting a mesh exported from Blender, the program's mascot, [https://en.wikipedia.org/wiki/Blender_%28software%29#Suzanne Suzanne]:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:Pcl_mesh2pcd_example_original.png | Original 3D mesh in VTK format.&lt;br /&gt;
File:Pcl_mesh2pcd_example_result.png | Resulting point cloud.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The tool has some parameters available; you can check their usage with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_mesh2pcd -h&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, one of the parameters controls the size of the voxel grid used to downsample the cloud after the raytracing operation. With this you can regulate the point density of the resulting .pcd file.&lt;br /&gt;
&lt;br /&gt;
===Sampling===&lt;br /&gt;
&lt;br /&gt;
The second tool we are going to see employs sampling with a voxel grid, and produces results similar to the previous one's. You can invoke it in an identical way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_mesh_sampling &amp;lt;input.ply&amp;gt; &amp;lt;output.pcd&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It has the same parameters as the previous tool. Check the usage with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_mesh_sampling -h&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Getting partial views==&lt;br /&gt;
&lt;br /&gt;
===Tool===&lt;br /&gt;
&lt;br /&gt;
If you need to get a collection of partial snapshots of the model, PCL also has a tool for that. It is the virtual equivalent of using one of the rotating tables mentioned earlier. It works just like &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''pcl_mesh2pcd''&amp;lt;/span&amp;gt;, but instead of composing all renders into a single file, it saves each one to its own.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_virtual_scanner &amp;lt;input.ply&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
42 .pcd files will be generated inside a directory, one per snapshot. You can check the usage by invoking the tool with no parameters; there are a couple of interesting options for adding noise to the clouds, in order to better simulate the input from sensors like the Kinect. If you need something more flexible (such as a custom number of snapshots, which this program does not let you change), see the next section.&lt;br /&gt;
&lt;br /&gt;
===Code===&lt;br /&gt;
&lt;br /&gt;
You can customize the snapshot generation process by making direct use of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;RenderViewsTesselatedSphere&amp;quot;&amp;lt;/span&amp;gt; class, which is similar to what the previous tool uses internally. The code is the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/vtk_lib_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;vtkPolyDataMapper.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/apps/render_views_tesselated_sphere.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Load the PLY model from a file.&lt;br /&gt;
	vtkSmartPointer&amp;lt;vtkPLYReader&amp;gt; reader = vtkSmartPointer&amp;lt;vtkPLYReader&amp;gt;::New();&lt;br /&gt;
	reader-&amp;gt;SetFileName(argv[1]);&lt;br /&gt;
	reader-&amp;gt;Update();&lt;br /&gt;
&lt;br /&gt;
	// VTK is not exactly straightforward...&lt;br /&gt;
	vtkSmartPointer &amp;lt; vtkPolyDataMapper &amp;gt; mapper = vtkSmartPointer&amp;lt;vtkPolyDataMapper&amp;gt;::New();&lt;br /&gt;
	mapper-&amp;gt;SetInputConnection(reader-&amp;gt;GetOutputPort());&lt;br /&gt;
	mapper-&amp;gt;Update();&lt;br /&gt;
&lt;br /&gt;
	vtkSmartPointer&amp;lt;vtkPolyData&amp;gt; object = mapper-&amp;gt;GetInput();&lt;br /&gt;
&lt;br /&gt;
	// Virtual scanner object.&lt;br /&gt;
	pcl::apps::RenderViewsTesselatedSphere render_views;&lt;br /&gt;
	render_views.addModelFromPolyData(object);&lt;br /&gt;
	// Pixel width of the rendering window, it directly affects the snapshot file size.&lt;br /&gt;
	render_views.setResolution(150);&lt;br /&gt;
	// Horizontal FoV of the virtual camera.&lt;br /&gt;
	render_views.setViewAngle(57.0f);&lt;br /&gt;
	// If true, the resulting clouds of the snapshots will be organized.&lt;br /&gt;
	render_views.setGenOrganized(true);&lt;br /&gt;
	// How much to subdivide the icosahedron. Increasing this will result in a lot more snapshots.&lt;br /&gt;
	render_views.setTesselationLevel(1);&lt;br /&gt;
	// If true, the camera will be placed at the vertices of the triangles. If false, at the centers.&lt;br /&gt;
	// This will affect the number of snapshots produced (if true, fewer will be made).&lt;br /&gt;
	// True: 42 for level 1, 162 for level 2, 642 for level 3...&lt;br /&gt;
	// False: 80 for level 1, 320 for level 2, 1280 for level 3...&lt;br /&gt;
	render_views.setUseVertices(true);&lt;br /&gt;
	// If true, the entropies (the amount of occlusions) will be computed for each snapshot (optional).&lt;br /&gt;
	render_views.setComputeEntropies(true);&lt;br /&gt;
&lt;br /&gt;
	render_views.generateViews();&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the rendered views.&lt;br /&gt;
	std::vector&amp;lt;pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr&amp;gt; views;&lt;br /&gt;
	// Object for storing the poses, as 4x4 transformation matrices.&lt;br /&gt;
	std::vector&amp;lt;Eigen::Matrix4f, Eigen::aligned_allocator&amp;lt;Eigen::Matrix4f&amp;gt; &amp;gt; poses;&lt;br /&gt;
	// Object for storing the entropies (optional).&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; entropies;&lt;br /&gt;
	render_views.getViews(views);&lt;br /&gt;
	render_views.getPoses(poses);&lt;br /&gt;
	render_views.getEntropies(entropies);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the end of the program, the generated views (as point clouds) and the corresponding poses (as transformation matrices) are saved to those vectors, so you can use them as you like (e.g., saving the snapshots to .pcd files and the poses to text files). There is no API documentation available; if you want to know more, check the &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''render_views_tesselated_sphere.h''&amp;lt;/span&amp;gt; file inside PCL's source folder.&lt;br /&gt;
&lt;br /&gt;
If you get a segmentation fault, check if the .ply file is in ASCII format (you should be able to open and read it with a text editor). If it is, convert it to binary .ply using MeshLab. This usually solves it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': 3D model, [Resolution], [View angle], [Camera constraints], [Organized], [Sphere size], [Tesselation level], [Use vertices], [Compute entropies]&lt;br /&gt;
* '''Output''': Snapshot clouds, Snapshot poses, [Snapshot entropies]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_snapshot_creator.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Local pipeline=&lt;br /&gt;
&lt;br /&gt;
The following sections enumerate and explain the steps that compose the object recognition pipeline when using local descriptors.&lt;br /&gt;
&lt;br /&gt;
==Keypoint extraction==&lt;br /&gt;
&lt;br /&gt;
In this step, we have to decide which points from the current cloud will be considered keypoints (and have the local descriptor computed for them). The main characteristics of a good keypoint detector are:&lt;br /&gt;
&lt;br /&gt;
* '''Repeatability''': there should be a good chance of the same points being chosen over several iterations, even when the scene is captured from a different angle.&lt;br /&gt;
* '''Distinctiveness''': the chosen keypoints should be highly characterizing and descriptive. It should be easy to describe and match them.&lt;br /&gt;
&lt;br /&gt;
Because the second one depends on the local descriptor being used (and how the feature is computed), a different keypoint detection technique would have to be implemented for each. Another simple alternative is to perform [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Downsampling|downsampling]] on the cloud, and use all remaining points as keypoints. Follow the link to find a code snippet that you can adapt to use in your system. You can also use the [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Finding_keypoints|NARF keypoint detector]].&lt;br /&gt;
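&lt;br /&gt;
The downsampling alternative can be sketched in a few lines (the leaf size below is only illustrative, and should be tuned to the resolution of your clouds):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/voxel_grid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point cloud and the keypoints.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr keypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Downsample the cloud; every remaining point will be used as a keypoint.&lt;br /&gt;
	pcl::VoxelGrid&amp;lt;pcl::PointXYZ&amp;gt; grid;&lt;br /&gt;
	grid.setInputCloud(cloud);&lt;br /&gt;
	// Leaf (voxel) size, in meters (illustrative value; tune it to your data).&lt;br /&gt;
	grid.setLeafSize(0.01f, 0.01f, 0.01f);&lt;br /&gt;
	grid.filter(*keypoints);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;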
&lt;br /&gt;
===ISS===&lt;br /&gt;
&lt;br /&gt;
Another keypoint detector available in PCL is the ISS (Intrinsic Shape Signatures) one. ISS is a local descriptor presented in 2009, which creates a view-independent signature of the local surface patch. An algorithm for choosing keypoints was also included to better fit the descriptor. The algorithm scans the surfaces and chooses only points with large variations in the [https://en.wikipedia.org/wiki/Principal_curvature principal direction] (the shape of the surface), which are ideal for keypoints.&lt;br /&gt;
&lt;br /&gt;
You can use the associated PCL class this way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/iss_3d.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// This function by Tommaso Cavallari and Federico Tombari, taken from the tutorial&lt;br /&gt;
// http://pointclouds.org/documentation/tutorials/correspondence_grouping.php&lt;br /&gt;
double&lt;br /&gt;
computeCloudResolution(const pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr&amp;amp; cloud)&lt;br /&gt;
{&lt;br /&gt;
	double resolution = 0.0;&lt;br /&gt;
	int numberOfPoints = 0;&lt;br /&gt;
	int nres;&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; indices(2);&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances(2);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt; tree;&lt;br /&gt;
	tree.setInputCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; cloud-&amp;gt;size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		if (! pcl_isfinite((*cloud)[i].x))&lt;br /&gt;
			continue;&lt;br /&gt;
&lt;br /&gt;
		// Considering the second neighbor since the first is the point itself.&lt;br /&gt;
		nres = tree.nearestKSearch(i, 2, indices, squaredDistances);&lt;br /&gt;
		if (nres == 2)&lt;br /&gt;
		{&lt;br /&gt;
			resolution += sqrt(squaredDistances[1]);&lt;br /&gt;
			++numberOfPoints;&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
	if (numberOfPoints != 0)&lt;br /&gt;
		resolution /= numberOfPoints;&lt;br /&gt;
&lt;br /&gt;
	return resolution;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point cloud and the keypoints.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr keypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// ISS keypoint detector object.&lt;br /&gt;
	pcl::ISSKeypoint3D&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt; detector;&lt;br /&gt;
	detector.setInputCloud(cloud);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	detector.setSearchMethod(kdtree);&lt;br /&gt;
	double resolution = computeCloudResolution(cloud);&lt;br /&gt;
	// Set the radius of the spherical neighborhood used to compute the scatter matrix.&lt;br /&gt;
	detector.setSalientRadius(6 * resolution);&lt;br /&gt;
	// Set the radius for the application of the non maxima supression algorithm.&lt;br /&gt;
	detector.setNonMaxRadius(4 * resolution);&lt;br /&gt;
	// Set the minimum number of neighbors that has to be found while applying the non maxima suppression algorithm.&lt;br /&gt;
	detector.setMinNeighbors(5);&lt;br /&gt;
	// Set the upper bound on the ratio between the second and the first eigenvalue.&lt;br /&gt;
	detector.setThreshold21(0.975);&lt;br /&gt;
	// Set the upper bound on the ratio between the third and the second eigenvalue.&lt;br /&gt;
	detector.setThreshold32(0.975);&lt;br /&gt;
	// Set the number of processing threads to use. 0 sets it to automatic.&lt;br /&gt;
	detector.setNumberOfThreads(4);&lt;br /&gt;
	&lt;br /&gt;
	detector.compute(*keypoints);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Salient radius, Non maxima radius, Minimum neighbors, Eigenvalues thresholds, [Number of threads]&lt;br /&gt;
* '''Output''': Keypoints&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&amp;amp;arnumber=5457637 Intrinsic shape signatures: A shape descriptor for 3D object recognition] (requires IEEE Xplore subscription) (Yu Zhong, 2009)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_i_s_s_keypoint3_d.html pcl::ISSKeypoint3D]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_ISS_keypoints.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computing descriptors==&lt;br /&gt;
&lt;br /&gt;
The next step is to compute the desired local descriptor(s) for every keypoint. The input parameters vary from one descriptor to another, and their effectiveness depends on the scene we are capturing and the objects we are working with, so I cannot give you an ideal solution. Check the list of [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#Table|available descriptors]], choose the one you deem best, and study the original publication to understand how it works and how the computation can be tuned to your needs with its parameters (and try to understand how changing them affects the output). Refer to the code snippets given for each one.&lt;br /&gt;
&lt;br /&gt;
==Matching==&lt;br /&gt;
&lt;br /&gt;
After the local descriptor has been computed for every keypoint in the cloud, we have to match them, finding correspondences with the ones we have stored in our object database. For this, a search structure like a [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#k-d_tree|''k''-d tree]] can be used to perform a nearest neighbor search, retrieving the Euclidean distances between descriptors (and optionally, enforcing a maximum distance value as a threshold). Every descriptor in the scene should be matched against the descriptors of every model in the database, because this accounts for the presence of multiple instances of the model, which would not be recognized if we did it the other way around.&lt;br /&gt;
&lt;br /&gt;
The following code sample shows how to find all point correspondences between the cloud and a model:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/shot.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the SHOT descriptors for the scene.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;::Ptr sceneDescriptors(new pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;());&lt;br /&gt;
	// Object for storing the SHOT descriptors for the model.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;::Ptr modelDescriptors(new pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read the already computed descriptors from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::SHOT352&amp;gt;(argv[1], *sceneDescriptors) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::SHOT352&amp;gt;(argv[2], *modelDescriptors) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// A kd-tree object that uses the FLANN library for fast search of nearest neighbors.&lt;br /&gt;
	pcl::KdTreeFLANN&amp;lt;pcl::SHOT352&amp;gt; matching;&lt;br /&gt;
	matching.setInputCloud(modelDescriptors);&lt;br /&gt;
	// A Correspondence object stores the indices of the query and the match,&lt;br /&gt;
	// and the distance/weight.&lt;br /&gt;
	pcl::CorrespondencesPtr correspondences(new pcl::Correspondences());&lt;br /&gt;
&lt;br /&gt;
	// Check every descriptor computed for the scene.&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; sceneDescriptors-&amp;gt;size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		std::vector&amp;lt;int&amp;gt; neighbors(1);&lt;br /&gt;
		std::vector&amp;lt;float&amp;gt; squaredDistances(1);&lt;br /&gt;
		// Ignore NaNs.&lt;br /&gt;
		if (pcl_isfinite(sceneDescriptors-&amp;gt;at(i).descriptor[0]))&lt;br /&gt;
		{&lt;br /&gt;
			// Find the nearest neighbor (in descriptor space)...&lt;br /&gt;
			int neighborCount = matching.nearestKSearch(sceneDescriptors-&amp;gt;at(i), 1, neighbors, squaredDistances);&lt;br /&gt;
			// ...and add a new correspondence if the distance is less than a threshold&lt;br /&gt;
			// (SHOT distances are between 0 and 1, other descriptors use different metrics).&lt;br /&gt;
			if (neighborCount == 1 &amp;amp;&amp;amp; squaredDistances[0] &amp;lt; 0.25f)&lt;br /&gt;
			{&lt;br /&gt;
				pcl::Correspondence correspondence(neighbors[0], static_cast&amp;lt;int&amp;gt;(i), squaredDistances[0]);&lt;br /&gt;
				correspondences-&amp;gt;push_back(correspondence);&lt;br /&gt;
			}&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;Found &amp;quot; &amp;lt;&amp;lt; correspondences-&amp;gt;size() &amp;lt;&amp;lt; &amp;quot; correspondences.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/correspondence_grouping.php 3D Object Recognition based on Correspondence Grouping]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_kd_tree_f_l_a_n_n.html pcl::KdTreeFLANN]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_correspondence.html pcl::Correspondence]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_local_pipeline_matching.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Correspondence grouping==&lt;br /&gt;
&lt;br /&gt;
Right now, all we have is a list of correspondences between keypoints in the scene and keypoints from some object(s) in the database. This does not necessarily mean that a given object is present in the scene. Consider this example: a box with two keypoints located on opposite corners, at a known distance from each other. The matching stage has found two keypoints in the scene with very similar descriptors but, as it turns out, a very different distance between them. Unless we take non-rigid transformations (such as scaling) into account, we have to discard the idea that those points belong to the object.&lt;br /&gt;
&lt;br /&gt;
This check can be implemented with an additional step called ''correspondence grouping''. As the name states, it groups correspondences that are geometrically consistent (for the given object model) into clusters, and discards the ones that are not. Rotations and translations are permitted, but anything beyond that will not meet the criteria. Also, as a minimum of 3 correspondences is required to retrieve the 6 DoF pose, groups with fewer than that can be ignored.&lt;br /&gt;
&lt;br /&gt;
PCL offers a couple of classes to perform correspondence grouping.&lt;br /&gt;
&lt;br /&gt;
===Geometric consistency===&lt;br /&gt;
&lt;br /&gt;
This is the simplest method. It just iterates over all feature correspondences not yet grouped, and adds them to the current subset if they are geometrically consistent. The set resolution can be tuned to make the algorithm more or less forgiving when enforcing the consistency. For every subset (which should correspond with an instance of the model), this PCL class also computes the transformation (rotation and translation), making the next step ([[PCL/OpenNI_tutorial_5:_3D_object_recognition_(pipeline)#Pose_estimation|pose estimation]]) unnecessary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/correspondence.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/recognition/cg/geometric_consistency.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the keypoints of the scene and the model.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr sceneKeypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr modelKeypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Objects for storing the unclustered and clustered correspondences.&lt;br /&gt;
	pcl::CorrespondencesPtr correspondences(new pcl::Correspondences());&lt;br /&gt;
	std::vector&amp;lt;pcl::Correspondences&amp;gt; clusteredCorrespondences;&lt;br /&gt;
	// Object for storing the transformations (rotation plus translation).&lt;br /&gt;
	std::vector&amp;lt;Eigen::Matrix4f, Eigen::aligned_allocator&amp;lt;Eigen::Matrix4f&amp;gt; &amp;gt; transformations;&lt;br /&gt;
&lt;br /&gt;
	// Read the keypoints from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *sceneKeypoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *modelKeypoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: here you would compute the correspondences.&lt;br /&gt;
	// It has been omitted here for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// Object for correspondence grouping.&lt;br /&gt;
	pcl::GeometricConsistencyGrouping&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt; grouping;&lt;br /&gt;
	grouping.setSceneCloud(sceneKeypoints);&lt;br /&gt;
	grouping.setInputCloud(modelKeypoints);&lt;br /&gt;
	grouping.setModelSceneCorrespondences(correspondences);&lt;br /&gt;
	// Minimum cluster size. Default is 3 (as at least 3 correspondences&lt;br /&gt;
	// are needed to compute the 6 DoF pose).&lt;br /&gt;
	grouping.setGCThreshold(3);&lt;br /&gt;
	// Resolution of the consensus set used to cluster correspondences together,&lt;br /&gt;
	// in metric units. Default is 1.0.&lt;br /&gt;
	grouping.setGCSize(0.01);&lt;br /&gt;
&lt;br /&gt;
	grouping.recognize(transformations, clusteredCorrespondences);&lt;br /&gt;
&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;Model instances found: &amp;quot; &amp;lt;&amp;lt; transformations.size() &amp;lt;&amp;lt; std::endl &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; transformations.size(); i++)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Instance &amp;quot; &amp;lt;&amp;lt; (i + 1) &amp;lt;&amp;lt; &amp;quot;:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tHas &amp;quot; &amp;lt;&amp;lt; clusteredCorrespondences[i].size() &amp;lt;&amp;lt; &amp;quot; correspondences.&amp;quot; &amp;lt;&amp;lt; std::endl &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		Eigen::Matrix3f rotation = transformations[i].block&amp;lt;3, 3&amp;gt;(0, 0);&lt;br /&gt;
		Eigen::Vector3f translation = transformations[i].block&amp;lt;3, 1&amp;gt;(0, 3);&lt;br /&gt;
		printf(&amp;quot;\t\t    | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(0, 0), rotation(0, 1), rotation(0, 2));&lt;br /&gt;
		printf(&amp;quot;\t\tR = | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(1, 0), rotation(1, 1), rotation(1, 2));&lt;br /&gt;
		printf(&amp;quot;\t\t    | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(2, 0), rotation(2, 1), rotation(2, 2));&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		printf(&amp;quot;\t\tt = &amp;lt; %0.3f, %0.3f, %0.3f &amp;gt;\n&amp;quot;, translation(0), translation(1), translation(2));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not need to compute the transformations (or prefer to do it later in its own step), you can use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;cluster()&amp;quot;&amp;lt;/span&amp;gt; function, which only takes one parameter (the output object for the clustered correspondences), instead of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;recognize()&amp;quot;&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (model), Points (scene), Correspondences, Minimum cluster size, Consensus set resolution&lt;br /&gt;
* '''Output''': Clustered correspondences, [Transformation]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/correspondence_grouping.php 3D Object Recognition based on Correspondence Grouping]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_geometric_consistency_grouping.html pcl::GeometricConsistencyGrouping]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_local_pipeline_GC_grouping.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Hough voting===&lt;br /&gt;
&lt;br /&gt;
This method is a bit more complex. It uses a technique known as the [https://en.wikipedia.org/wiki/Hough_transform Hough transform], which was originally devised to perform line detection on 2D images, though it was later extended to arbitrary shapes and higher dimensions. The method works with a voting procedure, with votes being cast on the candidates found to be more suitable. If enough votes accumulate for a given position in the space, then the shape is considered &amp;quot;found&amp;quot; there and its parameters are retrieved.&lt;br /&gt;
&lt;br /&gt;
The voting procedure is carried out in a parameter space, which would need 6 dimensions if the full pose (rotation plus translation) were estimated directly. Because that would be computationally expensive, the method implemented in PCL uses only a 3D Hough space, and employs local Reference Frames (RFs) to account for the remaining 3 DoF. Its computation can then be divided into two steps.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Hough_voting_pipeline.png|thumb|center|600px|Use of the Hough voting scheme (red) in a 3D object recognition pipeline (image from [http://vision.deis.unibo.it/fede/papers/psivt10.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Computing Reference Frames====&lt;br /&gt;
&lt;br /&gt;
For every keypoint (in both the model and the scene), a local Reference Frame must be computed so that the full pose can be retrieved later. PCL offers a class that implements the BOrder Aware Repeatable Directions (BOARD) algorithm.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/board.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr sceneCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr modelCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Objects for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr sceneNormals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr modelNormals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Objects for storing the keypoints.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr sceneKeypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr modelKeypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Objects for storing the Reference Frames.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;::Ptr sceneRF(new pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;::Ptr modelRF(new pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read the scene and model clouds from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *sceneCloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *modelCloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: here you would estimate the normals for the whole cloud, and choose&lt;br /&gt;
	// the keypoints. It has been omitted here for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// BOARD RF estimation object.&lt;br /&gt;
	pcl::BOARDLocalReferenceFrameEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::ReferenceFrame&amp;gt; rf;&lt;br /&gt;
	// Search radius (maximum distance of the points used to estimate the X and Y axes&lt;br /&gt;
	// of the BOARD Reference Frame for a given point).&lt;br /&gt;
	rf.setRadiusSearch(0.02f);&lt;br /&gt;
	// Check if support is complete, or has missing regions because it is too close to mesh borders.&lt;br /&gt;
	rf.setFindHoles(true);&lt;br /&gt;
&lt;br /&gt;
	rf.setInputCloud(sceneKeypoints);&lt;br /&gt;
	rf.setInputNormals(sceneNormals);&lt;br /&gt;
	rf.setSearchSurface(sceneCloud);&lt;br /&gt;
	rf.compute(*sceneRF);&lt;br /&gt;
&lt;br /&gt;
	rf.setInputCloud(modelKeypoints);&lt;br /&gt;
	rf.setInputNormals(modelNormals);&lt;br /&gt;
	rf.setSearchSurface(modelCloud);&lt;br /&gt;
	rf.compute(*modelRF);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Be sure to check the API reference because the BOARD estimation object has many more parameters to tune.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search radius, [Tangent radius], [Margin array size], [Margin threshold], [Find holes], [Hole size threshold], [Steepness threshold]&lt;br /&gt;
* '''Output''': Reference Frames&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/correspondence_grouping.php 3D Object Recognition based on Correspondence Grouping]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://vision.deis.unibo.it/LRF/LRF_repeatability_ICCV2011.pdf On the repeatability of the local reference frame for partial shape matching] (Alioscia Petrelli and Luigi Di Stefano, 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_b_o_a_r_d_local_reference_frame_estimation.html pcl::BOARDLocalReferenceFrameEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_local_pipeline_Hough_RFs.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Clustering====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/recognition/cg/hough_3d.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the keypoints of the scene and the model.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr sceneKeypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr modelKeypoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Objects for storing the Reference Frames.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;::Ptr sceneRF(new pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;::Ptr modelRF(new pcl::PointCloud&amp;lt;pcl::ReferenceFrame&amp;gt;);&lt;br /&gt;
	// Objects for storing the unclustered and clustered correspondences.&lt;br /&gt;
	pcl::CorrespondencesPtr correspondences(new pcl::Correspondences());&lt;br /&gt;
	std::vector&amp;lt;pcl::Correspondences&amp;gt; clusteredCorrespondences;&lt;br /&gt;
	// Object for storing the transformations (rotation plus translation).&lt;br /&gt;
	std::vector&amp;lt;Eigen::Matrix4f, Eigen::aligned_allocator&amp;lt;Eigen::Matrix4f&amp;gt; &amp;gt; transformations;&lt;br /&gt;
&lt;br /&gt;
	// Read the keypoints from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *sceneKeypoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *modelKeypoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: here you would compute the correspondences and the Reference Frames.&lt;br /&gt;
	// It has been omitted here for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// Object for correspondence grouping.&lt;br /&gt;
	pcl::Hough3DGrouping&amp;lt;pcl::PointXYZ, pcl::PointXYZ, pcl::ReferenceFrame, pcl::ReferenceFrame&amp;gt; grouping;&lt;br /&gt;
	grouping.setInputCloud(modelKeypoints);&lt;br /&gt;
	grouping.setInputRf(modelRF);&lt;br /&gt;
	grouping.setSceneCloud(sceneKeypoints);&lt;br /&gt;
	grouping.setSceneRf(sceneRF);&lt;br /&gt;
	grouping.setModelSceneCorrespondences(correspondences);&lt;br /&gt;
	// Minimum cluster size. Default is 3 (as at least 3 correspondences&lt;br /&gt;
	// are needed to compute the 6 DoF pose).&lt;br /&gt;
	grouping.setHoughThreshold(3);&lt;br /&gt;
	// Size of each bin in the Hough space.&lt;br /&gt;
	grouping.setHoughBinSize(3);&lt;br /&gt;
	// If true, the vote casting procedure will interpolate the score&lt;br /&gt;
	// between neighboring bins in the Hough space.&lt;br /&gt;
	grouping.setUseInterpolation(true);&lt;br /&gt;
	// If true, the vote casting procedure will use the correspondence's&lt;br /&gt;
	// weighted distance to compute the Hough voting score.&lt;br /&gt;
	grouping.setUseDistanceWeight(false);&lt;br /&gt;
&lt;br /&gt;
	grouping.recognize(transformations, clusteredCorrespondences);&lt;br /&gt;
&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;Model instances found: &amp;quot; &amp;lt;&amp;lt; transformations.size() &amp;lt;&amp;lt; std::endl &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; transformations.size(); i++)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Instance &amp;quot; &amp;lt;&amp;lt; (i + 1) &amp;lt;&amp;lt; &amp;quot;:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tHas &amp;quot; &amp;lt;&amp;lt; clusteredCorrespondences[i].size() &amp;lt;&amp;lt; &amp;quot; correspondences.&amp;quot; &amp;lt;&amp;lt; std::endl &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		Eigen::Matrix3f rotation = transformations[i].block&amp;lt;3, 3&amp;gt;(0, 0);&lt;br /&gt;
		Eigen::Vector3f translation = transformations[i].block&amp;lt;3, 1&amp;gt;(0, 3);&lt;br /&gt;
		printf(&amp;quot;\t\t    | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(0, 0), rotation(0, 1), rotation(0, 2));&lt;br /&gt;
		printf(&amp;quot;\t\tR = | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(1, 0), rotation(1, 1), rotation(1, 2));&lt;br /&gt;
		printf(&amp;quot;\t\t    | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(2, 0), rotation(2, 1), rotation(2, 2));&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		printf(&amp;quot;\t\tt = &amp;lt; %0.3f, %0.3f, %0.3f &amp;gt;\n&amp;quot;, translation(0), translation(1), translation(2));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not give the Reference Frames to the Hough grouping object, it will compute them itself, but then you will need to specify additional parameters (see the API for details). Also, you can call &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;cluster()&amp;quot;&amp;lt;/span&amp;gt; instead of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;recognize()&amp;quot;&amp;lt;/span&amp;gt; if you want to skip the pose estimation for now.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (model), Points (scene), Correspondences, Reference Frames (model), Reference Frames (scene), Hough bin size, Hough threshold, [Use distance weight], [Use interpolation]&lt;br /&gt;
* '''Output''': Clustered correspondences, [Transformation]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/correspondence_grouping.php 3D Object Recognition based on Correspondence Grouping]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/psivt10.pdf Object recognition in 3D scenes with occlusions and clutter by Hough voting] (Federico Tombari and Luigi Di Stefano, 2010)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_hough3_d_grouping.html pcl::Hough3DGrouping]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_local_pipeline_Hough_grouping.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Pose estimation==&lt;br /&gt;
&lt;br /&gt;
The two correspondence grouping classes included in PCL already compute the full pose, but you can also do it manually, or use this step to refine the results of the previous one (removing correspondences that are not compatible with a single 6 DoF pose). A method like RANSAC (RANdom SAmple Consensus) can be used to find the rotation and translation of the rigid transformation that best fits the correspondences of a model instance. In a previous tutorial we mentioned a technique called [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Registration|feature-based registration]] as an alternative to ICP; this is it.&lt;br /&gt;
&lt;br /&gt;
Of course, PCL has classes for this. The one we are going to see adds a prerejection step to the RANSAC loop, in order to discard hypotheses that are probably wrong before evaluating them. To do this, the algorithm forms polygons with the points of the input sets, and then checks pose-invariant geometric constraints; to be precise, it makes sure that corresponding polygon edges have similar lengths. You can let it run for a specified number of iterations (since, as you already know, RANSAC never actually converges), or set a threshold.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/shot.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/registration/sample_consensus_prerejective.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the scene and the model.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr scene(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr model(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr alignedModel(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Objects for storing the SHOT descriptors for the scene and model.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;::Ptr sceneDescriptors(new pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;());&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;::Ptr modelDescriptors(new pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read the clouds from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *scene) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *model) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: here you would compute or load the descriptors for both&lt;br /&gt;
	// the scene and the model. It has been omitted here for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// Object for pose estimation.&lt;br /&gt;
	pcl::SampleConsensusPrerejective&amp;lt;pcl::PointXYZ, pcl::PointXYZ, pcl::SHOT352&amp;gt; pose;&lt;br /&gt;
	pose.setInputSource(model);&lt;br /&gt;
	pose.setInputTarget(scene);&lt;br /&gt;
	pose.setSourceFeatures(modelDescriptors);&lt;br /&gt;
	pose.setTargetFeatures(sceneDescriptors);&lt;br /&gt;
	// Instead of matching a descriptor with its nearest neighbor, choose randomly between&lt;br /&gt;
	// the N closest ones, making it more robust to outliers, but increasing time.&lt;br /&gt;
	pose.setCorrespondenceRandomness(2);&lt;br /&gt;
	// Set the fraction (0-1) of inlier points required for accepting a transformation.&lt;br /&gt;
	// At least this number of points will need to be aligned to accept a pose.&lt;br /&gt;
	pose.setInlierFraction(0.25f);&lt;br /&gt;
	// Set the number of samples to use during each iteration (minimum for 6 DoF is 3).&lt;br /&gt;
	pose.setNumberOfSamples(3);&lt;br /&gt;
	// Set the similarity threshold (0-1) between edge lengths of the polygons. The&lt;br /&gt;
	// closer to 1, the more strict the rejector will be, probably discarding acceptable poses.&lt;br /&gt;
	pose.setSimilarityThreshold(0.6f);&lt;br /&gt;
	// Set the maximum distance threshold between two correspondent points in source and target.&lt;br /&gt;
	// If the distance is larger, the points will be ignored in the alignment process.&lt;br /&gt;
	pose.setMaxCorrespondenceDistance(0.01f);&lt;br /&gt;
&lt;br /&gt;
	pose.align(*alignedModel);&lt;br /&gt;
&lt;br /&gt;
	if (pose.hasConverged())&lt;br /&gt;
	{&lt;br /&gt;
		Eigen::Matrix4f transformation = pose.getFinalTransformation();&lt;br /&gt;
		Eigen::Matrix3f rotation = transformation.block&amp;lt;3, 3&amp;gt;(0, 0);&lt;br /&gt;
		Eigen::Vector3f translation = transformation.block&amp;lt;3, 1&amp;gt;(0, 3);&lt;br /&gt;
&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Transformation matrix:&amp;quot; &amp;lt;&amp;lt; std::endl &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		printf(&amp;quot;\t\t    | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(0, 0), rotation(0, 1), rotation(0, 2));&lt;br /&gt;
		printf(&amp;quot;\t\tR = | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(1, 0), rotation(1, 1), rotation(1, 2));&lt;br /&gt;
		printf(&amp;quot;\t\t    | %6.3f %6.3f %6.3f | \n&amp;quot;, rotation(2, 0), rotation(2, 1), rotation(2, 2));&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		printf(&amp;quot;\t\tt = &amp;lt; %0.3f, %0.3f, %0.3f &amp;gt;\n&amp;quot;, translation(0), translation(1), translation(2));&lt;br /&gt;
	}&lt;br /&gt;
	else std::cout &amp;lt;&amp;lt; &amp;quot;Did not converge.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For an alternative, check the [http://docs.pointclouds.org/trunk/classpcl_1_1_sample_consensus_model_registration.html pcl::SampleConsensusModelRegistration] or the [http://docs.pointclouds.org/trunk/classpcl_1_1_sample_consensus_initial_alignment.html pcl::SampleConsensusInitialAlignment] classes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (model), Points (scene), Descriptors (model), Descriptors (scene), [Correspondence randomness], [Inlier fraction], [Number of samples], [Similarity threshold]&lt;br /&gt;
* '''Output''': Aligned model, [Transformation]&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/alignment_prerejective.php Robust pose estimation of rigid objects]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/template_alignment.php Aligning object templates to a point cloud]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://personal.lut.fi/users/joni.kamarainen/downloads/publications/icra2013.pdf Pose Estimation using Local Structure-Specific Shape and Appearance Context] (Anders Glent Buch et al., 2013)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_sample_consensus_prerejective.html pcl::SampleConsensusPrerejective]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1registration_1_1_correspondence_rejector_poly.html pcl::registration::CorrespondenceRejectorPoly]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_local_pipeline_pose.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Global pipeline=&lt;br /&gt;
&lt;br /&gt;
Because of the way global descriptors work, the global pipeline for object recognition differs in some steps.&lt;br /&gt;
&lt;br /&gt;
==Segmentation==&lt;br /&gt;
&lt;br /&gt;
Unlike local descriptors, global ones understand the notion of ''object''. They are not computed for single points, but for whole clusters. Because of this, the first step is to perform [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Segmentation|segmentation]] on the cloud in order to retrieve the objects in it. You can use any of the methods available in PCL, as long as it works for your scenes. Some preprocessing is also a good idea, like discarding all clusters smaller or bigger than certain thresholds, which should be set according to the objects in the database. Yet another possibility is to perform [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Plane_model|plane segmentation]] to find any table(s) in the scene, and consider only the clusters sitting on them (if we can assume there is a table, of course).&lt;br /&gt;
&lt;br /&gt;
PCL provides the [http://docs.pointclouds.org/trunk/classpcl_1_1_extract_polygonal_prism_data.html pcl::ExtractPolygonalPrismData] class for the latter task. Given a [[PCL/OpenNI_tutorial_3:_Cloud_processing_%28advanced%29#Convex_hull|convex hull]] computed from the plane, it extrudes the hull by a certain height to create a prism, and then returns all points that lie inside it. This is the code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/method_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/model_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/extract_indices.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/convex_hull.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/extract_polygonal_prism_data.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr plane(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr convexHull(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr objects(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Get the plane model, if present.&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setInputCloud(cloud);&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
	pcl::PointIndices::Ptr planeIndices(new pcl::PointIndices);&lt;br /&gt;
	segmentation.segment(*planeIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	if (planeIndices-&amp;gt;indices.size() == 0)&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Could not find a plane in the scene.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	else&lt;br /&gt;
	{&lt;br /&gt;
		// Copy the points of the plane to a new cloud.&lt;br /&gt;
		pcl::ExtractIndices&amp;lt;pcl::PointXYZ&amp;gt; extract;&lt;br /&gt;
		extract.setInputCloud(cloud);&lt;br /&gt;
		extract.setIndices(planeIndices);&lt;br /&gt;
		extract.filter(*plane);&lt;br /&gt;
&lt;br /&gt;
		// Retrieve the convex hull.&lt;br /&gt;
		pcl::ConvexHull&amp;lt;pcl::PointXYZ&amp;gt; hull;&lt;br /&gt;
		hull.setInputCloud(plane);&lt;br /&gt;
		// Make sure that the resulting hull is bidimensional.&lt;br /&gt;
		hull.setDimension(2);&lt;br /&gt;
		hull.reconstruct(*convexHull);&lt;br /&gt;
&lt;br /&gt;
		// Redundant check.&lt;br /&gt;
		if (hull.getDimension() == 2)&lt;br /&gt;
		{&lt;br /&gt;
			// Prism object.&lt;br /&gt;
			pcl::ExtractPolygonalPrismData&amp;lt;pcl::PointXYZ&amp;gt; prism;&lt;br /&gt;
			prism.setInputCloud(cloud);&lt;br /&gt;
			prism.setInputPlanarHull(convexHull);&lt;br /&gt;
			// First parameter: minimum Z value. Set to 0, segments objects lying on the plane (can be negative).&lt;br /&gt;
			// Second parameter: maximum Z value, set to 10cm. Tune it according to the height of the objects you expect.&lt;br /&gt;
			prism.setHeightLimits(0.0f, 0.1f);&lt;br /&gt;
			pcl::PointIndices::Ptr objectIndices(new pcl::PointIndices);&lt;br /&gt;
&lt;br /&gt;
			prism.segment(*objectIndices);&lt;br /&gt;
&lt;br /&gt;
			// Get and show all points retrieved by the hull.&lt;br /&gt;
			extract.setIndices(objectIndices);&lt;br /&gt;
			extract.filter(*objects);&lt;br /&gt;
			pcl::visualization::CloudViewer viewerObjects(&amp;quot;Objects on table&amp;quot;);&lt;br /&gt;
			viewerObjects.showCloud(objects);&lt;br /&gt;
			while (!viewerObjects.wasStopped())&lt;br /&gt;
			{&lt;br /&gt;
				// Do nothing but wait.&lt;br /&gt;
			}&lt;br /&gt;
		}&lt;br /&gt;
		else std::cout &amp;lt;&amp;lt; &amp;quot;The chosen hull is not planar.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:Prism_scene_before.png | Original scene, with a mug on a table.&lt;br /&gt;
File:Prism_scene_after.png | Result after using the polygonal prism class.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you get some residual points of the table in the output, try increasing the minimum Z value a bit.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Convex hull, Height limits, [Dimensionality]&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_extract_polygonal_prism_data.html pcl::ExtractPolygonalPrismData]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_polygonal_prism.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computing descriptors==&lt;br /&gt;
&lt;br /&gt;
For every cluster that has survived the previous step, a global descriptor must be computed. Most descriptors output a single histogram per cluster (except CVFH and its derivatives), so the database will be smaller than with local descriptors. Check the [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#Table|list]] and choose the one that best fits your needs.&lt;br /&gt;
&lt;br /&gt;
===CRH===&lt;br /&gt;
&lt;br /&gt;
In a previous article about global descriptors we mentioned that they are invariant to the camera roll angle, which leaves one rotational DoF undetermined. When [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#CVFH|CVFH]] was presented, the computation of a Camera Roll Histogram (CRH) was proposed to overcome this. The CRH should be computed and stored alongside the global descriptor (VFH, CVFH...) for use in a later stage, pose estimation.&lt;br /&gt;
&lt;br /&gt;
The CRH is computed as follows: every point's normal is projected onto the plane orthogonal to the vector from the camera center to the centroid of the cluster (VFH) or region (CVFH). Then, the angle between the projected normal and the camera's up vector is computed and added to the histogram, which has 90 bins (for a resolution of 4°). The projected normals are weighted by their magnitudes to reduce the effect of input noise. Check the [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6130296 paper] for details.&lt;br /&gt;
&lt;br /&gt;
You can compute it using PCL, of course:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/centroid.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/crh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;90&amp;gt; CRH90;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the CRH.&lt;br /&gt;
	pcl::PointCloud&amp;lt;CRH90&amp;gt;::Ptr histogram(new pcl::PointCloud&amp;lt;CRH90&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: this cloud file should contain a snapshot of the object. Remember&lt;br /&gt;
	// that you need to compute a CRH for every VFH or CVFH descriptor that you&lt;br /&gt;
	// are going to have (that is, one for every snapshot).&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// CRH estimation object.&lt;br /&gt;
	pcl::CRHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, CRH90&amp;gt; crh;&lt;br /&gt;
	crh.setInputCloud(object);&lt;br /&gt;
	crh.setInputNormals(normals);&lt;br /&gt;
	Eigen::Vector4f centroid;&lt;br /&gt;
	pcl::compute3DCentroid(*object, centroid);&lt;br /&gt;
	crh.setCentroid(centroid);&lt;br /&gt;
&lt;br /&gt;
	// Compute the CRH.&lt;br /&gt;
	crh.compute(*histogram);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Centroid&lt;br /&gt;
* '''Output''': Camera roll histogram&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6130296 CAD-Model Recognition and 6DOF Pose Estimation Using 3D Cues] (requires IEEE Xplore subscription) (Aitor Aldoma et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_c_r_h_estimation.html pcl::CRHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_CRH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Matching==&lt;br /&gt;
&lt;br /&gt;
As with the local pipeline, once all the descriptors have been computed, we search for their nearest neighbors in the database. The [https://en.wikipedia.org/wiki/Taxicab_geometry Manhattan (L1) distance] is recommended over the [https://en.wikipedia.org/wiki/Euclidean_distance Euclidean (L2)] one because it is more robust to occlusions.&lt;br /&gt;
&lt;br /&gt;
Usually, more than one neighbor is retrieved, which means that the object's pose lies somewhere between the poses of the matched snapshots. The next steps can help improve the accuracy of the estimation.&lt;br /&gt;
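&lt;br /&gt;
A minimal sketch of such a search, using FLANN (the library PCL relies on internally for nearest neighbor searches) with the L1 distance. The database contents and sizes here are placeholders; in practice, each row would be one of the stored global descriptors (for example, a 308-bin VFH histogram read from disk):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;flann/flann.h&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Toy database: 3 stored descriptors of size 308 each, flattened row by&lt;br /&gt;
	// row. In a real system, these would be loaded from the training files.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; database(3 * 308, 0.0f);&lt;br /&gt;
	// Descriptor of the scene cluster we want to identify.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; query(308, 0.0f);&lt;br /&gt;
&lt;br /&gt;
	flann::Matrix&amp;lt;float&amp;gt; databaseMatrix(&amp;amp;database[0], 3, 308);&lt;br /&gt;
	flann::Matrix&amp;lt;float&amp;gt; queryMatrix(&amp;amp;query[0], 1, 308);&lt;br /&gt;
&lt;br /&gt;
	// Build an index using the Manhattan (L1) distance.&lt;br /&gt;
	flann::Index&amp;lt;flann::L1&amp;lt;float&amp;gt; &amp;gt; index(databaseMatrix, flann::LinearIndexParams());&lt;br /&gt;
	index.buildIndex();&lt;br /&gt;
&lt;br /&gt;
	// Retrieve the 2 nearest neighbors of the query descriptor.&lt;br /&gt;
	std::vector&amp;lt;std::vector&amp;lt;int&amp;gt; &amp;gt; indices;&lt;br /&gt;
	std::vector&amp;lt;std::vector&amp;lt;float&amp;gt; &amp;gt; distances;&lt;br /&gt;
	index.knnSearch(queryMatrix, indices, distances, 2, flann::SearchParams());&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;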
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/vfh_recognition.php Cluster Recognition and 6DOF Pose Estimation using VFH descriptors]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Pose estimation==&lt;br /&gt;
&lt;br /&gt;
The object's position can be determined by computing and aligning the [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Computing_the_centroid|centroids]] of the clusters (the one in the scene, and the one in the snapshot). For the rotation, we must use the viewpoint information encoded in the descriptor (you can check the ground truth pose information that was saved with the matched snapshot). This still leaves the roll angle unresolved, as global descriptors are invariant to it. Now is when the Camera Roll Histogram ([[PCL/OpenNI_tutorial_5:_3D_object_recognition_%28pipeline%29#CRH|CRH]]) that we should have stored next to the descriptor comes in handy. The angle can be obtained by aligning the current CRH with the stored one, completing the 6 DoF pose estimation.&lt;br /&gt;
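&lt;br /&gt;
The translation part can be sketched with a few lines of Eigen. The centroid values here are placeholders; in practice they would come from &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;compute3DCentroid()&amp;quot;&amp;lt;/span&amp;gt;, as in the previous snippets:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;Eigen/Dense&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Placeholder centroids of the scene cluster and the matched snapshot.&lt;br /&gt;
	Eigen::Vector4f clusterCentroid(1.0f, 0.5f, 2.0f, 1.0f);&lt;br /&gt;
	Eigen::Vector4f viewCentroid(0.0f, 0.0f, 1.0f, 1.0f);&lt;br /&gt;
&lt;br /&gt;
	// The translation that takes the snapshot onto the cluster is just the&lt;br /&gt;
	// difference between the centroids (the homogeneous component is dropped).&lt;br /&gt;
	Eigen::Vector3f translation = (clusterCentroid - viewCentroid).head&amp;lt;3&amp;gt;();&lt;br /&gt;
&lt;br /&gt;
	// Compose a 4x4 transformation holding only this translation; the&lt;br /&gt;
	// rotation would come from the stored viewpoint and the CRH alignment.&lt;br /&gt;
	Eigen::Matrix4f transformation = Eigen::Matrix4f::Identity();&lt;br /&gt;
	transformation.block&amp;lt;3, 1&amp;gt;(0, 3) = translation;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;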
&lt;br /&gt;
===CRH===&lt;br /&gt;
&lt;br /&gt;
This is the code for retrieving the roll angle using CRH:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/centroid.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/crh.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/recognition/crh_alignment.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;90&amp;gt; CRH90;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Clouds for storing the object's cluster and view.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr viewCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr clusterCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Objects for storing the CRHs of both.&lt;br /&gt;
	pcl::PointCloud&amp;lt;CRH90&amp;gt;::Ptr viewCRH(new pcl::PointCloud&amp;lt;CRH90&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;CRH90&amp;gt;::Ptr clusterCRH(new pcl::PointCloud&amp;lt;CRH90&amp;gt;);&lt;br /&gt;
	// Objects for storing the centroids.&lt;br /&gt;
	Eigen::Vector4f viewCentroid;&lt;br /&gt;
	Eigen::Vector4f clusterCentroid;&lt;br /&gt;
&lt;br /&gt;
	// Read the trained view and the scene cluster from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *viewCloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *clusterCloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: here you would compute the CRHs and centroids of both clusters.&lt;br /&gt;
	// It has been omitted here for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// CRH alignment object.&lt;br /&gt;
	pcl::CRHAlignment&amp;lt;pcl::PointXYZ, 90&amp;gt; alignment;&lt;br /&gt;
	alignment.setInputAndTargetView(clusterCloud, viewCloud);&lt;br /&gt;
	// CRHAlignment works with Vector3f, not Vector4f.&lt;br /&gt;
	Eigen::Vector3f viewCentroid3f(viewCentroid[0], viewCentroid[1], viewCentroid[2]);&lt;br /&gt;
	Eigen::Vector3f clusterCentroid3f(clusterCentroid[0], clusterCentroid[1], clusterCentroid[2]);&lt;br /&gt;
	alignment.setInputAndTargetCentroids(clusterCentroid3f, viewCentroid3f);&lt;br /&gt;
&lt;br /&gt;
	// Compute the roll angle(s).&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; angles;&lt;br /&gt;
	alignment.computeRollAngle(*clusterCRH, *viewCRH, angles);&lt;br /&gt;
&lt;br /&gt;
	if (angles.size() &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;List of angles where the histograms correlate:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		for (int i = 0; i &amp;lt; angles.size(); i++)&lt;br /&gt;
		{&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; angles.at(i) &amp;lt;&amp;lt; &amp;quot; degrees.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will return a list of the most probable roll angles. Alternatively, you can use the [http://docs.pointclouds.org/trunk/classpcl_1_1_c_r_h_alignment.html#a32c40356d56e22d48eccce5e27e436f2 align()] function, which calls &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;computeRollAngle()&amp;quot;&amp;lt;/span&amp;gt; internally. It returns a list of transformations that include the translation (computed from the difference between the centroids' coordinates) and the roll angles, but not the yaw or pitch (those come from the pose estimation step).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (input and target), Centroids (input and target), CRHs (input and target)&lt;br /&gt;
* '''Output''': Roll angles, [Transformations]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6130296 CAD-Model Recognition and 6DOF Pose Estimation Using 3D Cues] (requires IEEE Xplore subscription) (Aitor Aldoma et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_c_r_h_alignment.html pcl::CRHAlignment]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_roll_estimation.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Postprocessing=&lt;br /&gt;
&lt;br /&gt;
The pipelines, as they are, should already return decent results, but optional steps can be implemented to improve the accuracy of the pose estimation and decrease the probability of false positives.&lt;br /&gt;
&lt;br /&gt;
==Pose refinement==&lt;br /&gt;
&lt;br /&gt;
The 6 DoF pose estimation can be refined with the [[PCL/OpenNI_tutorial_3:_Cloud_processing_%28advanced%29#Iterative_Closest_Point_.28ICP.29|Iterative Closest Point]] (ICP) algorithm. It will continuously realign the clouds to improve the transformation, until a termination condition is met (a maximum number of iterations, or a distance or error threshold). This is identical to performing [[PCL/OpenNI_tutorial_3:_Cloud_processing_%28advanced%29#Registration|registration]], only between the input and the model.&lt;br /&gt;
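&lt;br /&gt;
As a sketch (the clouds here are left empty for brevity, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;coarsePose&amp;quot;&amp;lt;/span&amp;gt; is a placeholder for the transformation obtained from the pose estimation step):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/registration/icp.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Clouds for the model (snapshot) and the scene cluster.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr model(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cluster(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt; aligned;&lt;br /&gt;
&lt;br /&gt;
	// Note: here you would load or fill both clouds. &amp;quot;coarsePose&amp;quot; stands for&lt;br /&gt;
	// the transformation obtained from the pose estimation step.&lt;br /&gt;
	Eigen::Matrix4f coarsePose = Eigen::Matrix4f::Identity();&lt;br /&gt;
&lt;br /&gt;
	// ICP object.&lt;br /&gt;
	pcl::IterativeClosestPoint&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt; icp;&lt;br /&gt;
	icp.setInputSource(model);&lt;br /&gt;
	icp.setInputTarget(cluster);&lt;br /&gt;
	// Termination conditions: iteration cap and convergence thresholds.&lt;br /&gt;
	icp.setMaximumIterations(50);&lt;br /&gt;
	icp.setTransformationEpsilon(1e-8);&lt;br /&gt;
	icp.setEuclideanFitnessEpsilon(1e-6);&lt;br /&gt;
&lt;br /&gt;
	// Refine the pose, starting from the coarse estimate.&lt;br /&gt;
	icp.align(aligned, coarsePose);&lt;br /&gt;
&lt;br /&gt;
	if (icp.hasConverged())&lt;br /&gt;
	{&lt;br /&gt;
		Eigen::Matrix4f refinedPose = icp.getFinalTransformation();&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;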
&lt;br /&gt;
==Hypothesis verification==&lt;br /&gt;
&lt;br /&gt;
''(Work in progress)''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)&amp;diff=4742</id>
		<title>PCL/OpenNI tutorial 4: 3D object recognition (descriptors)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)&amp;diff=4742"/>
				<updated>2015-11-01T23:12:44Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is time to learn the basics of one of the most interesting applications of point cloud processing: 3D object recognition. Akin to 2D recognition, this technique relies on finding good keypoints (characteristic points) in the cloud, and matching them to a set of previously saved ones. But 3D has several advantages over 2D: namely, we can estimate with decent accuracy the exact position and orientation of the object relative to the sensor. Also, 3D object recognition tends to be more robust to clutter (crowded scenes where objects in the foreground occlude those in the background). Finally, having information about the object's shape helps with collision avoidance and grasping operations.&lt;br /&gt;
&lt;br /&gt;
In this first tutorial we will see what descriptors are, which types are available in PCL, and how to compute them.&lt;br /&gt;
&lt;br /&gt;
=Overview=&lt;br /&gt;
&lt;br /&gt;
The basis of 3D object recognition is finding a set of correspondences between two different clouds, one of which contains the object we are looking for. To do this, we need a way to compare points in an unambiguous manner. Until now, we have worked with points that store XYZ coordinates, RGB color... but none of those properties is unique enough. In two sequential scans, two points could share the same coordinates despite belonging to different surfaces, and using the color information takes us back to 2D recognition, with all the lighting-related problems.&lt;br /&gt;
&lt;br /&gt;
In a previous tutorial, we talked about [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Feature_estimation | features]] before introducing normals. Normals are an example of a feature, because they encode information about the vicinity of the point: the neighboring points are taken into account when computing them, giving us an idea of what the surrounding surface looks like. But this is not enough. For a feature to be optimal, it must meet the following criteria:&lt;br /&gt;
&lt;br /&gt;
# '''It must be robust to transformations''': rigid transformations (the ones that do not change the distance between points) like translations and rotations must not affect the feature. Even if we play with the cloud a bit beforehand, there should be no difference.&lt;br /&gt;
# '''It must be robust to noise''': measurement errors that cause noise should not change the feature estimation much.&lt;br /&gt;
# '''It must be resolution invariant''': if sampled with different density (like after performing downsampling), the result must be identical or similar.&lt;br /&gt;
&lt;br /&gt;
This is where descriptors come in. They are more complex (and precise) signatures of a point that encode a lot of information about the surrounding geometry. The purpose is to unequivocally identify a point across multiple point clouds, no matter the noise, resolution or transformations. Also, some of them capture additional data about the object they belong to, like the viewpoint (which lets us retrieve the pose).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Correspondence.jpg|thumb|center|600px|Finding correspondences between point features of two clouds (image from http://pointclouds.org/).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are many 3D descriptors implemented into PCL. Each one has its own method for computing unique values for a point. Some use the difference between the angles of the normals of the point and its neighbors, for example. Others use the distances between the points. Because of this, some are inherently better or worse for certain purposes. A given descriptor may be scale invariant, and another one may be better with occlusions and partial views of objects. Which one you choose depends on what you want to do.&lt;br /&gt;
&lt;br /&gt;
After calculating the necessary values, an additional step is performed to reduce the descriptor size: the result is binned into a [https://en.wikipedia.org/wiki/Histogram histogram]. To do this, the value range of each variable that makes up the descriptor is divided into ''n'' subdivisions, and the number of occurrences in each one is counted. Imagine a descriptor that computes a single variable that ranges from 1 to 100, for which we create 10 bins: the first bin gathers all occurrences between 1 and 10, the second from 11 to 20, and so on. If the value of the variable for the first point-neighbor pair is 27, we increment the value of the third bin by 1. We keep doing this until we get the final histogram for that keypoint. The bin size must be carefully chosen depending on how descriptive the variable is. The variables do not have to share the same number of bins, and the bins do not have to be of equal size: if, for example, most values fell in the 50-100 range, it would be sensible to have more, smaller bins in that range.&lt;br /&gt;
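&lt;br /&gt;
The binning just described can be sketched in a few lines (the feature values here are made up, and the 10 bins are uniform over the 1-100 range):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Toy values of a single feature, one per point-neighbor pair, in [1, 100].&lt;br /&gt;
	float values[] = { 27.0f, 3.0f, 99.0f, 45.0f, 27.5f };&lt;br /&gt;
	// 10 uniform bins: [1, 10], [11, 20], ..., [91, 100].&lt;br /&gt;
	int histogram[10] = { 0 };&lt;br /&gt;
&lt;br /&gt;
	for (int i = 0; i &amp;lt; 5; i++)&lt;br /&gt;
	{&lt;br /&gt;
		int bin = (int)((values[i] - 1.0f) / 10.0f);&lt;br /&gt;
		// Clamp, in case a value falls outside the expected range.&lt;br /&gt;
		if (bin &amp;lt; 0) bin = 0;&lt;br /&gt;
		if (bin &amp;gt; 9) bin = 9;&lt;br /&gt;
		histogram[bin]++;&lt;br /&gt;
	}&lt;br /&gt;
	// 27 ends up in the third bin (index 2), as in the example above.&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;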
&lt;br /&gt;
Descriptors can be classified in two main categories: global and local. The process for computing and using each one (recognition pipeline) is different, so each will be explained in its own section in this article.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/how_features_work.php How 3D Features work in PCL]&lt;br /&gt;
** [https://github.com/PointCloudLibrary/pcl/wiki/Overview-and-Comparison-of-Features  Overview and Comparison of Features]&lt;br /&gt;
** [http://www.pointclouds.org/assets/icra2013/pcl_features_icra13.pdf How does a good feature look like?]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [https://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf Point Cloud Library: Three-Dimensional Object Recognition and 6 DoF Pose Estimation] (Aitor Aldoma et al., 2012)&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Table==&lt;br /&gt;
&lt;br /&gt;
The following table will give you a hint of how many descriptors are there in PCL, and some of their features:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot; style=&amp;quot;margin: auto; text-align:center&amp;quot; cellpadding=&amp;quot;15&amp;quot;&lt;br /&gt;
|+ 3D descriptors&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Name&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Type&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Size&amp;lt;sup&amp;gt;†&amp;lt;/sup&amp;gt;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Custom PointType&amp;lt;sup&amp;gt;††&amp;lt;/sup&amp;gt;&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#PFH|PFH]] (Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 125&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#FPFH|FPFH]] (Fast Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 33&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RSD|RSD]] (Radius-Based Surface Descriptor)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 289&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#3DSC|3DSC]] (3D Shape Context)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 1980&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#USC|USC]] (Unique Shape Context)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 1960&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#SHOT|SHOT]] (Signatures of Histograms of Orientations)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 352&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#Spin_image|Spin image]]&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 153*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RIFT|RIFT]] (Rotation-Invariant Feature Transform)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 32*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#NARF|NARF]] (Normal Aligned Radial Feature)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 36&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#RoPS|RoPS]] (Rotational Projection Statistics)&lt;br /&gt;
| style=&amp;quot;color: sienna;&amp;quot; | Local&lt;br /&gt;
| 135*&lt;br /&gt;
| style=&amp;quot;color: red;&amp;quot; | No&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#VFH|VFH]] (Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#CVFH|CVFH]] (Clustered Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#OUR-CVFH|OUR-CVFH]] (Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 308&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#ESF|ESF]] (Ensemble of Shape Functions)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 640&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#GFPFH|GFPFH]] (Global Fast Point Feature Histogram)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 16&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)#GRSD|GRSD]] (Global Radius-Based Surface Descriptor)&lt;br /&gt;
| style=&amp;quot;color: darkcyan;&amp;quot; | Global&lt;br /&gt;
| 21&lt;br /&gt;
| style=&amp;quot;color: green;&amp;quot; | Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
† Values marked with an asterisk (*) indicate that the descriptor's size depends on some parameter(s), and the one given is for the default values.&lt;br /&gt;
&lt;br /&gt;
†† Descriptors without their own custom PointType use the generic &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::Histogram&amp;lt;&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; type. See [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Saving_and_loading|Saving and loading]].&lt;br /&gt;
&lt;br /&gt;
Optionally, you can download a document with a more detailed version of the table. The page format is A4, landscape:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Downloads''':&lt;br /&gt;
** [http://robotica.unileon.es/~victorm/3D_descriptors_comparison.odt Table as original ODT] (uses font embedding, requires LibreOffice 4)&lt;br /&gt;
** [http://robotica.unileon.es/~victorm/3D_descriptors_comparison.pdf Table as PDF]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Local descriptors=&lt;br /&gt;
&lt;br /&gt;
Local descriptors are computed for the individual points that we give as input. They have no notion of what an object is; they just describe the local geometry around each point. Usually, it is your task to choose the points for which a descriptor will be computed: the ''keypoints''. Most of the time, you can get away with just downsampling the cloud and using all remaining points, but dedicated keypoint detectors are available, like the one used for [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#Finding_keypoints|NARF]], or [[PCL/OpenNI_tutorial_5:_3D_object_recognition_%28pipeline%29#ISS|ISS]].&lt;br /&gt;
&lt;br /&gt;
Local descriptors are used for object recognition and [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Registration | registration]]. Now we will see which ones are implemented into PCL.&lt;br /&gt;
&lt;br /&gt;
==PFH==&lt;br /&gt;
&lt;br /&gt;
PFH stands for Point Feature Histogram. It is one of the most important descriptors offered by PCL, and the basis of others such as FPFH. The PFH tries to capture information about the geometry surrounding the point by analyzing the differences between the directions of the normals in the vicinity (because of this, an imprecise normal estimation may produce low-quality descriptors).&lt;br /&gt;
&lt;br /&gt;
First, the algorithm pairs all points in the vicinity (not just the chosen keypoint with its neighbors, but also the neighbors with each other). Then, for each pair, a [https://en.wikipedia.org/wiki/Darboux_frame fixed coordinate frame] is computed from their normals. With this frame, the difference between the normals can be encoded with 3 angular variables. These variables, together with the Euclidean distance between the points, are saved, and then binned into a histogram when all pairs have been computed. The final descriptor is the concatenation of the histograms of each variable (4 in total).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:PFH_neighbors.png | Point pairs established when computing the PFH for a point (image from http://pointclouds.org).&lt;br /&gt;
File:PFH_frame.png | Fixed coordinate frame and angular features computed for one of the pairs (image from http://pointclouds.org).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Computing descriptors in PCL is very easy, and the PFH is not an exception:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/pfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the PFH descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PFHSignature125&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::PFHSignature125&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// PFH estimation object.&lt;br /&gt;
	pcl::PFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::PFHSignature125&amp;gt; pfh;&lt;br /&gt;
	pfh.setInputCloud(cloud);&lt;br /&gt;
	pfh.setInputNormals(normals);&lt;br /&gt;
	pfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	pfh.setRadiusSearch(0.05);&lt;br /&gt;
&lt;br /&gt;
	pfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, PCL uses the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt; type to store the descriptor. This means that the descriptor's size is 125 (the dimensionality of the feature vector): dividing a feature in ''D''-dimensional space into ''B'' divisions per dimension requires a total of B&amp;lt;sup&amp;gt;D&amp;lt;/sup&amp;gt; bins. The original proposal makes use of the distance between the points, but the PCL implementation does not, as it was not considered discriminative enough (especially in 2.5D scans, where the distance between points increases the further away they are from the sensor). Hence, the 3 remaining angular features, with 5 divisions each, result in a 125-dimensional (5&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;) vector.&lt;br /&gt;
&lt;br /&gt;
The final object that stores the computed descriptors can be handled like a normal cloud (even saved to, or read from, disk), and it has the same number of &amp;quot;points&amp;quot; as the original one. The &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt; object at position 0, for example, stores the PFH descriptor for the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; point at the same position in the cloud.&lt;br /&gt;
&lt;br /&gt;
For additional details about the descriptor, check the original publications listed below, or PCL's tutorial.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius&lt;br /&gt;
* '''Output''': PFH descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/pfh_estimation.php Point Feature Histograms (PFH) descriptors]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://ias.in.tum.de/_media/spezial/bib/rusu08ias.pdf Persistent Point Feature Histograms for 3D Point Clouds] (Radu Bogdan Rusu et al., 2008)&lt;br /&gt;
** [http://files.rbrusu.com/publications/RusuPhDThesis.pdf Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments] (Radu Bogdan Rusu, 2009, section 4.4, page 50)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_p_f_h_estimation.html pcl::PFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_PFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===FPFH===&lt;br /&gt;
&lt;br /&gt;
PFH gives accurate results, but it has a drawback: it is too computationally expensive to run in real time. For a cloud of ''n'' keypoints with ''k'' neighbors considered each, it has a [https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations complexity] of ''O(nk&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;)''. Because of this, a derived descriptor was created, named FPFH (Fast Point Feature Histogram).&lt;br /&gt;
&lt;br /&gt;
The FPFH considers only the direct connections between the current keypoint and its neighbors, removing the additional links between neighbors. This takes the complexity down to ''O(nk)''. Because of this simplification, the resulting histogram is referred to as SPFH (Simplified Point Feature Histogram). The reference frame and the angular variables are computed just as before.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FPFH_neighbors.png|thumb|center|400px|Point pairs established when computing the FPFH for a point (image from http://pointclouds.org).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To account for the loss of these extra connections, an additional step takes place after all histograms have been computed: the SPFHs of a point's neighbors are &amp;quot;merged&amp;quot; with its own, weighted according to the distance. This has the effect of giving a point surface information of points as far away as 2 times the radius used. Finally, the 3 histograms (distance is not used) are concatenated to compose the final descriptor.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/fpfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the FPFH descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::FPFHSignature33&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::FPFHSignature33&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// FPFH estimation object.&lt;br /&gt;
	pcl::FPFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33&amp;gt; fpfh;&lt;br /&gt;
	fpfh.setInputCloud(cloud);&lt;br /&gt;
	fpfh.setInputNormals(normals);&lt;br /&gt;
	fpfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	fpfh.setRadiusSearch(0.05);&lt;br /&gt;
&lt;br /&gt;
	fpfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An additional implementation of the FPFH estimation that takes advantage of multithreaded optimizations (with the [https://en.wikipedia.org/wiki/OpenMP OpenMP API]) is available in the class &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;FPFHEstimationOMP&amp;quot;&amp;lt;/span&amp;gt;. Its interface is identical to that of the standard unoptimized implementation, but using it will noticeably reduce computation times on multi-core systems. Remember to include the header &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;fpfh_omp.h&amp;quot;&amp;lt;/span&amp;gt; instead.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius&lt;br /&gt;
* '''Output''': FPFH descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/fpfh_estimation.php Fast Point Feature Histograms (FPFH) descriptors]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://files.rbrusu.com/publications/Rusu09ICRA.pdf Fast Point Feature Histograms (FPFH) for 3D Registration] (Radu Bogdan Rusu et al., 2009)&lt;br /&gt;
** [http://files.rbrusu.com/publications/RusuPhDThesis.pdf Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments] (Radu Bogdan Rusu, 2009, section 4.5, page 57)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_f_p_f_h_estimation.html pcl::FPFHEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_f_p_f_h_estimation_o_m_p.html pcl::FPFHEstimationOMP]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_FPFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RSD==&lt;br /&gt;
&lt;br /&gt;
The Radius-Based Surface Descriptor encodes the radial relationship between a point and its neighborhood. For every pair formed by the keypoint and one of its neighbors, the algorithm computes the distance between them and the difference between their normals. Then, by assuming that both points lie on the surface of a sphere, said sphere is found by fitting not only the points but also the normals (otherwise, there would be infinitely many possible spheres). Finally, from all the point-neighbor spheres, only the ones with the maximum and minimum radii are kept and saved to the descriptor of that point.&lt;br /&gt;
&lt;br /&gt;
As you may have deduced already, when two points lie on a flat surface, the fitted sphere's radius will be infinite. If, on the other hand, they lie on the curved face of a cylinder, the radius will be roughly the same as that of the cylinder. This allows us to tell objects apart with RSD. The algorithm takes a parameter that sets the maximum radius at which the points will be considered to be part of a plane.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:RSD_sphere.png | Sphere that fits two points with their normals (image from [http://ias.in.tum.de/_media/events/rgbd2011/marton.pdf original presentation]).&lt;br /&gt;
File:RSD_values.png | RSD values for a cloud; points on a flat surface have maximum values (image from [http://ias.in.tum.de/_media/events/rgbd2011/marton.pdf original presentation]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is the code for computing the RSD descriptor:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rsd.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the RSD descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PrincipalRadiiRSD&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::PrincipalRadiiRSD&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// RSD estimation object.&lt;br /&gt;
	pcl::RSDEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::PrincipalRadiiRSD&amp;gt; rsd;&lt;br /&gt;
	rsd.setInputCloud(cloud);&lt;br /&gt;
	rsd.setInputNormals(normals);&lt;br /&gt;
	rsd.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	rsd.setRadiusSearch(0.05);&lt;br /&gt;
	// Plane radius. Any radius larger than this is considered infinite (a plane).&lt;br /&gt;
	rsd.setPlaneRadius(0.1);&lt;br /&gt;
	// Do we want to save the full distance-angle histograms?&lt;br /&gt;
	rsd.setSaveHistograms(false);&lt;br /&gt;
	&lt;br /&gt;
	rsd.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Optionally, you can use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setSaveHistograms()&amp;quot;&amp;lt;/span&amp;gt; function to enable the saving of the full distance-angle histograms, and then use &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getHistograms()&amp;quot;&amp;lt;/span&amp;gt; to retrieve them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius, Maximum Radius, [Subdivisions], [Save full histograms]&lt;br /&gt;
* '''Output''': RSD descriptors&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.391.7398&amp;amp;rep=rep1&amp;amp;type=pdf General 3D Modelling of Novel Objects from a Single View] (Zoltan-Csaba Marton et al., 2010)&lt;br /&gt;
** [https://ias.cs.tum.edu/_media/spezial/bib/marton11ijrr.pdf Combined 2D-3D Categorization and Classification for Multimodal Perception Systems] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_r_s_d_estimation.html pcl::RSDEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RSD.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==3DSC==&lt;br /&gt;
&lt;br /&gt;
The 3D Shape Context is a descriptor that extends its existing 2D counterpart to the third dimension. It works by creating a support structure (a sphere, to be precise) centered at the point we are computing the descriptor for, with the given search radius. The &amp;quot;north pole&amp;quot; of that sphere (the notion of &amp;quot;up&amp;quot;) is oriented to match the normal at that point. Then, the sphere is divided into 3D regions or bins. In the first 2 coordinates (azimuth and elevation) the divisions are equally spaced, but in the third (the radial dimension), divisions are logarithmically spaced so they are smaller towards the center. A minimum radius can be specified to prevent very small bins, which would be too sensitive to small changes in the surface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:3DSC_support_structure.png|thumb|center|200px|Support structure to compute the 3DSC for a point (image from [http://www.cs.jhu.edu/~misha/Papers/Frome04.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For each bin, a weighted count is accumulated for every neighboring point that lies within. The weight depends on the volume of the bin and the local point density (number of points around the current neighbor). This gives the descriptor some degree of resolution invariance.&lt;br /&gt;
&lt;br /&gt;
We have mentioned that the sphere is aligned with the normal. This still leaves one degree of freedom (only two axes have been locked; the azimuth remains free), so the descriptor so far does not cope with rotation. To overcome this (so the same point in two different clouds has the same value), the support sphere is rotated around the normal ''N'' times (once for each azimuth division) and the process is repeated for each rotation, giving a total of ''N'' descriptors for that point.&lt;br /&gt;
&lt;br /&gt;
You can compute the 3DSC descriptor the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/3dsc.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the 3DSC descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ShapeContext1980&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::ShapeContext1980&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// 3DSC estimation object.&lt;br /&gt;
	pcl::ShapeContext3DEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::ShapeContext1980&amp;gt; sc3d;&lt;br /&gt;
	sc3d.setInputCloud(cloud);&lt;br /&gt;
	sc3d.setInputNormals(normals);&lt;br /&gt;
	sc3d.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. It will also be the radius of the support sphere.&lt;br /&gt;
	sc3d.setRadiusSearch(0.05);&lt;br /&gt;
	// The minimal radius value for the search sphere, to avoid being too sensitive&lt;br /&gt;
	// in bins close to the center of the sphere.&lt;br /&gt;
	sc3d.setMinimalRadius(0.05 / 10.0);&lt;br /&gt;
	// Radius used to compute the local point density for the neighbors&lt;br /&gt;
	// (the density is the number of points within that radius).&lt;br /&gt;
	sc3d.setPointDensityRadius(0.05 / 5.0);&lt;br /&gt;
&lt;br /&gt;
	sc3d.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Search method, Radius, Minimal radius, Point density radius&lt;br /&gt;
* '''Output''': 3DSC descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.cs.jhu.edu/~misha/Papers/Frome04.pdf Recognizing Objects in Range Data Using Regional Point Descriptors] (Andrea Frome et al., 2004)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_shape_context3_d_estimation.html pcl::ShapeContext3DEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_3DSC.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===USC===&lt;br /&gt;
&lt;br /&gt;
The Unique Shape Context descriptor extends the 3DSC by defining a local reference frame (LRF), in order to provide a unique orientation for each point. This not only improves the accuracy of the descriptor, but also reduces its size, as computing multiple descriptors to account for orientation is no longer necessary.&lt;br /&gt;
&lt;br /&gt;
You can check the second publication listed below to learn more about how the LRF is computed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/usc.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the USC descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::UniqueShapeContext1960&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::UniqueShapeContext1960&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// USC estimation object.&lt;br /&gt;
	pcl::UniqueShapeContext&amp;lt;pcl::PointXYZ, pcl::UniqueShapeContext1960, pcl::ReferenceFrame&amp;gt; usc;&lt;br /&gt;
	usc.setInputCloud(cloud);&lt;br /&gt;
	// Search radius, to look for neighbors. It will also be the radius of the support sphere.&lt;br /&gt;
	usc.setRadiusSearch(0.05);&lt;br /&gt;
	// The minimal radius value for the search sphere, to avoid being too sensitive&lt;br /&gt;
	// in bins close to the center of the sphere.&lt;br /&gt;
	usc.setMinimalRadius(0.05 / 10.0);&lt;br /&gt;
	// Radius used to compute the local point density for the neighbors&lt;br /&gt;
	// (the density is the number of points within that radius).&lt;br /&gt;
	usc.setPointDensityRadius(0.05 / 5.0);&lt;br /&gt;
	// Set the radius to compute the Local Reference Frame.&lt;br /&gt;
	usc.setLocalRadius(0.05);&lt;br /&gt;
&lt;br /&gt;
	usc.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk). For 1.7 and below, change UniqueShapeContext1960 to ShapeContext1980, and edit CMakeLists.txt.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius, Minimal radius, Point density radius, Local radius&lt;br /&gt;
* '''Output''': USC descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/3dor10.pdf Unique Shape Context for 3D Data Description] (Federico Tombari et al., 2010)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_unique_shape_context.html pcl::UniqueShapeContext]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_USC.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==SHOT==&lt;br /&gt;
&lt;br /&gt;
SHOT stands for Signature of Histograms of Orientations. Like 3DSC, it encodes information about the topology (surface) within a spherical support structure. This sphere is divided into 32 bins or volumes, with 8 divisions along the azimuth, 2 along the elevation, and 2 along the radius. For every volume, a one-dimensional local histogram is computed. The variable chosen is the angle between the normal of the keypoint and the normal of each point within that volume (to be precise, the cosine of the angle, which was found to be better suited).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SHOT_support_structure.png|thumb|center|200px|Support structure to compute SHOT. Only 4 azimuth divisions are shown for clarity (image from [http://vision.deis.unibo.it/fede/papers/eccv10.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When all local histograms have been computed, they are stitched together in a final descriptor. Like the USC descriptor, SHOT makes use of a local reference frame, making it rotation invariant. It is also robust to noise and clutter.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/shot.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the SHOT descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::SHOT352&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// SHOT estimation object.&lt;br /&gt;
	pcl::SHOTEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::SHOT352&amp;gt; shot;&lt;br /&gt;
	shot.setInputCloud(cloud);&lt;br /&gt;
	shot.setInputNormals(normals);&lt;br /&gt;
	// The radius that defines which of the keypoint's neighbors are described.&lt;br /&gt;
	// If too large, there may be clutter, and if too small, not enough points may be found.&lt;br /&gt;
	shot.setRadiusSearch(0.02);&lt;br /&gt;
&lt;br /&gt;
	shot.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Like with FPFH, a multithreading-optimized variant that makes use of OpenMP is available as &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOTEstimationOMP&amp;quot;&amp;lt;/span&amp;gt;; you need to include the header &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;shot_omp.h&amp;quot;&amp;lt;/span&amp;gt; instead. Also, another variant that additionally uses the texture for matching is available, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOTColorEstimation&amp;quot;&amp;lt;/span&amp;gt;, with an optimized version too (see the second publication for more details). It outputs a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;SHOT1344&amp;quot;&amp;lt;/span&amp;gt; descriptor.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Radius&lt;br /&gt;
* '''Output''': SHOT descriptors&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/eccv10.pdf Unique Signatures of Histograms for Local Surface Description] (Federico Tombari et al., 2010)&lt;br /&gt;
** [http://www.vision.deis.unibo.it/fede/papers/icip11.pdf A Combined Texture-Shape Descriptor for Enhanced 3D Feature Matching] (Federico Tombari et al., 2011)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_estimation.html pcl::SHOTEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_color_estimation.html pcl::SHOTColorEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_estimation_o_m_p.html pcl::SHOTEstimationOMP]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_s_h_o_t_color_estimation_o_m_p.html pcl::SHOTColorEstimationOMP]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_SHOT.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Spin image==&lt;br /&gt;
&lt;br /&gt;
The Spin Image (SI) is the oldest descriptor covered here. It has been around since 1997, but it still sees some use in certain applications. It was originally designed to describe surfaces made of vertices, edges and polygons, but it has since been adapted for point clouds. The descriptor is unlike all the others in that its output resembles an image, which can be compared with other spin images using the usual image-matching means.&lt;br /&gt;
&lt;br /&gt;
The support structure used is a cylinder, centered at the point, with a given radius and height, and aligned with the normal. This cylinder is divided radially and vertically into volumes. For each one, the number of neighbors lying inside is added up, eventually producing a descriptor. Weighting and interpolation are used to improve the result. The final descriptor can be seen as a grayscale image where dark areas correspond to volumes with higher point density.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Spin images.png|thumb|center|400px|Spin images computed for 3 points of a model (image from [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.4190&amp;amp;rep=rep1&amp;amp;type=pdf original thesis]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/spin_image.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;153&amp;gt; SpinImage;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the spin image for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;SpinImage&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;SpinImage&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Spin image estimation object.&lt;br /&gt;
	pcl::SpinImageEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, SpinImage&amp;gt; si;&lt;br /&gt;
	si.setInputCloud(cloud);&lt;br /&gt;
	si.setInputNormals(normals);&lt;br /&gt;
	// Radius of the support cylinder.&lt;br /&gt;
	si.setRadiusSearch(0.02);&lt;br /&gt;
	// Set the resolution of the spin image (the number of bins along one dimension).&lt;br /&gt;
	// Note: you must change the output histogram size to reflect this.&lt;br /&gt;
	si.setImageWidth(8);&lt;br /&gt;
&lt;br /&gt;
	si.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Spin Image estimation object provides more methods for tuning the estimation, so checking the API is recommended.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Normals, Radius, Image resolution&lt;br /&gt;
* '''Output''': Spin images&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.4190&amp;amp;rep=rep1&amp;amp;type=pdf Spin-Images: A Representation for 3-D Surface Matching] (Andrew Edie Johnson, 1997)&lt;br /&gt;
** [http://www.cs.jhu.edu/~misha/Papers/Johnson99.pdf Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes] (Andrew Edie Johnson and Martial Hebert, 1999)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_spin_image_estimation.html pcl::SpinImageEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_spin_image.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RIFT==&lt;br /&gt;
&lt;br /&gt;
The Rotation-Invariant Feature Transform, like the spin image, takes some concepts from 2D features, in this case from the Scale-Invariant Feature Transform ([https://en.wikipedia.org/wiki/Scale-invariant_feature_transform SIFT]). It is the only descriptor seen here that requires intensity information, which can be obtained from the RGB color values. This means, of course, that you will not be able to use RIFT with standard XYZ clouds; you also need the texture.&lt;br /&gt;
&lt;br /&gt;
In the first step, a circular patch (with the given radius) is fitted to the surface the point lies on. This patch is divided into concentric rings, according to the chosen distance bin size. Then, a histogram is populated with all of the point's neighbors lying inside a sphere centered at that point with the mentioned radius. The distance and the orientation of the intensity gradient at each point are considered. To make the descriptor rotation invariant, the angle between the gradient orientation and the vector pointing outward from the center of the patch is measured.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:RIFT.png|thumb|center|600px|RIFT feature values at 3 different locations in the descriptor (image from [http://www.cs.illinois.edu/~slazebni/publications/pami05.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The authors' original implementation uses 4 rings and 8 histogram orientations, which produces a descriptor of size 32. RIFT is not robust to texture flipping, though the authors never considered this a big issue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types_conversion.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/intensity_gradient.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rift.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;32&amp;gt; RIFT32;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud with color information.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cloudColor(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
	// Object for storing the point cloud with intensity value.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZI&amp;gt;::Ptr cloudIntensity(new pcl::PointCloud&amp;lt;pcl::PointXYZI&amp;gt;);&lt;br /&gt;
	// Object for storing the intensity gradients.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::IntensityGradient&amp;gt;::Ptr gradients(new pcl::PointCloud&amp;lt;pcl::IntensityGradient&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the RIFT descriptor for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;RIFT32&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;RIFT32&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGB&amp;gt;(argv[1], *cloudColor) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Convert the RGB to intensity.&lt;br /&gt;
	pcl::PointCloudXYZRGBtoXYZI(*cloudColor, *cloudIntensity);&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZI, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloudIntensity);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZI&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZI&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Compute the intensity gradients.&lt;br /&gt;
	pcl::IntensityGradientEstimation &amp;lt; pcl::PointXYZI, pcl::Normal, pcl::IntensityGradient,&lt;br /&gt;
		pcl::common::IntensityFieldAccessor&amp;lt;pcl::PointXYZI&amp;gt; &amp;gt; ge;&lt;br /&gt;
	ge.setInputCloud(cloudIntensity);&lt;br /&gt;
	ge.setInputNormals(normals);&lt;br /&gt;
	ge.setRadiusSearch(0.03);&lt;br /&gt;
	ge.compute(*gradients);&lt;br /&gt;
&lt;br /&gt;
	// RIFT estimation object.&lt;br /&gt;
	pcl::RIFTEstimation&amp;lt;pcl::PointXYZI, pcl::IntensityGradient, RIFT32&amp;gt; rift;&lt;br /&gt;
	rift.setInputCloud(cloudIntensity);&lt;br /&gt;
	rift.setSearchMethod(kdtree);&lt;br /&gt;
	// Set the intensity gradients to use.&lt;br /&gt;
	rift.setInputGradient(gradients);&lt;br /&gt;
	// Radius, to get all neighbors within.&lt;br /&gt;
	rift.setRadiusSearch(0.02);&lt;br /&gt;
	// Set the number of bins to use in the distance dimension.&lt;br /&gt;
	rift.setNrDistanceBins(4);&lt;br /&gt;
	// Set the number of bins to use in the gradient orientation dimension.&lt;br /&gt;
	rift.setNrGradientBins(8);&lt;br /&gt;
	// Note: you must change the output histogram size to reflect the previous values.&lt;br /&gt;
&lt;br /&gt;
	rift.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Intensity gradients, Radius, Distance bins, Gradient bins&lt;br /&gt;
* '''Output''': RIFT descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.cs.illinois.edu/~slazebni/publications/pami05.pdf A Sparse Texture Representation Using Local Affine Regions] (Svetlana Lazebnik et al., 2005)&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_r_i_f_t_estimation.html pcl::RIFTEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_intensity_gradient_estimation.html pcl::IntensityGradientEstimation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_intensity_gradient.html pcl::IntensityGradient]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RIFT.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==NARF==&lt;br /&gt;
&lt;br /&gt;
The Normal Aligned Radial Feature is the only descriptor here that does not take a point cloud as input. Instead, it works with range images. A range image is an ordinary 2D image in which every pixel stores the distance from the sensor to the corresponding point. When visualized, this distance is usually mapped to a color in the [https://en.wikipedia.org/wiki/Visible_spectrum visible light spectrum]: points closer to the camera appear violet, while points near the maximum sensor range appear red.&lt;br /&gt;
&lt;br /&gt;
NARF also requires us to find suitable keypoints to compute the descriptor for. NARF keypoints are located near an object's corners, which in turn requires finding the borders (transitions from foreground to background); these are trivial to find in a range image. Because the pipeline is rather long, I will describe the whole process in separate sections.&lt;br /&gt;
&lt;br /&gt;
===Obtaining a range image===&lt;br /&gt;
&lt;br /&gt;
Because we always work with point clouds, I will now explain how to convert one into a range image, so it can be used with the NARF descriptor. PCL provides a couple of handy classes to perform the conversion, provided that you fill in the camera data correctly.&lt;br /&gt;
&lt;br /&gt;
A range image can be created in two ways. First, we can use spherical projection, which gives an image similar to the ones produced by a LIDAR sensor. Second, we can use planar projection, which is better suited to camera-like sensors such as the Kinect or the Xtion, and does not have the characteristic distortion of the first one.&lt;br /&gt;
&lt;br /&gt;
====Spherical projection====&lt;br /&gt;
&lt;br /&gt;
The following code will take a point cloud and create a range image from it, using spherical projection:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Parameters needed by the range image object:&lt;br /&gt;
&lt;br /&gt;
	// Angular resolution is the angular distance between pixels.&lt;br /&gt;
	// Kinect: 57° horizontal FOV, 43° vertical FOV, 640x480 (chosen here).&lt;br /&gt;
	// Xtion: 58° horizontal FOV, 45° vertical FOV, 640x480.&lt;br /&gt;
	float angularResolutionX = (float)(57.0f / 640.0f * (M_PI / 180.0f));&lt;br /&gt;
	float angularResolutionY = (float)(43.0f / 480.0f * (M_PI / 180.0f));&lt;br /&gt;
	// Maximum horizontal and vertical angles. For example, for a full panoramic scan,&lt;br /&gt;
	// the first would be 360°. Choosing values that match the real sensor will&lt;br /&gt;
	// decrease computation time. Don't worry if they are bigger than the real&lt;br /&gt;
	// ones: the image will be automatically cropped to discard empty zones.&lt;br /&gt;
	float maxAngleX = (float)(60.0f * (M_PI / 180.0f));&lt;br /&gt;
	float maxAngleY = (float)(50.0f * (M_PI / 180.0f));&lt;br /&gt;
	// Sensor pose. Thankfully, the cloud includes the data.&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	// Noise level. If greater than 0, values of neighboring points will be averaged.&lt;br /&gt;
	// This would set the search radius (e.g., 0.03 == 3cm).&lt;br /&gt;
	float noiseLevel = 0.0f;&lt;br /&gt;
	// Minimum range. If set, any point closer to the sensor than this will be ignored.&lt;br /&gt;
	float minimumRange = 0.0f;&lt;br /&gt;
	// Border size. If greater than 0, a border of &amp;quot;unobserved&amp;quot; points will be left&lt;br /&gt;
	// in the image when it is cropped.&lt;br /&gt;
	int borderSize = 1;&lt;br /&gt;
&lt;br /&gt;
	// Range image object.&lt;br /&gt;
	pcl::RangeImage rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloud(*cloud, angularResolutionX, angularResolutionY,&lt;br /&gt;
									maxAngleX, maxAngleY, sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
									noiseLevel, minimumRange, borderSize);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the image.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;Range image&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImage);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here you can see an example of the output range image:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_spherical.png|thumb|center|400px|Range image of a point cloud, using spherical projection.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Angular resolution, Maximum angle, Sensor pose, Coordinate frame, Noise level, Minimum range, Border size&lt;br /&gt;
* '''Output''': Range image (spherical projection)&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_creation.php How to create a range image from a point cloud]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_visualization.php How to visualize a range image]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image.html pcl::RangeImage]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_spherical.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Planar projection====&lt;br /&gt;
&lt;br /&gt;
As mentioned, planar projection will give better results with clouds taken from a depth camera:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Parameters needed by the planar range image object:&lt;br /&gt;
&lt;br /&gt;
	// Image size. Both Kinect and Xtion work at 640x480.&lt;br /&gt;
	int imageSizeX = 640;&lt;br /&gt;
	int imageSizeY = 480;&lt;br /&gt;
	// Center of projection. Here, we choose the middle of the image.&lt;br /&gt;
	float centerX = 640.0f / 2.0f;&lt;br /&gt;
	float centerY = 480.0f / 2.0f;&lt;br /&gt;
	// Focal length. The value seen here has been taken from the original depth images.&lt;br /&gt;
	// It is safe to use the same value vertically and horizontally.&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	// Sensor pose. Thankfully, the cloud includes the data.&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	// Noise level. If greater than 0, values of neighboring points will be averaged.&lt;br /&gt;
	// This would set the search radius (e.g., 0.03 == 3cm).&lt;br /&gt;
	float noiseLevel = 0.0f;&lt;br /&gt;
	// Minimum range. If set, any point closer to the sensor than this will be ignored.&lt;br /&gt;
	float minimumRange = 0.0f;&lt;br /&gt;
&lt;br /&gt;
	// Planar range image object.&lt;br /&gt;
	pcl::RangeImagePlanar rangeImagePlanar;&lt;br /&gt;
	rangeImagePlanar.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the image.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;Planar range image&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImagePlanar);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_planar.png|thumb|center|400px|Range image of a point cloud, using planar projection.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Image size, Projection center, Focal length, Sensor pose, Coordinate frame, Noise level, Minimum range&lt;br /&gt;
* '''Output''': Range image (planar projection)&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_creation.php How to create a range image from a point cloud]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/range_image_visualization.php How to visualize a range image]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image_planar.html pcl::RangeImagePlanar]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_planar.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you prefer to do the conversion in real time while you inspect the cloud, PCL ships with an [https://github.com/PointCloudLibrary/pcl/tree/master/doc/tutorials/content/sources/openni_range_image_visualization example] that fetches an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;openni_wrapper::DepthImage&amp;quot;&amp;lt;/span&amp;gt; from an OpenNI device and creates the range image from it. You can adapt the code of the [[PCL/OpenNI_tutorial_1:_Installing_and_testing#Testing | example]] from tutorial 1 to save it to disk with the function [http://docs.pointclouds.org/trunk/group__io.html#ga7291a029cdcde32ca3639d07dc6491b9 pcl::io::saveRangeImagePlanarFilePNG()].&lt;br /&gt;
&lt;br /&gt;
===Extracting borders===&lt;br /&gt;
&lt;br /&gt;
NARF keypoints are located near the edges of objects in the range image, so in order to find them, we first have to extract the borders. A border is defined as an abrupt change from foreground to background. In a range image, this can be easily spotted because there is a &amp;quot;jump&amp;quot; in the depth value between two adjacent pixels.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_border_detection.png|thumb|center|400px|Border detection on a range image (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are three types of borders. ''Object borders'' consist of the pixels (or points) located on the very edge of an object (the outermost points still belonging to the object). ''Shadow borders'' are points in the background on the edge of occlusions (empty areas in the background due to the objects in front covering them). Notice that, when the cloud is seen from the sensor's viewpoint, object and shadow points will seem adjacent. Finally, ''veil points'' are points interpolated between the previous two which appear in scans taken with LIDAR sensors, so we do not have to worry about them here.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:Range_image_border_type.png | Types of borders (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
File:Range_image_borders_example.png | Example of object and shadow borders on a cloud.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The algorithm basically compares a point's depth with the values of its neighbors and, if a big enough difference is found, concludes that it is due to a border. The points closer to the sensor are marked as object borders, and the farther ones as shadow borders.&lt;br /&gt;
&lt;br /&gt;
PCL provides a class for extracting borders of a range image:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the borders.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::BorderDescription&amp;gt;::Ptr borders(new pcl::PointCloud&amp;lt;pcl::BorderDescription&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Border extractor object.&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor(&amp;amp;rangeImage);&lt;br /&gt;
&lt;br /&gt;
	borderExtractor.compute(*borders);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the borders.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer* viewer = NULL;&lt;br /&gt;
	viewer = pcl::visualization::RangeImageVisualizer::getRangeImageBordersWidget(rangeImage,&lt;br /&gt;
			 -std::numeric_limits&amp;lt;float&amp;gt;::infinity(),&lt;br /&gt;
			 std::numeric_limits&amp;lt;float&amp;gt;::infinity(),&lt;br /&gt;
			 false, *borders, &amp;quot;Borders&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_borders.png|thumb|center|400px|Borders found on the range image.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can use the extractor's &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getParameters()&amp;quot;&amp;lt;/span&amp;gt; function to get a [http://docs.pointclouds.org/trunk/structpcl_1_1_range_image_border_extractor_1_1_parameters.html pcl::RangeImageBorderExtractor::Parameters] struct with the settings that will be used.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image&lt;br /&gt;
* '''Output''': Borders&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/range_image_border_extraction.php How to extract borders from range images]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_range_image_border_extractor.html pcl::RangeImageBorderExtractor]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_range_image_borders.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Finding keypoints===&lt;br /&gt;
&lt;br /&gt;
Citing the [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original publication]:&lt;br /&gt;
&lt;br /&gt;
''&amp;quot;We have the following requirements for our interest point extraction procedure:&lt;br /&gt;
&lt;br /&gt;
# ''It must take information about borders and the surface structure into account.''&lt;br /&gt;
# ''It must select positions that can be reliably detected even if the object is observed from another perspective.''&lt;br /&gt;
# ''The points must be on positions that provide stable areas for normal estimation or the descriptor calculation in general.&amp;quot;''&lt;br /&gt;
&lt;br /&gt;
The procedure is the following: for every point in the range image, a score is computed that conveys how much the surface changes in its neighborhood (this is tuned with the ''support size'' σ, the diameter of the sphere used to find neighboring points). The dominant direction of this change is also computed. This direction is then compared with those of the neighbors to determine how stable the point is (if the directions differ a lot, the point is not stable, and the surface around it changes greatly). Points near an object's corners (but not exactly on the very edge) capture significant surface change yet remain stable enough, so they make good keypoints.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:NARF_keypoints.png | NARF keypoints (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
File:NARF_keypoints_support_sizes.png | Interest regions with a support size of 20cm (up) and 1m (down) (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In PCL, NARF keypoints can be found this way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/narf_keypoint.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/range_image_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the keypoints' indices.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt;::Ptr keypoints(new pcl::PointCloud&amp;lt;int&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor;&lt;br /&gt;
	// Keypoint detection object.&lt;br /&gt;
	pcl::NarfKeypoint detector(&amp;amp;borderExtractor);&lt;br /&gt;
	detector.setRangeImage(&amp;amp;rangeImage);&lt;br /&gt;
	// The support size influences how big the surface of interest will be,&lt;br /&gt;
	// when finding keypoints from the border information.&lt;br /&gt;
	detector.getParameters().support_size = 0.2f;&lt;br /&gt;
&lt;br /&gt;
	detector.compute(*keypoints);&lt;br /&gt;
&lt;br /&gt;
	// Visualize the keypoints.&lt;br /&gt;
	pcl::visualization::RangeImageVisualizer viewer(&amp;quot;NARF keypoints&amp;quot;);&lt;br /&gt;
	viewer.showRangeImage(rangeImage);&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; keypoints-&amp;gt;points.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		viewer.markPoint(keypoints-&amp;gt;points[i] % rangeImage.width,&lt;br /&gt;
						 keypoints-&amp;gt;points[i] / rangeImage.width,&lt;br /&gt;
						 // Set the color of the pixel to red (the background&lt;br /&gt;
						 // circle is already that color). Vector3ub takes unsigned&lt;br /&gt;
						 // char values in [0, 255]. All other parameters are left&lt;br /&gt;
						 // untouched; check the API for more options.&lt;br /&gt;
						 pcl::visualization::Vector3ub(255, 0, 0));&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
		// Sleep 100ms to go easy on the CPU.&lt;br /&gt;
		pcl_sleep(0.1);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Range_image_NARF_keypoints.png|thumb|center|400px|NARF keypoints found on the range image.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image, Border extractor, Support size&lt;br /&gt;
* '''Output''': NARF keypoints&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/narf_keypoint_extraction.php How to extract NARF keypoints from a range image]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_narf_keypoint.html pcl::NarfKeypoint]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_NARF_keypoints.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing the descriptor===&lt;br /&gt;
&lt;br /&gt;
We have created the range image from a point cloud, and we have extracted the borders in order to find good keypoints. Now it is time to compute the NARF descriptor for each keypoint.&lt;br /&gt;
&lt;br /&gt;
The NARF descriptor encodes information about surface changes around a point. First, a local range patch is created around the point. It is like a small range image centered at that point, aligned with the normal (as if we were looking at the point along the normal). Then, a star pattern with ''n'' beams is overlaid onto the patch, also centered at the point. For every beam, a value is computed that reflects how much the surface under it changes. The stronger the change, and the closer to the center it occurs, the higher the final value. The ''n'' resulting values compose the final descriptor.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:NARF_descriptor.png|thumb|center|600px|Computing the NARF descriptor for a keypoint (image from [http://www.willowgarage.com/sites/default/files/icra2011_3dfeatures.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The descriptor computed so far is not invariant to rotations around the normal. To achieve this, the full 360 degrees of possible orientations are binned into a histogram. Each bin's value is computed from the descriptor values according to the angle. Then, the bin with the highest value is taken as the dominant orientation, and the descriptor is shifted accordingly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/range_image/range_image_planar.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/range_image_border_extractor.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/narf_keypoint.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/narf_descriptor.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the keypoints' indices.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt;::Ptr keypoints(new pcl::PointCloud&amp;lt;int&amp;gt;);&lt;br /&gt;
	// Object for storing the NARF descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Narf36&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::Narf36&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Convert the cloud to range image.&lt;br /&gt;
	int imageSizeX = 640, imageSizeY = 480;&lt;br /&gt;
	float centerX = (640.0f / 2.0f), centerY = (480.0f / 2.0f);&lt;br /&gt;
	float focalLengthX = 525.0f, focalLengthY = focalLengthX;&lt;br /&gt;
	Eigen::Affine3f sensorPose = Eigen::Affine3f(Eigen::Translation3f(cloud-&amp;gt;sensor_origin_[0],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[1],&lt;br /&gt;
								 cloud-&amp;gt;sensor_origin_[2])) *&lt;br /&gt;
								 Eigen::Affine3f(cloud-&amp;gt;sensor_orientation_);&lt;br /&gt;
	float noiseLevel = 0.0f, minimumRange = 0.0f;&lt;br /&gt;
	pcl::RangeImagePlanar rangeImage;&lt;br /&gt;
	rangeImage.createFromPointCloudWithFixedSize(*cloud, imageSizeX, imageSizeY,&lt;br /&gt;
			centerX, centerY, focalLengthX, focalLengthY,&lt;br /&gt;
			sensorPose, pcl::RangeImage::CAMERA_FRAME,&lt;br /&gt;
			noiseLevel, minimumRange);&lt;br /&gt;
&lt;br /&gt;
	// Extract the keypoints.&lt;br /&gt;
	pcl::RangeImageBorderExtractor borderExtractor;&lt;br /&gt;
	pcl::NarfKeypoint detector(&amp;amp;borderExtractor);&lt;br /&gt;
	detector.setRangeImage(&amp;amp;rangeImage);&lt;br /&gt;
	detector.getParameters().support_size = 0.2f;&lt;br /&gt;
	detector.compute(*keypoints);&lt;br /&gt;
&lt;br /&gt;
	// The NARF estimator needs the indices in a vector, not a cloud.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; keypoints2;&lt;br /&gt;
	keypoints2.resize(keypoints-&amp;gt;points.size());&lt;br /&gt;
	for (unsigned int i = 0; i &amp;lt; keypoints-&amp;gt;size(); ++i)&lt;br /&gt;
		keypoints2[i] = keypoints-&amp;gt;points[i];&lt;br /&gt;
	// NARF estimation object.&lt;br /&gt;
	pcl::NarfDescriptor narf(&amp;amp;rangeImage, &amp;amp;keypoints2);&lt;br /&gt;
	// Support size: choose the same value you used for keypoint extraction.&lt;br /&gt;
	narf.getParameters().support_size = 0.2f;&lt;br /&gt;
	// If true, the rotation invariant version of NARF will be used. The histogram&lt;br /&gt;
	// will be shifted according to the dominant orientation to provide robustness to&lt;br /&gt;
	// rotations around the normal.&lt;br /&gt;
	narf.getParameters().rotation_invariant = true;&lt;br /&gt;
&lt;br /&gt;
	narf.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Range image, Key points, Support Size&lt;br /&gt;
* '''Output''': NARF descriptors&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/narf_feature_extraction.php How to extract NARF Features from a range image]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://europa.informatik.uni-freiburg.de/files/steder11icra.pdf Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries] (Bastian Steder et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_narf_descriptor.html pcl::NarfDescriptor]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_NARF_descriptor.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RoPS==&lt;br /&gt;
&lt;br /&gt;
The Rotational Projection Statistics (RoPS) feature is a bit different from the other descriptors because it works with a triangle mesh, so a prior triangulation step is needed to generate this mesh from the cloud. Apart from that, most concepts are similar.&lt;br /&gt;
&lt;br /&gt;
In order to compute RoPS for a keypoint, the local surface is cropped according to a support radius, so only points and triangles lying inside are taken into account. Then, a local reference frame (LRF) is computed, giving the descriptor its rotational invariance. A coordinate system is created with the point as the origin, and the axes aligned with the LRF. Then, for every axis, several steps are performed.&lt;br /&gt;
&lt;br /&gt;
First, the local surface is rotated around the current axis by an angle determined by the number-of-rotations parameter. Then, all points in the local surface are projected onto the XY, XZ and YZ planes. For each projection, statistical information about the distribution of the projected points is computed and concatenated to form the final descriptor.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types_conversion.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/gp3.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/rops_estimation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// A handy typedef.&lt;br /&gt;
typedef pcl::Histogram&amp;lt;135&amp;gt; ROPS135;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing both the points and the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr cloudNormals(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
	// Object for storing the ROPS descriptor for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;ROPS135&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;ROPS135&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Perform triangulation.&lt;br /&gt;
	pcl::concatenateFields(*cloud, *normals, *cloudNormals);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointNormal&amp;gt;::Ptr kdtree2(new pcl::search::KdTree&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
	kdtree2-&amp;gt;setInputCloud(cloudNormals);&lt;br /&gt;
	pcl::GreedyProjectionTriangulation&amp;lt;pcl::PointNormal&amp;gt; triangulation;&lt;br /&gt;
	pcl::PolygonMesh triangles;&lt;br /&gt;
	triangulation.setSearchRadius(0.025);&lt;br /&gt;
	triangulation.setMu(2.5);&lt;br /&gt;
	triangulation.setMaximumNearestNeighbors(100);&lt;br /&gt;
	triangulation.setMaximumSurfaceAngle(M_PI / 4); // 45 degrees.&lt;br /&gt;
	triangulation.setNormalConsistency(false);&lt;br /&gt;
	triangulation.setMinimumAngle(M_PI / 18); // 10 degrees.&lt;br /&gt;
	triangulation.setMaximumAngle(2 * M_PI / 3); // 120 degrees.&lt;br /&gt;
	triangulation.setInputCloud(cloudNormals);&lt;br /&gt;
	triangulation.setSearchMethod(kdtree2);&lt;br /&gt;
	triangulation.reconstruct(triangles);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should only compute descriptors for chosen keypoints. It has&lt;br /&gt;
	// been omitted here for simplicity.&lt;br /&gt;
&lt;br /&gt;
	// RoPs estimation object.&lt;br /&gt;
	pcl::ROPSEstimation&amp;lt;pcl::PointXYZ, ROPS135&amp;gt; rops;&lt;br /&gt;
	rops.setInputCloud(cloud);&lt;br /&gt;
	rops.setSearchMethod(kdtree);&lt;br /&gt;
	rops.setRadiusSearch(0.03);&lt;br /&gt;
	rops.setTriangles(triangles.polygons);&lt;br /&gt;
	// Number of partition bins that is used for distribution matrix calculation.&lt;br /&gt;
	rops.setNumberOfPartitionBins(5);&lt;br /&gt;
	// The greater the number of rotations is, the bigger the resulting descriptor.&lt;br /&gt;
	// Make sure to change the histogram size accordingly.&lt;br /&gt;
	rops.setNumberOfRotations(3);&lt;br /&gt;
	// Support radius that is used to crop the local surface of the point.&lt;br /&gt;
	rops.setSupportRadius(0.025);&lt;br /&gt;
&lt;br /&gt;
	rops.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Triangles, Search method, Support radius, Number of rotations, Number of partition bins&lt;br /&gt;
* '''Output''': RoPS descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://arxiv.org/pdf/1304.3192v1.pdf Rotational Projection Statistics for 3D Local Surface Description and Object Recognition] (Yulan Guo et al., 2013)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_r_o_p_s_estimation.html pcl::ROPSEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_RoPS.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Global descriptors=&lt;br /&gt;
&lt;br /&gt;
Global descriptors encode object geometry. They are not computed for individual points, but for a whole cluster that represents an object. Because of this, a preprocessing step ([[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Segmentation|segmentation]]) is required, in order to retrieve possible candidates.&lt;br /&gt;
&lt;br /&gt;
Global descriptors are used for object recognition and classification, geometric analysis (object type, shape...), and pose estimation.&lt;br /&gt;
&lt;br /&gt;
Note that many local descriptors can also be used as global ones. This works for descriptors that use a radius to search for neighbors (as PFH does). The trick is to compute the descriptor for one single point of the object cluster, setting the radius to the maximum possible distance between any two points, so that all points in the cluster are considered as neighbors.&lt;br /&gt;
&lt;br /&gt;
==VFH==&lt;br /&gt;
&lt;br /&gt;
The Viewpoint Feature Histogram is based on the FPFH. Because the latter is invariant to the object's pose, the authors decided to expand it by including information about the viewpoint. Also, the FPFH is estimated once for the whole cluster, not for every point.&lt;br /&gt;
&lt;br /&gt;
The VFH is made up of two parts: a viewpoint direction component, and an extended FPFH component. To compute the first one, the object's [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Computing_the_centroid|centroid]] is found, which is the point that results from averaging the X, Y and Z coordinates of all points. Then, the vector between the viewpoint (the position of the sensor) and this centroid is computed and normalized. Finally, for all points in the cluster, the angle between this vector and their normal is calculated, and the result is binned into a histogram. The vector is translated to each point when computing the angle because this makes the descriptor scale invariant.&lt;br /&gt;
&lt;br /&gt;
The second component is computed like the FPFH (which results in 3 histograms for the 3 angular features, α, φ and θ), with some differences: it is only computed for the centroid, using the computed viewpoint direction vector as its normal (as the centroid, obviously, does not have a normal of its own), and setting all the cluster's points as neighbors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:VFH_viewpoint_component.png | Viewpoint component of the VFH (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).&lt;br /&gt;
File:VFH_extended_FPFH_component.png | Extended FPFH component of the VFH (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The resulting 4 histograms (1 for the viewpoint component, 3 for the extended FPFH component) are concatenated to build the final VFH descriptor. By default, the bins are normalized using the total number of points in the cluster. This makes the VFH descriptor invariant to scale.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:VFH_histogram.png|thumb|center|400px| VFH histogram (image from [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The PCL implementation computes an additional fifth histogram with the distances of the cluster points to the centroid (the Shape Distribution Component, SDC), increasing the size of the output descriptor from 263 to 308. The SDC is taken from the CVFH descriptor that we will see in the next section, and makes the result more robust.&lt;br /&gt;
&lt;br /&gt;
The VFH of an already clustered object can be computed this way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the VFH descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// VFH estimation object.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Optionally, we can normalize the bins of the resulting histogram,&lt;br /&gt;
	// using the total number of points.&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	// Also, we can normalize the SDC with the maximum distance found between&lt;br /&gt;
	// the centroid and any of the cluster's points.&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Because only one VFH descriptor is computed for the whole cluster, the size of the cloud object that stores the result will be 1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, [Normalize bins], [Normalize SDC]&lt;br /&gt;
* '''Output''': VFH descriptor&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/vfh_estimation.php Estimating VFH signatures for a set of points]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://www.willowgarage.com/sites/default/files/Rusu10IROS.pdf Fast 3D Recognition and Pose Using the Viewpoint Feature Histogram] (Radu Bogdan Rusu et al., 2010)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_v_f_h_estimation.html pcl::VFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_VFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===CVFH===&lt;br /&gt;
&lt;br /&gt;
The original VFH descriptor is not robust to occlusion, other sensor artifacts, or measurement errors. If the object cluster is missing many points, the computed centroid will differ from the original one, altering the final descriptor and preventing a positive match from being found. Because of that, the Clustered Viewpoint Feature Histogram (CVFH) was introduced.&lt;br /&gt;
&lt;br /&gt;
The idea is very simple: instead of computing a single VFH histogram for the whole cluster, the object is first divided into stable, smooth regions using [[PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)#Region_growing | region-growing segmentation]], which enforces several constraints on the distances and normal differences of the points belonging to every region. Then, a VFH is computed for every region. Thanks to this, an object can be found in a scene as long as at least one of its regions is fully visible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:CVFH_occlusion.png | Typical occlusion issues in a point cloud (image from original paper).&lt;br /&gt;
File:CVFH_regions.png | Object regions computed for the CVFH (image from original paper).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additionally, a Shape Distribution Component (SDC) is also computed and included. It encodes information about the distribution of the points around the region's centroid, measuring the distances. The SDC makes it possible to differentiate objects with similar characteristics (size and normal distribution), such as two planar surfaces.&lt;br /&gt;
&lt;br /&gt;
The authors proposed discarding the histogram normalization step that is performed in VFH. This makes the descriptor dependent on scale, so an object of a certain size will not match a bigger or smaller copy of itself. It also makes CVFH more robust to occlusion.&lt;br /&gt;
&lt;br /&gt;
Like most global descriptors, CVFH is invariant to the camera roll angle. This is because rotations about that camera axis do not change the observable geometry that descriptors are computed from, limiting the pose estimation to 5 DoF. The use of a Camera Roll Histogram (CRH) has been proposed to overcome this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/cvfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the CVFH descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// CVFH estimation object.&lt;br /&gt;
	pcl::CVFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; cvfh;&lt;br /&gt;
	cvfh.setInputCloud(object);&lt;br /&gt;
	cvfh.setInputNormals(normals);&lt;br /&gt;
	cvfh.setSearchMethod(kdtree);&lt;br /&gt;
	// Set the maximum allowable deviation of the normals,&lt;br /&gt;
	// for the region segmentation step.&lt;br /&gt;
	cvfh.setEPSAngleThreshold(5.0 / 180.0 * M_PI); // 5 degrees.&lt;br /&gt;
	// Set the curvature threshold (maximum disparity between curvatures),&lt;br /&gt;
	// for the region segmentation step.&lt;br /&gt;
	cvfh.setCurvatureThreshold(1.0);&lt;br /&gt;
	// Set to true to normalize the bins of the resulting histogram,&lt;br /&gt;
	// using the total number of points. Note: enabling it will make CVFH&lt;br /&gt;
	// invariant to scale just like VFH, but the authors encourage the opposite.&lt;br /&gt;
	cvfh.setNormalizeBins(false);&lt;br /&gt;
&lt;br /&gt;
	cvfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can further customize the segmentation step with &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setClusterTolerance()&amp;quot;&amp;lt;/span&amp;gt; (to set the maximum Euclidean distance between points in the same cluster) and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setMinPoints()&amp;quot;&amp;lt;/span&amp;gt;. The size of the output will be equal to the number of regions the object was divided into. Also, check the functions &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getCentroidClusters()&amp;quot;&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getCentroidNormalClusters()&amp;quot;&amp;lt;/span&amp;gt;; you can use them to get information about the centroids used to compute the different CVFH descriptors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Angle threshold, Curvature threshold, [Normalize bins], [Cluster tolerance], [Minimum points]&lt;br /&gt;
* '''Output''': CVFH descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6130296 CAD-Model Recognition and 6DOF Pose Estimation Using 3D Cues] (requires IEEE Xplore subscription) (Aitor Aldoma et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_c_v_f_h_estimation.html pcl::CVFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_CVFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====OUR-CVFH====&lt;br /&gt;
&lt;br /&gt;
The Oriented, Unique and Repeatable CVFH expands the previous descriptor, adding the computation of a unique reference frame to make it more robust.&lt;br /&gt;
&lt;br /&gt;
OUR-CVFH relies on Semi-Global Unique Reference Frames (SGURFs), which are repeatable coordinate systems computed for each region. Not only do they remove the invariance to camera roll and allow the 6DoF pose to be extracted directly without additional steps, but they also improve the spatial descriptiveness.&lt;br /&gt;
&lt;br /&gt;
The first part of the computation is akin to CVFH, but after segmentation, the points in each region are filtered once more according to the difference between their normals and the region's average normal. This results in better shaped regions, improving the estimation of the Reference Frames (RFs).&lt;br /&gt;
&lt;br /&gt;
After this, the SGURF is computed for each region. Disambiguation is performed to decide the sign of the axes, according to the points' distribution. If this is not enough and the sign remains ambiguous, multiple RFs will need to be created to account for it. Finally, the OUR-CVFH descriptor is computed. The original Shape Distribution Component (SDC) is discarded, and the surface is now described according to the RFs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:OUR-CVFH.png|thumb|center|600px| SGURF frame and resulting histogram of a region (image from [http://vision.deis.unibo.it/fede/papers/dagm12.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/our_cvfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the OUR-CVFH descriptors.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// OUR-CVFH estimation object.&lt;br /&gt;
	pcl::OURCVFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; ourcvfh;&lt;br /&gt;
	ourcvfh.setInputCloud(object);&lt;br /&gt;
	ourcvfh.setInputNormals(normals);&lt;br /&gt;
	ourcvfh.setSearchMethod(kdtree);&lt;br /&gt;
	ourcvfh.setEPSAngleThreshold(5.0 / 180.0 * M_PI); // 5 degrees.&lt;br /&gt;
	ourcvfh.setCurvatureThreshold(1.0);&lt;br /&gt;
	ourcvfh.setNormalizeBins(false);&lt;br /&gt;
	// Set the minimum axis ratio between the SGURF axes. At the disambiguation phase,&lt;br /&gt;
	// this will decide if additional Reference Frames need to be created, if ambiguous.&lt;br /&gt;
	ourcvfh.setAxisRatio(0.8);&lt;br /&gt;
&lt;br /&gt;
	ourcvfh.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;getTransforms()&amp;quot;&amp;lt;/span&amp;gt; function to get the transformations aligning the cloud to the corresponding SGURF.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Angle threshold, Curvature threshold, [Normalize bins], [Cluster tolerance], [Minimum points], [Axis ratio]&lt;br /&gt;
* '''Output''': OUR-CVFH descriptors&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://vision.deis.unibo.it/fede/papers/dagm12.pdf OUR-CVFH – Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram for Object Recognition and 6DOF Pose Estimation] (Aitor Aldoma et al., 2012)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_o_u_r_c_v_f_h_estimation.html pcl::OURCVFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_OUR-CVFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==ESF==&lt;br /&gt;
&lt;br /&gt;
The Ensemble of Shape Functions (ESF) is a combination of 3 different shape functions that describe certain properties of the cloud's points: distances, angles and areas. The descriptor is unusual in that it does not require normal information. In fact, it does not need any preprocessing at all, as it is robust to noise and incomplete surfaces.&lt;br /&gt;
&lt;br /&gt;
The algorithm uses a voxel grid as an approximation of the real surface. It iterates through all the points in the cloud: for every iteration, 3 random points are chosen. For these points, the shape functions are computed:&lt;br /&gt;
&lt;br /&gt;
* '''D2''': this function computes the distances between point pairs (3 overall). Then, for every pair, it checks whether the line that connects both points lies entirely inside the surface, entirely outside (crossing free space), or both. Depending on this, the distance value is binned into one of three possible histograms: IN, OUT or MIXED.&lt;br /&gt;
* '''D2 ratio''': an additional histogram for the ratio between parts of the line inside the surface, and parts outside. This value will be 0 if the line is completely outside, 1 if completely inside, and some value in between if mixed.&lt;br /&gt;
* '''D3''': this computes the square root of the area of the triangle formed by the 3 points. Like D2, the result is also classified as IN, OUT or MIXED, each with its own histogram.&lt;br /&gt;
* '''A3''': this function computes the angle formed by the points. Then, the value is binned depending on how the line opposite to the angle is (once again, as IN, OUT or MIXED).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:ESF.png|thumb|center|600px| ESF descriptor (image from [http://www.inf.ethz.ch/personal/zeislb/publications/aldoma_2012jram_PCLTutorial.pdf this paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After the loop is over, we are left with 10 subhistograms (IN, OUT and MIXED for D2, D3 and A3, and an additional one for the ratio). Each one has 64 bins, so the size of the final ESF descriptor is 640.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/esf.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the ESF descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::ESFSignature640&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::ESFSignature640&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// ESF estimation object.&lt;br /&gt;
	pcl::ESFEstimation&amp;lt;pcl::PointXYZ, pcl::ESFSignature640&amp;gt; esf;&lt;br /&gt;
	esf.setInputCloud(object);&lt;br /&gt;
&lt;br /&gt;
	esf.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster)&lt;br /&gt;
* '''Output''': ESF descriptor&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6181760 Ensemble of shape functions for 3D object classification] (requires IEEE Xplore subscription) (Walter Wohlkinger and Markus Vincze, 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_e_s_f_estimation.html pcl::ESFEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_ESF.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==GFPFH==&lt;br /&gt;
&lt;br /&gt;
As you may have guessed, GFPFH stands for Global Fast Point Feature Histogram, the global version of the FPFH descriptor. GFPFH was designed for the task of helping a robot navigate its environment, having some context of the objects around it.&lt;br /&gt;
&lt;br /&gt;
The first step before being able to compute the descriptor is surface categorization. A set of logical primitives (the classes, or categories) is created, which depends on the type of objects we expect the robot to find in the scene. For example, if we know there will be a coffee mug, we create three: one for the handle, and the other two for the outer and inner faces. Then, FPFH descriptors are computed, and everything is fed to a [https://en.wikipedia.org/wiki/Conditional_random_field Conditional Random Field] (CRF) algorithm. The CRF will label each surface with one of the previous categories, so we end up with a cloud where each point has been classified depending on the type of object (or object region) it belongs to.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:FPFH_CRF.png|thumb|center|300px| Classification of objects made with FPFH and CRF (image from [https://www.willowgarage.com/sites/default/files/iccv09.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now, the GFPFH descriptor can be computed from the result of the classification step. It will encode what the object is made of, so the robot can easily recognize it. First, an [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Octree | octree]] is created, dividing the object into voxel leaves. For every leaf, a set of probabilities is created, one for each class. Each one stores the probability of that leaf belonging to the class, computed from the number of points in that leaf that have been labelled with that class and the total number of points. Then, for every pair of leaves in the octree, a line is cast that connects them. Every leaf in its path is checked for occupancy, and the result is stored in a histogram: if the leaf is empty (free space), a value of 0 is saved; otherwise, the leaf probabilities are used.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:GFPFH.png|thumb|center|500px| Computing the GFPFH with a voxel grid (image from [https://www.willowgarage.com/sites/default/files/iccv09.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following code will compute the GFPFH for a cloud with label information. The categorization step is up to you, as it depends largely on the type of scene and on how you intend to use the results.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/gfpfh.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Cloud for storing the object.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZL&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZL&amp;gt;);&lt;br /&gt;
	// Object for storing the GFPFH descriptor.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::GFPFHSignature16&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::GFPFHSignature16&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Note: you should have performed preprocessing to cluster out the object&lt;br /&gt;
	// from the cloud, and save it to this individual file.&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZL&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you should now perform classification on the cloud's points. See the&lt;br /&gt;
	// original paper for more details. For this example, we will now consider 4&lt;br /&gt;
	// different classes, and randomly label each point as one of them.&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; object-&amp;gt;points.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		object-&amp;gt;points[i].label = 1 + i % 4;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// GFPFH estimation object.&lt;br /&gt;
	pcl::GFPFHEstimation&amp;lt;pcl::PointXYZL, pcl::PointXYZL, pcl::GFPFHSignature16&amp;gt; gfpfh;&lt;br /&gt;
	gfpfh.setInputCloud(object);&lt;br /&gt;
	// Set the object that contains the labels for each point. Thanks to the&lt;br /&gt;
	// PointXYZL type, we can use the same object we store the cloud in.&lt;br /&gt;
	gfpfh.setInputLabels(object);&lt;br /&gt;
	// Set the size of the octree leaves to 1cm (cubic).&lt;br /&gt;
	gfpfh.setOctreeLeafSize(0.01);&lt;br /&gt;
	// Set the number of classes the cloud has been labelled with (default is 16).&lt;br /&gt;
	gfpfh.setNumberOfClasses(4);&lt;br /&gt;
&lt;br /&gt;
	gfpfh.compute(*descriptor);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Labels, Number of classes, Leaf size&lt;br /&gt;
* '''Output''': GFPFH descriptor&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [https://www.willowgarage.com/sites/default/files/iccv09.pdf Detecting and Segmenting Objects for Mobile Manipulation] (Radu Bogdan Rusu et al., 2009)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_g_f_p_f_h_estimation.html pcl::GFPFHEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_GFPFH.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==GRSD==&lt;br /&gt;
&lt;br /&gt;
The global version of the Radius-based Surface Descriptor works in a similar fashion to GFPFH. A voxelization and a surface categorization step are performed beforehand, labelling all surface patches according to the geometric category (plane, cylinder, edge, rim, sphere), using RSD. Then, the whole cluster is classified into one of these categories, and the GRSD descriptor is computed from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:GRSD.png|thumb|center|400px| Classification of objects for GRSD and resulting histogram (image from [https://ias.in.tum.de/_media/spezial/bib/grsd10humanoids.pdf original paper]).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To compute it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/grsd.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing the GRSD descriptors for each point.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::GRSDSignature21&amp;gt;::Ptr descriptors(new pcl::PointCloud&amp;lt;pcl::GRSDSignature21&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Note: you would usually perform downsampling now. It has been omitted here&lt;br /&gt;
	// for simplicity, but be aware that computation can take a long time.&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// GRSD estimation object.&lt;br /&gt;
	pcl::GRSDEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::GRSDSignature21&amp;gt; grsd;&lt;br /&gt;
	grsd.setInputCloud(cloud);&lt;br /&gt;
	grsd.setInputNormals(normals);&lt;br /&gt;
	grsd.setSearchMethod(kdtree);&lt;br /&gt;
	// Search radius, to look for neighbors. Note: the value given here has to be&lt;br /&gt;
	// larger than the radius used to estimate the normals.&lt;br /&gt;
	grsd.setRadiusSearch(0.05);&lt;br /&gt;
	&lt;br /&gt;
	grsd.compute(*descriptors);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: This code will only compile with PCL versions 1.8 and above (the current trunk).'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points (cluster), Normals, Search method, Radius&lt;br /&gt;
* '''Output''': GRSD descriptor&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [https://ias.in.tum.de/_media/spezial/bib/grsd10humanoids.pdf Hierarchical Object Geometric Categorization and Appearance Classification for Mobile Manipulation] (Zoltan-Csaba Marton et al., 2010)&lt;br /&gt;
** [https://ias.cs.tum.edu/_media/spezial/bib/marton11ijrr.pdf Combined 2D-3D Categorization and Classification for Multimodal Perception Systems] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
** [http://www.mi.t.u-tokyo.ac.jp/top/downloadpublication/36 Voxelized Shape and Color Histograms for RGB-D] (Zoltan-Csaba Marton et al., 2011)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_g_r_s_d_estimation.html pcl::GRSDEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_GRSD.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Saving and loading=&lt;br /&gt;
&lt;br /&gt;
You can save a descriptor to a file just [[PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)#Writing_to_file|like with any other cloud type]]. One caveat, though. If you are using a descriptor that has its own custom type, like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PFHSignature125&amp;quot;&amp;lt;/span&amp;gt;, everything will be OK. But with descriptors that do not (where you have to use &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::Histogram&amp;lt;&amp;gt;&amp;quot;&amp;lt;/span&amp;gt;), you will get this error: &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;POINT_TYPE_NOT_PROPERLY_REGISTERED&amp;quot;''&amp;lt;/span&amp;gt;. In order to save or load from a file, PCL's IO functions need to know about the number, type and size of the fields. To solve this, you will have to properly register a new point type for your descriptor. For example, this will work for the [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#RoPS|RoPS]] descriptor example we saw earlier:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;POINT_CLOUD_REGISTER_POINT_STRUCT(ROPS135,&lt;br /&gt;
                                  (float[135], histogram, histogram)&lt;br /&gt;
)&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the previous to your code (change it accordingly), and you will be able to save and load descriptors as usual.&lt;br /&gt;
&lt;br /&gt;
=Visualization=&lt;br /&gt;
&lt;br /&gt;
Sometimes you may want to check a visual representation of a descriptor, perhaps to analyze the distribution of data over the different bins. Because descriptors are saved as histograms, this is trivial to do, and PCL offers a couple of classes for it.&lt;br /&gt;
&lt;br /&gt;
==PCLHistogramVisualizer==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLHistogramVisualizer&amp;quot;&amp;lt;/span&amp;gt; is the simplest way to plot a histogram. The class has little functionality, but it does its job: a single call is enough, passing the histogram and its size:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/histogram_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Clouds for storing everything.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Estimate VFH descriptor.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
&lt;br /&gt;
	// Plotter object.&lt;br /&gt;
	pcl::visualization::PCLHistogramVisualizer viewer;&lt;br /&gt;
	// We need to set the size of the descriptor beforehand.&lt;br /&gt;
	viewer.addFeatureHistogram(*descriptor, 308);&lt;br /&gt;
&lt;br /&gt;
	viewer.spin();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Histogram_visualizer_VFH.png|thumb|center|500px| VFH histogram seen with the Histogram Visualizer.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Descriptor, descriptor size, [Window size], [Background color], [Y axis range]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_histogram_visualizer.html pcl::visualization::PCLHistogramVisualizer]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_histogram_visualizer.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==PCLPlotter==&lt;br /&gt;
&lt;br /&gt;
This class has all the methods from &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLHistogramVisualizer&amp;quot;&amp;lt;/span&amp;gt; (which will be deprecated soon) plus a lot more features. The code is almost the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/vfh.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_plotter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Clouds for storing everything.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr object(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;::Ptr descriptor(new pcl::PointCloud&amp;lt;pcl::VFHSignature308&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *object) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(object);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Estimate VFH descriptor.&lt;br /&gt;
	pcl::VFHEstimation&amp;lt;pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308&amp;gt; vfh;&lt;br /&gt;
	vfh.setInputCloud(object);&lt;br /&gt;
	vfh.setInputNormals(normals);&lt;br /&gt;
	vfh.setSearchMethod(kdtree);&lt;br /&gt;
	vfh.setNormalizeBins(true);&lt;br /&gt;
	vfh.setNormalizeDistance(false);&lt;br /&gt;
	vfh.compute(*descriptor);&lt;br /&gt;
&lt;br /&gt;
	// Plotter object.&lt;br /&gt;
	pcl::visualization::PCLPlotter plotter;&lt;br /&gt;
	// We need to set the size of the descriptor beforehand.&lt;br /&gt;
	plotter.addFeatureHistogram(*descriptor, 308);&lt;br /&gt;
&lt;br /&gt;
	plotter.plot();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:PCL_plotter_VFH.png|thumb|center|500px| VFH histogram seen with the PCLPlotter class.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you have raw data (such as a vector of floats), you can use the [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_plotter.html#aaf23b2b1c2f91c517cfde387ee1b654e addHistogramData()] function to plot it as a histogram.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Descriptor, descriptor size, [Window size], [Background color], [Y axis range]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/pcl_plotter.php PCLPlotter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_plotter.html pcl::visualization::PCLPlotter]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_plotter.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==PCL Viewer==&lt;br /&gt;
&lt;br /&gt;
This program, included with PCL, will also let you open and visualize a saved descriptor. Internally, it uses [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29#PCLPlotter|PCLPlotter]]. You can invoke the viewer from the command line like this, which comes in handy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_viewer &amp;lt;descriptor_file&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:PCL_viewer_VFH.png|thumb|center|500px| VFH histogram seen with ''pcl_viewer''.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)&amp;diff=4741</id>
		<title>PCL/OpenNI tutorial 3: Cloud processing (advanced)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_3:_Cloud_processing_(advanced)&amp;diff=4741"/>
				<updated>2015-11-01T23:09:09Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Most of the techniques seen in the [[PCL/OpenNI tutorial 2: Cloud processing (basic) | previous tutorial]] focused on ''preprocessing'', that is, performing certain operations on the cloud to get it ready for further analysis or work. Downsampling, removing outliers, surface smoothing, estimating the normals...&lt;br /&gt;
&lt;br /&gt;
This new tutorial will teach you many interesting things that can be done with point clouds after the preparation step, such as registration, segmentation and model matching, and other advanced operations.&lt;br /&gt;
&lt;br /&gt;
=Registration=&lt;br /&gt;
&lt;br /&gt;
''Registration'' is the technique of aligning two point clouds, like pieces of a puzzle. To be precise, the algorithm finds a set of correspondences between them, which would mean that there is an area of the scene that has been captured in both clouds. A ''rigid transformation'' is then computed, which outputs a matrix that contains a rotation and a translation. These are the operations you would apply to one of the clouds so that it falls into place with respect to the other, with the intersecting areas overlapping.&lt;br /&gt;
&lt;br /&gt;
The best results are achieved with clouds that are very similar, so you should try to minimize the transformations between them. Meaning, do not run like crazy with a Kinect in your hands, grabbing frames, and expect PCL to match the clouds. It is better to move the sensor gingerly and at steady intervals. If you use a robotic arm or a rotating tabletop with precise angles, even better.&lt;br /&gt;
&lt;br /&gt;
Registration is a very useful technique because it lets you retrieve full, complete and continuous [http://pointclouds.org/documentation/tutorials/in_hand_scanner.php models] of a scene or object. It is also expensive for the CPU. Optimizations have been developed that allow cloud matching to be done in real time on the GPU, like [http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php KinFu] does.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Pairwise_registration.jpg|thumb|center|650px|Flowchart diagram showing an iteration of the registration algorithm (image from http://pointclouds.org/).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The previous diagram illustrates how the procedure is done. There are mainly two different methods to choose from:&lt;br /&gt;
&lt;br /&gt;
* '''ICP registration''': ICP stands for [https://en.wikipedia.org/wiki/Iterative_closest_point Iterative Closest Point]. It is an algorithm that will find the best transformation that minimizes the distance from the source point cloud to the target one. The problem is that it does this by associating every point of the source cloud with its &amp;quot;twin&amp;quot; in the target cloud in a linear way, so it can be considered a brute force method. If the clouds are too big, the algorithm will take its time to finish, so try downsampling them first.&lt;br /&gt;
* '''Feature-based registration''': the algorithm finds a set of keypoints in each cloud, computes a local descriptor for each (we explain what local descriptors are in [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|the next tutorial]]; they are like individual signatures of points), and then performs a search to see if the clouds have keypoints in common. If at least 3 correspondences are found, a transformation can be computed. For accurate results, several correspondences must be found. This method is (or should be) faster than the first, because matching is only done for the keypoints, not the whole cloud.&lt;br /&gt;
&lt;br /&gt;
ICP will probably fail if the difference between the clouds is too big. Usually, you will use features first to perform an initial rough alignment of the clouds, and then use ICP to refine it with precision. Feature-based registration is basically identical to the [[PCL/OpenNI_tutorial_5:_3D_object_recognition_%28pipeline%29|3D object recognition problem]], so we will not look into it here. Features, descriptors and recognition will be dealt with in the next tutorials.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/registration_api.php The PCL Registration API]&lt;br /&gt;
** [http://www.pointclouds.org/assets/icra2013/Miller_presentation_ICRA13.pdf Clean Models from Noisy Sensors: Registration and Reconstruction Techniques]&lt;br /&gt;
** [http://www.pointclouds.org/assets/icra2013/registration.pdf 3D Registration with PCL]&lt;br /&gt;
** [http://www.pointclouds.org/assets/uploads/PCL-IAS13_Registration.pdf Point Cloud Registration]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Iterative Closest Point (ICP)==&lt;br /&gt;
&lt;br /&gt;
PCL offers several implementations of the ICP algorithm. We will see the general, unoptimized one, but you may want to try one of the optimized variants listed below in the API section. The code for aligning (registering) two point clouds is the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/registration/icp.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr sourceCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr targetCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr finalCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *sourceCloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *targetCloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// ICP object.&lt;br /&gt;
	pcl::IterativeClosestPoint&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt; registration;&lt;br /&gt;
	registration.setInputSource(sourceCloud);&lt;br /&gt;
	registration.setInputTarget(targetCloud);&lt;br /&gt;
&lt;br /&gt;
	registration.align(*finalCloud);&lt;br /&gt;
	if (registration.hasConverged())&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;ICP converged.&amp;quot; &amp;lt;&amp;lt; std::endl&lt;br /&gt;
				  &amp;lt;&amp;lt; &amp;quot;The score is &amp;quot; &amp;lt;&amp;lt; registration.getFitnessScore() &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Transformation matrix:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; registration.getFinalTransformation() &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
	else std::cout &amp;lt;&amp;lt; &amp;quot;ICP did not converge.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The cloud saved in &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;finalCloud&amp;quot;&amp;lt;/span&amp;gt; is the same as &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;sourceCloud&amp;quot;&amp;lt;/span&amp;gt; (it has the same points), but the transformation found by ICP has been applied, so when visualized next to &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;targetCloud&amp;quot;&amp;lt;/span&amp;gt; they should match properly (errors are inevitable and depend on many factors, like the quality of the sensor, the distance between the clouds, the algorithm used...).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:Registration_cloud1.png | Source point cloud.&lt;br /&gt;
File:Registration_cloud2.png | Target point cloud.&lt;br /&gt;
File:Registration_before.png | Both clouds shown together before registration.&lt;br /&gt;
File:Registration_after.png | Both clouds shown together after registration.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Source cloud, Target cloud&lt;br /&gt;
* '''Output''': Registered source cloud, Transformation matrix&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/iterative_closest_point.php How to use iterative closest point]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/pairwise_incremental_registration.php How to incrementally register pairs of clouds]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/interactive_icp.php Interactive Iterative Closest Point]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/normal_distributions_transform.php How to use Normal Distributions Transform]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_iterative_closest_point.html pcl::IterativeClosestPoint]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_iterative_closest_point_with_normals.html pcl::IterativeClosestPointWithNormals]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_iterative_closest_point_non_linear.html pcl::IterativeClosestPointNonLinear]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_joint_iterative_closest_point.html pcl::JointIterativeClosestPoint]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_normal_distributions_transform.html pcl::NormalDistributionsTransform]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_registration.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Model fitting (RANSAC)=&lt;br /&gt;
&lt;br /&gt;
Given a set of data, it is possible to determine if a part of it fits a certain mathematical model, using an iterative method known as [https://en.wikipedia.org/wiki/RANSAC RANSAC (Random Sample Consensus)]. RANSAC works under the assumption that the data contains ''inliers'' (data that can be adjusted to the model, even with a little noise) and ''outliers'' (data that does not fit the model at all). The algorithm is non-deterministic: with every iteration, the probability of the result being the correct one increases, though it will never reach 100%.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:RANSAC_before.png | Set of data points containing outliers and inliers fitting a line model (image from Wikimedia Commons).&lt;br /&gt;
File:RANSAC_after.png | The same set, after applying RANSAC to find the parameters of the line model, discarding outliers (image from Wikimedia Commons).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
PCL offers implementations of RANSAC to work with point clouds. For example, you can find which points in the cloud fit the model of a sphere, and the procedure will return the parameters of said model. This is how you would do it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/ransac.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/sac_model_sphere.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr inlierPoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// RANSAC objects: model and algorithm.&lt;br /&gt;
	pcl::SampleConsensusModelSphere&amp;lt;pcl::PointXYZ&amp;gt;::Ptr model(new pcl::SampleConsensusModelSphere&amp;lt;pcl::PointXYZ&amp;gt;(cloud));&lt;br /&gt;
	pcl::RandomSampleConsensus&amp;lt;pcl::PointXYZ&amp;gt; ransac(model);&lt;br /&gt;
	// Set the maximum allowed distance to the model.&lt;br /&gt;
	ransac.setDistanceThreshold(0.01);&lt;br /&gt;
	ransac.computeModel();&lt;br /&gt;
&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; inlierIndices;&lt;br /&gt;
	ransac.getInliers(inlierIndices);&lt;br /&gt;
&lt;br /&gt;
	// Copy all inliers of the model to another cloud.&lt;br /&gt;
	pcl::copyPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(*cloud, inlierIndices, *inlierPoints);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are many more models to choose from. The most important ones are:&lt;br /&gt;
&lt;br /&gt;
* [http://docs.pointclouds.org/trunk/classpcl_1_1_sample_consensus_model_circle2_d.html pcl::SampleConsensusModelCircle2D]: A 2D circle on the X-Y plane.&lt;br /&gt;
* [http://docs.pointclouds.org/trunk/classpcl_1_1_sample_consensus_model_circle3_d.html pcl::SampleConsensusModelCircle3D]: A 3D circle (any plane).&lt;br /&gt;
* [http://docs.pointclouds.org/trunk/classpcl_1_1_sample_consensus_model_cone.html pcl::SampleConsensusModelCone]: A cone.&lt;br /&gt;
* [http://docs.pointclouds.org/trunk/classpcl_1_1_sample_consensus_model_cylinder.html pcl::SampleConsensusModelCylinder]: A cylinder.&lt;br /&gt;
* [http://docs.pointclouds.org/trunk/classpcl_1_1_sample_consensus_model_line.html pcl::SampleConsensusModelLine]: A line.&lt;br /&gt;
* [http://docs.pointclouds.org/trunk/classpcl_1_1_sample_consensus_model_plane.html pcl::SampleConsensusModelPlane]: A plane.&lt;br /&gt;
* [http://docs.pointclouds.org/trunk/classpcl_1_1_sample_consensus_model_sphere.html pcl::SampleConsensusModelSphere]: A sphere.&lt;br /&gt;
* [http://docs.pointclouds.org/trunk/classpcl_1_1_sample_consensus_model_stick.html pcl::SampleConsensusModelStick]: A stick (a line with user-given minimum/maximum width).&lt;br /&gt;
&lt;br /&gt;
Also, the RANSAC class offers some functions to configure its behavior. For example, with &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setMaxIterations()&amp;quot;&amp;lt;/span&amp;gt; you can specify the maximum number of iterations the algorithm will run for (without such a limit, RANSAC would just keep iterating, getting a slightly better result each time).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Model, Model threshold&lt;br /&gt;
* '''Output''': Vector of Point indices (inliers)&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/random_sample_consensus.php How to use Random Sample Consensus model]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_random_sample_consensus.html pcl::RandomSampleConsensus]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_sphere_model.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Projecting points==&lt;br /&gt;
&lt;br /&gt;
After you have recovered the coefficients of a model, it is possible to project the points of a cloud onto said model. For example, if the model is a plane, after projection the points of the cloud would all lie on the plane.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/method_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/model_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/extract_indices.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/project_inliers.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudNoPlane(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr planePoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr projectedPoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Get the plane model, if present.&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setInputCloud(cloud);&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
	pcl::PointIndices::Ptr inlierIndices(new pcl::PointIndices);&lt;br /&gt;
	segmentation.segment(*inlierIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	if (inlierIndices-&amp;gt;indices.size() == 0)&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Could not find a plane in the scene.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	else&lt;br /&gt;
	{&lt;br /&gt;
		std::cerr &amp;lt;&amp;lt; &amp;quot;Plane coefficients: &amp;quot; &amp;lt;&amp;lt; coefficients-&amp;gt;values[0] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; coefficients-&amp;gt;values[1] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; coefficients-&amp;gt;values[2] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; coefficients-&amp;gt;values[3] &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		// Create a second point cloud that does not have the plane points.&lt;br /&gt;
		// Also, extract the plane points to visualize them later.&lt;br /&gt;
		pcl::ExtractIndices&amp;lt;pcl::PointXYZ&amp;gt; extract;&lt;br /&gt;
		extract.setInputCloud(cloud);&lt;br /&gt;
		extract.setIndices(inlierIndices);&lt;br /&gt;
		extract.filter(*planePoints);&lt;br /&gt;
		extract.setNegative(true);&lt;br /&gt;
		extract.filter(*cloudNoPlane);&lt;br /&gt;
&lt;br /&gt;
		// Object for projecting points onto a model.&lt;br /&gt;
		pcl::ProjectInliers&amp;lt;pcl::PointXYZ&amp;gt; projection;&lt;br /&gt;
		// We have to specify what model we used.&lt;br /&gt;
		projection.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
		projection.setInputCloud(cloudNoPlane);&lt;br /&gt;
		// And we have to give the coefficients we got.&lt;br /&gt;
		projection.setModelCoefficients(coefficients);&lt;br /&gt;
		projection.filter(*projectedPoints);&lt;br /&gt;
&lt;br /&gt;
		// Visualize everything.&lt;br /&gt;
		pcl::visualization::CloudViewer viewerPlane(&amp;quot;Plane&amp;quot;);&lt;br /&gt;
		viewerPlane.showCloud(planePoints);&lt;br /&gt;
		while (!viewerPlane.wasStopped())&lt;br /&gt;
		{&lt;br /&gt;
			// Do nothing but wait.&lt;br /&gt;
		}&lt;br /&gt;
		pcl::visualization::CloudViewer viewerProjection(&amp;quot;Projected points&amp;quot;);&lt;br /&gt;
		viewerProjection.showCloud(projectedPoints);&lt;br /&gt;
		while (!viewerProjection.wasStopped())&lt;br /&gt;
		{&lt;br /&gt;
			// Do nothing but wait.&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After running the example, you will see that the remaining points (those not belonging to the plane) have been &amp;quot;flattened&amp;quot; onto it (if there was a plane in the scene, that is). The program will first show you the points that were fitted to the plane. When you close that window with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt;, a second one will open displaying the results of the projection.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Model type, Model coefficients&lt;br /&gt;
* '''Output''': Projected points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/project_inliers.php Projecting points using a parametric model]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_project_inliers.html pcl::ProjectInliers]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_point_projection.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Segmentation=&lt;br /&gt;
&lt;br /&gt;
''Segmentation'' consists of breaking the cloud apart into different pieces or sections: groups of points called ''clusters'' (which is why it is also referred to as ''clustering''). The idea is to divide the cloud into several parts that can be processed independently. Ideally, every cluster would correspond to the intuitive notion of an &amp;quot;object&amp;quot;. For example, for a cloud that shows 3 boxes on a table, 4 clusters would be created: one for the table, and one for each of the boxes.&lt;br /&gt;
&lt;br /&gt;
There are many segmentation techniques, each with its own criteria for grouping points together: some consider only the distance between points, while others also take the normals, or even the texture, into account. Here I will give you an overview of each one.&lt;br /&gt;
&lt;br /&gt;
Segmentation, together with all other methods seen so far, will allow you to retrieve the models of individual objects from a scene. Also, it is possible to segment certain shapes like planes or cylinders. If you need example point clouds to practice your segmentation skills with, take a look at the [http://www.acin.tuwien.ac.at/?id=289 OSD] (Object Segmentation Database).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://www.pointclouds.org/assets/icra2013/pcl_organized_segmentation.pdf Organized Point Cloud Segmentation]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Euclidean==&lt;br /&gt;
&lt;br /&gt;
Euclidean segmentation is the simplest of all. It checks the distance between two points: if it is less than a threshold, both are considered to belong to the same cluster. It works like a [https://en.wikipedia.org/wiki/Flood_fill flood fill] algorithm: a point in the cloud is &amp;quot;marked&amp;quot; as &amp;quot;chosen&amp;quot; for the cluster. Then, it spreads like a virus to all other points that are near enough, and from those to even more points, until no new ones can be added. At that point, a new cluster is initialized, and the procedure starts again with the remaining unmarked points.&lt;br /&gt;
&lt;br /&gt;
In PCL, Euclidean segmentation is done as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/extract_clusters.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// kd-tree object for searches.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	kdtree-&amp;gt;setInputCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	// Euclidean clustering object.&lt;br /&gt;
	pcl::EuclideanClusterExtraction&amp;lt;pcl::PointXYZ&amp;gt; clustering;&lt;br /&gt;
	// Set cluster tolerance to 2cm (small values may cause objects to be divided&lt;br /&gt;
	// into several clusters, whereas big values may join objects in the same cluster).&lt;br /&gt;
	clustering.setClusterTolerance(0.02);&lt;br /&gt;
	// Set the minimum and maximum number of points that a cluster can have.&lt;br /&gt;
	clustering.setMinClusterSize(100);&lt;br /&gt;
	clustering.setMaxClusterSize(25000);&lt;br /&gt;
	clustering.setSearchMethod(kdtree);&lt;br /&gt;
	clustering.setInputCloud(cloud);&lt;br /&gt;
	std::vector&amp;lt;pcl::PointIndices&amp;gt; clusters;&lt;br /&gt;
	clustering.extract(clusters);&lt;br /&gt;
&lt;br /&gt;
	// For every cluster...&lt;br /&gt;
	int currentClusterNum = 1;&lt;br /&gt;
	for (std::vector&amp;lt;pcl::PointIndices&amp;gt;::const_iterator i = clusters.begin(); i != clusters.end(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		// ...add all its points to a new cloud...&lt;br /&gt;
		pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cluster(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
		for (std::vector&amp;lt;int&amp;gt;::const_iterator point = i-&amp;gt;indices.begin(); point != i-&amp;gt;indices.end(); point++)&lt;br /&gt;
			cluster-&amp;gt;points.push_back(cloud-&amp;gt;points[*point]);&lt;br /&gt;
		cluster-&amp;gt;width = cluster-&amp;gt;points.size();&lt;br /&gt;
		cluster-&amp;gt;height = 1;&lt;br /&gt;
		cluster-&amp;gt;is_dense = true;&lt;br /&gt;
&lt;br /&gt;
		// ...and save it to disk.&lt;br /&gt;
		if (cluster-&amp;gt;points.size() &amp;lt;= 0)&lt;br /&gt;
			break;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Cluster &amp;quot; &amp;lt;&amp;lt; currentClusterNum &amp;lt;&amp;lt; &amp;quot; has &amp;quot; &amp;lt;&amp;lt; cluster-&amp;gt;points.size() &amp;lt;&amp;lt; &amp;quot; points.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::string fileName = &amp;quot;cluster&amp;quot; + boost::to_string(currentClusterNum) + &amp;quot;.pcd&amp;quot;;&lt;br /&gt;
		pcl::io::savePCDFileASCII(fileName, *cluster);&lt;br /&gt;
&lt;br /&gt;
		currentClusterNum++;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You may have to play with the cluster tolerance parameter until you get good results. Also, it is advisable to perform planar segmentation first (we will see it later) to remove the floor and/or table(s) from the scene; otherwise, the results may be wrong.&lt;br /&gt;
&lt;br /&gt;
After the program finishes, you can visualize all clusters at the same time with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_viewer cluster*&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:EuclideanClusteringBefore.png | Original cloud.&lt;br /&gt;
File:EuclideanClusteringAfter.png | Clusters recovered after Euclidean segmentation (each cluster is rendered with a different color).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All segmentation objects provide a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setIndices()&amp;quot;&amp;lt;/span&amp;gt; function. You can use it to provide the indices of the points you want to apply clustering to, instead of the whole cloud.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Cluster tolerance, Minimum cluster size, Maximum cluster size, Search method, [Indices]&lt;br /&gt;
* '''Output''': Vector of Point indices (clusters)&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/cluster_extraction.php Euclidean Cluster Extraction]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_euclidean_cluster_extraction.html pcl::EuclideanClusterExtraction]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_euclidean_clustering.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Conditional===&lt;br /&gt;
&lt;br /&gt;
Conditional Euclidean segmentation works the same way as the standard one seen above, with one exception: apart from the distance check, points must also meet a special, custom requirement to be added to a cluster. This condition is user-specified. For every pair of points (the first one, the ''seed'', is the one currently being processed; the second one, the ''candidate'', is a neighbor of the former that is being tested), a custom function will be called. This function has a fixed signature: it receives&lt;br /&gt;
# a copy of the two points, so we can perform our own tests, and&lt;br /&gt;
# the squared distance between them,&lt;br /&gt;
and returns a boolean value. If it returns true, the candidate may be added to the cluster; if false, it will not be, even though it passed the distance check.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/conditional_euclidean_clustering.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// If this function returns true, the candidate point will be added&lt;br /&gt;
// to the cluster of the seed point.&lt;br /&gt;
bool&lt;br /&gt;
customCondition(const pcl::PointXYZ&amp;amp; seedPoint, const pcl::PointXYZ&amp;amp; candidatePoint, float squaredDistance)&lt;br /&gt;
{&lt;br /&gt;
	// Do whatever you want here.&lt;br /&gt;
	if (candidatePoint.y &amp;lt; seedPoint.y)&lt;br /&gt;
		return false;&lt;br /&gt;
&lt;br /&gt;
	return true;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Conditional Euclidean clustering object.&lt;br /&gt;
	pcl::ConditionalEuclideanClustering&amp;lt;pcl::PointXYZ&amp;gt; clustering;&lt;br /&gt;
	clustering.setClusterTolerance(0.02);&lt;br /&gt;
	clustering.setMinClusterSize(100);&lt;br /&gt;
	clustering.setMaxClusterSize(25000);&lt;br /&gt;
	clustering.setInputCloud(cloud);&lt;br /&gt;
	// Set the function that will be called for every pair of points to check.&lt;br /&gt;
	clustering.setConditionFunction(&amp;amp;customCondition);&lt;br /&gt;
	std::vector&amp;lt;pcl::PointIndices&amp;gt; clusters;&lt;br /&gt;
	clustering.segment(clusters);&lt;br /&gt;
&lt;br /&gt;
	// For every cluster...&lt;br /&gt;
	int currentClusterNum = 1;&lt;br /&gt;
	for (std::vector&amp;lt;pcl::PointIndices&amp;gt;::const_iterator i = clusters.begin(); i != clusters.end(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		// ...add all its points to a new cloud...&lt;br /&gt;
		pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cluster(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
		for (std::vector&amp;lt;int&amp;gt;::const_iterator point = i-&amp;gt;indices.begin(); point != i-&amp;gt;indices.end(); point++)&lt;br /&gt;
			cluster-&amp;gt;points.push_back(cloud-&amp;gt;points[*point]);&lt;br /&gt;
		cluster-&amp;gt;width = cluster-&amp;gt;points.size();&lt;br /&gt;
		cluster-&amp;gt;height = 1;&lt;br /&gt;
		cluster-&amp;gt;is_dense = true;&lt;br /&gt;
&lt;br /&gt;
		// ...and save it to disk.&lt;br /&gt;
		if (cluster-&amp;gt;points.size() &amp;lt;= 0)&lt;br /&gt;
			break;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Cluster &amp;quot; &amp;lt;&amp;lt; currentClusterNum &amp;lt;&amp;lt; &amp;quot; has &amp;quot; &amp;lt;&amp;lt; cluster-&amp;gt;points.size() &amp;lt;&amp;lt; &amp;quot; points.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::string fileName = &amp;quot;cluster&amp;quot; + boost::to_string(currentClusterNum) + &amp;quot;.pcd&amp;quot;;&lt;br /&gt;
		pcl::io::savePCDFileASCII(fileName, *cluster);&lt;br /&gt;
&lt;br /&gt;
		currentClusterNum++;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The condition implemented above (checking whether the candidate's Y coordinate is lower than the seed's) does not make much sense, but it should let you see how the method works.&lt;br /&gt;
&lt;br /&gt;
If you pass &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;true&amp;quot;&amp;lt;/span&amp;gt; to the constructor of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::ConditionalEuclideanClustering&amp;quot;&amp;lt;/span&amp;gt;, the object will store the clusters that were discarded for being too small or too big. You can later retrieve them with the [http://docs.pointclouds.org/trunk/classpcl_1_1_conditional_euclidean_clustering.html#af264099e843fbf5d48049857e1e1d6af getRemovedClusters()] function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Cluster tolerance, Minimum cluster size, Maximum cluster size, Search method, Custom check function, [Indices]&lt;br /&gt;
* '''Output''': Vector of Point indices (clusters)&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/conditional_euclidean_clustering.php Conditional Euclidean Clustering]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_conditional_euclidean_clustering.html pcl::ConditionalEuclideanClustering]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_conditional_euclidean_clustering.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Region growing==&lt;br /&gt;
&lt;br /&gt;
This type of segmentation (which, like the Euclidean one, is a greedy, flood-fill-like approach) groups together points that satisfy a smoothness constraint: the angle between their normals and the difference of their curvatures are checked to see whether they could belong to the same smooth surface. Think about a box lying on a table: with Euclidean segmentation, both would be put in the same cluster because they are &amp;quot;touching&amp;quot;. With region growing segmentation this would not happen, because there is a 90° difference (with ideal normal estimation, that is) between the normal of a point on the table and that of a point on the box's side.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/search/kdtree.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/region_growing.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// kd-tree object for searches.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	kdtree-&amp;gt;setInputCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	// Estimate the normals.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Region growing clustering object.&lt;br /&gt;
	pcl::RegionGrowing&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; clustering;&lt;br /&gt;
	clustering.setMinClusterSize(100);&lt;br /&gt;
	clustering.setMaxClusterSize(10000);&lt;br /&gt;
	clustering.setSearchMethod(kdtree);&lt;br /&gt;
	clustering.setNumberOfNeighbours(30);&lt;br /&gt;
	clustering.setInputCloud(cloud);&lt;br /&gt;
	clustering.setInputNormals(normals);&lt;br /&gt;
	// Set the angle in radians that will be the smoothness threshold&lt;br /&gt;
	// (the maximum allowable deviation of the normals).&lt;br /&gt;
	clustering.setSmoothnessThreshold(7.0 / 180.0 * M_PI); // 7 degrees.&lt;br /&gt;
	// Set the curvature threshold. The disparity between curvatures will be&lt;br /&gt;
	// tested after the normal deviation check has passed.&lt;br /&gt;
	clustering.setCurvatureThreshold(1.0);&lt;br /&gt;
&lt;br /&gt;
	std::vector &amp;lt;pcl::PointIndices&amp;gt; clusters;&lt;br /&gt;
	clustering.extract(clusters);&lt;br /&gt;
&lt;br /&gt;
	// For every cluster...&lt;br /&gt;
	int currentClusterNum = 1;&lt;br /&gt;
	for (std::vector&amp;lt;pcl::PointIndices&amp;gt;::const_iterator i = clusters.begin(); i != clusters.end(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		// ...add all its points to a new cloud...&lt;br /&gt;
		pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cluster(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
		for (std::vector&amp;lt;int&amp;gt;::const_iterator point = i-&amp;gt;indices.begin(); point != i-&amp;gt;indices.end(); point++)&lt;br /&gt;
			cluster-&amp;gt;points.push_back(cloud-&amp;gt;points[*point]);&lt;br /&gt;
		cluster-&amp;gt;width = cluster-&amp;gt;points.size();&lt;br /&gt;
		cluster-&amp;gt;height = 1;&lt;br /&gt;
		cluster-&amp;gt;is_dense = true;&lt;br /&gt;
&lt;br /&gt;
		// ...and save it to disk.&lt;br /&gt;
		if (cluster-&amp;gt;points.size() &amp;lt;= 0)&lt;br /&gt;
			break;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Cluster &amp;quot; &amp;lt;&amp;lt; currentClusterNum &amp;lt;&amp;lt; &amp;quot; has &amp;quot; &amp;lt;&amp;lt; cluster-&amp;gt;points.size() &amp;lt;&amp;lt; &amp;quot; points.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::string fileName = &amp;quot;cluster&amp;quot; + boost::to_string(currentClusterNum) + &amp;quot;.pcd&amp;quot;;&lt;br /&gt;
		pcl::io::savePCDFileASCII(fileName, *cluster);&lt;br /&gt;
&lt;br /&gt;
		currentClusterNum++;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Minimum cluster size, Maximum cluster size, Neighbor count, Search method, Normals, Smoothness threshold, Curvature threshold, [Indices]&lt;br /&gt;
* '''Output''': Vector of Point indices (clusters)&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/region_growing_segmentation.php Region growing segmentation]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_region_growing.html pcl::RegionGrowing]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_region_growing_clustering.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Color-based===&lt;br /&gt;
&lt;br /&gt;
This method is based on the previous one but, instead of comparing the normals and the curvature of the points, it compares their RGB color: as always, if the difference is less than a threshold, both points are put in the same cluster. Obviously, for this to work you need a cloud with RGB information (of the type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGB&amp;quot;&amp;lt;/span&amp;gt;). You can use the [[PCL/OpenNI_tutorial_1:_Installing_and_testing#Testing_(OpenNI_viewer) | OpenNI viewer]] program to capture such clouds with a Kinect or Xtion.&lt;br /&gt;
&lt;br /&gt;
Apart from the color check, this algorithm also uses two postprocessing steps. In the first one, neighboring clusters with similar average color are merged (this is controlled with a threshold, too). In the second one, clusters that are smaller than the minimum size are merged with their neighbors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/search/kdtree.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/region_growing_rgb.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud, with color information.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGB&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// kd-tree object for searches.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
	kdtree-&amp;gt;setInputCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	// Color-based region growing clustering object.&lt;br /&gt;
	pcl::RegionGrowingRGB&amp;lt;pcl::PointXYZRGB&amp;gt; clustering;&lt;br /&gt;
	clustering.setInputCloud(cloud);&lt;br /&gt;
	clustering.setSearchMethod(kdtree);&lt;br /&gt;
	// Here, the minimum cluster size affects also the postprocessing step:&lt;br /&gt;
	// clusters smaller than this will be merged with their neighbors.&lt;br /&gt;
	clustering.setMinClusterSize(100);&lt;br /&gt;
	// Set the distance threshold, to know which points will be considered neighbors.&lt;br /&gt;
	clustering.setDistanceThreshold(10);&lt;br /&gt;
	// Color threshold for comparing the RGB color of two points.&lt;br /&gt;
	clustering.setPointColorThreshold(6);&lt;br /&gt;
	// Region color threshold for the postprocessing step: clusters with colors&lt;br /&gt;
	// within the threshold will be merged in one.&lt;br /&gt;
	clustering.setRegionColorThreshold(5);&lt;br /&gt;
&lt;br /&gt;
	std::vector &amp;lt;pcl::PointIndices&amp;gt; clusters;&lt;br /&gt;
	clustering.extract(clusters);&lt;br /&gt;
&lt;br /&gt;
	// For every cluster...&lt;br /&gt;
	int currentClusterNum = 1;&lt;br /&gt;
	for (std::vector&amp;lt;pcl::PointIndices&amp;gt;::const_iterator i = clusters.begin(); i != clusters.end(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		// ...add all its points to a new cloud...&lt;br /&gt;
		pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cluster(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
		for (std::vector&amp;lt;int&amp;gt;::const_iterator point = i-&amp;gt;indices.begin(); point != i-&amp;gt;indices.end(); point++)&lt;br /&gt;
			cluster-&amp;gt;points.push_back(cloud-&amp;gt;points[*point]);&lt;br /&gt;
		cluster-&amp;gt;width = cluster-&amp;gt;points.size();&lt;br /&gt;
		cluster-&amp;gt;height = 1;&lt;br /&gt;
		cluster-&amp;gt;is_dense = true;&lt;br /&gt;
&lt;br /&gt;
		// ...and save it to disk.&lt;br /&gt;
		if (cluster-&amp;gt;points.size() &amp;lt;= 0)&lt;br /&gt;
			break;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Cluster &amp;quot; &amp;lt;&amp;lt; currentClusterNum &amp;lt;&amp;lt; &amp;quot; has &amp;quot; &amp;lt;&amp;lt; cluster-&amp;gt;points.size() &amp;lt;&amp;lt; &amp;quot; points.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::string fileName = &amp;quot;cluster&amp;quot; + boost::to_string(currentClusterNum) + &amp;quot;.pcd&amp;quot;;&lt;br /&gt;
		pcl::io::savePCDFileASCII(fileName, *cluster);&lt;br /&gt;
&lt;br /&gt;
		currentClusterNum++;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Minimum cluster size, Maximum cluster size, Search method, Distance threshold, Point color threshold, Region color threshold, [Indices]&lt;br /&gt;
* '''Output''': Vector of Point indices (clusters)&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/region_growing_rgb_segmentation.php Color-based region growing segmentation]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_region_growing_r_g_b.html pcl::RegionGrowingRGB]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_color_region_growing_clustering.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Min-Cut==&lt;br /&gt;
&lt;br /&gt;
The Min-Cut (minimum cut) algorithm is different from the previous ones. It performs binary segmentation: it divides the cloud in two clusters, one with points that do not belong to the object we are interested in (background points) and another with points that are considered part of the object (foreground points).&lt;br /&gt;
&lt;br /&gt;
In order to do that, the algorithm builds a graph where each vertex (node) represents a point, plus two additional vertices called ''source'' and ''sink'' that are connected to all the others with edges of different penalties (weights). Edges are also established between neighboring points, each with a weight that depends on the distance separating them: the greater the distance, the higher the probability of the edge being cut. The algorithm also needs as input a point that is known to be the center of the object, and the object's radius (size). After everything has been set, the minimum cut is computed, yielding the list of foreground points.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/min_cut_segmentation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Min-cut clustering object.&lt;br /&gt;
	pcl::MinCutSegmentation&amp;lt;pcl::PointXYZ&amp;gt; clustering;&lt;br /&gt;
	clustering.setInputCloud(cloud);&lt;br /&gt;
	// Create a cloud that lists all the points that we know belong to the object&lt;br /&gt;
	// (foreground points). We should set here the object's center.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr foregroundPoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;());&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 100.0;&lt;br /&gt;
	point.y = 100.0;&lt;br /&gt;
	point.z = 100.0;&lt;br /&gt;
	foregroundPoints-&amp;gt;points.push_back(point);&lt;br /&gt;
	clustering.setForegroundPoints(foregroundPoints);&lt;br /&gt;
	// Set sigma, which affects the smooth cost calculation. It should be&lt;br /&gt;
	// set depending on the spacing between points in the cloud (resolution).&lt;br /&gt;
	clustering.setSigma(0.05);&lt;br /&gt;
	// Set the radius of the object we are looking for.&lt;br /&gt;
	clustering.setRadius(0.20);&lt;br /&gt;
	// Set the number of neighbors to look for. Increasing this also increases&lt;br /&gt;
	// the number of edges the graph will have.&lt;br /&gt;
	clustering.setNumberOfNeighbours(20);&lt;br /&gt;
	// Set the foreground penalty. It is the weight of the edges&lt;br /&gt;
	// that connect the cloud's points with the source vertex.&lt;br /&gt;
	clustering.setSourceWeight(0.6);&lt;br /&gt;
&lt;br /&gt;
	std::vector &amp;lt;pcl::PointIndices&amp;gt; clusters;&lt;br /&gt;
	clustering.extract(clusters);&lt;br /&gt;
&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;Maximum flow is &amp;quot; &amp;lt;&amp;lt; clustering.getMaxFlow() &amp;lt;&amp;lt; &amp;quot;.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
	// For every cluster...&lt;br /&gt;
	int currentClusterNum = 1;&lt;br /&gt;
	for (std::vector&amp;lt;pcl::PointIndices&amp;gt;::const_iterator i = clusters.begin(); i != clusters.end(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		// ...add all its points to a new cloud...&lt;br /&gt;
		pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cluster(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
		for (std::vector&amp;lt;int&amp;gt;::const_iterator point = i-&amp;gt;indices.begin(); point != i-&amp;gt;indices.end(); point++)&lt;br /&gt;
			cluster-&amp;gt;points.push_back(cloud-&amp;gt;points[*point]);&lt;br /&gt;
		cluster-&amp;gt;width = cluster-&amp;gt;points.size();&lt;br /&gt;
		cluster-&amp;gt;height = 1;&lt;br /&gt;
		cluster-&amp;gt;is_dense = true;&lt;br /&gt;
&lt;br /&gt;
		// ...and save it to disk.&lt;br /&gt;
		if (cluster-&amp;gt;points.size() &amp;lt;= 0)&lt;br /&gt;
			break;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Cluster &amp;quot; &amp;lt;&amp;lt; currentClusterNum &amp;lt;&amp;lt; &amp;quot; has &amp;quot; &amp;lt;&amp;lt; cluster-&amp;gt;points.size() &amp;lt;&amp;lt; &amp;quot; points.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::string fileName = &amp;quot;cluster&amp;quot; + boost::to_string(currentClusterNum) + &amp;quot;.pcd&amp;quot;;&lt;br /&gt;
		pcl::io::savePCDFileASCII(fileName, *cluster);&lt;br /&gt;
&lt;br /&gt;
		currentClusterNum++;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More details about the min-cut algorithm can be found in the [http://gfx.cs.princeton.edu/pubs/Golovinskiy_2009_MBS/paper_small.pdf original paper (PDF)].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Foreground points (object's center), Object's radius, Sigma, Neighbor count, Foreground penalty, [Indices]&lt;br /&gt;
* '''Output''': Vector of Point indices (object cluster)&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/min_cut_segmentation.php Min-Cut Based Segmentation]&lt;br /&gt;
* '''Publication''':&lt;br /&gt;
** [http://gfx.cs.princeton.edu/pubs/Golovinskiy_2009_MBS/paper_small.pdf Min-Cut Based Segmentation of Point Clouds] (Aleksey Golovinskiy and Thomas Funkhouser, 2009)&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_min_cut_clustering.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==RANSAC==&lt;br /&gt;
&lt;br /&gt;
As we saw in a previous section, PCL offers an implementation of the RANSAC algorithm to fit a set of points to a model. It searches the cloud for the points that support the chosen model (plane, sphere...), and also computes the coefficients of the model. A dedicated segmentation object makes it easy to extract the points retrieved by the algorithm.&lt;br /&gt;
&lt;br /&gt;
If what you want is to extract the points of a model with already known coefficients, see this tutorial:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/model_outlier_removal.php Filtering a PointCloud using ModelOutlierRemoval]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Plane model===&lt;br /&gt;
&lt;br /&gt;
The following code lets you segment all planar surfaces from a point cloud. This is very important, because you will be able to detect things like the floor, ceiling, walls, or a table in a scene. If you know that the object you are looking for is sitting on a table, you can discard all points that are not supported by it, with the associated performance gain. Take a look at the [[PCL/OpenNI_tutorial_5:_3D_object_recognition_%28pipeline%29#Segmentation|pcl::ExtractPolygonalPrismData]] class, which is designed for this. After giving the coefficients of the plane and computing the convex hull, you can then create a 3D prism by extruding the hull a given height, and extract all points that lie inside it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/method_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/model_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr inlierPoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the plane model coefficients.&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	// Create the segmentation object.&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setInputCloud(cloud);&lt;br /&gt;
	// Configure the object to look for a plane.&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
	// Use RANSAC method.&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	// Set the maximum allowed distance to the model.&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	// Enable model coefficient refinement (optional).&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
&lt;br /&gt;
	pcl::PointIndices inlierIndices;&lt;br /&gt;
	segmentation.segment(inlierIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	if (inlierIndices.indices.size() == 0)&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Could not find any points that fitted the plane model.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	else&lt;br /&gt;
	{&lt;br /&gt;
		std::cerr &amp;lt;&amp;lt; &amp;quot;Model coefficients: &amp;quot; &amp;lt;&amp;lt; coefficients-&amp;gt;values[0] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; coefficients-&amp;gt;values[1] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; coefficients-&amp;gt;values[2] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; coefficients-&amp;gt;values[3] &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		// Copy all inliers of the model to another cloud.&lt;br /&gt;
		pcl::copyPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(*cloud, inlierIndices, *inlierPoints);&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Model type, Method type, Model threshold, [Optimize coefficients]&lt;br /&gt;
* '''Output''': Vector of Point indices (plane cluster)&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/planar_segmentation.php Plane model segmentation]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_s_a_c_segmentation.html pcl::SACSegmentation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_plane_segmentation.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Cylinder model===&lt;br /&gt;
&lt;br /&gt;
Segmenting a cylinder shape out of the cloud is analogous. There is only one additional parameter, for specifying the minimum and maximum radius:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/method_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/model_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr inlierPoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the cylinder model coefficients.&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	// Create the segmentation object.&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setInputCloud(cloud);&lt;br /&gt;
	// Configure the object to look for a cylinder.&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_CYLINDER);&lt;br /&gt;
	// Use RANSAC method.&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	// Set the maximum allowed distance to the model.&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	// Enable model coefficient refinement (optional).&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
	// Set minimum and maximum radii of the cylinder.&lt;br /&gt;
	segmentation.setRadiusLimits(0, 0.1);&lt;br /&gt;
&lt;br /&gt;
	pcl::PointIndices inlierIndices;&lt;br /&gt;
	segmentation.segment(inlierIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	if (inlierIndices.indices.size() == 0)&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Could not find any points that fitted the cylinder model.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	// Copy all inliers of the model to another cloud.&lt;br /&gt;
	else pcl::copyPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(*cloud, inlierIndices, *inlierPoints);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Model type, Method type, Model threshold, [Optimize coefficients]&lt;br /&gt;
* '''Output''': Vector of Point indices (cylinder cluster)&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/cylinder_segmentation.php Cylinder model segmentation]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_s_a_c_segmentation.html pcl::SACSegmentation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_cylinder_segmentation.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Retrieving the hull=&lt;br /&gt;
&lt;br /&gt;
For a given set of points, PCL can compute the external hull of the shape, using the [http://www.qhull.org/ QHull library]. The hull can be defined as the points that form the outermost boundary of the set, like a shell showing the volume. There are two types of hull that you can calculate, the [https://en.wikipedia.org/wiki/Convex_hull convex] and the concave one. A convex hull cannot have &amp;quot;holes&amp;quot;: the segment that connects any pair of its points never crosses &amp;quot;empty&amp;quot; space (space not inside the hull). A concave hull, on the other hand, usually encloses less area, as you can see in the following images:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:ConcaveHull.png | Concave hull.&lt;br /&gt;
File:ConvexHull.png | Convex hull.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Concave hull==&lt;br /&gt;
&lt;br /&gt;
The following code shows how you can use PCL to compute the concave hull of the first plane found in the scene:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/method_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/model_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/extract_indices.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/concave_hull.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr plane(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr concaveHull(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Get the plane model, if present.&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setInputCloud(cloud);&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
	pcl::PointIndices::Ptr inlierIndices(new pcl::PointIndices);&lt;br /&gt;
	segmentation.segment(*inlierIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	if (inlierIndices-&amp;gt;indices.size() == 0)&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Could not find a plane in the scene.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	else&lt;br /&gt;
	{&lt;br /&gt;
		// Copy the points of the plane to a new cloud.&lt;br /&gt;
		pcl::ExtractIndices&amp;lt;pcl::PointXYZ&amp;gt; extract;&lt;br /&gt;
		extract.setInputCloud(cloud);&lt;br /&gt;
		extract.setIndices(inlierIndices);&lt;br /&gt;
		extract.filter(*plane);&lt;br /&gt;
&lt;br /&gt;
		// Object for retrieving the concave hull.&lt;br /&gt;
		pcl::ConcaveHull&amp;lt;pcl::PointXYZ&amp;gt; hull;&lt;br /&gt;
		hull.setInputCloud(plane);&lt;br /&gt;
		// Set alpha, which is the maximum length from a vertex to the center of the Voronoi cell&lt;br /&gt;
		// (the smaller, the greater the resolution of the hull).&lt;br /&gt;
		hull.setAlpha(0.1);&lt;br /&gt;
		hull.reconstruct(*concaveHull);&lt;br /&gt;
&lt;br /&gt;
		// Visualize the hull.&lt;br /&gt;
		pcl::visualization::CloudViewer viewerPlane(&amp;quot;Concave hull&amp;quot;);&lt;br /&gt;
		viewerPlane.showCloud(concaveHull);&lt;br /&gt;
		while (!viewerPlane.wasStopped())&lt;br /&gt;
		{&lt;br /&gt;
			// Do nothing but wait.&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By choosing a smaller alpha value, you can improve the detail of the hull.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:ConcaveHullBefore.png | Table scene taken from the PCL dataset.&lt;br /&gt;
File:ConcaveHullAfter.png | Concave hull of the table seen in the cloud (better seen fullscreen).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The class [http://docs.pointclouds.org/trunk/classpcl_1_1_crop_hull.html pcl::CropHull] lets you find points that lie inside or outside a hull. One possible use, for a scene with a table and objects on it, would be:&lt;br /&gt;
&lt;br /&gt;
# Find a plane in the scene (it should be the table).&lt;br /&gt;
# Compute the hull of the plane.&lt;br /&gt;
# Project the points of the cloud onto the plane.&lt;br /&gt;
# Find which projected points are lying inside the plane hull. Those belong to objects sitting on the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Alpha&lt;br /&gt;
* '''Output''': Hull points (concave)&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/hull_2d.php Construct a concave or convex hull polygon for a plane model]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_concave_hull.html pcl::ConcaveHull]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concave_hull.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Convex hull==&lt;br /&gt;
&lt;br /&gt;
Computing the convex hull is done the same way, just changing the names accordingly. Also, there is no need to set the alpha parameter here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/method_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/sample_consensus/model_types.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/extract_indices.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/convex_hull.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr plane(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr convexHull(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Get the plane model, if present.&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setInputCloud(cloud);&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
	pcl::PointIndices::Ptr inlierIndices(new pcl::PointIndices);&lt;br /&gt;
	segmentation.segment(*inlierIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	if (inlierIndices-&amp;gt;indices.size() == 0)&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Could not find a plane in the scene.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	else&lt;br /&gt;
	{&lt;br /&gt;
		// Copy the points of the plane to a new cloud.&lt;br /&gt;
		pcl::ExtractIndices&amp;lt;pcl::PointXYZ&amp;gt; extract;&lt;br /&gt;
		extract.setInputCloud(cloud);&lt;br /&gt;
		extract.setIndices(inlierIndices);&lt;br /&gt;
		extract.filter(*plane);&lt;br /&gt;
&lt;br /&gt;
		// Object for retrieving the convex hull.&lt;br /&gt;
		pcl::ConvexHull&amp;lt;pcl::PointXYZ&amp;gt; hull;&lt;br /&gt;
		hull.setInputCloud(plane);&lt;br /&gt;
		hull.reconstruct(*convexHull);&lt;br /&gt;
&lt;br /&gt;
		// Visualize the hull.&lt;br /&gt;
		pcl::visualization::CloudViewer viewerPlane(&amp;quot;Convex hull&amp;quot;);&lt;br /&gt;
		viewerPlane.showCloud(convexHull);&lt;br /&gt;
		while (!viewerPlane.wasStopped())&lt;br /&gt;
		{&lt;br /&gt;
			// Do nothing but wait.&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points&lt;br /&gt;
* '''Output''': Hull points (convex)&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/hull_2d.php Construct a concave or convex hull polygon for a plane model]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_convex_hull.html pcl::ConvexHull]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_convex_hull.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Reconstruction=&lt;br /&gt;
&lt;br /&gt;
To learn how to use PCL to build a smooth, parametric surface representation from a point cloud, read the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/bspline_fitting.php Fitting trimmed B-splines to unordered point clouds]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Triangulation==&lt;br /&gt;
&lt;br /&gt;
Triangulation is a way of estimating the surface captured by a point cloud, by connecting its points to form a continuous polygon mesh (of three-sided polygons, that is, triangles). After you retrieve this surface mesh, you can for example export it to a format that most 3D modelling tools understand, like [https://en.wikipedia.org/wiki/VTK VTK (Visualization Toolkit)], which can be opened in Blender or 3ds Max. PCL can also convert from VTK to [https://en.wikipedia.org/wiki/PLY_%28file_format%29 PLY] or [https://en.wikipedia.org/wiki/Wavefront_.obj_file OBJ].&lt;br /&gt;
&lt;br /&gt;
The following code uses PCL's implementation of a greedy triangulation algorithm that works with local 2D projections. For every point, it looks &amp;quot;down&amp;quot; along the normal, and connects neighboring points. For more information, especially regarding the parameters, check the API or the original PCL tutorial:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/gp3.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/vtk_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	// Object for storing both the points and the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr cloudNormals(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Normal estimation.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// The triangulation object requires the points and normals to be stored in the same structure.&lt;br /&gt;
	pcl::concatenateFields(*cloud, *normals, *cloudNormals);&lt;br /&gt;
	// Tree object for searches in this new object.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointNormal&amp;gt;::Ptr kdtree2(new pcl::search::KdTree&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
	kdtree2-&amp;gt;setInputCloud(cloudNormals);&lt;br /&gt;
&lt;br /&gt;
	// Triangulation object.&lt;br /&gt;
	pcl::GreedyProjectionTriangulation&amp;lt;pcl::PointNormal&amp;gt; triangulation;&lt;br /&gt;
	// Output object, containing the mesh.&lt;br /&gt;
	pcl::PolygonMesh triangles;&lt;br /&gt;
	// Maximum distance between connected points (maximum edge length).&lt;br /&gt;
	triangulation.setSearchRadius(0.025);&lt;br /&gt;
	// Maximum acceptable distance for a point to be considered,&lt;br /&gt;
	// relative to the distance of the nearest point.&lt;br /&gt;
	triangulation.setMu(2.5);&lt;br /&gt;
	// How many neighbors are searched for.&lt;br /&gt;
	triangulation.setMaximumNearestNeighbors(100);&lt;br /&gt;
	// Points will not be connected to the current point&lt;br /&gt;
	// if their normals deviate more than the specified angle.&lt;br /&gt;
	triangulation.setMaximumSurfaceAngle(M_PI / 4); // 45 degrees.&lt;br /&gt;
	// If false, the direction of normals will not be taken into account&lt;br /&gt;
	// when computing the angle between them.&lt;br /&gt;
	triangulation.setNormalConsistency(false);&lt;br /&gt;
	// Minimum and maximum angle there can be in a triangle.&lt;br /&gt;
	// The first is not guaranteed, the second is.&lt;br /&gt;
	triangulation.setMinimumAngle(M_PI / 18); // 10 degrees.&lt;br /&gt;
	triangulation.setMaximumAngle(2 * M_PI / 3); // 120 degrees.&lt;br /&gt;
&lt;br /&gt;
	// Triangulate the cloud.&lt;br /&gt;
	triangulation.setInputCloud(cloudNormals);&lt;br /&gt;
	triangulation.setSearchMethod(kdtree2);&lt;br /&gt;
	triangulation.reconstruct(triangles);&lt;br /&gt;
&lt;br /&gt;
	// Save to disk.&lt;br /&gt;
	pcl::io::saveVTKFile(&amp;quot;mesh.vtk&amp;quot;, triangles);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:VTK.png|thumb|center|600px|VTK file produced by triangulation of a point cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Maximum distance, Maximum edge length, Minimum angle, Maximum angle, Maximum surface angle, [Normal consistency]&lt;br /&gt;
* '''Output''': Polygon mesh&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/greedy_projection.php Fast triangulation of unordered point clouds]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_greedy_projection_triangulation.html pcl::GreedyProjectionTriangulation]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_polygon_mesh.html pcl::PolygonMesh]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_triangulation.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4740</id>
		<title>PCL/OpenNI tutorial 2: Cloud processing (basic)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4740"/>
				<updated>2015-11-01T23:05:50Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Point_cloud.png|thumb|right|200px|Rendering of a point cloud, showing a desktop.]]&lt;br /&gt;
&lt;br /&gt;
All right: in the [[PCL/OpenNI tutorial 1: Installing and testing| previous tutorial]] you installed OpenNI and PCL. You managed to get your device working. You compiled and ran the example program, watched as it fetched one frame after another, and even saved a couple of point clouds to disk. Now you may be wondering: &amp;quot;What now? What are depth sensors good for? What can I do with point clouds?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Well, depth cameras have a lot of useful applications. For example, you could use them as [http://tech.integrate.biz/kinect_mocap.htm motion tracking] hardware and save you thousands of dollars in professional gear. The original Kinect was actually designed as an alternative to physical controllers (e.g., gamepads) in games. [https://frantracerkinectft.codeplex.com/ Hand gestures] can also be recognized, so people have managed to successfully implement control interfaces similar to what could be seen in Minority Report.&lt;br /&gt;
&lt;br /&gt;
Working low-level with point clouds offers interesting possibilities too. As they are essentially a 3D &amp;quot;scan&amp;quot; of the scene, you could capture your whole room as a [http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php continuous 3D mesh], then import it into your modelling software of choice and use it as a reference for a better reconstruction. You could use it to measure distances or detect obstacles for a robot navigation system. But the most exciting alternative by far, and the one on which most research is focused, is performing 3D object recognition and pose estimation, even in [https://www.youtube.com/watch?feature=player_detailpage&amp;amp;v=quGhaggn3cQ#t=349 real time]. This would allow a robot to fulfill tasks like going to the kitchen to get you a cup of coffee.&lt;br /&gt;
&lt;br /&gt;
Most of the advanced stuff that can be done with a point cloud requires some previous steps, like filtering, reconstruction or normal estimation. In this tutorial, I will introduce you to the basics of point cloud processing, and leave the complicated methods for the [[PCL/OpenNI tutorial 3: Cloud processing (advanced) | next tutorial]]. I will explain what every technique does and what it should be used for, and include PCL code snippets so you can check how to implement it in your programs.&lt;br /&gt;
&lt;br /&gt;
=Point clouds=&lt;br /&gt;
&lt;br /&gt;
First, a little explanation. A point cloud as taken from a depth sensor consists of a series of points in 3D space, as simple as that. A &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::PointCloud&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; object stores the points inside a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;std::vector&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; structure. The cloud object exposes some functions that let you get information about them, like the point count. As you can see, the class is templated so you can instantiate it for many types of points. For example, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, which stores three floats for the X, Y and Z coordinates, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGB&amp;quot;&amp;lt;/span&amp;gt;, which also stores color (texture) information for each point, the kind we would get with Kinect or Xtion RGB-D cameras. A list of [http://pointclouds.org/documentation/tutorials/adding_custom_ptype.php#what-pointt-types-are-available-in-pcl point types] and a basic description of the [http://pointclouds.org/documentation/tutorials/basic_structures.php cloud object] are available on PCL's website.&lt;br /&gt;
&lt;br /&gt;
Clouds will usually be handled with pointers, as they tend to be large. PCL uses the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;boost::shared_ptr&amp;quot;&amp;lt;/span&amp;gt; class from the [https://en.wikipedia.org/wiki/Boost_%28C%2B%2B_libraries%29 Boost C++ libraries], which keeps track of the references to a certain object. Every time a copy of a shared pointer is created (for example, when you pass it to a function by value), a reference counter is automatically increased; when a copy is destroyed, it is decreased. If the counter ever reaches 0, it means that no one is holding an active reference to the object, and its destructor is called to free the memory. This can take a while to get used to, and you will be mixing cloud objects and cloud pointers until then, but do not worry.&lt;br /&gt;
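The reference-counting behavior described above can be reproduced with the standard library's &amp;quot;std::shared_ptr&amp;quot;, which works like the Boost class PCL builds its cloud pointers on. This is a minimal illustrative sketch; the &amp;quot;Cloud&amp;quot; alias and helper function are hypothetical, not PCL types:&lt;br /&gt;
&lt;br /&gt;

```cpp
#include <memory>
#include <vector>

// Stand-in for a point cloud: just a vector of coordinates.
using Cloud = std::vector<float>;

// Returns the reference count after copying the pointer once,
// showing that each copy increases the count by one.
long countAfterCopy(const std::shared_ptr<Cloud>& original)
{
	std::shared_ptr<Cloud> alias = original; // Count goes up by one.
	return alias.use_count();                // Caller's copy + "alias".
}
```

When &amp;quot;alias&amp;quot; goes out of scope the count drops back, and when the last owner is destroyed the cloud's memory is freed, exactly as described for PCL's cloud pointers.&lt;br /&gt;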
&lt;br /&gt;
==PCD files==&lt;br /&gt;
&lt;br /&gt;
When clouds are saved to disk, a [http://pointclouds.org/documentation/tutorials/pcd_file_format.php PCD] (Point Cloud Data) file is produced. You can choose to do so in binary or plain text format. The first is faster to work with, whereas the second lets you inspect the cloud by opening it with a common text editor. If you do so, you may see something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;# .PCD v.7 − Point Cloud Data file format&lt;br /&gt;
VERSION .7&lt;br /&gt;
FIELDS x y z&lt;br /&gt;
SIZE 4 4 4&lt;br /&gt;
TYPE F F F&lt;br /&gt;
WIDTH 2&lt;br /&gt;
HEIGHT 1&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 2&lt;br /&gt;
DATA ascii&lt;br /&gt;
0.73412 -1.12643 0.82218&lt;br /&gt;
0.44739 -0.34735 -0.04624&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells us that the cloud consists of 2 points of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; (because each point has only 3 fields: the X, Y and Z coordinates, each stored in a 4-byte float), and lists their coordinates. It also stores the viewpoint information (the position of the sensor relative to the points) as a translation plus an orientation quaternion. Knowing the position of the camera can be useful for certain procedures like normal estimation.&lt;br /&gt;
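Since the header is plain text, its fields can be read with ordinary string streams. The helper below is hypothetical (PCL itself parses headers through its reader classes); it just extracts a numeric field to illustrate the relationship WIDTH x HEIGHT = POINTS:&lt;br /&gt;
&lt;br /&gt;

```cpp
#include <sstream>
#include <string>

// Hypothetical helper: scan a PCD header and return the integer value
// of a given field (e.g. "WIDTH"), or -1 if the field is not present.
int pcdHeaderField(const std::string& header, const std::string& field)
{
	std::istringstream stream(header);
	std::string line;
	while (std::getline(stream, line))
	{
		std::istringstream lineStream(line);
		std::string key;
		lineStream >> key;
		if (key == field)
		{
			int value = -1;
			lineStream >> value;
			return value;
		}
	}
	return -1;
}
```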
&lt;br /&gt;
===Reading from file===&lt;br /&gt;
&lt;br /&gt;
Reading a point cloud from a PCD file and storing it in memory can be done with a single function call. You must specify the correct point type. For example, for .pcd files saved with the OpenNI viewer application that we saw in the [[PCL/OpenNI tutorial 1: Installing and testing | previous tutorial]], you could use either &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGBA&amp;quot;&amp;lt;/span&amp;gt; or, if you are not going to use the color information, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I have not bothered to include proper error/argument checking, but you should do it in your programs. Be sure to check the [http://docs.pointclouds.org PCL API] every time I introduce you to one of its classes or methods.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/reading_pcd.php Reading Point Cloud data from PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_read.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Writing to file===&lt;br /&gt;
&lt;br /&gt;
Saving a point cloud to a file is as easy as reading it. You can choose between plain readable (ASCII) and binary format. The first can be read and altered with a text editor; the second is faster.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Write it back to disk under a different name.&lt;br /&gt;
	// Another possibility would be &amp;quot;savePCDFileBinary()&amp;quot;.&lt;br /&gt;
	pcl::io::savePCDFileASCII(&amp;quot;output.pcd&amp;quot;, *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/writing_pcd.php Writing Point Cloud data to PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_write.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you want a better alternative for loading or saving PCD files (some of the functions we saw are a bit outdated, but they do the job), take a look at the [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_reader.html PCDReader] and [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_writer.html PCDWriter] classes.&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
Of course, PCL ships with a dedicated program to load and visualize point clouds and a lot of other stuff, &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''pcl_viewer''&amp;lt;/span&amp;gt;. You can run it from a terminal with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_viewer &amp;lt;file 1&amp;gt; [file 2] ... [file N]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you load several files, each cloud will be rendered with a different color to help you tell them apart. Internally, the program uses the [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_visualizer.html pcl::visualization::PCLVisualizer] class. Press the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''H'''&amp;lt;/span&amp;gt; key while it is running to print the help on the terminal. Keys I tend to find useful are &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''1'''&amp;lt;/span&amp;gt;, which chooses a different color for the cloud (useful if the current one is too hard to see), &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''R'''&amp;lt;/span&amp;gt;, which resets the camera, and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''J'''&amp;lt;/span&amp;gt;, which saves a PNG snapshot.&lt;br /&gt;
&lt;br /&gt;
In your own code, it is possible to iterate through the cloud and print the information of each point to console, but this is cumbersome. Sometimes you just want to visualize the cloud to check something. PCL can create a window for you and display it, letting users pan and zoom their way around. The simplest method is the following one:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Cloud viewer object. You can set the window title.&lt;br /&gt;
	pcl::visualization::CloudViewer viewer(argv[1]);&lt;br /&gt;
	viewer.showCloud(cloud);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		// Do nothing but wait.&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CloudViewer&amp;quot;&amp;lt;/span&amp;gt; has fewer features than &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLVisualizer&amp;quot;&amp;lt;/span&amp;gt; but it is easier to set up. The program will remain active until you close the window with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt; or press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/cloud_viewer.php The CloudViewer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/pcl_visualizer.php PCLVisualizer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/walkthrough.php Visualization library overview]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_cloud_viewer.html pcl::visualization::CloudViewer]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_visualize_cloud.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Concatenating two clouds==&lt;br /&gt;
&lt;br /&gt;
There are two types of concatenation with point clouds:&lt;br /&gt;
&lt;br /&gt;
* '''Points''':  You initially have two clouds, &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;, with &amp;quot;N&amp;quot; and &amp;quot;M&amp;quot; points each. A third cloud is created (&amp;quot;C&amp;quot;), that has &amp;quot;N + M&amp;quot; points (all points of the first two clouds). For this to work, the point type must be the same for all clouds (&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt;, or whatever).&lt;br /&gt;
* '''Fields''': With this, you can join the fields of two clouds with different types. The only constraint is that both must have the same number of points. For example, if you have a cloud of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; type and another of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type with &amp;quot;N&amp;quot; points each, you can concatenate them to create a cloud of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointNormal&amp;quot;&amp;lt;/span&amp;gt; (that stores X, Y and Z coordinates, plus the normal) that also has &amp;quot;N&amp;quot; points.&lt;br /&gt;
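The two modes can be pictured with plain standard-library vectors. This is only a conceptual sketch with hypothetical helpers, not PCL's API (PCL's own calls follow below): point concatenation appends one list to another, field concatenation zips two equally sized lists into records of a combined type.&lt;br /&gt;
&lt;br /&gt;

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Point concatenation: "C" gets the N + M points of "A" and "B".
std::vector<float> concatenatePoints(const std::vector<float>& a,
                                     const std::vector<float>& b)
{
	std::vector<float> c = a;
	c.insert(c.end(), b.begin(), b.end());
	return c;
}

// Field concatenation: two clouds with the same number of points but
// different fields are combined into one cloud of a joint type
// (here, a pair standing in for PointXYZ + Normal = PointNormal).
std::vector<std::pair<float, float>> concatenateFieldsSketch(
	const std::vector<float>& coordinates,
	const std::vector<float>& normals)
{
	std::vector<std::pair<float, float>> combined;
	for (std::size_t i = 0; i < coordinates.size(); i++)
		combined.push_back({coordinates[i], normals[i]});
	return combined;
}
```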
&lt;br /&gt;
Point concatenation is the easiest because the cloud class overloads the addition operator (+).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudC(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Create cloud &amp;quot;C&amp;quot;, with the points of both &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;.&lt;br /&gt;
	*cloudC = (*cloudA) + (*cloudB);&lt;br /&gt;
	// Now cloudC-&amp;gt;points.size() equals cloudA-&amp;gt;points.size() + cloudB-&amp;gt;points.size().&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_clouds.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Field concatenation is done with a function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudPoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr cloudNormals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudPoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Compute the normals of the cloud (do not worry, we will see this later).&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloudPoints);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.05);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*cloudNormals);&lt;br /&gt;
&lt;br /&gt;
	// Concatenate the fields (PointXYZ + Normal = PointNormal).&lt;br /&gt;
	pcl::concatenateFields(*cloudPoints, *cloudNormals, *cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Print the data to standard output.&lt;br /&gt;
	for (size_t currentPoint = 0; currentPoint &amp;lt; cloudAll-&amp;gt;points.size(); currentPoint++)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tXYZ:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].x &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].y &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tNormal:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[0] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[1] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[2] &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__common.html#gac6add803f86fd16a998dce541e9ef402 pcl::concatenateFields()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_fields.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Matrix transformations==&lt;br /&gt;
&lt;br /&gt;
Clouds can be directly manipulated with [https://en.wikipedia.org/wiki/Transformation_matrix transformations] (translation, rotation, scaling). 3D transformations use a 4x4 matrix, which can be a little complicated to understand (especially when it comes to rotations), so consult a book or [http://web.iitd.ac.in/~hegde/cad/lecture/L6_3dtrans.pdf tutorial] before trying to use one. The transformed cloud is saved to another object, so you can keep both versions and use them as you wish.&lt;br /&gt;
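As a refresher on the matrix layout: the top-left 3x3 block holds the rotation (and scaling), the rightmost column holds the translation, and each point is treated as a homogeneous vector with w = 1. The sketch below applies such a matrix by hand with plain arrays (illustrative only; in practice PCL uses Eigen for this, as the example further down shows):&lt;br /&gt;
&lt;br /&gt;

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;
using Mat4 = std::array<std::array<float, 4>, 4>;

// Apply a 4x4 homogeneous transformation to a 3D point: the top-left
// 3x3 block rotates, the last column translates (w is implicitly 1).
Vec3 transformPoint(const Mat4& m, const Vec3& p)
{
	Vec3 result{};
	for (int row = 0; row < 3; row++)
	{
		result[row] = m[row][0] * p[0] + m[row][1] * p[1]
		            + m[row][2] * p[2] + m[row][3];
	}
	return result;
}

// Identity matrix plus a rotation of "theta" radians around the Z axis
// and a translation on the X axis.
Mat4 makeTransformation(float theta, float translationX)
{
	Mat4 m{};
	for (int i = 0; i < 4; i++) m[i][i] = 1.0f; // Start from identity.
	m[0][0] = std::cos(theta); m[0][1] = -std::sin(theta);
	m[1][0] = std::sin(theta); m[1][1] = std::cos(theta);
	m[0][3] = translationX;
	return m;
}
```

Rotating (1, 0, 0) by 90 degrees around Z yields (0, 1, 0); the X translation then shifts the result along X, which is what the PCL example below does to a whole cloud in one call.&lt;br /&gt;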
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/transforms.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr transformed(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Transformation matrix object, initialized to the identity matrix&lt;br /&gt;
	// (a null transformation).&lt;br /&gt;
	Eigen::Matrix4f transformation = Eigen::Matrix4f::Identity();&lt;br /&gt;
&lt;br /&gt;
	// Set a rotation around the Z axis.&lt;br /&gt;
	float theta = M_PI / 2; // 90 degrees.&lt;br /&gt;
	transformation(0, 0) = cos(theta);&lt;br /&gt;
	transformation(0, 1) = -sin(theta);&lt;br /&gt;
	transformation(1, 0) = sin(theta);&lt;br /&gt;
	transformation(1, 1) = cos(theta);&lt;br /&gt;
&lt;br /&gt;
	// Set a translation on the X axis.&lt;br /&gt;
	transformation(0, 3) = 1.0f; // 1 meter (positive direction).&lt;br /&gt;
&lt;br /&gt;
	pcl::transformPointCloud(*cloud, *transformed, transformation);&lt;br /&gt;
&lt;br /&gt;
	// Visualize both the original and the result.&lt;br /&gt;
	pcl::visualization::PCLVisualizer viewer(argv[1]);&lt;br /&gt;
	viewer.addPointCloud(cloud, &amp;quot;original&amp;quot;);&lt;br /&gt;
	// The transformed one's points will be red in color.&lt;br /&gt;
	pcl::visualization::PointCloudColorHandlerCustom&amp;lt;pcl::PointXYZ&amp;gt; colorHandler(transformed, 255, 0, 0);&lt;br /&gt;
	viewer.addPointCloud(transformed, colorHandler, &amp;quot;transformed&amp;quot;);&lt;br /&gt;
	// Add 3D colored axes to help see the transformation.&lt;br /&gt;
	viewer.addCoordinateSystem(1.0, &amp;quot;reference&amp;quot;, 0);&lt;br /&gt;
&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Cloud_transformation.png|thumb|center|600px|Rotation and translation applied to a cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the rest of the functions, because you can also use a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Affine3&amp;quot;&amp;lt;/span&amp;gt; or a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Quaternion&amp;quot;&amp;lt;/span&amp;gt; instead of a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Matrix4&amp;quot;&amp;lt;/span&amp;gt; object.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Transformation&lt;br /&gt;
* '''Output''': Transformed points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/matrix_transform.php Using a matrix to transform a point cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#aefbe4956d1c8fb785a97df6708d57c56 pcl::transformPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_matrix_transform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Using indices==&lt;br /&gt;
&lt;br /&gt;
Many of PCL's algorithms return indices. Indices are just a list of some of the points of a cloud (not the points themselves with all their data, but just their index in the cloud). For example, a cylinder segmentation algorithm will give you as output a list of points that were considered to fit the cylinder model. You can then use the indices to extract the points to a second cloud for further work with the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::ExtractIndices&amp;quot;&amp;lt;/span&amp;gt; class. Its &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; function will instruct it to extract the points NOT indexed (so you can remove all cylinder-shaped clusters from the cloud).&lt;br /&gt;
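The idea (a list of positions into the cloud, plus a &amp;quot;negative&amp;quot; mode that keeps everything else) can be sketched in a few lines of standard C++. This is a hypothetical stand-in to show the concept, not PCL's &amp;quot;ExtractIndices&amp;quot; implementation:&lt;br /&gt;
&lt;br /&gt;

```cpp
#include <cstddef>
#include <vector>

// Copy either the indexed points or, in "negative" mode, all the
// points that are NOT in the index list (like setNegative(true)).
std::vector<float> extractByIndices(const std::vector<float>& cloud,
                                    const std::vector<std::size_t>& indices,
                                    bool negative)
{
	std::vector<bool> indexed(cloud.size(), false);
	for (std::size_t i : indices)
		indexed[i] = true;

	std::vector<float> extracted;
	for (std::size_t i = 0; i < cloud.size(); i++)
		if (indexed[i] != negative) // Keep indexed, or keep the rest.
			extracted.push_back(cloud[i]);
	return extracted;
}
```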
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/extract_indices.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudExtracted(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudAll) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Plane segmentation (do not worry, we will see this later).&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	segmentation.setInputCloud(cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the indices.&lt;br /&gt;
	pcl::PointIndices::Ptr pointIndices(new pcl::PointIndices);&lt;br /&gt;
&lt;br /&gt;
	segmentation.segment(*pointIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	// Object for extracting points from a list of indices.&lt;br /&gt;
	pcl::ExtractIndices&amp;lt;pcl::PointXYZ&amp;gt; extract;&lt;br /&gt;
	extract.setInputCloud(cloudAll);&lt;br /&gt;
	extract.setIndices(pointIndices);&lt;br /&gt;
	// We will extract the points that are NOT indexed (the ones that are not in a plane).&lt;br /&gt;
	extract.setNegative(true);&lt;br /&gt;
	extract.filter(*cloudExtracted);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Indices, [Negative mode]&lt;br /&gt;
* '''Output''': Points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/extract_indices.php Extracting indices from a PointCloud]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_point_indices.html pcl::PointIndices]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_extract_indices.html pcl::ExtractIndices]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_extract_indices.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Removing NaNs==&lt;br /&gt;
&lt;br /&gt;
The cloud retrieved from the sensor may contain several kinds of measurement errors and/or inaccuracies. One of them is the presence of [https://en.wikipedia.org/wiki/NaN NaN (Not a Number)] values in the coordinates of some points, as you can see in the following file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;# .PCD v0.7 - Point Cloud Data file format&lt;br /&gt;
VERSION 0.7&lt;br /&gt;
FIELDS x y z rgba&lt;br /&gt;
SIZE 4 4 4 4&lt;br /&gt;
TYPE F F F U&lt;br /&gt;
COUNT 1 1 1 1&lt;br /&gt;
WIDTH 640&lt;br /&gt;
HEIGHT 480&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 307200&lt;br /&gt;
DATA ascii&lt;br /&gt;
nan nan nan 10135463&lt;br /&gt;
nan nan nan 10398635&lt;br /&gt;
nan nan nan 10070692&lt;br /&gt;
nan nan nan 10268071&lt;br /&gt;
...&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cloud objects have a public member called &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;is_dense&amp;quot;&amp;lt;/span&amp;gt; that will be true if all points have valid, finite values. A NaN indicates that there was a problem measuring the distance to that point, maybe because the sensor was too close or too far, or because the surface was reflective.&lt;br /&gt;
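Conceptually, the density check is just a scan of every coordinate for non-finite values. A minimal standard-library sketch of that idea (not PCL's implementation):&lt;br /&gt;
&lt;br /&gt;

```cpp
#include <cmath>
#include <vector>

// A cloud is "dense" if no coordinate is NaN or infinite.
bool isDense(const std::vector<float>& coordinates)
{
	for (float value : coordinates)
		if (!std::isfinite(value))
			return false;
	return true;
}
```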
&lt;br /&gt;
Sadly, many of PCL's algorithms will fail if even a single NaN is present in the cloud you feed them as input, with a message like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Assertion `point_representation_-&amp;gt;isValid (point) &amp;amp;&amp;amp; &amp;quot;Invalid (NaN, Inf) point coordinates given to radiusSearch!&amp;quot;' failed.&amp;quot;''&amp;lt;/span&amp;gt;. If this happens, you will have to remove such values from the cloud. You can use a program like the following to &amp;quot;clean&amp;quot; a cloud from all NaN points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/filter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	if (argc != 3)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tUsage: &amp;quot; &amp;lt;&amp;lt; argv[0] &amp;lt;&amp;lt; &amp;quot; &amp;lt;input cloud&amp;gt; &amp;lt;output cloud&amp;gt;&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// The mapping tells you which points of the old cloud the new ones correspond to,&lt;br /&gt;
	// but we will not use it.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; mapping;&lt;br /&gt;
	pcl::removeNaNFromPointCloud(*cloud, *cloud, mapping);&lt;br /&gt;
&lt;br /&gt;
	// Save it back.&lt;br /&gt;
	pcl::io::savePCDFileASCII(argv[2], *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem with this method is that it will not keep the cloud organized. All clouds store a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;width&amp;quot;&amp;lt;/span&amp;gt; and a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;height&amp;quot;&amp;lt;/span&amp;gt; variable. In ''unorganized'' clouds, the width equals the total number of points, and the height is set to 1. In ''organized'' clouds (like the ones taken from camera-like sensors such as Kinect or Xtion), the width and the height match the pixel resolution of the image the sensor works with. Points are laid out in rows, like in the depth image, and every one of them corresponds to a pixel. The member function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;isOrganized()&amp;quot;&amp;lt;/span&amp;gt; will return true if the height is greater than 1.&lt;br /&gt;
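&lt;br /&gt;
For example, you can inspect these members with a few lines like the following (a minimal sketch; &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;cloud&amp;quot;&amp;lt;/span&amp;gt; is assumed to be an already loaded cloud object):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Print the cloud's dimensions, and whether it is organized.&lt;br /&gt;
std::cout &amp;lt;&amp;lt; &amp;quot;Width: &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;width &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
std::cout &amp;lt;&amp;lt; &amp;quot;Height: &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;height &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
std::cout &amp;lt;&amp;lt; &amp;quot;Organized: &amp;quot; &amp;lt;&amp;lt; (cloud-&amp;gt;isOrganized() ? &amp;quot;yes&amp;quot; : &amp;quot;no&amp;quot;) &amp;lt;&amp;lt; std::endl;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;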
&lt;br /&gt;
Because removing NaNs will change the number of points of the cloud, it can no longer be kept organized with the original width/height ratio, so the function will set the height to 1. This is not a big issue, as only a few of PCL's algorithms work explicitly with organized clouds (and they mostly use it for optimization), but you must take it into account.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Cloud&lt;br /&gt;
* '''Output''': Cloud without NaNs&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__filters.html#gac463283a9e9c18a66d3d29b28a575064 pcl::removeNaNFromPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_remove_NaNs.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Saving a PNG==&lt;br /&gt;
&lt;br /&gt;
PCL offers the possibility of saving the values of a point cloud to a PNG image file. Obviously, this can only be done with organized clouds, because the rows and columns of the resulting image will correspond exactly to the ones of the cloud. For example, if you have a point cloud taken from a sensor like Kinect or Xtion, you can use this to retrieve the 640x480 RGB image that matches the cloud.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/png_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGB&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Save the image (cloud must be organized).&lt;br /&gt;
	pcl::io::savePNGFile(&amp;quot;output.png&amp;quot;, *cloud, &amp;quot;rgb&amp;quot;);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;savePNGFile()&amp;quot;&amp;lt;/span&amp;gt; has several overloads; in this case, we have chosen the one that lets you specify which fields to save. If you omit that parameter, the function will save the RGB fields by default.&lt;br /&gt;
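&lt;br /&gt;
For example (same assumptions as the program above), the call that relies on that default can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Save the RGB fields (the default when no field name is given).&lt;br /&gt;
pcl::io::savePNGFile(&amp;quot;output.png&amp;quot;, *cloud);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;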
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__io.html#gabd03e43fac3048635512f447c8bc50b3 pcl::io::savePNGFile()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_save_PNG.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computing the centroid==&lt;br /&gt;
&lt;br /&gt;
The [https://en.wikipedia.org/wiki/Centroid centroid] of a cloud is the point whose coordinates are the mean of the coordinates of all points in the cloud. You could say it is the &amp;quot;center of mass&amp;quot;, and several algorithms take it as input. If you intend to compute the actual center of gravity of a [[PCL/OpenNI_tutorial_3:_Cloud_processing_%28advanced%29#Segmentation|clustered]] object, though, remember that the sensor does not capture the part of the object that is hidden from the camera, like the back faces that are occluded by the front ones, or the inside. You only have the part of the surface that faces the camera.&lt;br /&gt;
&lt;br /&gt;
The centroid of a cloud can be computed with a single function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/centroid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object to store the centroid coordinates.&lt;br /&gt;
	Eigen::Vector4f centroid;&lt;br /&gt;
	&lt;br /&gt;
	pcl::compute3DCentroid(*cloud, centroid);&lt;br /&gt;
&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;The XYZ coordinates of the centroid are: (&amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[0] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[1] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[2] &amp;lt;&amp;lt; &amp;quot;).&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#a23daec3829d2d4100a2f185372b3753a pcl::compute3DCentroid()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_compute_centroid.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Feature estimation=&lt;br /&gt;
&lt;br /&gt;
A ''feature'' of a point is some characteristic that can help us tell it apart from a different point. Of course, you could say that the XYZ coordinates would be characteristic enough, but when matching points across two different, yet corresponding, clouds, they become useless. We need something better, something that will have similar values when computed for the same spot of a surface in both sets.&lt;br /&gt;
&lt;br /&gt;
Normals are an example of a very simple feature. 3D [http://pointclouds.org/documentation/tutorials/how_features_work.php features] will be important later in our tutorials, when we talk about [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]] (more detailed &amp;quot;signatures&amp;quot; of points used for close matching).&lt;br /&gt;
&lt;br /&gt;
==Normal estimation==&lt;br /&gt;
&lt;br /&gt;
As you may remember from geometry class, the [https://en.wikipedia.org/wiki/Surface_normal normal] of a plane is a unit vector that is perpendicular to it. The normal of a surface at a point is defined as the vector that is perpendicular to the plane that is tangent to the surface at that point. Surface normals can be calculated for the points of a cloud, too. They are considered a feature, albeit not a very discriminative one.&lt;br /&gt;
&lt;br /&gt;
I will not go into detail about the math of the estimation method; you just have to know that it uses the nearest neighbors (the points that are closest to the one we are calculating the normal for) to find the tangent plane and the normal vector. You can customize the method with the search radius (think of a sphere of that radius, centered on the point; all neighboring points that lie within will be used for the computation) and the viewpoint (by default, the output normals are directionless; by assuming that all vectors must point towards the camera - because otherwise they would belong to surfaces that are not visible from the sensor - they can all be re-oriented accordingly).&lt;br /&gt;
&lt;br /&gt;
Normals are also important because they give us information about the ''curvature'' of the surface at some point, which can be used to our advantage. Many of PCL's algorithms will require us to provide the normals of the input cloud. To estimate them, you can use the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// For every point, use all neighbors in a radius of 3cm.&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	// A kd-tree is a data structure that makes searches efficient. More about it later.&lt;br /&gt;
	// The normal estimation object will use it to find nearest neighbors.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, normals are stored in &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointCloud&amp;quot;&amp;lt;/span&amp;gt; objects too, instantiated to the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type. If you ever extract some indices from a cloud before feeding them to an algorithm, be sure to do the same with the normals. I have not made use of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setViewPoint()&amp;quot;&amp;lt;/span&amp;gt; function, but you get the idea.&lt;br /&gt;
&lt;br /&gt;
Instead of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setRadiusSearch()&amp;quot;&amp;lt;/span&amp;gt;, you can make use of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setKSearch()&amp;quot;&amp;lt;/span&amp;gt;, which takes an integer, ''K''. The normal of a point is then computed from its ''K'' nearest neighbors. Bear in mind that this can produce odd results for points in sparse areas, as some of the &amp;quot;neighbors&amp;quot; will be too far from them.&lt;br /&gt;
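&lt;br /&gt;
For example, replacing the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setRadiusSearch()&amp;quot;&amp;lt;/span&amp;gt; line of the previous program with the following would use a fixed number of neighbors instead (the value 25 is an arbitrary choice, for illustration only):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// For every point, use its 25 nearest neighbors, however far away they are.&lt;br /&gt;
normalEstimation.setKSearch(25);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;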
&lt;br /&gt;
&lt;br /&gt;
[[Image:Normal_estimation.png|thumb|center|600px|Visualization of the normals computed for a point cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Radius | Neighbor count, [Indices], [Viewpoint]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation.php Estimating Surface Normals in a Point Cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_normal_estimation.html pcl::NormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Integral images===&lt;br /&gt;
&lt;br /&gt;
Integral images are a method for normal estimation that only works on organized clouds. The algorithm treats the cloud as a depth image and creates rectangular regions over which the normals are computed, exploiting the relationship between neighboring &amp;quot;pixels&amp;quot; (points) and avoiding lookups in a search structure like a tree. Because of this, it is very efficient, and the normals are usually computed in a split second. If you are performing real-time computations on clouds taken from an RGB-D sensor, this is the approach you should use, because the previous method will probably take several seconds to finish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/integral_image_normal.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::IntegralImageNormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// Other estimation methods: COVARIANCE_MATRIX, AVERAGE_DEPTH_CHANGE, SIMPLE_3D_GRADIENT.&lt;br /&gt;
	// They determine the smoothness of the result, and the running time.&lt;br /&gt;
	normalEstimation.setNormalEstimationMethod(normalEstimation.AVERAGE_3D_GRADIENT);&lt;br /&gt;
	// Depth threshold for computing object borders based on depth changes, in meters.&lt;br /&gt;
	normalEstimation.setMaxDepthChangeFactor(0.02f);&lt;br /&gt;
	// Factor that influences the size of the area used to smooth the normals.&lt;br /&gt;
	normalEstimation.setNormalSmoothingSize(10.0f);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Estimation method, [Smoothing factor], [Depth threshold], [Rectangle size], [Border policy], [Depth dependent smoothing]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation_using_integral_images.php Normal Estimation Using Integral Images]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://islab.ulsan.ac.kr/files/announcement/411/Adaptive%20Neighborhood%20Selection%20for%20Real-Time%20Surface%20Normal.pdf Adaptive Neighborhood Selection for Real-Time Surface Normal Estimation from Organized Point Cloud Data Using Integral Images] (S. Holzer et al., 2012)&lt;br /&gt;
** [https://www.willowgarage.com/sites/default/files/holz_rgbd_seg.pdf Real-Time Plane Segmentation using RGB-D Cameras] (Dirk Holz et al., 2012)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_integral_image_normal_estimation.html pcl::IntegralImageNormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals_integral.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Decomposition=&lt;br /&gt;
&lt;br /&gt;
When I talk about ''decomposition'', I mean the process of creating an ordered, organized data structure from the points of the cloud, with some purpose in mind, like making further analysis easier or applying filters.&lt;br /&gt;
&lt;br /&gt;
==''k''-d tree==&lt;br /&gt;
&lt;br /&gt;
A [https://en.wikipedia.org/wiki/K-d_tree ''k''-d tree] (''k''-dimensional tree) is a data structure that organizes a set of points in a ''k''-dimensional space, in a way that makes range search operations very efficient (for example, finding the nearest neighbor of a point, which is the point that is closest to it in space; or finding all neighbors within a radius).&lt;br /&gt;
&lt;br /&gt;
It is a binary tree, that is, every non-leaf node has exactly two children, the &amp;quot;left&amp;quot; and &amp;quot;right&amp;quot; ones. Each level splits space along a specific dimension. For example, in 3-dimensional space, the root node (first level) would split the points based on the first dimension, X (points with greater X coordinates would go to the right subtree, points with lesser values to the left one). At the second level (the nodes we just created), the split would be done on the Y axis, following the same criterion. At the third level (the grandchildren), we would use the Z axis. At the fourth level, we would get back to the X axis, and so on. Usually, the median point is chosen as the root at every level.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Kdtree.png|thumb|center|700px|Example of a 2D ''k''d-tree (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In PCL, creating a ''k''-d tree from a cloud is a straightforward and fast operation.&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
The following code snippet shows you how to use a ''k''-d tree to perform efficient searches:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/search/kdtree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// kd-tree object.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt; kdtree;&lt;br /&gt;
	kdtree.setInputCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of this point&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices(5);&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances(5);&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (kdtree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	if (kdtree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative is &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;KdTreeFLANN&amp;quot;&amp;lt;/span&amp;gt;, which makes use of [http://www.cs.ubc.ca/research/flann/ FLANN] (Fast Library for Approximate Nearest Neighbors).&lt;br /&gt;
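&lt;br /&gt;
Its interface is almost identical to the one shown above; a minimal sketch (assuming the same &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;cloud&amp;quot;&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;point&amp;quot;&amp;lt;/span&amp;gt; variables as the previous program) would be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/kdtree/kdtree_flann.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// FLANN-based kd-tree object.&lt;br /&gt;
pcl::KdTreeFLANN&amp;lt;pcl::PointXYZ&amp;gt; kdtree;&lt;br /&gt;
kdtree.setInputCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
// Find the 5 nearest neighbors of &amp;quot;point&amp;quot;, as before.&lt;br /&gt;
std::vector&amp;lt;int&amp;gt; pointIndices(5);&lt;br /&gt;
std::vector&amp;lt;float&amp;gt; squaredDistances(5);&lt;br /&gt;
kdtree.nearestKSearch(point, 5, pointIndices, squaredDistances);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;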
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/kdtree_search.php How to use a KdTree to search]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_kd_tree.html pcl::search::KdTree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_kd_tree_f_l_a_n_n.html pcl::KdTreeFLANN]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_kdtree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Octree==&lt;br /&gt;
&lt;br /&gt;
Like the ''k''-d tree, the [https://en.wikipedia.org/wiki/Octree octree] is a hierarchical tree data structure useful for searches, but also for compression or downsampling. Each octree node (called a ''voxel'', a term that will sound familiar to users of 3D engines, and which could be described as a &amp;quot;3D pixel&amp;quot;, that is, a cube) has either eight children or none. The root node describes a cubic bounding box that encapsulates all points. At every level down the tree, it is subdivided by a factor of 2, which results in an increased voxel resolution. That way, we can selectively give more resolution to certain zones.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Octree.png|thumb|center|700px|Depiction of an octree (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
Downsampling will be covered in a different section, so now I will show you how to perform searches with an octree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree object, with a resolution of 128&lt;br /&gt;
	// (side length of the voxels at the lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudSearch&amp;lt;pcl::PointXYZ&amp;gt; octree(128);&lt;br /&gt;
	octree.setInputCloud(cloud);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	// We will find all neighbors within the same voxel as this point.&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices;&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (octree.voxelSearch(point, pointIndices))&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors in the same voxel:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of the point.&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances;&lt;br /&gt;
	if (octree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	// The point DOES have to belong in the cloud.&lt;br /&gt;
	if (octree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, the code is mostly interchangeable with that of the ''k''-d tree.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree.php Spatial Partitioning and Search Operations with Octrees]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_octree.html pcl::search::Octree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_search.html pcl::octree::OctreePointCloudSearch]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Compressing/Decompressing===&lt;br /&gt;
&lt;br /&gt;
On the other hand, this is the code needed to compress and decompress a point cloud with PCL:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/compression/octree_pointcloud_compression.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr decompressedCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree compressor object.&lt;br /&gt;
	// Check /usr/include/pcl-&amp;lt;version&amp;gt;/pcl/compression/compression_profiles.h for more profiles.&lt;br /&gt;
	// The second parameter enables the output of information.&lt;br /&gt;
	pcl::io::OctreePointCloudCompression&amp;lt;pcl::PointXYZ&amp;gt; octreeCompression(pcl::io::MED_RES_ONLINE_COMPRESSION_WITHOUT_COLOR, true);&lt;br /&gt;
	// Stringstream that will hold the compressed cloud.&lt;br /&gt;
	std::stringstream compressedData;&lt;br /&gt;
&lt;br /&gt;
	// Compress the cloud (you would save the stream to disk).&lt;br /&gt;
	octreeCompression.encodePointCloud(cloud, compressedData);&lt;br /&gt;
&lt;br /&gt;
	// Decompress the cloud.&lt;br /&gt;
	octreeCompression.decodePointCloud(compressedData, decompressedCloud);&lt;br /&gt;
&lt;br /&gt;
	// Display the decompressed cloud.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Octree compression&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(decompressedCloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeCompression.png | Original point cloud.&lt;br /&gt;
File:AfterCompression.png | Point cloud retrieved after octree compression.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the process has decreased the number of points in the cloud. The default parameters of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;OctreePointCloudCompression&amp;quot;&amp;lt;/span&amp;gt; object establish an octree resolution (at the lowest level) of 1cm, and a coordinate precision of 1mm. These values can of course be changed to suit your needs, but simplifying the cloud in this way is a useful operation that we will cover later.&lt;br /&gt;
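&lt;br /&gt;
For example, here is a sketch of building a compressor with custom parameters (the constructor argument order shown here is an assumption; check the compression headers of your PCL version for the exact signature):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Sketch (hypothetical parameter values): manual profile, no statistics output,&lt;br /&gt;
// 1mm coordinate precision and 5mm octree resolution.&lt;br /&gt;
pcl::io::OctreePointCloudCompression&amp;lt;pcl::PointXYZ&amp;gt; compressor(pcl::io::MANUAL_CONFIGURATION, false, 0.001, 0.005);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;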
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Resolution, Compression profile&lt;br /&gt;
* '''Output''': Compressed stream&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/compression.php Point Cloud Compression]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1io_1_1_octree_point_cloud_compression.html pcl::io::OctreePointCloudCompression]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_compress.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Detecting changes===&lt;br /&gt;
&lt;br /&gt;
PCL has an octree-based object that lets you detect changes between two point clouds (for example, two clouds taken one after another with an OpenNI device). This object stores the trees of both clouds at the same time and compares them, returning a list of indices of the points that are new in the second cloud (that is, they did not exist in the first):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Change detector object, with a resolution of 128&lt;br /&gt;
	// (resolution at lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudChangeDetector&amp;lt;pcl::PointXYZ&amp;gt; octree(128.0f);&lt;br /&gt;
&lt;br /&gt;
	// Add cloudA to the octree.&lt;br /&gt;
	octree.setInputCloud(cloudA);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
	// The change detector object is able to store two clouds at the same time;&lt;br /&gt;
	// with this we can reset the buffer but keep the previous tree saved.&lt;br /&gt;
	octree.switchBuffers();&lt;br /&gt;
	// Now, add cloudB to the octree like we did before.&lt;br /&gt;
	octree.setInputCloud(cloudB);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; newPoints;&lt;br /&gt;
	// Get a vector with the indices of all points that are new in cloud B,&lt;br /&gt;
	// when compared with the ones that existed in cloud A.&lt;br /&gt;
	octree.getPointIndicesFromNewVoxels(newPoints);&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; newPoints.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point (&amp;quot; &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].x &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].y &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].z&lt;br /&gt;
				  &amp;lt;&amp;lt; &amp;quot;) was not in cloud A but is in cloud B&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; newPoints.size() &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
} &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': First cloud, Second cloud, Resolution&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree_change.php Spatial change detection on unorganized point cloud data]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_change_detector.html pcl::octree::OctreePointCloudChangeDetector]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_change_detection.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Filtering=&lt;br /&gt;
&lt;br /&gt;
The raw point cloud, as returned by the sensor, usually needs some kind of polishing before being handed to processing algorithms. Normally, this is due to the low precision of the sensor, which leads to noise, incorrect measurements, holes in surfaces, and so on. Sometimes it is just the opposite: the cloud has so many points that processing it would take forever. Luckily, several filtering methods exist to &amp;quot;fix&amp;quot; many recurring errors in point clouds.&lt;br /&gt;
&lt;br /&gt;
==Passthrough filter==&lt;br /&gt;
&lt;br /&gt;
A passthrough filter will remove any point from the cloud whose values do not fall within a user-specified range. For example, if you wanted to discard all points farther away than, say, 3 meters, you would run the filter on the Z coordinate with a range of [0,3]. This filter can be useful to discard unneeded objects from the cloud, but you may have to adopt a different reference frame if the default one (relative to the sensor) is not suitable. For example, filtering on the Y value to remove all points not sitting on a table will yield unwanted results if the camera was at an odd angle.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/passthrough.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::PassThrough&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Filter out all points with Z values not in the [0-2] range.&lt;br /&gt;
	filter.setFilterFieldName(&amp;quot;z&amp;quot;);&lt;br /&gt;
	filter.setFilterLimits(0.0, 2.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforePassthrough.png | Original point cloud.&lt;br /&gt;
File:AfterPassthrough.png | Point cloud after applying a passthrough filter with Z in [0,2].&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setFilterLimitsNegative()&amp;quot;&amp;lt;/span&amp;gt; function (not seen in the example) will tell the filter object whether to return the points inside the range (&amp;quot;false&amp;quot;, the default value) or outside it (&amp;quot;true&amp;quot;).&lt;br /&gt;
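&lt;br /&gt;
As a minimal sketch, reusing the filter object from the example above, inverting the passthrough filter takes a single extra call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Keep the points OUTSIDE the [0-2] range instead of inside it.&lt;br /&gt;
filter.setFilterLimitsNegative(true);&lt;br /&gt;
filter.filter(*filteredCloud);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;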
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Filter field, Filter limits, [Negative mode]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/passthrough.php Filtering a PointCloud using a PassThrough filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_pass_through.html pcl::PassThrough]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_passthrough_filter.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conditional removal==&lt;br /&gt;
&lt;br /&gt;
Another way of getting exactly the same result as in the previous example is to use a conditional removal filter. We can build any kind of condition we want on the values of the point:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/conditional_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We must build a condition.&lt;br /&gt;
	// An &amp;quot;And&amp;quot; condition requires all tests to be true; &amp;quot;Or&amp;quot; conditions are also available.&lt;br /&gt;
	pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;::Ptr condition(new pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// First test, the point's Z value must be greater than (GT) 0.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::GT, 0.0)));&lt;br /&gt;
	// Second test, the point's Z value must be less than (LT) 2.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::LT, 2.0)));&lt;br /&gt;
	// Checks available: GT, GE, LT, LE, EQ.&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::ConditionalRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setCondition(condition);&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// If true, points that do not pass the filter will be set to a certain value (default NaN).&lt;br /&gt;
	// If false, they will be just removed, but that could break the structure of the cloud&lt;br /&gt;
	// (organized clouds are clouds taken from camera-like sensors that return a matrix-like image).&lt;br /&gt;
	filter.setKeepOrganized(true);&lt;br /&gt;
	// If keep organized was set to true, points that failed the test will have their values set to this instead of NaN.&lt;br /&gt;
	filter.setUserFilterValue(0.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Condition, [Keep organized]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_conditional_removal.html pcl::ConditionalRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_conditional_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Outlier removal==&lt;br /&gt;
&lt;br /&gt;
Outliers are lonely points that are spread here and there in the cloud, like annoying mosquitoes. They are a product of the sensor's inaccuracy, which registers measurements where there should not be any. Outliers are considered undesired noise because they can introduce errors in calculations, such as normal estimation. Hence, trimming them out of the cloud will not only make computations faster (because the fewer points there are, the less time it takes, as we will see with downsampling) but also help us get more precise values.&lt;br /&gt;
&lt;br /&gt;
===Radius-based===&lt;br /&gt;
&lt;br /&gt;
Radius-based outlier removal is the simplest method of all. You must specify a search radius and the minimum number of neighbors that a point must have to avoid being labelled as an outlier. The algorithm will then iterate through all points (which can be slow if the cloud is big) and perform the check: if fewer than that number of points are found within the radius, the point is removed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/radius_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::RadiusOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Every point must have 10 neighbors within 15cm, or it will be removed.&lt;br /&gt;
	filter.setRadiusSearch(0.15);&lt;br /&gt;
	filter.setMinNeighborsInRadius(10);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All PCL classes that inherit from &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::FilterIndices&amp;quot;&amp;lt;/span&amp;gt; (like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::RadiusOutlierRemoval&amp;quot;&amp;lt;/span&amp;gt;) provide the function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt;, which if set to true, makes the filter return the points that did NOT pass the filter.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius, Neighbor count&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_radius_outlier_removal.html pcl::RadiusOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_radius_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Statistical===&lt;br /&gt;
&lt;br /&gt;
The statistical outlier removal process is a bit more refined. First, for every point, the mean distance to its ''K'' nearest neighbors is computed. Then, if we assume that the result is a [https://en.wikipedia.org/wiki/Normal_distribution normal (gaussian) distribution] with a mean ''μ'' and a standard deviation ''σ'', we can deem it safe to remove all points whose mean distances fall outside the global mean plus deviation. Basically, it runs a statistical analysis of the distances between neighboring points, and trims all points which are not considered &amp;quot;normal&amp;quot; (you define what &amp;quot;normal&amp;quot; means with the parameters of the algorithm).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/statistical_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::StatisticalOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Set number of neighbors to consider to 50.&lt;br /&gt;
	filter.setMeanK(50);&lt;br /&gt;
	// Set standard deviation multiplier to 1.&lt;br /&gt;
	// Points with a mean distance larger than 1 standard deviation above the global mean distance will be considered outliers.&lt;br /&gt;
	filter.setStddevMulThresh(1.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can make use of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; with this filter too.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Neighbor count, Deviation multiplier&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/statistical_outlier.php Removing outliers using a StatisticalOutlierRemoval filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_statistical_outlier_removal.html pcl::StatisticalOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_statistical_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Resampling==&lt;br /&gt;
&lt;br /&gt;
Resampling means changing the number of points of the cloud, either by reducing (downsampling) or increasing (upsampling). Both have a purpose, as we will see.&lt;br /&gt;
&lt;br /&gt;
===Downsampling===&lt;br /&gt;
&lt;br /&gt;
A sensor like the Kinect or Xtion produces a cloud with 307200 (640x480) points. Believe it or not, that is a lot of information, and the new Kinect is already out there, working at a higher resolution. Keep in mind that, if we were to perform a simple operation on every point of a cloud, it would be ''O(n)'', n being the number of points. If we had to compare every point with its ''k'' nearest neighbors, it would be ''O(nk)''. You may have noticed by now that some of the algorithms I showed you take several seconds to run on your PC; now you know why.&lt;br /&gt;
&lt;br /&gt;
Optimization techniques are being implemented as I write this to allow PCL to employ the computer's GPU (graphics card) to perform the computations using [https://en.wikipedia.org/wiki/CUDA CUDA]. Many classes already provide an alternative in the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::gpu&amp;quot;&amp;lt;/span&amp;gt; namespace. This offers dramatic improvements in speed, but it is not always an option.&lt;br /&gt;
&lt;br /&gt;
Another option is to reduce the complexity of the cloud, that is, to remove all points that are not needed for the task at hand. There are several ways to do this. You could trim all outliers from the cloud, then perform segmentation to extract only the parts that interest you. A common operation is also downsampling, which will give you back a cloud that is, for all intents and purposes, equivalent to the original one, despite having fewer points.&lt;br /&gt;
&lt;br /&gt;
Downsampling is done via ''voxelization''. I already explained what voxels were in the [[#Octree | octree section]]. The cloud is divided in multiple cube-shaped regions with the desired resolution. Then, all points inside every voxel are processed so only one remains. The simplest way would be to randomly select one of them, but a more accurate approach would be to compute the [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Computing_the_centroid|centroid]], which is the point whose coordinates are the mean values of all the points that belong to the voxel.&lt;br /&gt;
&lt;br /&gt;
If done with sensible parameters, downsampling will yield results that are precise enough to work with, while requiring much less CPU power.&lt;br /&gt;
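&lt;br /&gt;
If you ever need to compute a centroid yourself (for a whole cloud, or for the points of a single voxel), PCL ships a helper for it; a minimal sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/common/centroid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Sketch: the centroid's coordinates are the mean of the points' coordinates&lt;br /&gt;
// (the fourth component of the vector is set to 1).&lt;br /&gt;
Eigen::Vector4f centroid;&lt;br /&gt;
pcl::compute3DCentroid(*cloud, centroid);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;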
&lt;br /&gt;
====Voxel grid====&lt;br /&gt;
&lt;br /&gt;
A straightforward way of downsampling a cloud:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/voxel_grid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::VoxelGrid&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// We set the size of every voxel to be 1x1x1cm&lt;br /&gt;
	// (only one point per every cubic centimeter will survive).&lt;br /&gt;
	filter.setLeafSize(0.01f, 0.01f, 0.01f);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeDownsampling.png | Original point cloud.&lt;br /&gt;
File:AfterDownsampling.png | Point cloud after being downsampled with a leaf size of 1cm.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Downsampled cloud&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/voxel_grid.php Downsampling a PointCloud using a VoxelGrid filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_voxel_grid.html pcl::VoxelGrid]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Uniform sampling====&lt;br /&gt;
&lt;br /&gt;
This class does basically the same thing, but it outputs the indices of the points that survived the process. It is a common way of choosing keypoints when computing [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/uniform_sampling.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Uniform sampling object.&lt;br /&gt;
	pcl::UniformSampling&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// We set the size of every voxel to be 1x1x1cm&lt;br /&gt;
	// (only one point per every cubic centimeter will survive).&lt;br /&gt;
	filter.setRadiusSearch(0.01f);&lt;br /&gt;
	// We need an additional object to store the indices of surviving points.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt; keypointIndices;&lt;br /&gt;
&lt;br /&gt;
	filter.compute(keypointIndices);&lt;br /&gt;
	pcl::copyPointCloud(*cloud, keypointIndices.points, *filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_uniform_sampling.html pcl::UniformSampling]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample_uniform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Upsampling===&lt;br /&gt;
&lt;br /&gt;
Upsampling is a form of [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Reconstruction| surface reconstruction]], but I put it here. When you have fewer points than you would like, upsampling can help you recover the original surface(s) by [https://en.wikipedia.org/wiki/Upsampling interpolating] the points you currently have, which is just a sophisticated guess. The result will never be a hundred percent accurate, but sometimes it is the only choice. So, before downsampling your clouds, be sure to keep a copy of the original data!&lt;br /&gt;
&lt;br /&gt;
PCL uses the [https://en.wikipedia.org/wiki/Moving_least_squares Moving Least Squares (MLS)] algorithm for this. The code looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filtering object.&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Object for searching.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// Upsampling method. Other possibilites are DISTINCT_CLOUD, RANDOM_UNIFORM_DENSITY&lt;br /&gt;
	// and VOXEL_GRID_DILATION. NONE disables upsampling. Check the API for details.&lt;br /&gt;
	filter.setUpsamplingMethod(pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt;::SAMPLE_LOCAL_PLANE);&lt;br /&gt;
	// Radius around each point, where the local plane will be sampled.&lt;br /&gt;
	filter.setUpsamplingRadius(0.03);&lt;br /&gt;
	// Sampling step size. Bigger values will yield less (if any) new points.&lt;br /&gt;
	filter.setUpsamplingStepSize(0.02);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:PCL_upsampling_original.png | Original point cloud (coffee mug on a table).&lt;br /&gt;
File:PCL_upsampling_downsampled.png | Downsampled with a leaf size of 1cm.&lt;br /&gt;
File:PCL_upsampling_local_plane.png | Upsampled using SAMPLE_LOCAL_PLANE (upsampling radius 3cm, step size 2cm).&lt;br /&gt;
File:PCL_upsampling_uniform_density.png | Upsampled using RANDOM_UNIFORM_DENSITY (target point density of 300 per 3cm radius). &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html#ac29ad97b98353d64ce64e2ff924f7d20 setUpsamplingMethod()] API to see all upsampling methods available, and the parameters you need to set for them to work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, Upsampling method, [Upsampling parameters]&lt;br /&gt;
* '''Output''': Upsampled cloud&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_upsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Reconstruction=&lt;br /&gt;
&lt;br /&gt;
==Surface smoothing==&lt;br /&gt;
&lt;br /&gt;
As stated, depth sensors are not very accurate, and the resulting clouds have measurement errors, outliers, holes in surfaces, etc. Surfaces can be reconstructed by an algorithm that iterates through all points and interpolates the data, trying to guess what the original surface looked like. As with upsampling, PCL uses the [https://en.wikipedia.org/wiki/Moving_least_squares MLS] algorithm and class. Performing this step is important because the resulting cloud's normals will be more accurate.&lt;br /&gt;
&lt;br /&gt;
Surface smoothing is easily done with the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// The output will also contain the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr smoothedCloud(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Smoothing object (we choose what point types we want as input and output).&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointNormal&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// If true, the surface and normal are approximated using a polynomial estimation&lt;br /&gt;
	// (if false, only a tangent one).&lt;br /&gt;
	filter.setPolynomialFit(true);&lt;br /&gt;
	// We can tell the algorithm to also compute smoothed normals (optional).&lt;br /&gt;
	filter.setComputeNormals(true);&lt;br /&gt;
	// kd-tree object for performing searches.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*smoothedCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output cloud will also contain the smoothed normals of the reconstructed surface.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, [Polynomial fit], [Compute normals]&lt;br /&gt;
* '''Output''': Smoothed cloud, [Smoothed normals]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/resampling.php Smoothing and normal estimation based on polynomial reconstruction]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_surface_smoothing.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4739</id>
		<title>PCL/OpenNI tutorial 2: Cloud processing (basic)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4739"/>
				<updated>2015-11-01T23:04:43Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Point_cloud.png|thumb|right|200px|Rendering of a point cloud, showing a desktop.]]&lt;br /&gt;
&lt;br /&gt;
All right: in the [[PCL/OpenNI tutorial 1: Installing and testing| previous tutorial]] you installed OpenNI and PCL. You managed to get your device working. You compiled and ran the example program, watched as it fetched one frame after another, and even saved a couple of point clouds to disk. Now you may be wondering: &amp;quot;What now? What are depth sensors useful for? What can I do with point clouds?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Well, depth cameras have a lot of useful applications. For example, you could use them as [http://tech.integrate.biz/kinect_mocap.htm motion tracking] hardware and save you thousands of dollars in professional gear. The original Kinect was actually designed as an alternative to physical controllers (e.g., gamepads) in games. [https://frantracerkinectft.codeplex.com/ Hand gestures] can also be recognized, so people have managed to successfully implement control interfaces similar to what could be seen in Minority Report.&lt;br /&gt;
&lt;br /&gt;
Working low-level with point clouds offers interesting possibilities too. As they are essentially a 3D &amp;quot;scan&amp;quot; of the scene, you could capture your whole room as a [http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php continuous 3D mesh], then import it into your modelling software of choice and use it as a reference for a better reconstruction. You could use it to measure distances or detect obstacles for a robot navigation system. But the most exciting alternative by far, and the one where most research is currently focused, is to perform 3D object recognition and pose estimation, even in [https://www.youtube.com/watch?feature=player_detailpage&amp;amp;v=quGhaggn3cQ#t=349 real time]. This allows a robot to fulfill tasks like going to the kitchen to get you a cup of coffee.&lt;br /&gt;
&lt;br /&gt;
Most of the advanced stuff that can be done with a point cloud requires some previous steps, like filtering, reconstruction or normal estimation. In this tutorial, I will introduce you to the basics of point cloud processing, and leave the complicated methods for the [[PCL/OpenNI tutorial 3: Cloud processing (advanced) | next tutorial]]. I will explain what every technique does and what it should be used for, and include PCL code snippets so you can see how to implement them in your own programs.&lt;br /&gt;
&lt;br /&gt;
=Point clouds=&lt;br /&gt;
&lt;br /&gt;
First, a little explanation. A point cloud as taken from a depth sensor consists of a series of points in 3D space, as simple as that. A &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::PointCloud&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; object stores the points inside a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;std::vector&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; structure. The cloud object exposes some functions that let you get information about them, like the point count. As you can see, the class is templated so you can instantiate it for many types of points. For example, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, which stores three floats for the X, Y and Z coordinates, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGB&amp;quot;&amp;lt;/span&amp;gt;, which also stores color (texture) information for each point, the kind we would get with Kinect or Xtion RGB-D cameras. A list of [http://pointclouds.org/documentation/tutorials/adding_custom_ptype.php#what-pointt-types-are-available-in-pcl point types] and a basic description of the [http://pointclouds.org/documentation/tutorials/basic_structures.php cloud object] are available on PCL's website.&lt;br /&gt;
&lt;br /&gt;
Clouds will usually be handled with pointers, as they tend to be large. PCL uses the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;boost::shared_ptr&amp;quot;&amp;lt;/span&amp;gt; class from the [https://en.wikipedia.org/wiki/Boost_%28C%2B%2B_libraries%29 Boost C++ libraries], which keeps track of the references to a given object. Every time you pass a shared pointer and a copy of it is created, a reference counter is automatically increased. When a pointer is destroyed, it is decreased. If the counter ever reaches 0, it means that no one is holding an active reference to the object, and its destructor is called to free the memory. This can take a while to get used to, and you will be mixing cloud objects and cloud object pointers until then, but do not worry.&lt;br /&gt;
&lt;br /&gt;
==PCD files==&lt;br /&gt;
&lt;br /&gt;
When clouds are saved to disk, a [http://pointclouds.org/documentation/tutorials/pcd_file_format.php PCD] (Point Cloud Data) file is produced. You can choose to do so in binary or plain text format. The first one is faster to work with, whereas the second one lets you inspect the cloud by opening it with a common text editor. If you do so, you may see something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;# .PCD v.7 - Point Cloud Data file format&lt;br /&gt;
VERSION .7&lt;br /&gt;
FIELDS x y z&lt;br /&gt;
SIZE 4 4 4&lt;br /&gt;
TYPE F F F&lt;br /&gt;
WIDTH 2&lt;br /&gt;
HEIGHT 1&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 2&lt;br /&gt;
DATA ascii&lt;br /&gt;
0.73412 -1.12643 0.82218&lt;br /&gt;
0.44739 -0.34735 -0.04624&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells us that the cloud consists of 2 points of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; (because each point has only 3 fields: the X, Y and Z coordinates, each stored in a 4-byte float), and lists their coordinates. It also stores the viewpoint information (the pose of the sensor relative to the points) as a translation plus an orientation quaternion. Knowing the position of the camera can be useful for certain procedures like normal estimation.&lt;br /&gt;
&lt;br /&gt;
===Reading from file===&lt;br /&gt;
&lt;br /&gt;
Reading a point cloud from a PCD file and storing it in memory can be done with a single function call. You must specify the correct point type. For example, for .pcd files saved with the OpenNI viewer application that we saw in the [[PCL/OpenNI tutorial 1: Installing and testing | previous tutorial]], you could use either &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGBA&amp;quot;&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; if you are not going to use the color information.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I have not bothered to include proper error/argument checking, but you should do it in your programs. Be sure to check the [http://docs.pointclouds.org PCL API] every time I introduce you to one of its classes or methods.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/reading_pcd.php Reading Point Cloud data from PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_read.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Writing to file===&lt;br /&gt;
&lt;br /&gt;
Saving a point cloud to a file is as easy as reading one. You can choose between plain readable (ASCII) and binary format: the first one can be read and altered with a text editor, while the second one is faster.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Write it back to disk under a different name.&lt;br /&gt;
	// Another possibility would be &amp;quot;savePCDFileBinary()&amp;quot;.&lt;br /&gt;
	pcl::io::savePCDFileASCII(&amp;quot;output.pcd&amp;quot;, *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/writing_pcd.php Writing Point Cloud data to PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_write.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you want a better alternative for loading or saving PCD files (some of the functions we saw are a bit outdated, but they do the job), take a look at the [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_reader.html PCDReader] and [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_writer.html PCDWriter] classes.&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
Of course, PCL ships with a dedicated program to load and visualize point clouds and a lot of other stuff, &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''pcl_viewer''&amp;lt;/span&amp;gt;. You can run it from a terminal with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_viewer &amp;lt;file 1&amp;gt; [file 2] ... [file N]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you load several files, each cloud will be rendered with a different color to help you tell them apart. Internally, the program uses the [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_visualizer.html pcl::visualization::PCLVisualizer] class. Press the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''H'''&amp;lt;/span&amp;gt; key while it is running to print the help on the terminal. Keys I tend to find useful are &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''1'''&amp;lt;/span&amp;gt;, which chooses a different color for the cloud (useful if the current one is too hard to see), &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''R'''&amp;lt;/span&amp;gt;, which resets the camera, and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''J'''&amp;lt;/span&amp;gt;, which saves a PNG snapshot.&lt;br /&gt;
&lt;br /&gt;
In your own code, it is possible to iterate through the cloud and print the information of each point to console, but this is cumbersome. Sometimes you just want to visualize the cloud to check something. PCL can create a window for you and display it, letting users pan and zoom their way around. The simplest method is the following one:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Cloud viewer object. You can set the window title.&lt;br /&gt;
	pcl::visualization::CloudViewer viewer(argv[1]);&lt;br /&gt;
	viewer.showCloud(cloud);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		// Do nothing but wait.&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CloudViewer&amp;quot;&amp;lt;/span&amp;gt; has fewer features than &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLVisualizer&amp;quot;&amp;lt;/span&amp;gt;, but it is easier to set up. The program will remain active until you close the window with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt; or press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/cloud_viewer.php The CloudViewer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/pcl_visualizer.php PCLVisualizer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/walkthrough.php Visualization library overview]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_cloud_viewer.html pcl::visualization::CloudViewer]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_visualize_cloud.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Concatenating two clouds==&lt;br /&gt;
&lt;br /&gt;
There are two types of concatenation with point clouds:&lt;br /&gt;
&lt;br /&gt;
* '''Points''': You initially have two clouds, &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;, with &amp;quot;N&amp;quot; and &amp;quot;M&amp;quot; points, respectively. A third cloud, &amp;quot;C&amp;quot;, is created, which has &amp;quot;N + M&amp;quot; points (all points of the first two clouds). For this to work, the point type must be the same for all clouds (&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt;, or whatever).&lt;br /&gt;
* '''Fields''': With this, you can join the fields of two clouds with different types. The only constraint is that both must have the same number of points. For example, if you have a cloud of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; type and another of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type with &amp;quot;N&amp;quot; points each, you can concatenate them to create a cloud of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointNormal&amp;quot;&amp;lt;/span&amp;gt; (that stores X, Y and Z coordinates, plus the normal) that also has &amp;quot;N&amp;quot; points.&lt;br /&gt;
&lt;br /&gt;
Point concatenation is the easiest one because you can use the cloud's addition operator (+), which is overloaded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudC(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Create cloud &amp;quot;C&amp;quot;, with the points of both &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;.&lt;br /&gt;
	*cloudC = (*cloudA) + (*cloudB);&lt;br /&gt;
	// Now cloudC-&amp;gt;points.size() equals cloudA-&amp;gt;points.size() + cloudB-&amp;gt;points.size().&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_clouds.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Field concatenation is done with a function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudPoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr cloudNormals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudPoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Compute the normals of the cloud (do not worry, we will see this later).&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloudPoints);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.05);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*cloudNormals);&lt;br /&gt;
&lt;br /&gt;
	// Concatenate the fields (PointXYZ + Normal = PointNormal).&lt;br /&gt;
	pcl::concatenateFields(*cloudPoints, *cloudNormals, *cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Print the data to standard output.&lt;br /&gt;
	for (size_t currentPoint = 0; currentPoint &amp;lt; cloudAll-&amp;gt;points.size(); currentPoint++)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tXYZ:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].x &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].y &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tNormal:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[0] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[1] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[2] &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__common.html#gac6add803f86fd16a998dce541e9ef402 pcl::concatenateFields()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_fields.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Matrix transformations==&lt;br /&gt;
&lt;br /&gt;
Clouds can be directly manipulated with [https://en.wikipedia.org/wiki/Transformation_matrix transformations] (translation, rotation, scaling). 3D transformations use a 4x4 matrix, which can be a little complicated to understand (especially when it comes to rotations), so consult a book or a [http://web.iitd.ac.in/~hegde/cad/lecture/L6_3dtrans.pdf tutorial] before trying to use one. The transformed cloud will be saved to another object, so you can keep both versions and use them as you wish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/transforms.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr transformed(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Transformation matrix object, initialized to the identity matrix&lt;br /&gt;
	// (a null transformation).&lt;br /&gt;
	Eigen::Matrix4f transformation = Eigen::Matrix4f::Identity();&lt;br /&gt;
&lt;br /&gt;
	// Set a rotation around the Z axis.&lt;br /&gt;
	float theta = M_PI / 2; // 90 degrees.&lt;br /&gt;
	transformation(0, 0) = cos(theta);&lt;br /&gt;
	transformation(0, 1) = -sin(theta);&lt;br /&gt;
	transformation(1, 0) = sin(theta);&lt;br /&gt;
	transformation(1, 1) = cos(theta);&lt;br /&gt;
&lt;br /&gt;
	// Set a translation on the X axis.&lt;br /&gt;
	transformation(0, 3) = 1.0f; // 1 meter (positive direction).&lt;br /&gt;
&lt;br /&gt;
	pcl::transformPointCloud(*cloud, *transformed, transformation);&lt;br /&gt;
&lt;br /&gt;
	// Visualize both the original and the result.&lt;br /&gt;
	pcl::visualization::PCLVisualizer viewer(argv[1]);&lt;br /&gt;
	viewer.addPointCloud(cloud, &amp;quot;original&amp;quot;);&lt;br /&gt;
	// The transformed one's points will be red in color.&lt;br /&gt;
	pcl::visualization::PointCloudColorHandlerCustom&amp;lt;pcl::PointXYZ&amp;gt; colorHandler(transformed, 255, 0, 0);&lt;br /&gt;
	viewer.addPointCloud(transformed, colorHandler, &amp;quot;transformed&amp;quot;);&lt;br /&gt;
	// Add 3D colored axes to help see the transformation.&lt;br /&gt;
	viewer.addCoordinateSystem(1.0, &amp;quot;reference&amp;quot;, 0);&lt;br /&gt;
&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Cloud_transformation.png|thumb|center|600px|Rotation and translation applied to a cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the rest of the functions, because you can also use an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Affine3&amp;quot;&amp;lt;/span&amp;gt; or an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Quaternion&amp;quot;&amp;lt;/span&amp;gt; instead of an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Matrix4&amp;quot;&amp;lt;/span&amp;gt; object.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Transformation&lt;br /&gt;
* '''Output''': Transformed points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/matrix_transform.php Using a matrix to transform a point cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#aefbe4956d1c8fb785a97df6708d57c56 pcl::transformPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_matrix_transform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Using indices==&lt;br /&gt;
&lt;br /&gt;
Many of PCL's algorithms return indices. Indices are just a list of some of the points of a cloud (not the points themselves with all their data, but just their index in the cloud). For example, a cylinder segmentation algorithm will give you as output a list of points that were considered to fit the cylinder model. You can then use the indices to extract the points to a second cloud for further work with the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::ExtractIndices&amp;quot;&amp;lt;/span&amp;gt; class. Its &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; function will instruct it to extract the points NOT indexed (so you can remove all cylinder-shaped clusters from the cloud).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/extract_indices.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudExtracted(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudAll) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Plane segmentation (do not worry, we will see this later).&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	segmentation.setInputCloud(cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the indices.&lt;br /&gt;
	pcl::PointIndices::Ptr pointIndices(new pcl::PointIndices);&lt;br /&gt;
&lt;br /&gt;
	segmentation.segment(*pointIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	// Object for extracting points from a list of indices.&lt;br /&gt;
	pcl::ExtractIndices&amp;lt;pcl::PointXYZ&amp;gt; extract;&lt;br /&gt;
	extract.setInputCloud(cloudAll);&lt;br /&gt;
	extract.setIndices(pointIndices);&lt;br /&gt;
	// We will extract the points that are NOT indexed (the ones that are not in a plane).&lt;br /&gt;
	extract.setNegative(true);&lt;br /&gt;
	extract.filter(*cloudExtracted);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Indices, [Negative mode]&lt;br /&gt;
* '''Output''': Points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/extract_indices.php Extracting indices from a PointCloud]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_point_indices.html pcl::PointIndices]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_extract_indices.html pcl::ExtractIndices]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_extract_indices.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Removing NaNs==&lt;br /&gt;
&lt;br /&gt;
The cloud retrieved from the sensor may contain several kinds of measurement errors and/or inaccuracies. One of them is the presence of [https://en.wikipedia.org/wiki/NaN NaN (Not a Number)] values in the coordinates of some points, as you can see in the following file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=text&amp;gt;# .PCD v0.7 - Point Cloud Data file format&lt;br /&gt;
VERSION 0.7&lt;br /&gt;
FIELDS x y z rgba&lt;br /&gt;
SIZE 4 4 4 4&lt;br /&gt;
TYPE F F F U&lt;br /&gt;
COUNT 1 1 1 1&lt;br /&gt;
WIDTH 640&lt;br /&gt;
HEIGHT 480&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 307200&lt;br /&gt;
DATA ascii&lt;br /&gt;
nan nan nan 10135463&lt;br /&gt;
nan nan nan 10398635&lt;br /&gt;
nan nan nan 10070692&lt;br /&gt;
nan nan nan 10268071&lt;br /&gt;
...&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cloud objects have a member field called &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;is_dense&amp;quot;&amp;lt;/span&amp;gt; that is true if all points have valid, finite values. A NaN indicates that there was a problem measuring the distance to that point, maybe because the sensor was too close or too far, or because the surface was reflective.&lt;br /&gt;
&lt;br /&gt;
Sadly, many of PCL's algorithms will fail if even a single NaN is present in the cloud you feed them as input, with a message like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Assertion `point_representation_-&amp;gt;isValid (point) &amp;amp;&amp;amp; &amp;quot;Invalid (NaN, Inf) point coordinates given to radiusSearch!&amp;quot;' failed.&amp;quot;''&amp;lt;/span&amp;gt;. If this happens, you will have to remove such values from the cloud. You can use a program like the following to &amp;quot;clean&amp;quot; a cloud from all NaN points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/filter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	if (argc != 3)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tUsage: &amp;quot; &amp;lt;&amp;lt; argv[0] &amp;lt;&amp;lt; &amp;quot; &amp;lt;input cloud&amp;gt; &amp;lt;output cloud&amp;gt;&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// The mapping tells you which point of the old cloud each new point&lt;br /&gt;
	// corresponds to, but we will not use it here.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; mapping;&lt;br /&gt;
	pcl::removeNaNFromPointCloud(*cloud, *cloud, mapping);&lt;br /&gt;
&lt;br /&gt;
	// Save it back.&lt;br /&gt;
	pcl::io::savePCDFileASCII(argv[2], *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem with this method is that it will not keep the cloud organized. All clouds store a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;width&amp;quot;&amp;lt;/span&amp;gt; and a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;height&amp;quot;&amp;lt;/span&amp;gt; variable. In ''unorganized'' clouds, the width is the total number of points, and the height is set to 1. In ''organized'' clouds (like the ones taken from camera-like sensors such as Kinect or Xtion), the width and the height match the pixel resolution of the image the sensor works with. Points are laid out in rows like in the depth image, and each one corresponds to a pixel. The member function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;isOrganized()&amp;quot;&amp;lt;/span&amp;gt; will return true if the height is greater than 1.&lt;br /&gt;
&lt;br /&gt;
Because removing NaNs will change the number of points of the cloud, it can no longer be kept organized with the original width/height ratio, so the function will set the height to 1. This is not a big issue, as only a few of PCL's algorithms work explicitly with organized clouds (and they mostly use it for optimization), but you must take it into account.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Cloud&lt;br /&gt;
* '''Output''': Cloud without NaNs&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__filters.html#gac463283a9e9c18a66d3d29b28a575064 pcl::removeNaNFromPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_remove_NaNs.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Saving a PNG==&lt;br /&gt;
&lt;br /&gt;
PCL offers the possibility of saving the values of a point cloud to a PNG image file. Obviously, this can only be done with organized clouds, because the rows and columns of the resulting image will correspond exactly to the ones of the cloud. For example, if you have a point cloud taken from a sensor like Kinect or Xtion, you can use this to retrieve the 640x480 RGB image that matches the cloud.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/png_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGB&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Save the image (cloud must be organized).&lt;br /&gt;
	pcl::io::savePNGFile(&amp;quot;output.png&amp;quot;, *cloud, &amp;quot;rgb&amp;quot;);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;savePNGFile()&amp;quot;&amp;lt;/span&amp;gt; has several overloads; in this case, we have chosen the one that lets you specify which fields to save. If you omit that parameter, the function will save the RGB fields by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__io.html#gabd03e43fac3048635512f447c8bc50b3 pcl::io::savePNGFile()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_save_PNG.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computing the centroid==&lt;br /&gt;
&lt;br /&gt;
The [https://en.wikipedia.org/wiki/Centroid centroid] of a cloud is the point whose coordinates are the mean of those of all points in the cloud. You could say it is the &amp;quot;center of mass&amp;quot;, and several algorithms make use of it. If you intend to compute the actual center of gravity of a [[PCL/OpenNI_tutorial_3:_Cloud_processing_%28advanced%29#Segmentation|clustered]] object though, remember that the sensor does not retrieve the part of the object that is hidden from the camera, like the back faces that are occluded by the front ones, or the inside. You only have the part of the surface that faces the camera.&lt;br /&gt;
&lt;br /&gt;
The centroid of a cloud can be computed with a single function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/centroid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object to store the centroid coordinates.&lt;br /&gt;
	Eigen::Vector4f centroid;&lt;br /&gt;
	&lt;br /&gt;
	pcl::compute3DCentroid(*cloud, centroid);&lt;br /&gt;
&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;The XYZ coordinates of the centroid are: (&amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[0] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[1] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[2] &amp;lt;&amp;lt; &amp;quot;).&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#a23daec3829d2d4100a2f185372b3753a pcl::compute3DCentroid()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_compute_centroid.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Feature estimation=&lt;br /&gt;
&lt;br /&gt;
A ''feature'' of a point is some characteristic that can help us to tell it apart from a different point. You could say that the XYZ coordinates are characteristic enough, but when matching points across two different (yet corresponding) clouds, they become useless, as they change with the viewpoint. We need something better, something that will have similar values when computed at the same spot of a surface in both sets.&lt;br /&gt;
&lt;br /&gt;
Normals are an example of a very simple feature. 3D [http://pointclouds.org/documentation/tutorials/how_features_work.php features] will be important later in our tutorials, when we talk about [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]] (more detailed &amp;quot;signatures&amp;quot; of points used for close matching).&lt;br /&gt;
&lt;br /&gt;
==Normal estimation==&lt;br /&gt;
&lt;br /&gt;
As you may remember from geometry class, the [https://en.wikipedia.org/wiki/Surface_normal normal] of a plane is a unit vector that is perpendicular to it. The normal of a surface at a point is defined as the vector perpendicular to the plane that is tangent to the surface at that point. Surface normals can be calculated for the points of a cloud, too. They are considered a feature, albeit not a very discriminative one.&lt;br /&gt;
&lt;br /&gt;
I will not go into detail about the math of the estimation method; you just have to know that it uses the nearest neighbors (the points closest to the one we are calculating the normal for) to find the tangent plane and the normal vector. You can customize the method with the search radius (think of a sphere of that radius, centered on the point; all neighboring points lying within it will be used for the computation) and the viewpoint (by default, the output normals are directionless; by assuming that all vectors must point towards the camera, because otherwise they would belong to surfaces that are not visible from the sensor, they can all be re-oriented accordingly).&lt;br /&gt;
&lt;br /&gt;
Normals are also important because they give us information about the ''curvature'' of the surface at some point, which can be used to our advantage. Many of PCL's algorithms will require us to provide the normals of the input cloud. To estimate them, you can use the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// For every point, use all neighbors in a radius of 3cm.&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	// A kd-tree is a data structure that makes searches efficient. More about it later.&lt;br /&gt;
	// The normal estimation object will use it to find nearest neighbors.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, normals are stored in &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointCloud&amp;quot;&amp;lt;/span&amp;gt; objects too, instantiated to the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type. If you ever extract some indices from a cloud before feeding them to an algorithm, be sure to do the same with the normals. I have not made use of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setViewPoint()&amp;quot;&amp;lt;/span&amp;gt; function, but you get the idea.&lt;br /&gt;
&lt;br /&gt;
Instead of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setRadiusSearch()&amp;quot;&amp;lt;/span&amp;gt;, you can make use of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setKSearch()&amp;quot;&amp;lt;/span&amp;gt;, which takes an integer, ''K''. The normal of a point will then be computed from its ''K'' nearest neighbors. Keep in mind that this can produce odd results for points that are not in dense areas, as some of the &amp;quot;neighbors&amp;quot; will be too far from them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Normal_estimation.png|thumb|center|600px|Visualization of the normals computed for a point cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Radius | Neighbor count, [Indices], [Viewpoint]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation.php Estimating Surface Normals in a Point Cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_normal_estimation.html pcl::NormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Integral images===&lt;br /&gt;
&lt;br /&gt;
Integral images are a method for normal estimation that works only on organized clouds. The algorithm treats the cloud as a depth image and creates rectangular areas over which the normals are computed, taking into account the relationship between neighboring &amp;quot;pixels&amp;quot; (points), without the need to run lookups on a search structure like a tree. Because of this, it is very efficient, and the normals are usually computed in a split second. If you are performing real-time computations on clouds taken from an RGB-D sensor, this is the approach you should use, because the previous method will probably take several seconds to finish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/integral_image_normal.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::IntegralImageNormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// Other estimation methods: COVARIANCE_MATRIX, AVERAGE_DEPTH_CHANGE, SIMPLE_3D_GRADIENT.&lt;br /&gt;
	// They determine the smoothness of the result, and the running time.&lt;br /&gt;
	normalEstimation.setNormalEstimationMethod(normalEstimation.AVERAGE_3D_GRADIENT);&lt;br /&gt;
	// Depth threshold for computing object borders based on depth changes, in meters.&lt;br /&gt;
	normalEstimation.setMaxDepthChangeFactor(0.02f);&lt;br /&gt;
	// Factor that influences the size of the area used to smooth the normals.&lt;br /&gt;
	normalEstimation.setNormalSmoothingSize(10.0f);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Estimation method, [Smoothing factor], [Depth threshold], [Rectangle size], [Border policy], [Depth dependent smoothing]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation_using_integral_images.php Normal Estimation Using Integral Images]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://islab.ulsan.ac.kr/files/announcement/411/Adaptive%20Neighborhood%20Selection%20for%20Real-Time%20Surface%20Normal.pdf Adaptive Neighborhood Selection for Real-Time Surface Normal Estimation from Organized Point Cloud Data Using Integral Images] (S. Holzer et al., 2012)&lt;br /&gt;
** [https://www.willowgarage.com/sites/default/files/holz_rgbd_seg.pdf Real-Time Plane Segmentation using RGB-D Cameras] (Dirk Holz et al., 2012)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_integral_image_normal_estimation.html pcl::IntegralImageNormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals_integral.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Decomposition=&lt;br /&gt;
&lt;br /&gt;
When I talk about ''decomposition'', I mean the process of creating an ordered, organized data structure from the points of the cloud, with some purpose in mind, like making further analysis easier or applying filters.&lt;br /&gt;
&lt;br /&gt;
==''k''-d tree==&lt;br /&gt;
&lt;br /&gt;
A [https://en.wikipedia.org/wiki/K-d_tree ''k''-d tree] (''k''-dimensional tree) is a data structure that organizes a set of points in a ''k''-dimensional space, in a way that makes range search operations very efficient (for example, finding the nearest neighbor of a point, which is the point that is closest to it in space; or finding all neighbors within a radius).&lt;br /&gt;
&lt;br /&gt;
It is a binary tree; that is, every non-leaf node has exactly two child nodes, a &amp;quot;left&amp;quot; and a &amp;quot;right&amp;quot; one. Each level splits space along a specific dimension. For example, in 3-dimensional space, at the root node (first level) all children would be split based on the first dimension, X (points with greater X values would go to the right subtree, points with lesser values to the left one). At the second level (the nodes we just created), the split would be done on the Y axis, following the same criterion. At the third level (the grandchildren), we would use the Z axis. At the fourth level, we would get back to the X axis, and so on. Usually, the median point is chosen as the splitting node at every level.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Kdtree.png|thumb|center|700px|Example of a 2D ''k''-d tree (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In PCL, creating a ''k''-d tree from a cloud is a straightforward and fast operation:&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
The following code snippet shows you how to use a ''k''-d tree to perform efficient searches:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/search/kdtree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// kd-tree object.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt; kdtree;&lt;br /&gt;
	kdtree.setInputCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of this point&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices(5);&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances(5);&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (kdtree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	if (kdtree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another alternative would be &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;KdTreeFLANN&amp;quot;&amp;lt;/span&amp;gt;, which makes use of [http://www.cs.ubc.ca/research/flann/ FLANN] (Fast Library for Approximate Nearest Neighbors).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/kdtree_search.php How to use a KdTree to search]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_kd_tree.html pcl::search::KdTree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_kd_tree_f_l_a_n_n.html pcl::KdTreeFLANN]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_kdtree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Octree==&lt;br /&gt;
&lt;br /&gt;
Like the ''k''-d tree, the [https://en.wikipedia.org/wiki/Octree octree] is a hierarchical tree data structure useful for searches, but also for compression or downsampling. Each octree node (called a ''voxel'', a term that will sound familiar to users of 3D engines, and which could be described as a &amp;quot;3D pixel&amp;quot;, that is, a cube) has either eight children or none. The root node describes a cubic bounding box that encapsulates all points. At every level, it is subdivided by a factor of 2 along each axis, which doubles the voxel resolution. That way, we can selectively give more resolution to certain zones.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Octree.png|thumb|center|700px|Depiction of an octree (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
Downsampling will be covered in a different section, so now I will show you how to perform searches with an octree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree object, with a resolution of 128&lt;br /&gt;
	// (side length of the smallest voxels, at the lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudSearch&amp;lt;pcl::PointXYZ&amp;gt; octree(128);&lt;br /&gt;
	octree.setInputCloud(cloud);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	// We will find all neighbors within the same voxel as this point.&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices;&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (octree.voxelSearch(point, pointIndices))&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors in the same voxel:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of the point.&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances;&lt;br /&gt;
	if (octree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	// As before, the search point does not have to belong to the cloud.&lt;br /&gt;
	if (octree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, the code is mostly interchangeable with that of the ''k''d-tree.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree.php Spatial Partitioning and Search Operations with Octrees]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_octree.html pcl::search::Octree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_search.html pcl::octree::OctreePointCloudSearch]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Compressing/Decompressing===&lt;br /&gt;
&lt;br /&gt;
On the other hand, this is the code needed to compress and decompress a point cloud with PCL:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/compression/octree_pointcloud_compression.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr decompressedCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree compressor object.&lt;br /&gt;
	// Check /usr/include/pcl-&amp;lt;version&amp;gt;/pcl/compression/compression_profiles.h for more profiles.&lt;br /&gt;
	// The second parameter enables the output of information.&lt;br /&gt;
	pcl::io::OctreePointCloudCompression&amp;lt;pcl::PointXYZ&amp;gt; octreeCompression(pcl::io::MED_RES_ONLINE_COMPRESSION_WITHOUT_COLOR, true);&lt;br /&gt;
	// Stringstream that will hold the compressed cloud.&lt;br /&gt;
	std::stringstream compressedData;&lt;br /&gt;
&lt;br /&gt;
	// Compress the cloud (you would save the stream to disk).&lt;br /&gt;
	octreeCompression.encodePointCloud(cloud, compressedData);&lt;br /&gt;
&lt;br /&gt;
	// Decompress the cloud.&lt;br /&gt;
	octreeCompression.decodePointCloud(compressedData, decompressedCloud);&lt;br /&gt;
&lt;br /&gt;
	// Display the decompressed cloud.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Octree compression&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(decompressedCloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeCompression.png | Original point cloud.&lt;br /&gt;
File:AfterCompression.png | Point cloud retrieved after octree compression.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the process has decreased the number of points in the cloud. The default parameters of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;OctreePointCloudCompression&amp;quot;&amp;lt;/span&amp;gt; object establish an octree resolution (at the lowest level) of 1cm, and a coordinate precision of 1mm. These values can of course be changed to suit your needs, but simplifying the cloud in this way is a useful operation that we will cover later.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Resolution, Compression profile&lt;br /&gt;
* '''Output''': Compressed stream&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/compression.php Point Cloud Compression]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1io_1_1_octree_point_cloud_compression.html pcl::io::OctreePointCloudCompression]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_compress.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Detecting changes===&lt;br /&gt;
&lt;br /&gt;
PCL has an octree-based object that lets you detect changes between two point clouds (for example, two clouds captured one after another with an OpenNI device). This object stores the trees of both clouds at the same time and compares them, returning a list of indices of the points that are new in the second cloud (that is, they did not exist in the first one):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Change detector object, with a voxel resolution of 128&lt;br /&gt;
	// (side length of the smallest voxels, at the lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudChangeDetector&amp;lt;pcl::PointXYZ&amp;gt; octree(128.0f);&lt;br /&gt;
&lt;br /&gt;
	// Add cloudA to the octree.&lt;br /&gt;
	octree.setInputCloud(cloudA);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
	// The change detector object is able to store two clouds at the same time;&lt;br /&gt;
	// with this we can reset the buffer but keep the previous tree saved.&lt;br /&gt;
	octree.switchBuffers();&lt;br /&gt;
	// Now, add cloudB to the octree like we did before.&lt;br /&gt;
	octree.setInputCloud(cloudB);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; newPoints;&lt;br /&gt;
	// Get a vector with the indices of all points that are new in cloud B,&lt;br /&gt;
	// when compared with the ones that existed in cloud A.&lt;br /&gt;
	octree.getPointIndicesFromNewVoxels(newPoints);&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; newPoints.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point (&amp;quot; &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].x &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].y &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].z&lt;br /&gt;
				  &amp;lt;&amp;lt; &amp;quot;) was not in cloud A but is in cloud B&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; newPoints.size() &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
} &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': First cloud, Second cloud, Resolution&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree_change.php Spatial change detection on unorganized point cloud data]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_change_detector.html pcl::octree::OctreePointCloudChangeDetector]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_change_detection.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Filtering=&lt;br /&gt;
&lt;br /&gt;
The raw point cloud, as returned from the sensor, usually needs some kind of polishing before being handed to processing algorithms. Normally, this is due to the low precision of the sensor, which leads to noise, incorrect measurements, holes in surfaces... Sometimes it is just the opposite: the cloud has so many points that processing it would take forever. Luckily, several filtering methods exist to &amp;quot;fix&amp;quot; many recurring errors in point clouds.&lt;br /&gt;
&lt;br /&gt;
==Passthrough filter==&lt;br /&gt;
&lt;br /&gt;
A passthrough filter removes any point from the cloud whose value on a chosen field does not fall within a user-given range. For example, if you wanted to discard all points farther away than, say, 3 meters, you would run the filter on the Z coordinate with a range of [0,3]. This filter can be useful for discarding unneeded objects from the cloud, but you may have to adopt a different reference frame if the default one (relative to the sensor) does not fit: filtering on the Y value to remove all points not sitting on a table will yield unwanted results if the camera was at an odd angle.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/passthrough.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::PassThrough&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Filter out all points with Z values not in the [0-2] range.&lt;br /&gt;
	filter.setFilterFieldName(&amp;quot;z&amp;quot;);&lt;br /&gt;
	filter.setFilterLimits(0.0, 2.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforePassthrough.png | Original point cloud.&lt;br /&gt;
File:AfterPassthrough.png | Point cloud after applying a passthrough filter with Z in [0,2].&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setFilterLimitsNegative()&amp;quot;&amp;lt;/span&amp;gt; function (not seen in the example) will tell the filter object whether to return the points inside the range (&amp;quot;false&amp;quot;, the default value) or outside it (&amp;quot;true&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Filter field, Filter limits, [Negative mode]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/passthrough.php Filtering a PointCloud using a PassThrough filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_pass_through.html pcl::PassThrough]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_passthrough_filter.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conditional removal==&lt;br /&gt;
&lt;br /&gt;
Another way of getting exactly the same result as in the previous example is to use a conditional removal filter. We can build any condition we want over the values of the points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/conditional_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We must build a condition.&lt;br /&gt;
	// An &amp;quot;And&amp;quot; condition requires all tests to check true. &amp;quot;Or&amp;quot; conditions are also available.&lt;br /&gt;
	pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;::Ptr condition(new pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// First test, the point's Z value must be greater than (GT) 0.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::GT, 0.0)));&lt;br /&gt;
	// Second test, the point's Z value must be less than (LT) 2.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::LT, 2.0)));&lt;br /&gt;
	// Checks available: GT, GE, LT, LE, EQ.&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::ConditionalRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setCondition(condition);&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// If true, points that do not pass the filter will be set to a certain value (default NaN).&lt;br /&gt;
	// If false, they will be just removed, but that could break the structure of the cloud&lt;br /&gt;
	// (organized clouds are clouds taken from camera-like sensors that return a matrix-like image).&lt;br /&gt;
	filter.setKeepOrganized(true);&lt;br /&gt;
	// If keep organized was set true, points that failed the tests will have their values set to this (instead of the default NaN).&lt;br /&gt;
	filter.setUserFilterValue(0.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Condition, [Keep organized]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_conditional_removal.html pcl::ConditionalRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_conditional_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Outlier removal==&lt;br /&gt;
&lt;br /&gt;
Outliers are lonely points scattered here and there in the cloud, like annoying mosquitoes. They are a product of the sensor's inaccuracy, which registers measurements where there should not be any. Outliers count as undesired noise because they can introduce errors in calculations such as normal estimation. Hence, trimming them out of the cloud will not only make computations faster (because the fewer points there are, the less time they take, as we will see with downsampling) but also help us get more precise values.&lt;br /&gt;
&lt;br /&gt;
===Radius-based===&lt;br /&gt;
&lt;br /&gt;
Radius-based outlier removal is the simplest method of all. You must specify a search radius and the minimum number of neighbors that a point must have to avoid being labeled as an outlier. The algorithm then iterates through all points (which can be slow if the cloud is big) and performs the check: if fewer than that number of neighbors are found within the radius, the point is removed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/radius_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::RadiusOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Every point must have 10 neighbors within 15cm, or it will be removed.&lt;br /&gt;
	filter.setRadiusSearch(0.15);&lt;br /&gt;
	filter.setMinNeighborsInRadius(10);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All PCL classes that inherit from &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::FilterIndices&amp;quot;&amp;lt;/span&amp;gt; (like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::RadiusOutlierRemoval&amp;quot;&amp;lt;/span&amp;gt;) provide the function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt;, which if set to true, makes the filter return the points that did NOT pass the filter.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius, Neighbor count&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_radius_outlier_removal.html pcl::RadiusOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_radius_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Statistical===&lt;br /&gt;
&lt;br /&gt;
The statistical outlier removal process is a bit more refined. First, for every point, the mean distance to its ''K'' nearest neighbors is computed. Then, if we assume that the resulting distribution is [https://en.wikipedia.org/wiki/Normal_distribution normal (Gaussian)] with a mean ''μ'' and a standard deviation ''σ'', we can safely remove all points whose mean distances fall outside the interval given by the global mean plus a multiple of the standard deviation. Basically, it runs a statistical analysis of the distances between neighboring points and trims all those not considered &amp;quot;normal&amp;quot; (you define what &amp;quot;normal&amp;quot; means with the parameters of the algorithm).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/statistical_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::StatisticalOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Set number of neighbors to consider to 50.&lt;br /&gt;
	filter.setMeanK(50);&lt;br /&gt;
	// Set standard deviation multiplier to 1.&lt;br /&gt;
	// Points with a distance larger than 1 standard deviation of the mean distance will be outliers.&lt;br /&gt;
	filter.setStddevMulThresh(1.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can make use of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; with this filter too.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Neighbor count, Deviation multiplier&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/statistical_outlier.php Removing outliers using a StatisticalOutlierRemoval filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_statistical_outlier_removal.html pcl::StatisticalOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_statistical_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Resampling==&lt;br /&gt;
&lt;br /&gt;
Resampling means changing the number of points of the cloud, either by reducing (downsampling) or increasing (upsampling). Both have a purpose, as we will see.&lt;br /&gt;
&lt;br /&gt;
===Downsampling===&lt;br /&gt;
&lt;br /&gt;
A sensor like the Kinect or Xtion produces a cloud with 307200 (640x480) points. Believe it or not, that is a lot of information, and the new Kinect is already out there, working at a higher resolution. Keep in mind that performing even a simple operation on every point of a cloud is ''O(n)'', n being the number of points. If we had to compare every point with its ''k'' nearest neighbors, it would be ''O(nk)''. You may have noticed by now that some of the algorithms I showed you take several seconds to run on your PC; now you know why.&lt;br /&gt;
&lt;br /&gt;
Optimization techniques are being implemented as I write this to let PCL employ the computer's GPU (graphics card) to perform the computations using [https://developer.nvidia.com/about-cuda CUDA]. Many classes already provide an alternative in the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::gpu&amp;quot;&amp;lt;/span&amp;gt; namespace. This offers dramatic improvements in speed, but it is not always a possibility.&lt;br /&gt;
&lt;br /&gt;
Another option is to reduce the complexity of the cloud, that is, to remove all points that are not needed for the task at hand. There are several ways to do this. You could trim all outliers from the cloud, then perform segmentation to extract only the parts that interest you. Another common operation is downsampling, which gives you back a cloud that is, for all intents and purposes, equivalent to the original one, despite having fewer points.&lt;br /&gt;
&lt;br /&gt;
Downsampling is done via ''voxelization''. I already explained what voxels are in the [[#Octree | octree section]]. The cloud is divided into multiple cube-shaped regions with the desired resolution. Then, all points inside each voxel are processed so that only one remains. The simplest way would be to randomly select one of them, but a more accurate approach is to compute the [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Computing_the_centroid|centroid]], which is the point whose coordinates are the mean values of all the points in the voxel.&lt;br /&gt;
&lt;br /&gt;
If done with sensible parameters, downsampling yields results that are precise enough to work with, while requiring much less CPU power.&lt;br /&gt;
&lt;br /&gt;
====Voxel grid====&lt;br /&gt;
&lt;br /&gt;
A straightforward way of downsampling a cloud:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/voxel_grid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::VoxelGrid&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// We set the size of every voxel to be 1x1x1cm&lt;br /&gt;
	// (only one point per every cubic centimeter will survive).&lt;br /&gt;
	filter.setLeafSize(0.01f, 0.01f, 0.01f);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeDownsampling.png | Original point cloud.&lt;br /&gt;
File:AfterDownsampling.png | Point cloud after being downsampled with a leaf size of 1cm.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Downsampled cloud&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/voxel_grid.php Downsampling a PointCloud using a VoxelGrid filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_voxel_grid.html pcl::VoxelGrid]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Uniform sampling====&lt;br /&gt;
&lt;br /&gt;
This class does basically the same, but it outputs the indices of the points that survived the process. It is a common way of choosing keypoints when computing [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/uniform_sampling.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Uniform sampling object.&lt;br /&gt;
	pcl::UniformSampling&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// We set the size of every voxel to be 1x1x1cm&lt;br /&gt;
	// (only one point per every cubic centimeter will survive).&lt;br /&gt;
	filter.setRadiusSearch(0.01f);&lt;br /&gt;
	// We need an additional object to store the indices of surviving points.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt; keypointIndices;&lt;br /&gt;
&lt;br /&gt;
	filter.compute(keypointIndices);&lt;br /&gt;
	pcl::copyPointCloud(*cloud, keypointIndices.points, *filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_uniform_sampling.html pcl::UniformSampling]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample_uniform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Upsampling===&lt;br /&gt;
&lt;br /&gt;
Upsampling is a form of [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Reconstruction| surface reconstruction]], but I cover it here. When you have fewer points than you would like, upsampling can help you recover the original surface(s) by [https://en.wikipedia.org/wiki/Upsampling interpolating] the ones you currently have, which is just a sophisticated guess. The result will never be a hundred percent accurate, but sometimes it is the only choice. So, before downsampling your clouds, be sure to keep a copy of the original data!&lt;br /&gt;
&lt;br /&gt;
PCL uses the [https://en.wikipedia.org/wiki/Moving_least_squares Moving Least Squares (MLS)] algorithm for this. The code looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filtering object.&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Object for searching.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// Upsampling method. Other possibilites are DISTINCT_CLOUD, RANDOM_UNIFORM_DENSITY&lt;br /&gt;
	// and VOXEL_GRID_DILATION. NONE disables upsampling. Check the API for details.&lt;br /&gt;
	filter.setUpsamplingMethod(pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt;::SAMPLE_LOCAL_PLANE);&lt;br /&gt;
	// Radius around each point, where the local plane will be sampled.&lt;br /&gt;
	filter.setUpsamplingRadius(0.03);&lt;br /&gt;
	// Sampling step size. Bigger values will yield less (if any) new points.&lt;br /&gt;
	filter.setUpsamplingStepSize(0.02);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:PCL_upsampling_original.png | Original point cloud (coffee mug on a table).&lt;br /&gt;
File:PCL_upsampling_downsampled.png | Downsampled with a leaf size of 1cm.&lt;br /&gt;
File:PCL_upsampling_local_plane.png | Upsampled using SAMPLE_LOCAL_PLANE (upsampling radius 3cm, step size 2cm).&lt;br /&gt;
File:PCL_upsampling_uniform_density.png | Upsampled using RANDOM_UNIFORM_DENSITY (target point density of 300 per 3cm radius). &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html#ac29ad97b98353d64ce64e2ff924f7d20 setUpsamplingMethod()] API to see all upsampling methods available, and the parameters you need to set for them to work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, Upsampling method, [Upsampling parameters]&lt;br /&gt;
* '''Output''': Upsampled cloud&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_upsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Reconstruction=&lt;br /&gt;
&lt;br /&gt;
==Surface smoothing==&lt;br /&gt;
&lt;br /&gt;
As stated, depth sensors are not very accurate, and the resulting clouds have measurement errors, outliers, holes in surfaces, etc. Surfaces can be reconstructed by means of an algorithm that iterates through all points and interpolates the data, trying to guess what the original surface looked like. As with upsampling, PCL uses the [https://en.wikipedia.org/wiki/Moving_least_squares MLS] algorithm and class. Performing this step is important, because the resulting cloud's normals will be more accurate.&lt;br /&gt;
&lt;br /&gt;
Surface smoothing is easily done with the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// The output will also contain the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr smoothedCloud(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Smoothing object (we choose what point types we want as input and output).&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointNormal&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// If true, the surface and normal are approximated using a polynomial estimation&lt;br /&gt;
	// (if false, only a tangent one).&lt;br /&gt;
	filter.setPolynomialFit(true);&lt;br /&gt;
	// We can tell the algorithm to also compute smoothed normals (optional).&lt;br /&gt;
	filter.setComputeNormals(true);&lt;br /&gt;
	// kd-tree object for performing searches.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*smoothedCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output cloud also contains the recomputed normals of the smoothed surface.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, [Polynomial fit], [Compute normals]&lt;br /&gt;
* '''Output''': Smoothed cloud, [Smoothed normals]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/resampling.php Smoothing and normal estimation based on polynomial reconstruction]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_surface_smoothing.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4738</id>
		<title>PCL/OpenNI tutorial 2: Cloud processing (basic)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4738"/>
				<updated>2015-11-01T23:02:09Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Point_cloud.png|thumb|right|200px|Rendering of a point cloud, showing a desktop.]]&lt;br /&gt;
&lt;br /&gt;
All right: in the [[PCL/OpenNI tutorial 1: Installing and testing| previous tutorial]] you installed OpenNI and PCL. You managed to get your device working. You compiled and ran the example program, watched as it fetched one frame after another, and even saved a couple of point clouds to disk. Now you may be wondering: &amp;quot;What now? What are depth sensors good for? What can I do with point clouds?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Well, depth cameras have a lot of useful applications. For example, you could use them as [http://tech.integrate.biz/kinect_mocap.htm motion tracking] hardware and save you thousands of dollars in professional gear. The original Kinect was actually designed as an alternative to physical controllers (e.g., gamepads) in games. [https://frantracerkinectft.codeplex.com/ Hand gestures] can also be recognized, so people have managed to successfully implement control interfaces similar to what could be seen in Minority Report.&lt;br /&gt;
&lt;br /&gt;
Working low-level with point clouds offers interesting possibilities too. As they are essentially a 3D &amp;quot;scan&amp;quot; of the scene, you could capture your whole room as a [http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php continuous 3D mesh], then import it into your modelling software of choice and use it as a reference for a better reconstruction. You could use it to measure distances or detect obstacles for a robot navigation system. But the most exciting alternative by far, and the one most research focuses on, is performing 3D object recognition and pose estimation, even in [https://www.youtube.com/watch?feature=player_detailpage&amp;amp;v=quGhaggn3cQ#t=349 real time]. This allows a robot to fulfill tasks like going to the kitchen to get you a cup of coffee.&lt;br /&gt;
&lt;br /&gt;
Most of the advanced stuff that can be done with a point cloud requires some previous steps, like filtering, reconstruction or normal estimation. In this tutorial, I will introduce you to the basics of point cloud processing, and leave the complicated methods for the [[PCL/OpenNI tutorial 3: Cloud processing (advanced) | next tutorial]]. I will explain what every technique does and what it should be used for, and include PCL code snippets so you can check how to implement it in your programs.&lt;br /&gt;
&lt;br /&gt;
=Point clouds=&lt;br /&gt;
&lt;br /&gt;
First, a little explanation. A point cloud as taken from a depth sensor consists of a series of points in 3D space, as simple as that. A &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::PointCloud&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; object stores the points inside a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;std::vector&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; structure. The cloud object exposes some functions that let you get information about them, like the point count. As you can see, the class is templated so you can instantiate it for many types of points. For example, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, which stores three floats for the X, Y and Z coordinates, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGB&amp;quot;&amp;lt;/span&amp;gt;, which also stores color (texture) information for each point, the kind we would get with Kinect or Xtion RGB-D cameras. A list of [http://pointclouds.org/documentation/tutorials/adding_custom_ptype.php#what-pointt-types-are-available-in-pcl point types] and a basic description of the [http://pointclouds.org/documentation/tutorials/basic_structures.php cloud object] are available on PCL's website.&lt;br /&gt;
&lt;br /&gt;
Clouds will usually be handled with pointers, as they tend to be large. PCL uses the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;boost::shared_ptr&amp;quot;&amp;lt;/span&amp;gt; class from the [https://en.wikipedia.org/wiki/Boost_%28C%2B%2B_libraries%29 Boost C++ libraries], which keeps track of the references to a certain object. Every time a copy of a shared pointer is created, a reference counter is automatically increased; when a pointer is destroyed, it is decreased. If the counter ever reaches 0, it means that no one is holding an active reference to the object, and its destructor is called to free memory. This can take a while to get used to, and you will be mixing up cloud objects and cloud pointers until then, but do not worry.&lt;br /&gt;
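The reference-counting behaviour is easy to observe with the standard library's &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;std::shared_ptr&amp;quot;&amp;lt;/span&amp;gt;, which behaves the same way as the Boost pointer PCL's &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Ptr&amp;quot;&amp;lt;/span&amp;gt; typedefs are built on (a small illustrative sketch, not PCL code):&lt;br /&gt;

```cpp
#include <memory>
#include <utility>
#include <vector>

// Demonstrates reference counting: returns the count while a copy of the
// pointer is alive, and the count after the copy has been destroyed.
std::pair<long, long> reference_counts()
{
	// A "cloud" held through a shared pointer, as PCL's Ptr typedefs do.
	std::shared_ptr<std::vector<float>> cloud(new std::vector<float>{1, 2, 3});
	long during;
	{
		std::shared_ptr<std::vector<float>> copy = cloud; // counter incremented
		during = cloud.use_count();                       // two active references
	} // "copy" goes out of scope here; counter decremented
	return {during, cloud.use_count()};
}
```

When the last pointer dies, the vector is freed automatically; no explicit delete is ever needed.&lt;br /&gt;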
&lt;br /&gt;
==PCD files==&lt;br /&gt;
&lt;br /&gt;
When clouds are saved to disk, a [http://pointclouds.org/documentation/tutorials/pcd_file_format.php PCD] (Point Cloud Data) file is produced. You can choose between binary and plain text format. The first is faster to work with, whereas the second lets you inspect the cloud by opening it with a common text editor. If you do so, you may see something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;# .PCD v.7 - Point Cloud Data file format&lt;br /&gt;
VERSION .7&lt;br /&gt;
FIELDS x y z&lt;br /&gt;
SIZE 4 4 4&lt;br /&gt;
TYPE F F F&lt;br /&gt;
WIDTH 2&lt;br /&gt;
HEIGHT 1&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 2&lt;br /&gt;
DATA ascii&lt;br /&gt;
0.73412 -1.12643 0.82218&lt;br /&gt;
0.44739 -0.34735 -0.04624&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells us that the cloud consists of 2 points of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; (because each point has only 3 fields: the X, Y and Z coordinates, each stored in a 4-byte float), and lists their coordinates. It also stores the viewpoint information (the pose of the sensor relative to the cloud) as a translation plus a rotation quaternion. Knowing the position of the camera can be useful for certain procedures like normal estimation.&lt;br /&gt;
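The header is simple enough to read by hand. As an illustration of the format (in a real program you would rely on PCL's own readers; the helper name here is made up), the key/value lines can be collected like this:&lt;br /&gt;

```cpp
#include <map>
#include <sstream>
#include <string>

// Parses the key/value lines of a PCD header into a map,
// e.g. header["POINTS"] == "2". Comment lines ("#") are skipped,
// and parsing stops after the DATA line, where the points begin.
std::map<std::string, std::string> parsePcdHeader(const std::string& text)
{
	std::map<std::string, std::string> header;
	std::istringstream stream(text);
	std::string line;
	while (std::getline(stream, line))
	{
		if (line.empty() || line[0] == '#')
			continue; // skip comments
		std::istringstream fields(line);
		std::string key;
		fields >> key;
		std::string value = line.substr(key.size());
		if (!value.empty() && value[0] == ' ')
			value = value.substr(1); // trim the separating space
		header[key] = value;
		if (key == "DATA")
			break; // point data starts after this line
	}
	return header;
}
```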
&lt;br /&gt;
===Reading from file===&lt;br /&gt;
&lt;br /&gt;
Reading a point cloud from a PCD file and storing it in memory can be done with a single function call. You must specify the correct point type. For example, for .pcd files saved with the OpenNI viewer application that we saw in the [[PCL/OpenNI tutorial 1: Installing and testing | previous tutorial]], you could use either &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGBA&amp;quot;&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; if you are not going to use the color information.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I have not bothered to include proper error/argument checking, but you should do it in your programs. Be sure to check the [http://docs.pointclouds.org PCL API] every time I introduce you to one of its classes or methods.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/reading_pcd.php Reading Point Cloud data from PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_read.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Writing to file===&lt;br /&gt;
&lt;br /&gt;
Saving a point cloud to a file is as easy as reading one. You can choose between plain readable (ASCII) and binary format. The first can be read and edited with a text editor; the second is faster.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Write it back to disk under a different name.&lt;br /&gt;
	// Another possibility would be &amp;quot;savePCDFileBinary()&amp;quot;.&lt;br /&gt;
	pcl::io::savePCDFileASCII(&amp;quot;output.pcd&amp;quot;, *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/writing_pcd.php Writing Point Cloud data to PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_write.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you want a better alternative for loading or saving PCD files (some of the functions we saw are a bit outdated, but they do the job), take a look at the [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_reader.html PCDReader] and [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_writer.html PCDWriter] classes.&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
Of course, PCL ships with a dedicated program to load and visualize point clouds and a lot of other stuff, &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''pcl_viewer''&amp;lt;/span&amp;gt;. You can run it from a terminal with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_viewer &amp;lt;file 1&amp;gt; [file 2] ... [file N]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you load several files, each cloud will be rendered with a different color to help you tell them apart. Internally, the program uses the [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_visualizer.html pcl::visualization::PCLVisualizer] class. Press the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''H'''&amp;lt;/span&amp;gt; key while it is running to print the help on the terminal. Keys I tend to find useful are &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''1'''&amp;lt;/span&amp;gt;, which chooses a different color for the cloud (useful if the current one is too hard to see), &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''R'''&amp;lt;/span&amp;gt;, which resets the camera, and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''J'''&amp;lt;/span&amp;gt;, which saves a PNG snapshot.&lt;br /&gt;
&lt;br /&gt;
In your own code, it is possible to iterate through the cloud and print the information of each point to console, but this is cumbersome. Sometimes you just want to visualize the cloud to check something. PCL can create a window for you and display it, letting users pan and zoom their way around. The simplest method is the following one:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Cloud viewer object. You can set the window title.&lt;br /&gt;
	pcl::visualization::CloudViewer viewer(argv[1]);&lt;br /&gt;
	viewer.showCloud(cloud);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		// Do nothing but wait.&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CloudViewer&amp;quot;&amp;lt;/span&amp;gt; has less features than &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLVisualizer&amp;quot;&amp;lt;/span&amp;gt; but it is easier to set up. The program will remain active until you close the window with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt; or press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/cloud_viewer.php The CloudViewer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/pcl_visualizer.php PCLVisualizer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/walkthrough.php Visualization library overview]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_cloud_viewer.html pcl::visualization::CloudViewer]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_visualize_cloud.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Concatenating two clouds==&lt;br /&gt;
&lt;br /&gt;
There are two types of concatenation with point clouds:&lt;br /&gt;
&lt;br /&gt;
* '''Points''':  You initially have two clouds, &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;, with &amp;quot;N&amp;quot; and &amp;quot;M&amp;quot; points each. A third cloud is created (&amp;quot;C&amp;quot;), that has &amp;quot;N + M&amp;quot; points (all points of the first two clouds). For this to work, the point type must be the same for all clouds (&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt;, or whatever).&lt;br /&gt;
* '''Fields''': With this, you can join the fields of two clouds with different types. The only constraint is that both must have the same number of points. For example, if you have a cloud of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; type and another of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type with &amp;quot;N&amp;quot; points each, you can concatenate them to create a cloud of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointNormal&amp;quot;&amp;lt;/span&amp;gt; (that stores X, Y and Z coordinates, plus the normal) that also has &amp;quot;N&amp;quot; points.&lt;br /&gt;
&lt;br /&gt;
Point concatenation is the easier of the two because you can use the addition operator (+), which is overloaded for cloud objects.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudC(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Create cloud &amp;quot;C&amp;quot;, with the points of both &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;.&lt;br /&gt;
	*cloudC = (*cloudA) + (*cloudB);&lt;br /&gt;
	// Now cloudC-&amp;gt;points.size() equals cloudA-&amp;gt;points.size() + cloudB-&amp;gt;points.size().&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_clouds.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Field concatenation is done with a function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudPoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr cloudNormals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudPoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Compute the normals of the cloud (do not worry, we will see this later).&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloudPoints);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.05);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*cloudNormals);&lt;br /&gt;
&lt;br /&gt;
	// Concatenate the fields (PointXYZ + Normal = PointNormal).&lt;br /&gt;
	pcl::concatenateFields(*cloudPoints, *cloudNormals, *cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Print the data to standard output.&lt;br /&gt;
	for (size_t currentPoint = 0; currentPoint &amp;lt; cloudAll-&amp;gt;points.size(); currentPoint++)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tXYZ:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].x &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].y &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tNormal:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[0] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[1] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[2] &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__common.html#gac6add803f86fd16a998dce541e9ef402 pcl::concatenateFields()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_fields.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Matrix transformations==&lt;br /&gt;
&lt;br /&gt;
Clouds can be directly manipulated with [http://en.wikipedia.org/wiki/Transformation_matrix transformations] (translation, rotation, scaling). 3D transformations use a 4x4 matrix, which can be a little complicated to understand (especially when it comes to rotations), so consult a book or [http://web.iitd.ac.in/~hegde/cad/lecture/L6_3dtrans.pdf tutorial] before trying to use one. The transformed cloud will be saved to another object, so you can keep both versions and use them as you wish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/transforms.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr transformed(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Transformation matrix object, initialized to the identity matrix&lt;br /&gt;
	// (a null transformation).&lt;br /&gt;
	Eigen::Matrix4f transformation = Eigen::Matrix4f::Identity();&lt;br /&gt;
&lt;br /&gt;
	// Set a rotation around the Z axis.&lt;br /&gt;
	float theta = M_PI / 2; // 90 degrees.&lt;br /&gt;
	transformation(0, 0) = cos(theta);&lt;br /&gt;
	transformation(0, 1) = -sin(theta);&lt;br /&gt;
	transformation(1, 0) = sin(theta);&lt;br /&gt;
	transformation(1, 1) = cos(theta);&lt;br /&gt;
&lt;br /&gt;
	// Set a translation on the X axis.&lt;br /&gt;
	transformation(0, 3) = 1.0f; // 1 meter (positive direction).&lt;br /&gt;
&lt;br /&gt;
	pcl::transformPointCloud(*cloud, *transformed, transformation);&lt;br /&gt;
&lt;br /&gt;
	// Visualize both the original and the result.&lt;br /&gt;
	pcl::visualization::PCLVisualizer viewer(argv[1]);&lt;br /&gt;
	viewer.addPointCloud(cloud, &amp;quot;original&amp;quot;);&lt;br /&gt;
	// The transformed one's points will be red in color.&lt;br /&gt;
	pcl::visualization::PointCloudColorHandlerCustom&amp;lt;pcl::PointXYZ&amp;gt; colorHandler(transformed, 255, 0, 0);&lt;br /&gt;
	viewer.addPointCloud(transformed, colorHandler, &amp;quot;transformed&amp;quot;);&lt;br /&gt;
	// Add 3D colored axes to help see the transformation.&lt;br /&gt;
	viewer.addCoordinateSystem(1.0, &amp;quot;reference&amp;quot;, 0);&lt;br /&gt;
&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Cloud_transformation.png|thumb|center|600px|Rotation and translation applied to a cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the other overloads of the function, because you can also use an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Affine3&amp;quot;&amp;lt;/span&amp;gt; or an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Quaternion&amp;quot;&amp;lt;/span&amp;gt; instead of an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Matrix4&amp;quot;&amp;lt;/span&amp;gt; object.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Transformation&lt;br /&gt;
* '''Output''': Transformed points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/matrix_transform.php Using a matrix to transform a point cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#aefbe4956d1c8fb785a97df6708d57c56 pcl::transformPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_matrix_transform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Using indices==&lt;br /&gt;
&lt;br /&gt;
Many of PCL's algorithms return indices: a list that references some of the points of a cloud (not the points themselves with all their data, but their positions in the cloud). For example, a cylinder segmentation algorithm will output the list of points that were considered to fit the cylinder model. You can then use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::ExtractIndices&amp;quot;&amp;lt;/span&amp;gt; class to extract the indexed points into a second cloud for further work. Its &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; function will instruct it to extract the points that are NOT indexed (so you can, for example, remove all cylinder-shaped clusters from the cloud).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/extract_indices.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudExtracted(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudAll) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Plane segmentation (do not worry, we will see this later).&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	segmentation.setInputCloud(cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the indices.&lt;br /&gt;
	pcl::PointIndices::Ptr pointIndices(new pcl::PointIndices);&lt;br /&gt;
&lt;br /&gt;
	segmentation.segment(*pointIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	// Object for extracting points from a list of indices.&lt;br /&gt;
	pcl::ExtractIndices&amp;lt;pcl::PointXYZ&amp;gt; extract;&lt;br /&gt;
	extract.setInputCloud(cloudAll);&lt;br /&gt;
	extract.setIndices(pointIndices);&lt;br /&gt;
	// We will extract the points that are NOT indexed (the ones that are not in a plane).&lt;br /&gt;
	extract.setNegative(true);&lt;br /&gt;
	extract.filter(*cloudExtracted);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Indices, [Negative mode]&lt;br /&gt;
* '''Output''': Points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/extract_indices.php Extracting indices from a PointCloud]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_point_indices.html pcl::PointIndices]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_extract_indices.html pcl::ExtractIndices]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_extract_indices.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Removing NaNs==&lt;br /&gt;
&lt;br /&gt;
The cloud retrieved from the sensor may contain several kinds of measurement errors and/or inaccuracies. One of them is the presence of [http://en.wikipedia.org/wiki/NaN NaN (Not a Number)] values in the coordinates of some points, as you can see in the following file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;# .PCD v0.7 - Point Cloud Data file format&lt;br /&gt;
VERSION 0.7&lt;br /&gt;
FIELDS x y z rgba&lt;br /&gt;
SIZE 4 4 4 4&lt;br /&gt;
TYPE F F F U&lt;br /&gt;
COUNT 1 1 1 1&lt;br /&gt;
WIDTH 640&lt;br /&gt;
HEIGHT 480&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 307200&lt;br /&gt;
DATA ascii&lt;br /&gt;
nan nan nan 10135463&lt;br /&gt;
nan nan nan 10398635&lt;br /&gt;
nan nan nan 10070692&lt;br /&gt;
nan nan nan 10268071&lt;br /&gt;
...&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cloud objects have a public boolean member called &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;is_dense&amp;quot;&amp;lt;/span&amp;gt; that will be true if all points have valid, finite values. A NaN indicates that there was a problem measuring the distance to that point, maybe because the sensor was too close or too far, or because the surface was reflective.&lt;br /&gt;
&lt;br /&gt;
Sadly, many of PCL's algorithms will fail if even a single NaN is present in the cloud you feed them as input, with a message like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Assertion `point_representation_-&amp;gt;isValid (point) &amp;amp;&amp;amp; &amp;quot;Invalid (NaN, Inf) point coordinates given to radiusSearch!&amp;quot;' failed.&amp;quot;''&amp;lt;/span&amp;gt;. If this happens, you will have to remove such values from the cloud. You can use a program like the following to &amp;quot;clean&amp;quot; a cloud from all NaN points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/filter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	if (argc != 3)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tUsage: &amp;quot; &amp;lt;&amp;lt; argv[0] &amp;lt;&amp;lt; &amp;quot; &amp;lt;input cloud&amp;gt; &amp;lt;output cloud&amp;gt;&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// The mapping tells you to what points of the old cloud the new ones correspond,&lt;br /&gt;
	// but we will not use it.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; mapping;&lt;br /&gt;
	pcl::removeNaNFromPointCloud(*cloud, *cloud, mapping);&lt;br /&gt;
&lt;br /&gt;
	// Save it back.&lt;br /&gt;
	pcl::io::savePCDFileASCII(argv[2], *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem with this method is that it will not keep the cloud organized. All clouds store a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;width&amp;quot;&amp;lt;/span&amp;gt; and a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;height&amp;quot;&amp;lt;/span&amp;gt; variable. In ''unorganized'' clouds, the width equals the total number of points and the height is set to 1. In ''organized'' clouds (like the ones taken from camera-like sensors such as Kinect or Xtion), the width and height match the resolution of the image the sensor works with. Points are laid out in rows like in the depth image, and each one corresponds to a pixel. The member function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;isOrganized()&amp;quot;&amp;lt;/span&amp;gt; will return true if the height is greater than 1.&lt;br /&gt;
&lt;br /&gt;
Because removing NaNs will change the number of points of the cloud, it can no longer be kept organized with the original width/height ratio, so the function will set the height to 1. This is not a big issue, as only a few of PCL's algorithms work explicitly with organized clouds (and they mostly use it for optimization), but you must take it into account.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Cloud&lt;br /&gt;
* '''Output''': Cloud without NaNs&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__filters.html#gac463283a9e9c18a66d3d29b28a575064 pcl::removeNaNFromPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_remove_NaNs.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Saving a PNG==&lt;br /&gt;
&lt;br /&gt;
PCL offers the possibility of saving the values of a point cloud to a PNG image file. Obviously, this can only be done with organized clouds, because the rows and columns of the resulting image will correspond exactly to the ones of the cloud. For example, if you have a point cloud taken from a sensor like Kinect or Xtion, you can use this to retrieve the 640x480 RGB image that matches the cloud.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/png_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGB&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Save the image (cloud must be organized).&lt;br /&gt;
	pcl::io::savePNGFile(&amp;quot;output.png&amp;quot;, *cloud, &amp;quot;rgb&amp;quot;);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;savePNGFile()&amp;quot;&amp;lt;/span&amp;gt; has several overloads; in this case, we have chosen the one that lets you specify which fields to save. If you omit that parameter, the function will save the RGB fields by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__io.html#gabd03e43fac3048635512f447c8bc50b3 pcl::io::savePNGFile()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_save_PNG.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computing the centroid==&lt;br /&gt;
&lt;br /&gt;
The [http://en.wikipedia.org/wiki/Centroid centroid] of a cloud is a point with coordinates that result from computing the mean of the values of all points in the cloud. You could say it is the &amp;quot;center of mass&amp;quot;, and it has several uses for certain algorithms. If you intend to compute the actual center of gravity of a [[PCL/OpenNI_tutorial_3:_Cloud_processing_%28advanced%29#Segmentation|clustered]] object though, remember that the sensor does not retrieve the part of the object that is hidden from the camera, like the back faces that are occluded by the front ones, or the inside. You only have the part of the surface that faces the camera.&lt;br /&gt;
&lt;br /&gt;
The centroid of a cloud can be computed with a single function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/centroid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object to store the centroid coordinates.&lt;br /&gt;
	Eigen::Vector4f centroid;&lt;br /&gt;
	&lt;br /&gt;
	pcl::compute3DCentroid(*cloud, centroid);&lt;br /&gt;
&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;The XYZ coordinates of the centroid are: (&amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[0] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[1] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[2] &amp;lt;&amp;lt; &amp;quot;).&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#a23daec3829d2d4100a2f185372b3753a pcl::compute3DCentroid()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_compute_centroid.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Feature estimation=&lt;br /&gt;
&lt;br /&gt;
A ''feature'' of a point is some characteristic that can help us tell it apart from a different point. Of course, you could say that the XYZ coordinates are characteristic enough, but when matching two different, yet corresponding, clouds, they become useless. We need something better, something that will have similar values when computed for the same spot of a surface in both sets.&lt;br /&gt;
&lt;br /&gt;
Normals are an example of a very simple feature. 3D [http://pointclouds.org/documentation/tutorials/how_features_work.php features] will be important later in our tutorials, when we talk about [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]] (more detailed &amp;quot;signatures&amp;quot; of points used for close matching).&lt;br /&gt;
&lt;br /&gt;
==Normal estimation==&lt;br /&gt;
&lt;br /&gt;
As you may remember from geometry class, the [http://en.wikipedia.org/wiki/Surface_normal normal] of a plane is a unit vector that is perpendicular to it. The normal of a surface at a point is defined as the vector that is perpendicular to the plane that is tangent to the surface at that point. Surface normals can be calculated for the points of a cloud, too. They are considered a feature, albeit not a very discriminative one.&lt;br /&gt;
&lt;br /&gt;
I will not go into detail with the math of the estimation method, but you just have to know that it uses the nearest neighbors (the points that are closest to the one we are calculating the normal for) to find out the tangent plane and the normal vector. You can customize the method with the search radius (think about a sphere of that radius, centered on the point; all neighboring points that lie within will be used for the computation) and the viewpoint (by default, the output normals will be directionless; by supposing that all vectors must point towards the camera - because otherwise they would belong to surfaces that are not visible from the sensor - they can all be re-oriented accordingly).&lt;br /&gt;
&lt;br /&gt;
Normals are also important because they give us information about the ''curvature'' of the surface at some point, which can be used to our advantage. Many of PCL's algorithms will require us to provide the normals of the input cloud. To estimate them, you can use the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// For every point, use all neighbors in a radius of 3cm.&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	// A kd-tree is a data structure that makes searches efficient. More about it later.&lt;br /&gt;
	// The normal estimation object will use it to find nearest neighbors.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, normals are stored in &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointCloud&amp;quot;&amp;lt;/span&amp;gt; objects too, instantiated to the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type. If you ever extract some indices from a cloud before feeding them to an algorithm, be sure to do the same with the normals. I have not made use of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setViewPoint()&amp;quot;&amp;lt;/span&amp;gt; function, but you get the idea.&lt;br /&gt;
&lt;br /&gt;
Instead of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setRadiusSearch()&amp;quot;&amp;lt;/span&amp;gt;, you can make use of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setKSearch()&amp;quot;&amp;lt;/span&amp;gt;, which takes an integer, ''K''. This will take into account a point's ''K'' nearest neighbors to compute the normal. Bear in mind that this can produce odd results for points in sparse areas, as some of the &amp;quot;neighbors&amp;quot; will be too far away.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Normal_estimation.png|thumb|center|600px|Visualization of the normals computed for a point cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Radius | Neighbor count, [Indices], [Viewpoint]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation.php Estimating Surface Normals in a Point Cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_normal_estimation.html pcl::NormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Integral images===&lt;br /&gt;
&lt;br /&gt;
Integral images are a method for normal estimation on organized clouds. The algorithm sees the cloud as a depth image, and creates certain rectangular areas over which the normals are computed, by taking into account the relationship between neighboring &amp;quot;pixels&amp;quot; (points), without the need to run lookups on a search structure like a tree. Because of this, it is very efficient and the normals are usually computed in a split second. If you are performing real time computations on clouds taken from a RGB-D sensor, this is the approach you should use because the previous method will probably take several seconds to finish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/integral_image_normal.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::IntegralImageNormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// Other estimation methods: COVARIANCE_MATRIX, AVERAGE_DEPTH_CHANGE, SIMPLE_3D_GRADIENT.&lt;br /&gt;
	// They determine the smoothness of the result, and the running time.&lt;br /&gt;
	normalEstimation.setNormalEstimationMethod(normalEstimation.AVERAGE_3D_GRADIENT);&lt;br /&gt;
	// Depth threshold for computing object borders based on depth changes, in meters.&lt;br /&gt;
	normalEstimation.setMaxDepthChangeFactor(0.02f);&lt;br /&gt;
	// Factor that influences the size of the area used to smooth the normals.&lt;br /&gt;
	normalEstimation.setNormalSmoothingSize(10.0f);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Estimation method, [Smoothing factor], [Depth threshold], [Rectangle size], [Border policy], [Depth dependent smoothing]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation_using_integral_images.php Normal Estimation Using Integral Images]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://islab.ulsan.ac.kr/files/announcement/411/Adaptive%20Neighborhood%20Selection%20for%20Real-Time%20Surface%20Normal.pdf Adaptive Neighborhood Selection for Real-Time Surface Normal Estimation from Organized Point Cloud Data Using Integral Images] (S. Holzer et al., 2012)&lt;br /&gt;
** [https://www.willowgarage.com/sites/default/files/holz_rgbd_seg.pdf Real-Time Plane Segmentation using RGB-D Cameras] (Dirk Holz et al., 2012)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_integral_image_normal_estimation.html pcl::IntegralImageNormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals_integral.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Decomposition=&lt;br /&gt;
&lt;br /&gt;
When I talk about ''decomposition'', I mean the process of creating an ordered, organized data structure from the points of the cloud, with some purpose in mind like making further analysis easier, or performing filters.&lt;br /&gt;
&lt;br /&gt;
==''k''-d tree==&lt;br /&gt;
&lt;br /&gt;
A [http://en.wikipedia.org/wiki/K-d_tree ''k''-d tree] (''k''-dimensional tree) is a data structure that organizes a set of points in a ''k''-dimensional space, in a way that makes range search operations very efficient (for example, finding the nearest neighbor of a point, which is the point that is closest to it in space; or finding all neighbors within a radius).&lt;br /&gt;
&lt;br /&gt;
It is a binary tree, that is, every non-leaf node has two child nodes, the &amp;quot;left&amp;quot; and &amp;quot;right&amp;quot; ones. Each level splits space on a specific dimension. For example, in 3-dimensional space, the root node (first level) would split the points based on the first dimension, X (points with greater X values would go to the right subtree, points with lesser values to the left one). At the second level (the nodes we just created), the split would be done on the Y axis, following the same criterion. At the third level (the grandchildren), we would use the Z axis. At the fourth level, we would get back to the X axis, and so on. Usually, the median point along the splitting axis is chosen as the node at each level, which keeps the tree balanced.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Kdtree.png|thumb|center|700px|Example of a 2D ''k''-d tree (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In PCL, creating a ''k''-d tree from a cloud is a straightforward and fast operation.&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
The following code snippet shows you how to use a ''k''-d tree to perform efficient searches:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/search/kdtree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// kd-tree object.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt; kdtree;&lt;br /&gt;
	kdtree.setInputCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of this point&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices(5);&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances(5);&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (kdtree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	if (kdtree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another alternative would be &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;KdTreeFLANN&amp;quot;&amp;lt;/span&amp;gt;, which makes use of [http://www.cs.ubc.ca/research/flann/ FLANN] (Fast Library for Approximate Nearest Neighbors).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/kdtree_search.php How to use a KdTree to search]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_kd_tree.html pcl::search::KdTree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_kd_tree_f_l_a_n_n.html pcl::KdTreeFLANN]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_kdtree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Octree==&lt;br /&gt;
&lt;br /&gt;
Like the ''k''-d tree, the [http://en.wikipedia.org/wiki/Octree octree] is a hierarchical tree data structure useful not only for searches, but also for compression and downsampling. Each octree node (called a ''voxel'', a term that will sound familiar to users of 3D engines, and which could be described as a &amp;quot;3D pixel&amp;quot;, that is, a cube) has either eight children or none. The root node describes a cubic bounding box that encapsulates all points. At every level down the tree, each voxel is subdivided by a factor of 2 along each axis, which doubles the&lt;br /&gt;
voxel resolution. That way, we can selectively give more resolution to certain zones.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Octree.png|thumb|center|700px|Depiction of an octree (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
Downsampling will be covered in a different section, so now I will show you how to perform searches with an octree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree object, with a resolution of 128&lt;br /&gt;
	// (resolution at lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudSearch&amp;lt;pcl::PointXYZ&amp;gt; octree(128);&lt;br /&gt;
	octree.setInputCloud(cloud);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	// We will find all neighbors within the same voxel as this point.&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices;&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (octree.voxelSearch(point, pointIndices))&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors in the same voxel:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of the point.&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances;&lt;br /&gt;
	if (octree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	// The point DOES have to belong to the cloud.&lt;br /&gt;
	if (octree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, the code is mostly interchangeable with that of the ''k''-d tree.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree.php Spatial Partitioning and Search Operations with Octrees]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_octree.html pcl::search::Octree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_search.html pcl::octree::OctreePointCloudSearch]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Compressing/Decompressing===&lt;br /&gt;
&lt;br /&gt;
Searching is not the only thing octrees are good for. This is the code needed to compress and decompress a point cloud with PCL:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/compression/octree_pointcloud_compression.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr decompressedCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree compressor object.&lt;br /&gt;
	// Check /usr/include/pcl-&amp;lt;version&amp;gt;/pcl/compression/compression_profiles.h for more profiles.&lt;br /&gt;
	// The second parameter enables the output of information.&lt;br /&gt;
	pcl::io::OctreePointCloudCompression&amp;lt;pcl::PointXYZ&amp;gt; octreeCompression(pcl::io::MED_RES_ONLINE_COMPRESSION_WITHOUT_COLOR, true);&lt;br /&gt;
	// Stringstream that will hold the compressed cloud.&lt;br /&gt;
	std::stringstream compressedData;&lt;br /&gt;
&lt;br /&gt;
	// Compress the cloud (you would save the stream to disk).&lt;br /&gt;
	octreeCompression.encodePointCloud(cloud, compressedData);&lt;br /&gt;
&lt;br /&gt;
	// Decompress the cloud.&lt;br /&gt;
	octreeCompression.decodePointCloud(compressedData, decompressedCloud);&lt;br /&gt;
&lt;br /&gt;
	// Display the decompressed cloud.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Octree compression&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(decompressedCloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeCompression.png | Original point cloud.&lt;br /&gt;
File:AfterCompression.png | Point cloud retrieved after octree compression.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the process has reduced the number of points in the cloud. The default parameters of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;OctreePointCloudCompression&amp;quot;&amp;lt;/span&amp;gt; object establish an octree resolution (at the lowest level) of 1cm, and a coordinate precision of 1mm. These values can of course be changed to suit your needs, but simplifying the cloud in this way is a useful operation that we will cover later.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Resolution, Compression profile&lt;br /&gt;
* '''Output''': Compressed stream&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/compression.php Point Cloud Compression]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1io_1_1_octree_point_cloud_compression.html pcl::io::OctreePointCloudCompression]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_compress.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Detecting changes===&lt;br /&gt;
&lt;br /&gt;
PCL has an octree-based object that lets you detect changes between two point clouds (for example, two clouds taken one after another with an OpenNI device). This object stores the trees of both clouds at the same time and compares them, returning a list of indices of the points that are new in the second cloud (that is, points that did not exist in the first):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Change detector object, with a resolution of 128&lt;br /&gt;
	// (resolution at lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudChangeDetector&amp;lt;pcl::PointXYZ&amp;gt; octree(128.0f);&lt;br /&gt;
&lt;br /&gt;
	// Add cloudA to the octree.&lt;br /&gt;
	octree.setInputCloud(cloudA);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
	// The change detector object is able to store two clouds at the same time;&lt;br /&gt;
	// with this we can reset the buffer but keep the previous tree saved.&lt;br /&gt;
	octree.switchBuffers();&lt;br /&gt;
	// Now, add cloudB to the octree like we did before.&lt;br /&gt;
	octree.setInputCloud(cloudB);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; newPoints;&lt;br /&gt;
	// Get a vector with the indices of all points that are new in cloud B,&lt;br /&gt;
	// when compared with the ones that existed in cloud A.&lt;br /&gt;
	octree.getPointIndicesFromNewVoxels(newPoints);&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; newPoints.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point (&amp;quot; &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].x &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].y &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].z&lt;br /&gt;
				  &amp;lt;&amp;lt; &amp;quot;) was not in cloud A but is in cloud B&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; newPoints.size() &amp;lt;&amp;lt; &amp;quot; new points in total.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
} &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': First cloud, Second cloud, Resolution&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree_change.php Spatial change detection on unorganized point cloud data]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_change_detector.html pcl::octree::OctreePointCloudChangeDetector]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_change_detection.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Filtering=&lt;br /&gt;
&lt;br /&gt;
The raw point cloud, as returned from the sensor, usually needs some kind of polishing before being handed to processing algorithms. Normally, this is due to the low precision of the sensor, which leads to noise, incorrect measurements, holes in surfaces, and so on. Sometimes the problem is just the opposite: the cloud has so many points that processing it would take forever. Luckily, several filtering methods exist to &amp;quot;fix&amp;quot; many recurring errors in point clouds.&lt;br /&gt;
&lt;br /&gt;
==Passthrough filter==&lt;br /&gt;
&lt;br /&gt;
A passthrough filter will remove any point from the cloud whose value in a given field does not fall inside a user-given range. For example, if you wanted to discard all points farther away than, say, 3 meters, you would run the filter on the Z coordinate with a range of [0, 3]. This filter can be useful to discard unneeded objects from the cloud, but you may have to transform the cloud to a different reference frame if the default one (relative to the sensor) is not suitable. For example, filtering on the Y value to remove all points not sitting on a table will yield unwanted results if the camera was at an odd angle.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/passthrough.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::PassThrough&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Filter out all points with Z values not in the [0-2] range.&lt;br /&gt;
	filter.setFilterFieldName(&amp;quot;z&amp;quot;);&lt;br /&gt;
	filter.setFilterLimits(0.0, 2.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforePassthrough.png | Original point cloud.&lt;br /&gt;
File:AfterPassthrough.png | Point cloud after applying a passthrough filter with Z in [0,2].&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setFilterLimitsNegative()&amp;quot;&amp;lt;/span&amp;gt; function (not seen in the example) will tell the filter object whether to return the points inside the range (&amp;quot;false&amp;quot;, the default value) or outside it (&amp;quot;true&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Filter field, Filter limits, [Negative mode]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/passthrough.php Filtering a PointCloud using a PassThrough filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_pass_through.html pcl::PassThrough]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_passthrough_filter.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conditional removal==&lt;br /&gt;
&lt;br /&gt;
Another way of getting exactly the same result as in the previous example is to use a conditional removal filter. We can build any kind of condition we want on the values of the point:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/conditional_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We must build a condition.&lt;br /&gt;
	// An &amp;quot;And&amp;quot; condition requires all individual tests to be true. &amp;quot;Or&amp;quot; conditions are also available.&lt;br /&gt;
	pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;::Ptr condition(new pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// First test, the point's Z value must be greater than (GT) 0.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::GT, 0.0)));&lt;br /&gt;
	// Second test, the point's Z value must be less than (LT) 2.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::LT, 2.0)));&lt;br /&gt;
	// Checks available: GT, GE, LT, LE, EQ.&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::ConditionalRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setCondition(condition);&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// If true, points that do not pass the filter will be set to a certain value (default NaN).&lt;br /&gt;
	// If false, they will be just removed, but that could break the structure of the cloud&lt;br /&gt;
	// (organized clouds are clouds taken from camera-like sensors that return a matrix-like image).&lt;br /&gt;
	filter.setKeepOrganized(true);&lt;br /&gt;
	// If keep-organized was set to true, points that failed the test will have their values set to this.&lt;br /&gt;
	filter.setUserFilterValue(0.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Condition, [Keep organized]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_conditional_removal.html pcl::ConditionalRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_conditional_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Outlier removal==&lt;br /&gt;
&lt;br /&gt;
Outliers are lonely points that are spread here and there in the cloud, like annoying mosquitoes. They are the product of the sensor's inaccuracy, which registers measurements where there should not be any. Outliers are considered undesired noise because they can introduce errors in calculations like, for example, normal estimation. Hence, trimming them out of the cloud will not only make computations faster (because the fewer points there are, the less time they take, as we will see with downsampling) but also help us get more precise values.&lt;br /&gt;
&lt;br /&gt;
===Radius-based===&lt;br /&gt;
&lt;br /&gt;
The radius-based outlier removal is the simplest method of all. You must specify a search radius and the minimum number of neighbors that a point must have to avoid being labelled an outlier. The algorithm will then iterate through all points (which can be slow if the cloud is big) and perform the check: if fewer than that number of neighbors are found within the radius, the point is removed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/radius_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::RadiusOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Every point must have 10 neighbors within 15cm, or it will be removed.&lt;br /&gt;
	filter.setRadiusSearch(0.15);&lt;br /&gt;
	filter.setMinNeighborsInRadius(10);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All PCL classes that inherit from &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::FilterIndices&amp;quot;&amp;lt;/span&amp;gt; (like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::RadiusOutlierRemoval&amp;quot;&amp;lt;/span&amp;gt;) provide the function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt;, which, if set to true, makes the filter return the points that did NOT pass the test.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius, Neighbor count&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_radius_outlier_removal.html pcl::RadiusOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_radius_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Statistical===&lt;br /&gt;
&lt;br /&gt;
The statistical outlier removal process is a bit more refined. First, for every point, the mean distance to its ''K'' nearest neighbors is computed. Then, if we assume that the resulting distribution of mean distances is [http://en.wikipedia.org/wiki/Normal_distribution normal (gaussian)] with a mean ''μ'' and a standard deviation ''σ'', we can deem it safe to remove all points whose mean distance falls outside the interval given by the global mean plus the standard deviation times a user-given multiplier. Basically, it runs a statistical analysis of the distances between neighboring points, and trims all points which are not considered &amp;quot;normal&amp;quot; (you define what &amp;quot;normal&amp;quot; is with the parameters of the algorithm).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/statistical_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::StatisticalOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Set number of neighbors to consider to 50.&lt;br /&gt;
	filter.setMeanK(50);&lt;br /&gt;
	// Set standard deviation multiplier to 1.&lt;br /&gt;
	// Points with a distance larger than 1 standard deviation of the mean distance will be outliers.&lt;br /&gt;
	filter.setStddevMulThresh(1.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can make use of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; with this filter too.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Neighbor count, Deviation multiplier&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/statistical_outlier.php Removing outliers using a StatisticalOutlierRemoval filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_statistical_outlier_removal.html pcl::StatisticalOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_statistical_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Resampling==&lt;br /&gt;
&lt;br /&gt;
Resampling means changing the number of points of the cloud, either by reducing (downsampling) or increasing (upsampling). Both have a purpose, as we will see.&lt;br /&gt;
&lt;br /&gt;
===Downsampling===&lt;br /&gt;
&lt;br /&gt;
A sensor like the Kinect or Xtion produces a cloud with 307200 (640x480) points. Believe it or not, that is a lot of information, and the new Kinect is already out there, working at a higher resolution. Keep in mind that, if we were to perform a simple operation on every point of a cloud, it would be ''O(n)'', with ''n'' being the number of points. If we had to compare every point with its ''k'' nearest neighbors, it would be ''O(nk)''. You may have noticed by now that some of the algorithms I showed you take several seconds to run on your PC; you now know why.&lt;br /&gt;
&lt;br /&gt;
Optimization techniques are being implemented as I write this to allow PCL to employ the computer's GPU (graphic card) to perform the computations using [https://developer.nvidia.com/about-cuda CUDA]. Many classes already provide an alternative in the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::gpu&amp;quot;&amp;lt;/span&amp;gt; namespace. This offers dramatic improvements in speed, but it is not always a possibility.&lt;br /&gt;
&lt;br /&gt;
Another option is to reduce the complexity of the cloud, that is, to remove all points that are not needed for the task at hand. There are several ways to do this. You could trim all outliers from the cloud, then perform segmentation to extract only the parts that interest you. Another common operation is downsampling, which will give you back a cloud that is, for all intents and purposes, equivalent to the original one, despite having fewer points.&lt;br /&gt;
&lt;br /&gt;
Downsampling is done via ''voxelization''. I already explained what voxels were in the [[#Octree | octree section]]. The cloud is divided in multiple cube-shaped regions with the desired resolution. Then, all points inside every voxel are processed so only one remains. The simplest way would be to randomly select one of them, but a more accurate approach would be to compute the [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Computing_the_centroid|centroid]], which is the point whose coordinates are the mean values of all the points that belong to the voxel.&lt;br /&gt;
&lt;br /&gt;
If done with sensible parameters, downsampling will yield results that are precise enough to work with, while demanding much less CPU power.&lt;br /&gt;
&lt;br /&gt;
====Voxel grid====&lt;br /&gt;
&lt;br /&gt;
A straightforward way of downsampling a cloud:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/voxel_grid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::VoxelGrid&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// We set the size of every voxel to be 1x1x1cm&lt;br /&gt;
	// (only one point per every cubic centimeter will survive).&lt;br /&gt;
	filter.setLeafSize(0.01f, 0.01f, 0.01f);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeDownsampling.png | Original point cloud.&lt;br /&gt;
File:AfterDownsampling.png | Point cloud after being downsampled with a leaf size of 1cm.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Downsampled cloud&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/voxel_grid.php Downsampling a PointCloud using a VoxelGrid filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_voxel_grid.html pcl::VoxelGrid]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Uniform sampling====&lt;br /&gt;
&lt;br /&gt;
This class does essentially the same, but it outputs the indices of the points that survive the process instead. It is a common way of choosing keypoints when computing [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/uniform_sampling.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Uniform sampling object.&lt;br /&gt;
	pcl::UniformSampling&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Set the radius used to build the uniform grid: only one point&lt;br /&gt;
	// per every 1cm voxel will survive.&lt;br /&gt;
	filter.setRadiusSearch(0.01f);&lt;br /&gt;
	// We need an additional object to store the indices of surviving points.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt; keypointIndices;&lt;br /&gt;
&lt;br /&gt;
	filter.compute(keypointIndices);&lt;br /&gt;
	pcl::copyPointCloud(*cloud, keypointIndices.points, *filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_uniform_sampling.html pcl::UniformSampling]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample_uniform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Upsampling===&lt;br /&gt;
&lt;br /&gt;
Upsampling is a form of [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Reconstruction| surface reconstruction]], but I will cover it here. When you have fewer points than you would like, upsampling can help you recover the original surface(s) by [http://en.wikipedia.org/wiki/Upsampling interpolating] the points you currently have, which is just a sophisticated guess. The result will never be a hundred percent accurate, but sometimes it is the only choice. So, before downsampling your clouds, be sure to keep a copy of the original data!&lt;br /&gt;
&lt;br /&gt;
PCL uses the [http://en.wikipedia.org/wiki/Moving_least_squares Moving Least Squares (MLS)] algorithm for this. The code looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filtering object.&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Object for searching.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// Upsampling method. Other possibilites are DISTINCT_CLOUD, RANDOM_UNIFORM_DENSITY&lt;br /&gt;
	// and VOXEL_GRID_DILATION. NONE disables upsampling. Check the API for details.&lt;br /&gt;
	filter.setUpsamplingMethod(pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt;::SAMPLE_LOCAL_PLANE);&lt;br /&gt;
	// Radius around each point, where the local plane will be sampled.&lt;br /&gt;
	filter.setUpsamplingRadius(0.03);&lt;br /&gt;
	// Sampling step size. Bigger values will yield fewer (if any) new points.&lt;br /&gt;
	filter.setUpsamplingStepSize(0.02);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:PCL_upsampling_original.png | Original point cloud (coffee mug on a table).&lt;br /&gt;
File:PCL_upsampling_downsampled.png | Downsampled with a leaf size of 1cm.&lt;br /&gt;
File:PCL_upsampling_local_plane.png | Upsampled using SAMPLE_LOCAL_PLANE (upsampling radius 3cm, step size 2cm).&lt;br /&gt;
File:PCL_upsampling_uniform_density.png | Upsampled using RANDOM_UNIFORM_DENSITY (target point density of 300 per 3cm radius). &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html#ac29ad97b98353d64ce64e2ff924f7d20 setUpsamplingMethod()] API to see all upsampling methods available, and the parameters you need to set for them to work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, Upsampling method, [Upsampling parameters]&lt;br /&gt;
* '''Output''': Upsampled cloud&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_upsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Reconstruction=&lt;br /&gt;
&lt;br /&gt;
==Surface smoothing==&lt;br /&gt;
&lt;br /&gt;
As stated before, depth sensors are not very accurate, so the resulting clouds have measurement errors, outliers, holes in surfaces, etc. Surfaces can be reconstructed with an algorithm that iterates through all points and interpolates the data, trying to guess what the original surface looked like. As with upsampling, PCL uses the [http://en.wikipedia.org/wiki/Moving_least_squares MLS] algorithm and class. Performing this step is important, because the resulting cloud's normals will be more accurate.&lt;br /&gt;
&lt;br /&gt;
Surface smoothing is easily done with the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// The output will also contain the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr smoothedCloud(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Smoothing object (we choose what point types we want as input and output).&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointNormal&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// If true, the surface and normal are approximated using a polynomial estimation&lt;br /&gt;
	// (if false, only a tangent one).&lt;br /&gt;
	filter.setPolynomialFit(true);&lt;br /&gt;
	// We can tell the algorithm to also compute smoothed normals (optional).&lt;br /&gt;
	filter.setComputeNormals(true);&lt;br /&gt;
	// kd-tree object for performing searches.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*smoothedCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output cloud will also contain the normals, which are more accurate because they are computed on the smoothed surface.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, [Polynomial fit], [Compute normals]&lt;br /&gt;
* '''Output''': Smoothed cloud, [Smoothed normals]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/resampling.php Smoothing and normal estimation based on polynomial reconstruction]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_surface_smoothing.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4737</id>
		<title>PCL/OpenNI tutorial 2: Cloud processing (basic)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4737"/>
				<updated>2015-11-01T23:01:21Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Point_cloud.png|thumb|right|200px|Rendering of a point cloud, showing a desktop.]]&lt;br /&gt;
&lt;br /&gt;
All right: in the [[PCL/OpenNI tutorial 1: Installing and testing| previous tutorial]] you installed OpenNI and PCL. You managed to get your device working. You compiled and ran the example program, watched as it fetched one frame after another, and even saved a couple of point clouds to disk. Now you may be wondering: &amp;quot;What now? What are depth sensors useful for? What can I do with point clouds?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Well, depth cameras have a lot of useful applications. For example, you could use them as [http://tech.integrate.biz/kinect_mocap.htm motion tracking] hardware and save yourself thousands of dollars in professional gear. The original Kinect was actually designed as an alternative to physical controllers (e.g., gamepads) in games. [https://frantracerkinectft.codeplex.com/ Hand gestures] can also be recognized, so people have successfully implemented control interfaces similar to the ones seen in Minority Report.&lt;br /&gt;
&lt;br /&gt;
Working low-level with point clouds offers interesting possibilities too. As they are essentially a 3D &amp;quot;scan&amp;quot; of the scene, you could capture your whole room as a [http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php continuous 3D mesh], then import it into your modelling software of choice and use it as a reference for a better reconstruction. You could use it to measure distances or detect obstacles for a robot navigation system. But the most exciting alternative by far, and the one most research is focused on, is performing 3D object recognition and pose estimation, even in [https://www.youtube.com/watch?feature=player_detailpage&amp;amp;v=quGhaggn3cQ#t=349 real time]. This would allow a robot to fulfill tasks like going to the kitchen to get you a cup of coffee.&lt;br /&gt;
&lt;br /&gt;
Most of the advanced stuff that can be done with a point cloud requires some previous steps, like filtering, reconstruction or normal estimation. In this tutorial, I will introduce you to the basics of point cloud processing, and leave the complicated methods for the [[PCL/OpenNI tutorial 3: Cloud processing (advanced) | next tutorial]]. I will explain what every technique does and what it should be used for, and include PCL code snippets so you can check how to implement it in your programs.&lt;br /&gt;
&lt;br /&gt;
=Point clouds=&lt;br /&gt;
&lt;br /&gt;
First, a little explanation. A point cloud as taken from a depth sensor consists of a series of points in 3D space, as simple as that. A &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::PointCloud&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; object stores the points inside a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;std::vector&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; structure. The cloud object exposes some functions that let you get information about them, like the point count. As you can see, the class is templated so you can instantiate it for many types of points. For example, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, which stores three floats for the X, Y and Z coordinates, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGB&amp;quot;&amp;lt;/span&amp;gt;, which also stores color (texture) information for each point, the kind we would get with Kinect or Xtion RGB-D cameras. A list of [http://pointclouds.org/documentation/tutorials/adding_custom_ptype.php#what-pointt-types-are-available-in-pcl point types] and a basic description of the [http://pointclouds.org/documentation/tutorials/basic_structures.php cloud object] are available on PCL's website.&lt;br /&gt;
&lt;br /&gt;
Clouds will usually be handled with pointers, as they tend to be large. PCL uses the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;boost::shared_ptr&amp;quot;&amp;lt;/span&amp;gt; class from the [http://www.boost.org/ Boost C++ libraries], which keeps track of the references to a certain object. Every time a copy of a shared pointer is created, a reference counter is automatically increased; when a copy is destroyed, it is decreased. If the counter ever reaches 0, no one is holding an active reference to the object anymore, and its destructor is called to free the memory. This can take a while to get used to, and you will be mixing cloud objects and cloud object pointers until then, but do not worry.&lt;br /&gt;
&lt;br /&gt;
==PCD files==&lt;br /&gt;
&lt;br /&gt;
When clouds are saved to disk, a [http://pointclouds.org/documentation/tutorials/pcd_file_format.php PCD] (Point Cloud Data) file is produced. You can choose to do so in binary or plain text format. The first is faster to work with, whereas the second lets you inspect the cloud by opening it with a common text editor. If you do so, you may see something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;# .PCD v.7 - Point Cloud Data file format&lt;br /&gt;
VERSION .7&lt;br /&gt;
FIELDS x y z&lt;br /&gt;
SIZE 4 4 4&lt;br /&gt;
TYPE F F F&lt;br /&gt;
WIDTH 2&lt;br /&gt;
HEIGHT 1&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 2&lt;br /&gt;
DATA ascii&lt;br /&gt;
0.73412 -1.12643 0.82218&lt;br /&gt;
0.44739 -0.34735 -0.04624&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells us that the cloud consists of 2 points of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; (because each point has only 3 fields, the X, Y and Z coordinates, each stored as a 4-byte float), and lists their coordinates. It also stores the viewpoint information (the point of view from which the cloud was acquired) as a translation plus a rotation quaternion. Knowing the position of the camera can be useful for certain procedures like normal estimation.&lt;br /&gt;
&lt;br /&gt;
===Reading from file===&lt;br /&gt;
&lt;br /&gt;
Reading a point cloud from a PCD file and storing it in memory can be done with a single function call. You must specify the correct point type. For example, for .pcd files saved with the OpenNI viewer application that we saw in the [[PCL/OpenNI tutorial 1: Installing and testing | previous tutorial]], you could use either &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGBA&amp;quot;&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; if you are not going to use the color information.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I have not bothered to include proper error/argument checking, but you should do it in your programs. Be sure to check the [http://docs.pointclouds.org PCL API] every time I introduce you to one of its classes or methods.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/reading_pcd.php Reading Point Cloud data from PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_read.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Writing to file===&lt;br /&gt;
&lt;br /&gt;
Saving a point cloud to a file is as easy as reading one. You can choose between plain readable (ASCII) and binary format. The first can be read and altered with a text editor; the second is faster.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Write it back to disk under a different name.&lt;br /&gt;
	// Another possibility would be &amp;quot;savePCDFileBinary()&amp;quot;.&lt;br /&gt;
	pcl::io::savePCDFileASCII(&amp;quot;output.pcd&amp;quot;, *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/writing_pcd.php Writing Point Cloud data to PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_write.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you want a better alternative for loading or saving PCD files (some of the functions we saw are a bit outdated, but they do the job), take a look at the [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_reader.html PCDReader] and [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_writer.html PCDWriter] classes.&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
Of course, PCL ships with a dedicated program to load and visualize point clouds and a lot of other stuff, &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''pcl_viewer''&amp;lt;/span&amp;gt;. You can run it from a terminal with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_viewer &amp;lt;file 1&amp;gt; [file 2] ... [file N]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you load several files, each cloud will be rendered with a different color to help you tell them apart. Internally, the program uses the [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_visualizer.html pcl::visualization::PCLVisualizer] class. Press the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''H'''&amp;lt;/span&amp;gt; key while it is running to print the help on the terminal. Keys I tend to find useful are &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''1'''&amp;lt;/span&amp;gt;, which chooses a different color for the cloud (useful if the current one is too hard to see), &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''R'''&amp;lt;/span&amp;gt;, which resets the camera, and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''J'''&amp;lt;/span&amp;gt;, which saves a PNG snapshot.&lt;br /&gt;
&lt;br /&gt;
In your own code, it is possible to iterate through the cloud and print the information of each point to console, but this is cumbersome. Sometimes you just want to visualize the cloud to check something. PCL can create a window for you and display it, letting users pan and zoom their way around. The simplest method is the following one:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Cloud viewer object. You can set the window title.&lt;br /&gt;
	pcl::visualization::CloudViewer viewer(argv[1]);&lt;br /&gt;
	viewer.showCloud(cloud);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		// Do nothing but wait.&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CloudViewer&amp;quot;&amp;lt;/span&amp;gt; has fewer features than &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLVisualizer&amp;quot;&amp;lt;/span&amp;gt; but is easier to set up. The program will remain active until you close the window with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt; or press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/cloud_viewer.php The CloudViewer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/pcl_visualizer.php PCLVisualizer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/walkthrough.php Visualization library overview]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_cloud_viewer.html pcl::visualization::CloudViewer]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_visualize_cloud.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Concatenating two clouds==&lt;br /&gt;
&lt;br /&gt;
There are two types of concatenation with point clouds:&lt;br /&gt;
&lt;br /&gt;
* '''Points''':  You initially have two clouds, &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;, with &amp;quot;N&amp;quot; and &amp;quot;M&amp;quot; points each. A third cloud is created (&amp;quot;C&amp;quot;), that has &amp;quot;N + M&amp;quot; points (all points of the first two clouds). For this to work, the point type must be the same for all clouds (&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt;, or whatever).&lt;br /&gt;
* '''Fields''': With this, you can join the fields of two clouds with different types. The only constraint is that both must have the same number of points. For example, if you have a cloud of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; type and another of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type with &amp;quot;N&amp;quot; points each, you can concatenate them to create a cloud of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointNormal&amp;quot;&amp;lt;/span&amp;gt; (that stores X, Y and Z coordinates, plus the normal) that also has &amp;quot;N&amp;quot; points.&lt;br /&gt;
&lt;br /&gt;
Point concatenation is the easiest one because you can use the cloud's addition operator (+), which is overloaded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudC(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Create cloud &amp;quot;C&amp;quot;, with the points of both &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;.&lt;br /&gt;
	*cloudC = (*cloudA) + (*cloudB);&lt;br /&gt;
	// Now cloudC-&amp;gt;points.size() equals cloudA-&amp;gt;points.size() + cloudB-&amp;gt;points.size().&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_clouds.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Field concatenation is done with a function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudPoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr cloudNormals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudPoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Compute the normals of the cloud (do not worry, we will see this later).&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloudPoints);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.05);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*cloudNormals);&lt;br /&gt;
&lt;br /&gt;
	// Concatenate the fields (PointXYZ + Normal = PointNormal).&lt;br /&gt;
	pcl::concatenateFields(*cloudPoints, *cloudNormals, *cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Print the data to standard output.&lt;br /&gt;
	for (size_t currentPoint = 0; currentPoint &amp;lt; cloudAll-&amp;gt;points.size(); currentPoint++)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tXYZ:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].x &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].y &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tNormal:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[0] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[1] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[2] &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__common.html#gac6add803f86fd16a998dce541e9ef402 pcl::concatenateFields()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_fields.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Matrix transformations==&lt;br /&gt;
&lt;br /&gt;
Clouds can be directly manipulated with [http://en.wikipedia.org/wiki/Transformation_matrix transformations] (translation, rotation, scaling). 3D transformations use a 4x4 matrix, which can be a little complicated to understand (especially when it comes to rotations), so consult a book or [http://web.iitd.ac.in/~hegde/cad/lecture/L6_3dtrans.pdf tutorial] before trying to use one. The transformed cloud will be saved to another object, so you can keep both versions and use them as you wish.&lt;br /&gt;
&lt;br /&gt;
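To make the 4x4 idea concrete before the PCL example, here is a minimal, library-free sketch (plain C++, no PCL or Eigen; the helper names are hypothetical): the point is treated as having a fourth coordinate w = 1, so a single matrix multiplication applies both the rotation (upper-left 3x3 block) and the translation (last column).

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<float, 4>, 4>;
using Vec3 = std::array<float, 3>;

// Build a transform that rotates `theta` radians around the Z axis
// and translates `tx` meters along the X axis.
Mat4 makeTransform(float theta, float tx)
{
    Mat4 m{}; // All zeros.
    m[0][0] = std::cos(theta); m[0][1] = -std::sin(theta);
    m[1][0] = std::sin(theta); m[1][1] = std::cos(theta);
    m[2][2] = 1.0f; // Z is unchanged by a rotation around Z.
    m[0][3] = tx;   // Translation goes in the last column.
    m[3][3] = 1.0f; // Homogeneous row.
    return m;
}

// Apply the transform to a 3D point with an implicit w = 1.
Vec3 transformPoint(const Mat4& m, const Vec3& p)
{
    Vec3 result{};
    for (int row = 0; row < 3; ++row)
        result[row] = m[row][0] * p[0] + m[row][1] * p[1]
                    + m[row][2] * p[2] + m[row][3]; // * 1 (the implicit w).
    return result;
}
```

Rotating (1, 0, 0) by 90 degrees around Z yields (0, 1, 0), and the 1 m translation on X then moves it to (1, 1, 0); pcl::transformPointCloud() applies this same multiplication to every point of the cloud.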
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/transforms.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr transformed(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Transformation matrix object, initialized to the identity matrix&lt;br /&gt;
	// (a null transformation).&lt;br /&gt;
	Eigen::Matrix4f transformation = Eigen::Matrix4f::Identity();&lt;br /&gt;
&lt;br /&gt;
	// Set a rotation around the Z axis.&lt;br /&gt;
	float theta = M_PI / 2; // 90 degrees.&lt;br /&gt;
	transformation(0, 0) = cos(theta);&lt;br /&gt;
	transformation(0, 1) = -sin(theta);&lt;br /&gt;
	transformation(1, 0) = sin(theta);&lt;br /&gt;
	transformation(1, 1) = cos(theta);&lt;br /&gt;
&lt;br /&gt;
	// Set a translation on the X axis.&lt;br /&gt;
	transformation(0, 3) = 1.0f; // 1 meter (positive direction).&lt;br /&gt;
&lt;br /&gt;
	pcl::transformPointCloud(*cloud, *transformed, transformation);&lt;br /&gt;
&lt;br /&gt;
	// Visualize both the original and the result.&lt;br /&gt;
	pcl::visualization::PCLVisualizer viewer(argv[1]);&lt;br /&gt;
	viewer.addPointCloud(cloud, &amp;quot;original&amp;quot;);&lt;br /&gt;
	// The transformed one's points will be red in color.&lt;br /&gt;
	pcl::visualization::PointCloudColorHandlerCustom&amp;lt;pcl::PointXYZ&amp;gt; colorHandler(transformed, 255, 0, 0);&lt;br /&gt;
	viewer.addPointCloud(transformed, colorHandler, &amp;quot;transformed&amp;quot;);&lt;br /&gt;
	// Add 3D colored axes to help see the transformation.&lt;br /&gt;
	viewer.addCoordinateSystem(1.0, &amp;quot;reference&amp;quot;, 0);&lt;br /&gt;
&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Cloud_transformation.png|thumb|center|600px|Rotation and translation applied to a cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the rest of the overloads: you can also use an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Affine3&amp;quot;&amp;lt;/span&amp;gt; or an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Quaternion&amp;quot;&amp;lt;/span&amp;gt; instead of an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Matrix4&amp;quot;&amp;lt;/span&amp;gt; object.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Transformation&lt;br /&gt;
* '''Output''': Transformed points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/matrix_transform.php Using a matrix to transform a point cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#aefbe4956d1c8fb785a97df6708d57c56 pcl::transformPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_matrix_transform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Using indices==&lt;br /&gt;
&lt;br /&gt;
Many of PCL's algorithms return indices: a list that references some of the points of a cloud (not the points themselves with all their data, just their positions in the cloud). For example, a cylinder segmentation algorithm will output the list of points that were considered to fit the cylinder model. You can then use the indices to extract those points into a second cloud for further work with the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::ExtractIndices&amp;quot;&amp;lt;/span&amp;gt; class. Its &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; function will instruct it to extract the points NOT indexed instead (so you can, for example, remove all cylinder-shaped clusters from the cloud).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/extract_indices.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudExtracted(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudAll) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Plane segmentation (do not worry, we will see this later).&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	segmentation.setInputCloud(cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the indices.&lt;br /&gt;
	pcl::PointIndices::Ptr pointIndices(new pcl::PointIndices);&lt;br /&gt;
&lt;br /&gt;
	segmentation.segment(*pointIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	// Object for extracting points from a list of indices.&lt;br /&gt;
	pcl::ExtractIndices&amp;lt;pcl::PointXYZ&amp;gt; extract;&lt;br /&gt;
	extract.setInputCloud(cloudAll);&lt;br /&gt;
	extract.setIndices(pointIndices);&lt;br /&gt;
	// We will extract the points that are NOT indexed (the ones that are not in a plane).&lt;br /&gt;
	extract.setNegative(true);&lt;br /&gt;
	extract.filter(*cloudExtracted);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Indices, [Negative mode]&lt;br /&gt;
* '''Output''': Points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/extract_indices.php Extracting indices from a PointCloud]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_point_indices.html pcl::PointIndices]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_extract_indices.html pcl::ExtractIndices]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_extract_indices.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Removing NaNs==&lt;br /&gt;
&lt;br /&gt;
The cloud retrieved from the sensor may contain several kinds of measurement errors and/or inaccuracies. One of them is the presence of [http://en.wikipedia.org/wiki/NaN NaN (Not a Number)] values in the coordinates of some points, as you can see in the following file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=text&amp;gt;# .PCD v0.7 - Point Cloud Data file format&lt;br /&gt;
VERSION 0.7&lt;br /&gt;
FIELDS x y z rgba&lt;br /&gt;
SIZE 4 4 4 4&lt;br /&gt;
TYPE F F F U&lt;br /&gt;
COUNT 1 1 1 1&lt;br /&gt;
WIDTH 640&lt;br /&gt;
HEIGHT 480&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 307200&lt;br /&gt;
DATA ascii&lt;br /&gt;
nan nan nan 10135463&lt;br /&gt;
nan nan nan 10398635&lt;br /&gt;
nan nan nan 10070692&lt;br /&gt;
nan nan nan 10268071&lt;br /&gt;
...&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cloud objects have a member called &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;is_dense&amp;quot;&amp;lt;/span&amp;gt; (a boolean field, not a function) that will be true if all points have valid, finite values. A NaN indicates that there was a problem measuring the distance to that point, maybe because the sensor was too close or too far, or because the surface was reflective.&lt;br /&gt;
&lt;br /&gt;
Sadly, many of PCL's algorithms will fail if even a single NaN is present in the cloud you feed them as input, with a message like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Assertion `point_representation_-&amp;gt;isValid (point) &amp;amp;&amp;amp; &amp;quot;Invalid (NaN, Inf) point coordinates given to radiusSearch!&amp;quot;' failed.&amp;quot;''&amp;lt;/span&amp;gt;. If this happens, you will have to remove such values from the cloud. You can use a program like the following to &amp;quot;clean&amp;quot; a cloud from all NaN points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/filter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	if (argc != 3)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tUsage: &amp;quot; &amp;lt;&amp;lt; argv[0] &amp;lt;&amp;lt; &amp;quot; &amp;lt;input cloud&amp;gt; &amp;lt;output cloud&amp;gt;&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// The mapping tells you to what points of the old cloud the new ones correspond,&lt;br /&gt;
	// but we will not use it.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; mapping;&lt;br /&gt;
	pcl::removeNaNFromPointCloud(*cloud, *cloud, mapping);&lt;br /&gt;
&lt;br /&gt;
	// Save it back.&lt;br /&gt;
	pcl::io::savePCDFileASCII(argv[2], *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem with this method is that it will not keep the cloud organized. All clouds store a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;width&amp;quot;&amp;lt;/span&amp;gt; and a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;height&amp;quot;&amp;lt;/span&amp;gt; variable. In ''unorganized'' clouds, the width equals the total number of points and the height is set to 1. In ''organized'' clouds (like the ones taken from camera-like sensors such as Kinect or Xtion), the width and the height match the pixel resolution of the image the sensor works with. Points are laid out in rows, as in the depth image, and each one corresponds to a pixel. The member function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;isOrganized()&amp;quot;&amp;lt;/span&amp;gt; will return true if the height is greater than 1.&lt;br /&gt;
&lt;br /&gt;
Because removing NaNs changes the number of points, the cloud can no longer keep its original width/height layout, so the function sets the height to 1. This is not a big issue, as only a few of PCL's algorithms work explicitly with organized clouds (and they mostly use the structure for optimization), but you must take it into account.&lt;br /&gt;
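This width/height bookkeeping can be illustrated with a small, self-contained sketch (the SimpleCloud struct below is a hypothetical stand-in, not PCL's actual class): filtering out NaN points breaks the image-grid layout, so the height collapses to 1 and the cloud becomes unorganized.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct SimplePoint { float x, y, z; };

// A minimal stand-in for a point cloud's width/height bookkeeping.
struct SimpleCloud {
    std::uint32_t width = 0, height = 0;
    std::vector<SimplePoint> points;

    // An organized cloud has more than one row.
    bool isOrganized() const { return height > 1; }
};

// Drop NaN points. The result can no longer map onto an image grid,
// so width becomes the point count and height becomes 1.
SimpleCloud removeNaNs(const SimpleCloud& in)
{
    SimpleCloud out;
    for (const SimplePoint& p : in.points)
        if (!std::isnan(p.x) && !std::isnan(p.y) && !std::isnan(p.z))
            out.points.push_back(p);
    out.width = static_cast<std::uint32_t>(out.points.size());
    out.height = 1;
    return out;
}
```

pcl::removeNaNFromPointCloud() performs this same height-to-1 adjustment on real pcl::PointCloud objects.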
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Cloud&lt;br /&gt;
* '''Output''': Cloud without NaNs&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__filters.html#gac463283a9e9c18a66d3d29b28a575064 pcl::removeNaNFromPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_remove_NaNs.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Saving a PNG==&lt;br /&gt;
&lt;br /&gt;
PCL offers the possibility of saving the values of a point cloud to a PNG image file. Obviously, this can only be done with organized clouds, because the rows and columns of the resulting image will correspond exactly to the ones of the cloud. For example, if you have a point cloud taken from a sensor like Kinect or Xtion, you can use this to retrieve the 640x480 RGB image that matches the cloud.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/png_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGB&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Save the image (cloud must be organized).&lt;br /&gt;
	pcl::io::savePNGFile(&amp;quot;output.png&amp;quot;, *cloud, &amp;quot;rgb&amp;quot;);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;savePNGFile()&amp;quot;&amp;lt;/span&amp;gt; has several overloads; in this case, we have chosen the one that lets you specify which fields to save. If you omit that parameter, the function will save the RGB fields by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__io.html#gabd03e43fac3048635512f447c8bc50b3 pcl::io::savePNGFile()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_save_PNG.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computing the centroid==&lt;br /&gt;
&lt;br /&gt;
The [http://en.wikipedia.org/wiki/Centroid centroid] of a cloud is a point with coordinates that result from computing the mean of the values of all points in the cloud. You could say it is the &amp;quot;center of mass&amp;quot;, and it has several uses for certain algorithms. If you intend to compute the actual center of gravity of a [[PCL/OpenNI_tutorial_3:_Cloud_processing_%28advanced%29#Segmentation|clustered]] object though, remember that the sensor does not retrieve the part of the object that is hidden from the camera, like the back faces that are occluded by the front ones, or the inside. You only have the part of the surface that faces the camera.&lt;br /&gt;
&lt;br /&gt;
The centroid of a cloud can be computed with a single function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/centroid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object to store the centroid coordinates.&lt;br /&gt;
	Eigen::Vector4f centroid;&lt;br /&gt;
	&lt;br /&gt;
	pcl::compute3DCentroid(*cloud, centroid);&lt;br /&gt;
&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;The XYZ coordinates of the centroid are: (&amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[0] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[1] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[2] &amp;lt;&amp;lt; &amp;quot;).&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#a23daec3829d2d4100a2f185372b3753a pcl::compute3DCentroid()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_compute_centroid.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Feature estimation=&lt;br /&gt;
&lt;br /&gt;
A ''feature'' of a point is some characteristic that can help us to tell it apart from a different point. Of course, you could say that the XYZ coordinates would be characteristic enough, but when trying to match two different, yet corresponding, clouds, they become useless. We need something better, something that will have similar values when computed for the same spot of a surface in both sets.&lt;br /&gt;
&lt;br /&gt;
Normals are an example of a very simple feature. 3D [http://pointclouds.org/documentation/tutorials/how_features_work.php features] will be important later in our tutorials, when we talk about [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]] (more detailed &amp;quot;signatures&amp;quot; of points used for close matching).&lt;br /&gt;
&lt;br /&gt;
==Normal estimation==&lt;br /&gt;
&lt;br /&gt;
As you may remember from geometry class, the [http://en.wikipedia.org/wiki/Surface_normal normal] of a plane is a unit vector that is perpendicular to it. The normal of a surface at a point is defined as the vector perpendicular to the plane that is tangent to the surface at that point. Surface normals can be calculated for the points of a cloud, too. The normal is considered a feature, albeit not a very discriminative one.&lt;br /&gt;
&lt;br /&gt;
I will not go into detail with the math of the estimation method; you just have to know that it uses the nearest neighbors (the points closest to the one we are calculating the normal for) to find the tangent plane and the normal vector. You can customize the method with the search radius (think of a sphere of that radius centered on the point; all neighboring points inside it will be used for the computation) and the viewpoint (by default, the output normals are directionless; by assuming that all vectors must point towards the camera, because otherwise they would belong to surfaces that are not visible from the sensor, they can all be re-oriented consistently).&lt;br /&gt;
&lt;br /&gt;
Normals are also important because they give us information about the ''curvature'' of the surface at some point, which can be used to our advantage. Many of PCL's algorithms will require us to provide the normals of the input cloud. To estimate them, you can use the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// For every point, use all neighbors in a radius of 3cm.&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	// A kd-tree is a data structure that makes searches efficient. More about it later.&lt;br /&gt;
	// The normal estimation object will use it to find nearest neighbors.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, normals are stored in &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointCloud&amp;quot;&amp;lt;/span&amp;gt; objects too, instantiated to the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type. If you ever extract some indices from a cloud before feeding them to an algorithm, be sure to do the same with the normals. I have not made use of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setViewPoint()&amp;quot;&amp;lt;/span&amp;gt; function, but you get the idea.&lt;br /&gt;
&lt;br /&gt;
Instead of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setRadiusSearch()&amp;quot;&amp;lt;/span&amp;gt;, you can make use of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setKSearch()&amp;quot;&amp;lt;/span&amp;gt;, which takes an integer, ''K''. This will take a point's ''K'' nearest neighbors into account to compute the normal. Mind that this can produce odd results for points in sparse areas, as some of the &amp;quot;neighbors&amp;quot; will be too far from them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Normal_estimation.png|thumb|center|600px|Visualization of the normals computed for a point cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Radius | Neighbor count, [Indices], [Viewpoint]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation.php Estimating Surface Normals in a Point Cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_normal_estimation.html pcl::NormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Integral images===&lt;br /&gt;
&lt;br /&gt;
Integral images are a method for normal estimation on organized clouds. The algorithm sees the cloud as a depth image, and creates certain rectangular areas over which the normals are computed, by taking into account the relationship between neighboring &amp;quot;pixels&amp;quot; (points), without the need to run lookups on a search structure like a tree. Because of this, it is very efficient and the normals are usually computed in a split second. If you are performing real time computations on clouds taken from a RGB-D sensor, this is the approach you should use because the previous method will probably take several seconds to finish.&lt;br /&gt;
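The "integral image" trick itself (a summed-area table) is what makes the per-area computations cheap: after one pass over the image, the sum over any rectangle costs only four lookups. Here is an illustrative, PCL-free sketch of the idea (this is not PCL's code; the function names are made up):

```cpp
#include <cstddef>
#include <vector>

// Summed-area table: integral[r][c] holds the sum of all values in the
// rectangle of the input spanning rows [0, r) and columns [0, c).
std::vector<std::vector<float>> buildIntegralImage(const std::vector<std::vector<float>>& image)
{
    std::size_t rows = image.size(), cols = image[0].size();
    std::vector<std::vector<float>> integral(rows + 1, std::vector<float>(cols + 1, 0.0f));
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c)
            integral[r + 1][c + 1] = image[r][c] + integral[r][c + 1]
                                   + integral[r + 1][c] - integral[r][c];
    return integral;
}

// Sum over the rectangle of rows [r0, r1) and columns [c0, c1),
// using only four lookups regardless of the rectangle's size.
float rectangleSum(const std::vector<std::vector<float>>& integral,
                   std::size_t r0, std::size_t c0, std::size_t r1, std::size_t c1)
{
    return integral[r1][c1] - integral[r0][c1] - integral[r1][c0] + integral[r0][c0];
}
```

Constant-time rectangle sums over the depth "pixels" are why this estimator runs in a split second where the radius-search method takes seconds.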
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/integral_image_normal.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::IntegralImageNormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// Other estimation methods: COVARIANCE_MATRIX, AVERAGE_DEPTH_CHANGE, SIMPLE_3D_GRADIENT.&lt;br /&gt;
	// They determine the smoothness of the result, and the running time.&lt;br /&gt;
	normalEstimation.setNormalEstimationMethod(normalEstimation.AVERAGE_3D_GRADIENT);&lt;br /&gt;
	// Depth threshold for computing object borders based on depth changes, in meters.&lt;br /&gt;
	normalEstimation.setMaxDepthChangeFactor(0.02f);&lt;br /&gt;
	// Factor that influences the size of the area used to smooth the normals.&lt;br /&gt;
	normalEstimation.setNormalSmoothingSize(10.0f);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Estimation method, [Smoothing factor], [Depth threshold], [Rectangle size], [Border policy], [Depth dependent smoothing]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation_using_integral_images.php Normal Estimation Using Integral Images]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://islab.ulsan.ac.kr/files/announcement/411/Adaptive%20Neighborhood%20Selection%20for%20Real-Time%20Surface%20Normal.pdf Adaptive Neighborhood Selection for Real-Time Surface Normal Estimation from Organized Point Cloud Data Using Integral Images] (S. Holzer et al., 2012)&lt;br /&gt;
** [https://www.willowgarage.com/sites/default/files/holz_rgbd_seg.pdf Real-Time Plane Segmentation using RGB-D Cameras] (Dirk Holz et al., 2012)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_integral_image_normal_estimation.html pcl::IntegralImageNormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals_integral.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Decomposition=&lt;br /&gt;
&lt;br /&gt;
When I talk about ''decomposition'', I mean the process of creating an ordered, organized data structure from the points of the cloud, with some purpose in mind, like making further analysis easier or applying filters.&lt;br /&gt;
&lt;br /&gt;
==''k''-d tree==&lt;br /&gt;
&lt;br /&gt;
A [http://en.wikipedia.org/wiki/K-d_tree ''k''-d tree] (''k''-dimensional tree) is a data structure that organizes a set of points in a ''k''-dimensional space, in a way that makes range search operations very efficient (for example, finding the nearest neighbor of a point, which is the point that is closest to it in space; or finding all neighbors within a radius).&lt;br /&gt;
&lt;br /&gt;
It is a binary tree: every non-leaf node has two children, the &amp;quot;left&amp;quot; and &amp;quot;right&amp;quot; ones. Each level splits space along a specific dimension, cycling through them. For example, in 3-dimensional space, the root node (first level) splits the points on the first dimension, X (points with greater X values go to the right subtree, points with lesser values to the left one). At the second level (the children we just created), the split is done on the Y axis, following the same criteria. At the third level (the grandchildren), we use the Z axis. At the fourth level, we are back to the X axis, and so on. Usually, the median point along the splitting axis is chosen as the node at each level.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Kdtree.png|thumb|center|700px|Example of a 2D ''k''-d tree (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
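The axis-cycling, median-split construction described above can be sketched in a few lines of plain C++ (an illustrative toy, not PCL's implementation, which builds on the FLANN library):

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <memory>
#include <vector>

// A point in 3D space.
using Point = std::array<float, 3>;

// One node of the k-d tree: a point plus left/right subtrees.
struct KdNode {
    Point point;
    std::unique_ptr<KdNode> left, right;
};

// Recursively build the tree over points[begin, end): at each depth, split
// on axis = depth % 3, taking the median point along that axis as the node.
std::unique_ptr<KdNode> buildKdTree(std::vector<Point>& points,
                                    std::size_t begin, std::size_t end,
                                    std::size_t depth = 0)
{
    if (begin >= end)
        return nullptr;

    std::size_t axis = depth % 3;
    std::size_t median = begin + (end - begin) / 2;
    // Partially sort so the median element lands in place; everything before
    // it is smaller on this axis, everything after it is greater.
    std::nth_element(points.begin() + begin, points.begin() + median,
                     points.begin() + end,
                     [axis](const Point& a, const Point& b) { return a[axis] < b[axis]; });

    auto node = std::make_unique<KdNode>();
    node->point = points[median];
    node->left = buildKdTree(points, begin, median, depth + 1);
    node->right = buildKdTree(points, median + 1, end, depth + 1);
    return node;
}
```

A search then walks down the tree comparing a single coordinate per level, which is what makes nearest-neighbor queries so much faster than a linear scan over the cloud.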
In PCL, creating a ''k''-d tree from a cloud is a straightforward and fast operation.&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
The following code snippet shows you how to use a ''k''-d tree to perform efficient searches:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/search/kdtree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// kd-tree object.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt; kdtree;&lt;br /&gt;
	kdtree.setInputCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of this point&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices(5);&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances(5);&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (kdtree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	if (kdtree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another alternative would be &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;KdTreeFLANN&amp;quot;&amp;lt;/span&amp;gt;, which makes use of [http://www.cs.ubc.ca/research/flann/ FLANN] (Fast Library for Approximate Nearest Neighbors).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/kdtree_search.php How to use a KdTree to search]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_kd_tree.html pcl::search::KdTree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_kd_tree_f_l_a_n_n.html pcl::KdTreeFLANN]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_kdtree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Octree==&lt;br /&gt;
&lt;br /&gt;
Like the ''k''-d tree, the [http://en.wikipedia.org/wiki/Octree octree] is a hierarchical tree data structure useful for searches, but also for compression and downsampling. Each octree node (called a ''voxel'', a term that will sound familiar to users of 3D engines, and which could be described as a &amp;quot;3D pixel&amp;quot;, that is, a cube) has either eight children or none. The root node describes a cubic bounding box that encapsulates all points. At every level, each voxel is subdivided in two along every axis, doubling the voxel resolution. That way, we can selectively give more resolution to certain zones.&lt;br /&gt;
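&lt;br /&gt;
As an illustration of that subdivision (a sketch of mine in plain C++, not PCL code), the following computes which of the eight children a point falls into, one bit per axis, and the bounding cube of that child, which has half the side length of its parent:&lt;br /&gt;
&lt;br /&gt;
```cpp
// Axis-aligned cube: minimum corner plus side length.
struct Cube { double min[3]; double side; };

// Index (0-7) of the child octant containing the point: one bit per axis,
// set when the point lies in the upper half of the cube on that axis.
int childIndex(const Cube& c, const double point[3])
{
	int index = 0;
	for (int axis = 0; axis < 3; ++axis)
		if (point[axis] >= c.min[axis] + c.side / 2.0)
			index |= 1 << axis;
	return index;
}

// Bounding cube of a given child: half the side (doubled resolution).
Cube childCube(const Cube& c, int index)
{
	Cube child;
	child.side = c.side / 2.0;
	for (int axis = 0; axis < 3; ++axis)
		child.min[axis] = c.min[axis] + ((index >> axis) & 1) * child.side;
	return child;
}
```
&lt;br /&gt;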
&lt;br /&gt;
&lt;br /&gt;
[[Image:Octree.png|thumb|center|700px|Depiction of an octree (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
Downsampling will be covered in a different section, so now I will show you how to perform searches with an octree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree object, with a resolution of 128&lt;br /&gt;
	// (voxel size at the lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudSearch&amp;lt;pcl::PointXYZ&amp;gt; octree(128);&lt;br /&gt;
	octree.setInputCloud(cloud);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	// We will find all neighbors within the same voxel as this point.&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices;&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (octree.voxelSearch(point, pointIndices))&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors in the same voxel:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of the point.&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances;&lt;br /&gt;
	if (octree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	// The point DOES have to belong in the cloud.&lt;br /&gt;
	if (octree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, the code is mostly interchangeable with that of the ''k''-d tree.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree.php Spatial Partitioning and Search Operations with Octrees]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_octree.html pcl::search::Octree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_search.html pcl::octree::OctreePointCloudSearch]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Compressing/Decompressing===&lt;br /&gt;
&lt;br /&gt;
On the other hand, this is the code needed to compress and decompress a point cloud with PCL:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/compression/octree_pointcloud_compression.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr decompressedCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree compressor object.&lt;br /&gt;
	// Check /usr/include/pcl-&amp;lt;version&amp;gt;/pcl/compression/compression_profiles.h for more profiles.&lt;br /&gt;
	// The second parameter enables the output of information.&lt;br /&gt;
	pcl::io::OctreePointCloudCompression&amp;lt;pcl::PointXYZ&amp;gt; octreeCompression(pcl::io::MED_RES_ONLINE_COMPRESSION_WITHOUT_COLOR, true);&lt;br /&gt;
	// Stringstream that will hold the compressed cloud.&lt;br /&gt;
	std::stringstream compressedData;&lt;br /&gt;
&lt;br /&gt;
	// Compress the cloud (you would save the stream to disk).&lt;br /&gt;
	octreeCompression.encodePointCloud(cloud, compressedData);&lt;br /&gt;
&lt;br /&gt;
	// Decompress the cloud.&lt;br /&gt;
	octreeCompression.decodePointCloud(compressedData, decompressedCloud);&lt;br /&gt;
&lt;br /&gt;
	// Display the decompressed cloud.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Octree compression&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(decompressedCloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeCompression.png | Original point cloud.&lt;br /&gt;
File:AfterCompression.png | Point cloud retrieved after octree compression.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the process has reduced the number of points in the cloud. The default parameters of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;OctreePointCloudCompression&amp;quot;&amp;lt;/span&amp;gt; object establish an octree resolution (at the lowest level) of 1cm, and a coordinate precision of 1mm. These values can of course be changed to suit your needs, but simplifying the cloud in this way is a useful operation that we will cover later.&lt;br /&gt;
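&lt;br /&gt;
To get an intuition for what a 1mm coordinate precision implies, here is a toy quantization sketch in plain C++ (my own simplification, not the actual codec): coordinates are snapped to a fixed grid, so each reconstructed value can be off by up to half the precision:&lt;br /&gt;
&lt;br /&gt;
```cpp
#include <cmath>

// Quantize a coordinate (in meters) to a fixed precision, as a lossy
// compressor might: keep only the integer index of the grid cell.
long quantize(double coordinate, double precision)
{
	return static_cast<long>(std::floor(coordinate / precision));
}

// Reconstruct the coordinate as the center of its quantization cell,
// so the error is at most half the precision.
double reconstruct(long index, double precision)
{
	return (index + 0.5) * precision;
}
```
For example, with a precision of 0.001 (1mm), the X coordinate 0.0524343 used in the search examples above is stored as cell 52 and comes back as 0.0525.&lt;br /&gt;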
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Resolution, Compression profile&lt;br /&gt;
* '''Output''': Compressed stream&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/compression.php Point Cloud Compression]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1io_1_1_octree_point_cloud_compression.html pcl::io::OctreePointCloudCompression]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_compress.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Detecting changes===&lt;br /&gt;
&lt;br /&gt;
PCL has an octree-based object that allows you to detect changes between two point clouds (for example, two clouds taken one after another with an OpenNI device). This object stores the trees of both clouds at the same time and compares them, returning a list of indices of the points that are new in the second cloud (that is, they did not exist in the first):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Change detector object, with a resolution of 128&lt;br /&gt;
	// (resolution at lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudChangeDetector&amp;lt;pcl::PointXYZ&amp;gt; octree(128.0f);&lt;br /&gt;
&lt;br /&gt;
	// Add cloudA to the octree.&lt;br /&gt;
	octree.setInputCloud(cloudA);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
	// The change detector object is able to store two clouds at the same time;&lt;br /&gt;
	// with this we can reset the buffer but keep the previous tree saved.&lt;br /&gt;
	octree.switchBuffers();&lt;br /&gt;
	// Now, add cloudB to the octree like we did before.&lt;br /&gt;
	octree.setInputCloud(cloudB);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; newPoints;&lt;br /&gt;
	// Get a vector with the indices of all points that are new in cloud B,&lt;br /&gt;
	// when compared with the ones that existed in cloud A.&lt;br /&gt;
	octree.getPointIndicesFromNewVoxels(newPoints);&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; newPoints.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point (&amp;quot; &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].x &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].y &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].z&lt;br /&gt;
				  &amp;lt;&amp;lt; &amp;quot;) was not in cloud A but is in cloud B&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; newPoints.size() &amp;lt;&amp;lt; &amp;quot; new points detected.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
} &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
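&lt;br /&gt;
The idea behind the detector can be sketched without PCL by keeping a set of occupied voxel indices (a simplification of mine; &amp;quot;PointXYZ&amp;quot; and &amp;quot;newPointIndices()&amp;quot; here are toy stand-ins, not PCL types):&lt;br /&gt;
&lt;br /&gt;
```cpp
#include <cmath>
#include <set>
#include <tuple>
#include <vector>

struct PointXYZ { double x, y, z; };
using VoxelKey = std::tuple<long, long, long>;

// Voxel index of a point at the given resolution (voxel side length).
VoxelKey voxelOf(const PointXYZ& p, double resolution)
{
	return { static_cast<long>(std::floor(p.x / resolution)),
	         static_cast<long>(std::floor(p.y / resolution)),
	         static_cast<long>(std::floor(p.z / resolution)) };
}

// Indices of the points of cloudB whose voxel is not occupied by any
// point of cloudA, mirroring what getPointIndicesFromNewVoxels() returns.
std::vector<int> newPointIndices(const std::vector<PointXYZ>& cloudA,
                                 const std::vector<PointXYZ>& cloudB,
                                 double resolution)
{
	std::set<VoxelKey> occupied;
	for (const PointXYZ& p : cloudA)
		occupied.insert(voxelOf(p, resolution));

	std::vector<int> indices;
	for (int i = 0; i < static_cast<int>(cloudB.size()); ++i)
		if (occupied.count(voxelOf(cloudB[i], resolution)) == 0)
			indices.push_back(i);
	return indices;
}
```
&lt;br /&gt;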
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': First cloud, Second cloud, Resolution&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree_change.php Spatial change detection on unorganized point cloud data]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_change_detector.html pcl::octree::OctreePointCloudChangeDetector]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_change_detection.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Filtering=&lt;br /&gt;
&lt;br /&gt;
The raw point cloud, as returned from the sensor, usually needs some kind of polishing before being handed to processing algorithms. Normally, this is due to the low precision of the sensor, which leads to noise, incorrect measurements, holes in surfaces, and so on. Sometimes it is just the opposite: the cloud has so many points that processing it would take forever. Luckily, several filtering methods exist to &amp;quot;fix&amp;quot; many recurring errors in point clouds.&lt;br /&gt;
&lt;br /&gt;
==Passthrough filter==&lt;br /&gt;
&lt;br /&gt;
A passthrough filter will remove any point from the cloud whose values do not fall inside a certain user-given range. For example, if you wanted to discard all points farther away than, say, 3 meters, you would run the filter on the Z coordinate with a range of [0, 3]. This filter can be useful to discard unneeded objects from the cloud, but you may have to adopt a different reference frame if the default one (relative to the sensor) is not suitable. For example, filtering on the Y value to remove all points not sitting on a table will yield unwanted results if the camera was at an odd angle.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/passthrough.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::PassThrough&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Filter out all points with Z values not in the [0-2] range.&lt;br /&gt;
	filter.setFilterFieldName(&amp;quot;z&amp;quot;);&lt;br /&gt;
	filter.setFilterLimits(0.0, 2.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforePassthrough.png | Original point cloud.&lt;br /&gt;
File:AfterPassthrough.png | Point cloud after applying a passthrough filter with Z in [0,2].&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setFilterLimitsNegative()&amp;quot;&amp;lt;/span&amp;gt; function (not seen in the example) will tell the filter object whether to return the points inside the range (&amp;quot;false&amp;quot;, the default value) or outside it (&amp;quot;true&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Filter field, Filter limits, [Negative mode]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/passthrough.php Filtering a PointCloud using a PassThrough filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_pass_through.html pcl::PassThrough]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_passthrough_filter.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conditional removal==&lt;br /&gt;
&lt;br /&gt;
Another way of getting the exact same result as in the previous example is to use a conditional removal filter. We can build any kind of condition we want on the values of the points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/conditional_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We must build a condition.&lt;br /&gt;
	// An &amp;quot;And&amp;quot; condition requires all tests to pass; &amp;quot;Or&amp;quot; conditions are also available.&lt;br /&gt;
	pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;::Ptr condition(new pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// First test, the point's Z value must be greater than (GT) 0.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::GT, 0.0)));&lt;br /&gt;
	// Second test, the point's Z value must be less than (LT) 2.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::LT, 2.0)));&lt;br /&gt;
	// Checks available: GT, GE, LT, LE, EQ.&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::ConditionalRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setCondition(condition);&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// If true, points that do not pass the filter will be set to a certain value (default NaN).&lt;br /&gt;
	// If false, they will be just removed, but that could break the structure of the cloud&lt;br /&gt;
	// (organized clouds are clouds taken from camera-like sensors that return a matrix-like image).&lt;br /&gt;
	filter.setKeepOrganized(true);&lt;br /&gt;
	// If keep organized was set true, points that failed the tests will have their coordinates set to this value.&lt;br /&gt;
	filter.setUserFilterValue(0.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Condition, [Keep organized]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_conditional_removal.html pcl::ConditionalRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_conditional_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Outlier removal==&lt;br /&gt;
&lt;br /&gt;
Outliers are lonely points that are spread here and there in the cloud, like annoying mosquitoes. They are a product of the sensor's inaccuracy, which registers measurements where there should not be any. Outliers are considered undesired noise because they can introduce errors in calculations, for example in normal estimation. Hence, trimming them out of the cloud will not only make computations faster (because the fewer points there are, the less time they take, as we will see with downsampling), but also help us get more precise values.&lt;br /&gt;
&lt;br /&gt;
===Radius-based===&lt;br /&gt;
&lt;br /&gt;
The radius-based outlier removal is the simplest method of all. You must specify a search radius and the minimum number of neighbors that a point must have to avoid being labelled as an outlier. The algorithm then iterates through all points (which can be slow if the cloud is big) and performs the check: if fewer than that number of neighbors are found within the radius, the point is removed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/radius_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::RadiusOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Every point must have 10 neighbors within 15cm, or it will be removed.&lt;br /&gt;
	filter.setRadiusSearch(0.15);&lt;br /&gt;
	filter.setMinNeighborsInRadius(10);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All PCL classes that inherit from &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::FilterIndices&amp;quot;&amp;lt;/span&amp;gt; (like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::RadiusOutlierRemoval&amp;quot;&amp;lt;/span&amp;gt;) provide the function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt;, which if set to true, makes the filter return the points that did NOT pass the filter.&lt;br /&gt;
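&lt;br /&gt;
For reference, the check itself can be written as a brute-force sketch in plain C++ (a toy version of mine; PCL accelerates the neighbor counting with spatial search structures instead of comparing every pair):&lt;br /&gt;
&lt;br /&gt;
```cpp
#include <cstddef>
#include <vector>

struct PointXYZ { double x, y, z; };

// Brute-force radius criterion: keep a point only if at least
// minNeighbors other points lie within the search radius.
std::vector<PointXYZ> removeRadiusOutliers(const std::vector<PointXYZ>& cloud,
                                           double radius, int minNeighbors)
{
	std::vector<PointXYZ> kept;
	for (std::size_t i = 0; i < cloud.size(); ++i)
	{
		int neighbors = 0;
		for (std::size_t j = 0; j < cloud.size(); ++j)
		{
			if (i == j) continue;
			double dx = cloud[i].x - cloud[j].x;
			double dy = cloud[i].y - cloud[j].y;
			double dz = cloud[i].z - cloud[j].z;
			// Compare squared distances to avoid a square root.
			if (dx * dx + dy * dy + dz * dz <= radius * radius)
				++neighbors;
		}
		if (neighbors >= minNeighbors)
			kept.push_back(cloud[i]);
	}
	return kept;
}
```
&lt;br /&gt;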
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius, Neighbor count&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_radius_outlier_removal.html pcl::RadiusOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_radius_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Statistical===&lt;br /&gt;
&lt;br /&gt;
The statistical outlier removal process is a bit more refined. First, for every point, the mean distance to its ''K'' nearest neighbors is computed. Then, if we assume that the result is a [http://en.wikipedia.org/wiki/Normal_distribution normal (Gaussian) distribution] with a mean ''μ'' and a standard deviation ''σ'', we can deem it safe to remove all points whose mean distance falls outside the interval defined by the global mean and a multiple of the standard deviation. Basically, it runs a statistical analysis of the distances between neighboring points, and trims all points that are not considered &amp;quot;normal&amp;quot; (you define what &amp;quot;normal&amp;quot; means with the parameters of the algorithm).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/statistical_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::StatisticalOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Set number of neighbors to consider to 50.&lt;br /&gt;
	filter.setMeanK(50);&lt;br /&gt;
	// Set standard deviation multiplier to 1.&lt;br /&gt;
	// Points with a distance larger than 1 standard deviation of the mean distance will be outliers.&lt;br /&gt;
	filter.setStddevMulThresh(1.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can make use of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; with this filter too.&lt;br /&gt;
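&lt;br /&gt;
The thresholding step described above can be sketched in plain C++ (my own simplification, working directly on precomputed mean neighbor distances rather than on a cloud):&lt;br /&gt;
&lt;br /&gt;
```cpp
#include <cmath>
#include <vector>

// Keep element i when meanDistances[i] <= mu + mult * sigma, where mu and
// sigma are the mean and standard deviation of all the mean neighbor
// distances. Assumes a non-empty input.
std::vector<int> inlierIndices(const std::vector<double>& meanDistances, double mult)
{
	double mu = 0.0;
	for (double d : meanDistances)
		mu += d;
	mu /= meanDistances.size();

	double variance = 0.0;
	for (double d : meanDistances)
		variance += (d - mu) * (d - mu);
	double sigma = std::sqrt(variance / meanDistances.size());

	std::vector<int> inliers;
	for (int i = 0; i < static_cast<int>(meanDistances.size()); ++i)
		if (meanDistances[i] <= mu + mult * sigma)
			inliers.push_back(i);
	return inliers;
}
```
&lt;br /&gt;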
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Neighbor count, Deviation multiplier&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/statistical_outlier.php Removing outliers using a StatisticalOutlierRemoval filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_statistical_outlier_removal.html pcl::StatisticalOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_statistical_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Resampling==&lt;br /&gt;
&lt;br /&gt;
Resampling means changing the number of points of the cloud, either by reducing (downsampling) or increasing (upsampling). Both have a purpose, as we will see.&lt;br /&gt;
&lt;br /&gt;
===Downsampling===&lt;br /&gt;
&lt;br /&gt;
A sensor like Kinect or Xtion produces a cloud with 307200 (640x480) points. Believe it or not, that is a lot of information, and the new Kinect is already out there, working at a higher resolution. Keep in mind that a simple operation performed on every point of a cloud is ''O(n)'', with ''n'' being the number of points. If we had to compare every point with its ''k'' nearest neighbors, it would be ''O(nk)''. You may have noticed by now that some of the algorithms I showed you take several seconds to run on your PC; now you know why.&lt;br /&gt;
&lt;br /&gt;
Optimization techniques are being implemented as I write this to allow PCL to employ the computer's GPU (graphics card) to perform the computations using [https://developer.nvidia.com/about-cuda CUDA]. Many classes already provide an alternative in the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::gpu&amp;quot;&amp;lt;/span&amp;gt; namespace. This offers dramatic improvements in speed, but it is not always an option.&lt;br /&gt;
&lt;br /&gt;
Another option is to reduce the complexity of the cloud, that is, to remove all points that are not needed for the task at hand. There are several ways to do this. You could trim all outliers from the cloud, then perform segmentation to extract only the parts that interest you. Another common operation is downsampling, which will give you back a cloud that is, for all intents and purposes, equivalent to the original one, despite having fewer points.&lt;br /&gt;
&lt;br /&gt;
Downsampling is done via ''voxelization''. I already explained what voxels were in the [[#Octree | octree section]]. The cloud is divided into multiple cube-shaped regions with the desired resolution. Then, all points inside every voxel are processed so only one remains. The simplest way would be to randomly select one of them, but a more accurate approach is to compute the [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Computing_the_centroid|centroid]], which is the point whose coordinates are the mean values of all the points that belong to the voxel.&lt;br /&gt;
&lt;br /&gt;
If done with sensible parameters, downsampling will yield results that are precise enough to work with, while demanding much less CPU power.&lt;br /&gt;
&lt;br /&gt;
====Voxel grid====&lt;br /&gt;
&lt;br /&gt;
A straightforward way of downsampling a cloud:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/voxel_grid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::VoxelGrid&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// We set the size of every voxel to be 1x1x1cm&lt;br /&gt;
	// (only one point per every cubic centimeter will survive).&lt;br /&gt;
	filter.setLeafSize(0.01f, 0.01f, 0.01f);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeDownsampling.png | Original point cloud.&lt;br /&gt;
File:AfterDownsampling.png | Point cloud after being downsampled with a leaf size of 1cm.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Downsampled cloud&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/voxel_grid.php Downsampling a PointCloud using a VoxelGrid filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_voxel_grid.html pcl::VoxelGrid]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Uniform sampling====&lt;br /&gt;
&lt;br /&gt;
This class does basically the same, but it outputs the indices of the points that survived the process. It is a common way of choosing keypoints when computing [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/uniform_sampling.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Uniform sampling object.&lt;br /&gt;
	pcl::UniformSampling&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Set the radius of the search neighborhood to 1cm; like with the voxel grid,&lt;br /&gt;
	// only one point per neighborhood will survive.&lt;br /&gt;
	filter.setRadiusSearch(0.01f);&lt;br /&gt;
	// We need an additional object to store the indices of surviving points.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt; keypointIndices;&lt;br /&gt;
&lt;br /&gt;
	filter.compute(keypointIndices);&lt;br /&gt;
	pcl::copyPointCloud(*cloud, keypointIndices.points, *filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_uniform_sampling.html pcl::UniformSampling]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample_uniform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Upsampling===&lt;br /&gt;
&lt;br /&gt;
Upsampling is a form of [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Reconstruction| surface reconstruction]], but I cover it here instead. When you have fewer points than you would like, upsampling can help you recover the original surface(s) by [http://en.wikipedia.org/wiki/Upsampling interpolating] the points you currently have, which is just an educated guess. The result will never be a hundred percent accurate, but sometimes it is the only choice. So, before downsampling your clouds, be sure to keep a copy of the original data!&lt;br /&gt;
&lt;br /&gt;
PCL uses the [http://en.wikipedia.org/wiki/Moving_least_squares Moving Least Squares (MLS)] algorithm for this. The code looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filtering object.&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Object for searching.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// Upsampling method. Other possibilities are DISTINCT_CLOUD, RANDOM_UNIFORM_DENSITY&lt;br /&gt;
	// and VOXEL_GRID_DILATION. NONE disables upsampling. Check the API for details.&lt;br /&gt;
	filter.setUpsamplingMethod(pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt;::SAMPLE_LOCAL_PLANE);&lt;br /&gt;
	// Radius around each point, where the local plane will be sampled.&lt;br /&gt;
	filter.setUpsamplingRadius(0.03);&lt;br /&gt;
	// Sampling step size. Bigger values will yield fewer (if any) new points.&lt;br /&gt;
	filter.setUpsamplingStepSize(0.02);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:PCL_upsampling_original.png | Original point cloud (coffee mug on a table).&lt;br /&gt;
File:PCL_upsampling_downsampled.png | Downsampled with a leaf size of 1cm.&lt;br /&gt;
File:PCL_upsampling_local_plane.png | Upsampled using SAMPLE_LOCAL_PLANE (upsampling radius 3cm, step size 2cm).&lt;br /&gt;
File:PCL_upsampling_uniform_density.png | Upsampled using RANDOM_UNIFORM_DENSITY (target point density of 300 per 3cm radius). &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html#ac29ad97b98353d64ce64e2ff924f7d20 setUpsamplingMethod()] API to see all upsampling methods available, and the parameters you need to set for them to work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, Upsampling method, [Upsampling parameters]&lt;br /&gt;
* '''Output''': Upsampled cloud&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_upsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Reconstruction=&lt;br /&gt;
&lt;br /&gt;
==Surface smoothing==&lt;br /&gt;
&lt;br /&gt;
As stated, depth sensors are not very accurate, and the resulting clouds have measurement errors, outliers, holes in surfaces, etc. Surfaces can be reconstructed by means of an algorithm that iterates through all points and interpolates the data, trying to guess what the original surface looked like. Like with upsampling, PCL uses the [http://en.wikipedia.org/wiki/Moving_least_squares MLS] algorithm and class. Performing this step is important, because the resulting cloud's normals will be more accurate.&lt;br /&gt;
&lt;br /&gt;
Surface smoothing is easily done with the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// The output will also contain the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr smoothedCloud(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Smoothing object (we choose what point types we want as input and output).&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointNormal&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// If true, the surface and normal are approximated using a polynomial estimation&lt;br /&gt;
	// (if false, only a tangent one).&lt;br /&gt;
	filter.setPolynomialFit(true);&lt;br /&gt;
	// We can tell the algorithm to also compute smoothed normals (optional).&lt;br /&gt;
	filter.setComputeNormals(true);&lt;br /&gt;
	// kd-tree object for performing searches.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*smoothedCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output cloud will also contain the improved normals of the smoothed cloud.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, [Polynomial fit], [Compute normals]&lt;br /&gt;
* '''Output''': Smoothed cloud, [Smoothed normals]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/resampling.php Smoothing and normal estimation based on polynomial reconstruction]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_surface_smoothing.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4736</id>
		<title>PCL/OpenNI tutorial 2: Cloud processing (basic)</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_2:_Cloud_processing_(basic)&amp;diff=4736"/>
				<updated>2015-11-01T23:00:38Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Point_cloud.png|thumb|right|200px|Rendering of a point cloud, showing a desktop.]]&lt;br /&gt;
&lt;br /&gt;
All right: in the [[PCL/OpenNI tutorial 1: Installing and testing| previous tutorial]] you installed OpenNI and PCL. You managed to get your device working. You compiled and ran the example program, watched as it fetched one frame after another, and even saved a couple of point clouds to disk. Now you may be wondering: &amp;quot;What now? What are depth sensors useful for? What can I do with point clouds?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Well, depth cameras have a lot of useful applications. For example, you could use them as [http://tech.integrate.biz/kinect_mocap.htm motion tracking] hardware and save yourself thousands of dollars in professional gear. The original Kinect was actually designed as an alternative to physical controllers (e.g., gamepads) in games. [https://frantracerkinectft.codeplex.com/ Hand gestures] can also be recognized, so people have managed to successfully implement control interfaces similar to what could be seen in ''Minority Report''.&lt;br /&gt;
&lt;br /&gt;
Working low-level with point clouds offers interesting possibilities too. As they are essentially a 3D &amp;quot;scan&amp;quot; of the scene, you could capture your whole room as a [http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php continuous 3D mesh], then import it in your modelling software of choice and use it as a reference for a better reconstruction. You could use it to measure distances or detect obstacles for a robot navigation system. But the most exciting alternative by far, and the one where most research is currently focused, is to perform 3D object recognition and pose estimation, even in [http://www.youtube.com/watch?feature=player_detailpage&amp;amp;v=quGhaggn3cQ#t=349 real time]. This allows a robot to fulfill tasks like going to the kitchen to get you a cup of coffee.&lt;br /&gt;
&lt;br /&gt;
Most of the advanced stuff that can be done with a point cloud requires some previous steps, like filtering, reconstruction or normal estimation. In this tutorial, I will introduce you to the basics of point cloud processing, and leave the complicated methods for the [[PCL/OpenNI tutorial 3: Cloud processing (advanced) | next tutorial]]. I will explain what every technique does and what it should be used for, and include PCL code snippets so you can check how to implement it in your programs.&lt;br /&gt;
&lt;br /&gt;
=Point clouds=&lt;br /&gt;
&lt;br /&gt;
First, a little explanation. A point cloud as taken from a depth sensor consists of a series of points in 3D space, as simple as that. A &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::PointCloud&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; object stores the points inside a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;std::vector&amp;lt;PointT&amp;gt;&amp;quot;&amp;lt;/span&amp;gt; structure. The cloud object exposes some functions that let you get information about them, like the point count. As you can see, the class is templated so you can instantiate it for many types of points. For example, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, which stores three floats for the X, Y and Z coordinates, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGB&amp;quot;&amp;lt;/span&amp;gt;, which also stores color (texture) information for each point, the kind we would get with Kinect or Xtion RGB-D cameras. A list of [http://pointclouds.org/documentation/tutorials/adding_custom_ptype.php#what-pointt-types-are-available-in-pcl point types] and a basic description of the [http://pointclouds.org/documentation/tutorials/basic_structures.php cloud object] are available on PCL's website.&lt;br /&gt;
&lt;br /&gt;
Clouds will usually be handled with pointers, as they tend to be large. PCL uses the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;boost::shared_ptr&amp;quot;&amp;lt;/span&amp;gt; class from the [http://www.boost.org/ Boost C++ libraries], which keeps track of the references to a certain object. Every time you pass a shared pointer and a copy of it is created, a reference counter is automatically increased. When a pointer is destroyed, it is decreased. If the counter ever reaches 0, it means that no one is holding an active reference to the object, and its destructor is called to free memory. This can take a while to get used to, and you will be mixing cloud objects and cloud object pointers until then, but do not worry.&lt;br /&gt;
&lt;br /&gt;
==PCD files==&lt;br /&gt;
&lt;br /&gt;
When clouds are saved to disk, a [http://pointclouds.org/documentation/tutorials/pcd_file_format.php PCD] (Point Cloud Data) file is produced. You can choose to do so in binary or plain text format. The first is faster to work with, whereas the second lets you inspect the cloud by opening it with a common text editor. If you do so, you may see something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;# .PCD v.7 - Point Cloud Data file format&lt;br /&gt;
VERSION .7&lt;br /&gt;
FIELDS x y z&lt;br /&gt;
SIZE 4 4 4&lt;br /&gt;
TYPE F F F&lt;br /&gt;
WIDTH 2&lt;br /&gt;
HEIGHT 1&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 2&lt;br /&gt;
DATA ascii&lt;br /&gt;
0.73412 -1.12643 0.82218&lt;br /&gt;
0.44739 -0.34735 -0.04624&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells us that the cloud consists of 2 points of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; (because each point has only 3 fields: the X, Y and Z coordinates, each stored in a 4-byte float), and lists their coordinates. It also stores the viewpoint information (the position of the sensor relative to the points) as a translation plus an orientation quaternion. Knowing the position of the camera can be useful for certain procedures like normal estimation.&lt;br /&gt;
&lt;br /&gt;
===Reading from file===&lt;br /&gt;
&lt;br /&gt;
Reading a point cloud from a PCD file and storing it in memory can be done with a single function call. You must specify the correct point type. For example, for .pcd files saved with the OpenNI viewer application that we saw in the [[PCL/OpenNI tutorial 1: Installing and testing | previous tutorial]], you could use either &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZRGBA&amp;quot;&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; if you are not going to use the color information.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I have not bothered to include proper error/argument checking, but you should do it in your programs. Be sure to check the [http://docs.pointclouds.org PCL API] every time I introduce you to one of its classes or methods.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/reading_pcd.php Reading Point Cloud data from PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_read.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Writing to file===&lt;br /&gt;
&lt;br /&gt;
Saving a point cloud to a file is as easy as reading one. You can choose between plain readable (ASCII) and binary format. The first can be read and altered with a text editor; the second is faster.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Write it back to disk under a different name.&lt;br /&gt;
	// Another possibility would be &amp;quot;savePCDFileBinary()&amp;quot;.&lt;br /&gt;
	pcl::io::savePCDFileASCII(&amp;quot;output.pcd&amp;quot;, *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/writing_pcd.php Writing Point Cloud data to PCD files]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl_1_1io.html pcl::io namespace]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_file_write.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you want a better alternative for loading or saving PCD files (some of the functions we saw are a bit outdated, but they do the job), take a look at the [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_reader.html PCDReader] and [http://docs.pointclouds.org/trunk/classpcl_1_1_p_c_d_writer.html PCDWriter] classes.&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
Of course, PCL ships with a dedicated program to load and visualize point clouds and a lot of other stuff, &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''pcl_viewer''&amp;lt;/span&amp;gt;. You can run it from a terminal with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;pcl_viewer &amp;lt;file 1&amp;gt; [file 2] ... [file N]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you load several files, each cloud will be rendered with a different color to help you tell them apart. The program internally uses the [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_p_c_l_visualizer.html pcl::visualization::PCLVisualizer] class. Press the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''H'''&amp;lt;/span&amp;gt; key while it is running to print the help on the terminal. Keys I tend to find useful are &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''1'''&amp;lt;/span&amp;gt;, which chooses a different color for the cloud (useful if the current one is too hard to see), &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''R'''&amp;lt;/span&amp;gt;, which resets the camera, and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''J'''&amp;lt;/span&amp;gt;, which saves a PNG snapshot.&lt;br /&gt;
&lt;br /&gt;
In your own code, it is possible to iterate through the cloud and print the information of each point to console, but this is cumbersome. Sometimes you just want to visualize the cloud to check something. PCL can create a window for you and display it, letting users pan and zoom their way around. The simplest method is the following one:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Cloud viewer object. You can set the window title.&lt;br /&gt;
	pcl::visualization::CloudViewer viewer(argv[1]);&lt;br /&gt;
	viewer.showCloud(cloud);&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		// Do nothing but wait.&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CloudViewer&amp;quot;&amp;lt;/span&amp;gt; has fewer features than &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCLVisualizer&amp;quot;&amp;lt;/span&amp;gt; but it is easier to set up. The program will remain active until you close the window with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt; or press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorials''':&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/cloud_viewer.php The CloudViewer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/pcl_visualizer.php PCLVisualizer]&lt;br /&gt;
** [http://pointclouds.org/documentation/tutorials/walkthrough.php Visualization library overview]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_cloud_viewer.html pcl::visualization::CloudViewer]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_visualize_cloud.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Concatenating two clouds==&lt;br /&gt;
&lt;br /&gt;
There are two types of concatenation with point clouds:&lt;br /&gt;
&lt;br /&gt;
* '''Points''': You initially have two clouds, &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;, with &amp;quot;N&amp;quot; and &amp;quot;M&amp;quot; points, respectively. A third cloud (&amp;quot;C&amp;quot;) is created that has &amp;quot;N + M&amp;quot; points (all points of the first two clouds). For this to work, the point type must be the same for all clouds (&amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt;, or &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt;, or whatever).&lt;br /&gt;
* '''Fields''': With this, you can join the fields of two clouds with different types. The only constraint is that both must have the same number of points. For example, if you have a cloud of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointXYZ&amp;quot;&amp;lt;/span&amp;gt; type and another of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type with &amp;quot;N&amp;quot; points each, you can concatenate them to create a cloud of type &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointNormal&amp;quot;&amp;lt;/span&amp;gt; (that stores X, Y and Z coordinates, plus the normal) that also has &amp;quot;N&amp;quot; points.&lt;br /&gt;
&lt;br /&gt;
Point concatenation is the easiest one because you can use the cloud's addition operator (+), which is overloaded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloudC(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Create cloud &amp;quot;C&amp;quot;, with the points of both &amp;quot;A&amp;quot; and &amp;quot;B&amp;quot;.&lt;br /&gt;
	*cloudC = (*cloudA) + (*cloudB);&lt;br /&gt;
	// Now cloudC-&amp;gt;points.size() equals cloudA-&amp;gt;points.size() + cloudB-&amp;gt;points.size().&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_clouds.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Field concatenation is done with a function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudPoints(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr cloudNormals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudPoints) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Compute the normals of the cloud (do not worry, we will see this later).&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloudPoints);&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.05);&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
	normalEstimation.compute(*cloudNormals);&lt;br /&gt;
&lt;br /&gt;
	// Concatenate the fields (PointXYZ + Normal = PointNormal).&lt;br /&gt;
	pcl::concatenateFields(*cloudPoints, *cloudNormals, *cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Print the data to standard output.&lt;br /&gt;
	for (size_t currentPoint = 0; currentPoint &amp;lt; cloudAll-&amp;gt;points.size(); currentPoint++)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tXYZ:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].x &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].y &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tNormal:&amp;quot; &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[0] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[1] &amp;lt;&amp;lt; &amp;quot; &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudAll-&amp;gt;points[currentPoint].normal[2] &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/concatenate_clouds.php Concatenate the points of two Point Clouds]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__common.html#gac6add803f86fd16a998dce541e9ef402 pcl::concatenateFields()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_concatenate_fields.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Matrix transformations==&lt;br /&gt;
&lt;br /&gt;
Clouds can be directly manipulated with [http://en.wikipedia.org/wiki/Transformation_matrix transformations] (translation, rotation, scaling). 3D transformations use a 4x4 matrix, which can be a little complicated to understand (especially when it comes to rotations), so consult a book or [http://web.iitd.ac.in/~hegde/cad/lecture/L6_3dtrans.pdf tutorial] before trying to use one. The transformed cloud is saved to another object, so you can keep both versions and use them as you wish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/transforms.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr transformed(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Transformation matrix object, initialized to the identity matrix&lt;br /&gt;
	// (a null transformation).&lt;br /&gt;
	Eigen::Matrix4f transformation = Eigen::Matrix4f::Identity();&lt;br /&gt;
&lt;br /&gt;
	// Set a rotation around the Z axis.&lt;br /&gt;
	float theta = M_PI / 2; // 90 degrees.&lt;br /&gt;
	transformation(0, 0) = cos(theta);&lt;br /&gt;
	transformation(0, 1) = -sin(theta);&lt;br /&gt;
	transformation(1, 0) = sin(theta);&lt;br /&gt;
	transformation(1, 1) = cos(theta);&lt;br /&gt;
&lt;br /&gt;
	// Set a translation on the X axis.&lt;br /&gt;
	transformation(0, 3) = 1.0f; // 1 meter (positive direction).&lt;br /&gt;
&lt;br /&gt;
	pcl::transformPointCloud(*cloud, *transformed, transformation);&lt;br /&gt;
&lt;br /&gt;
	// Visualize both the original and the result.&lt;br /&gt;
	pcl::visualization::PCLVisualizer viewer(argv[1]);&lt;br /&gt;
	viewer.addPointCloud(cloud, &amp;quot;original&amp;quot;);&lt;br /&gt;
	// The transformed one's points will be red in color.&lt;br /&gt;
	pcl::visualization::PointCloudColorHandlerCustom&amp;lt;pcl::PointXYZ&amp;gt; colorHandler(transformed, 255, 0, 0);&lt;br /&gt;
	viewer.addPointCloud(transformed, colorHandler, &amp;quot;transformed&amp;quot;);&lt;br /&gt;
	// Add 3D colored axes to help see the transformation.&lt;br /&gt;
	viewer.addCoordinateSystem(1.0, &amp;quot;reference&amp;quot;, 0);&lt;br /&gt;
&lt;br /&gt;
	while (!viewer.wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer.spinOnce();&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Cloud_transformation.png|thumb|center|600px|Rotation and translation applied to a cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the other overloads of the function, because you can also pass an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Affine3&amp;quot;&amp;lt;/span&amp;gt; or an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Quaternion&amp;quot;&amp;lt;/span&amp;gt; instead of an &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Eigen::Matrix4&amp;quot;&amp;lt;/span&amp;gt; object.&lt;br /&gt;
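If you want to sanity-check a transformation by hand, multiply the 4x4 matrix with a homogeneous point yourself. This standalone sketch (no PCL or Eigen; "transformPoint" and "exampleTransformation" are illustrative helpers) applies the same 90-degree rotation plus 1-meter translation as the example above:

```cpp
#include <array>
#include <cmath>

// Apply a 4x4 homogeneous transformation to a 3D point, treating the
// point as the column vector (x, y, z, 1).
std::array<float, 3> transformPoint(const std::array<std::array<float, 4>, 4>& m,
                                    const std::array<float, 3>& p)
{
	std::array<float, 3> out{};
	for (int row = 0; row < 3; row++)
		out[row] = m[row][0] * p[0] + m[row][1] * p[1] + m[row][2] * p[2] + m[row][3];
	return out;
}

// Build the same transformation as in the example: a 90-degree rotation
// around Z, then a 1-meter translation along X.
std::array<std::array<float, 4>, 4> exampleTransformation()
{
	std::array<std::array<float, 4>, 4> m{};
	m[0][0] = m[1][1] = m[2][2] = m[3][3] = 1.0f; // start from the identity
	float theta = std::acos(-1.0f) / 2.0f; // pi/2, i.e. 90 degrees
	m[0][0] = std::cos(theta); m[0][1] = -std::sin(theta);
	m[1][0] = std::sin(theta); m[1][1] = std::cos(theta);
	m[0][3] = 1.0f; // translation on X
	return m;
}
```

The point (1, 0, 0) rotates to (0, 1, 0) and then translates to (1, 1, 0), which you can use to check your own matrices.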
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Transformation&lt;br /&gt;
* '''Output''': Transformed points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/matrix_transform.php Using a matrix to transform a point cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#aefbe4956d1c8fb785a97df6708d57c56 pcl::transformPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_matrix_transform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Using indices==&lt;br /&gt;
&lt;br /&gt;
Many of PCL's algorithms return indices. Indices are just a list that references some of the points of a cloud (not the points themselves with all their data, but their positions in the cloud). For example, a cylinder segmentation algorithm will output the list of points that were considered to fit the cylinder model. You can then use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::ExtractIndices&amp;quot;&amp;lt;/span&amp;gt; class to copy the indexed points to a second cloud for further work. Its &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; function will instruct it to extract the points NOT indexed instead (so you can, for example, remove all cylinder-shaped clusters from the cloud).&lt;br /&gt;
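The extraction (and its "negative" mode) can be illustrated without PCL. In this sketch, a plain vector of values stands in for the cloud, and "extractIndices" is a hypothetical helper, not PCL's class:

```cpp
#include <algorithm>
#include <vector>

// Copy either the indexed elements or, in "negative" mode, everything
// that is NOT indexed (the behavior ExtractIndices::setNegative toggles).
std::vector<double> extractIndices(const std::vector<double>& cloud,
                                   std::vector<int> indices, bool negative)
{
	std::vector<double> out;
	std::sort(indices.begin(), indices.end()); // enable binary_search below
	for (int i = 0; i < static_cast<int>(cloud.size()); i++)
	{
		bool indexed = std::binary_search(indices.begin(), indices.end(), i);
		if (indexed != negative) // indexed in normal mode, unindexed in negative mode
			out.push_back(cloud[i]);
	}
	return out;
}
```

With indices {1, 3} over four values, normal mode keeps the second and fourth; negative mode keeps the other two.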
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/extract_indices.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/segmentation/sac_segmentation.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudAll(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudExtracted(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudAll) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Plane segmentation (do not worry, we will see this later).&lt;br /&gt;
	pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);&lt;br /&gt;
	pcl::SACSegmentation&amp;lt;pcl::PointXYZ&amp;gt; segmentation;&lt;br /&gt;
	segmentation.setOptimizeCoefficients(true);&lt;br /&gt;
	segmentation.setModelType(pcl::SACMODEL_PLANE);&lt;br /&gt;
	segmentation.setMethodType(pcl::SAC_RANSAC);&lt;br /&gt;
	segmentation.setDistanceThreshold(0.01);&lt;br /&gt;
	segmentation.setInputCloud(cloudAll);&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the indices.&lt;br /&gt;
	pcl::PointIndices::Ptr pointIndices(new pcl::PointIndices);&lt;br /&gt;
&lt;br /&gt;
	segmentation.segment(*pointIndices, *coefficients);&lt;br /&gt;
&lt;br /&gt;
	// Object for extracting points from a list of indices.&lt;br /&gt;
	pcl::ExtractIndices&amp;lt;pcl::PointXYZ&amp;gt; extract;&lt;br /&gt;
	extract.setInputCloud(cloudAll);&lt;br /&gt;
	extract.setIndices(pointIndices);&lt;br /&gt;
	// We will extract the points that are NOT indexed (the ones that are not in a plane).&lt;br /&gt;
	extract.setNegative(true);&lt;br /&gt;
	extract.filter(*cloudExtracted);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Indices, [Negative mode]&lt;br /&gt;
* '''Output''': Points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/extract_indices.php Extracting indices from a PointCloud]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/structpcl_1_1_point_indices.html pcl::PointIndices]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_extract_indices.html pcl::ExtractIndices]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_extract_indices.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Removing NaNs==&lt;br /&gt;
&lt;br /&gt;
The cloud retrieved from the sensor may contain several kinds of measurement errors and/or inaccuracies. One of them is the presence of [http://en.wikipedia.org/wiki/NaN NaN (Not a Number)] values in the coordinates of some points, as you can see in the following file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;# .PCD v0.7 - Point Cloud Data file format&lt;br /&gt;
VERSION 0.7&lt;br /&gt;
FIELDS x y z rgba&lt;br /&gt;
SIZE 4 4 4 4&lt;br /&gt;
TYPE F F F U&lt;br /&gt;
COUNT 1 1 1 1&lt;br /&gt;
WIDTH 640&lt;br /&gt;
HEIGHT 480&lt;br /&gt;
VIEWPOINT 0 0 0 1 0 0 0&lt;br /&gt;
POINTS 307200&lt;br /&gt;
DATA ascii&lt;br /&gt;
nan nan nan 10135463&lt;br /&gt;
nan nan nan 10398635&lt;br /&gt;
nan nan nan 10070692&lt;br /&gt;
nan nan nan 10268071&lt;br /&gt;
...&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cloud objects have a member called &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;is_dense&amp;quot;&amp;lt;/span&amp;gt; that is true if all points have valid, finite values. A NaN indicates that there was a problem measuring the distance to that point, maybe because the sensor was too close or too far, or because the surface was reflective.&lt;br /&gt;
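The density check is conceptually just a scan for non-finite coordinates. A minimal standalone sketch (the "isDense" function is illustrative; PCL stores the flag in the cloud rather than recomputing it):

```cpp
#include <array>
#include <cmath>
#include <vector>

// A cloud is "dense" when every coordinate of every point is finite;
// a NaN or infinity on any axis marks a failed measurement.
bool isDense(const std::vector<std::array<float, 3>>& points)
{
	for (const auto& p : points)
		for (float coord : p)
			if (!std::isfinite(coord))
				return false;
	return true;
}
```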
&lt;br /&gt;
Sadly, many of PCL's algorithms will fail if even a single NaN is present in the cloud you feed them as input, with a message like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Assertion `point_representation_-&amp;gt;isValid (point) &amp;amp;&amp;amp; &amp;quot;Invalid (NaN, Inf) point coordinates given to radiusSearch!&amp;quot;' failed.&amp;quot;''&amp;lt;/span&amp;gt;. If this happens, you will have to remove such values from the cloud. You can use a program like the following to &amp;quot;clean&amp;quot; a cloud from all NaN points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/filter.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	if (argc != 3)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;\tUsage: &amp;quot; &amp;lt;&amp;lt; argv[0] &amp;lt;&amp;lt; &amp;quot; &amp;lt;input cloud&amp;gt; &amp;lt;output cloud&amp;gt;&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// The mapping tells you to what points of the old cloud the new ones correspond,&lt;br /&gt;
	// but we will not use it.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; mapping;&lt;br /&gt;
	pcl::removeNaNFromPointCloud(*cloud, *cloud, mapping);&lt;br /&gt;
&lt;br /&gt;
	// Save it back.&lt;br /&gt;
	pcl::io::savePCDFileASCII(argv[2], *cloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem with this method is that it will not keep the cloud organized. All clouds store a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;width&amp;quot;&amp;lt;/span&amp;gt; and a &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;height&amp;quot;&amp;lt;/span&amp;gt; variable. In ''unorganized'' clouds, the width equals the total number of points and the height is set to 1. In ''organized'' clouds (like the ones taken from camera-like sensors such as Kinect or Xtion), the width and the height match the resolution of the image the sensor works with. Points are distributed in rows, like in the depth image, and each one corresponds to a pixel. The member function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;isOrganized()&amp;quot;&amp;lt;/span&amp;gt; will return true if the height is greater than 1.&lt;br /&gt;
&lt;br /&gt;
Because removing NaNs changes the number of points, the cloud can no longer keep the original width/height ratio, so the function will set the height to 1. This is not a big issue, as only a few of PCL's algorithms work explicitly with organized clouds (and they mostly use the organization for optimization), but you must take it into account.&lt;br /&gt;
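For reference, the row-major layout of an organized cloud can be sketched like this (both helpers are hypothetical, for illustration; PCL's cloud objects expose an equivalent lookup through an "at(column, row)" accessor):

```cpp
#include <cstddef>

// In an organized cloud, points are stored row by row, so the point that
// corresponds to pixel (row, col) of the depth image lives at this index
// in the flat point array.
std::size_t organizedIndex(std::size_t row, std::size_t col, std::size_t width)
{
	return row * width + col;
}

// A cloud counts as organized when its height is greater than 1.
bool isOrganizedHeight(std::size_t height)
{
	return height > 1;
}
```

For a 640x480 cloud, the last pixel (row 479, column 639) maps to index 307199, i.e. the 307200th point, matching the PCD header shown above.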
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Cloud&lt;br /&gt;
* '''Output''': Cloud without NaNs&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__filters.html#gac463283a9e9c18a66d3d29b28a575064 pcl::removeNaNFromPointCloud()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_remove_NaNs.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Saving a PNG==&lt;br /&gt;
&lt;br /&gt;
PCL offers the possibility of saving the values of a point cloud to a PNG image file. Obviously, this can only be done with organized clouds, because the rows and columns of the resulting image will correspond exactly to the ones of the cloud. For example, if you have a point cloud taken from a sensor like Kinect or Xtion, you can use this to retrieve the 640x480 RGB image that matches the cloud.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/png_io.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZRGB&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGB&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Save the image (cloud must be organized).&lt;br /&gt;
	pcl::io::savePNGFile(&amp;quot;output.png&amp;quot;, *cloud, &amp;quot;rgb&amp;quot;);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;savePNGFile()&amp;quot;&amp;lt;/span&amp;gt; has several overloads; in this case, we have chosen the one that lets you specify which fields to save. If you omit that parameter, the function will save the RGB fields by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/group__io.html#gabd03e43fac3048635512f447c8bc50b3 pcl::io::savePNGFile()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_save_PNG.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computing the centroid==&lt;br /&gt;
&lt;br /&gt;
The [http://en.wikipedia.org/wiki/Centroid centroid] of a cloud is the point whose coordinates are the mean of those of all points in the cloud. You could say it is the &amp;quot;center of mass&amp;quot;, and several algorithms make use of it. If you intend to compute the actual center of gravity of a [[PCL/OpenNI_tutorial_3:_Cloud_processing_%28advanced%29#Segmentation|clustered]] object, though, remember that the sensor does not capture the part of the object that is hidden from the camera, like the back faces that are occluded by the front ones, or the inside. You only have the part of the surface that faces the camera.&lt;br /&gt;
&lt;br /&gt;
The centroid of a cloud can be computed with a single function call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/common/centroid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object to store the centroid coordinates.&lt;br /&gt;
	Eigen::Vector4f centroid;&lt;br /&gt;
	&lt;br /&gt;
	pcl::compute3DCentroid(*cloud, centroid);&lt;br /&gt;
&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; &amp;quot;The XYZ coordinates of the centroid are: (&amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[0] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[1] &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
			  &amp;lt;&amp;lt; centroid[2] &amp;lt;&amp;lt; &amp;quot;).&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/namespacepcl.html#a23daec3829d2d4100a2f185372b3753a pcl::compute3DCentroid()]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_compute_centroid.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
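Under the hood, the centroid is just the per-axis mean. A self-contained sketch of what the computation amounts to (an illustrative helper, not PCL's implementation, which also skips NaN points and assumes a non-empty cloud):

```cpp
#include <array>
#include <vector>

// Per-axis mean of a set of XYZ points. Assumes points is non-empty.
std::array<double, 3> centroid(const std::vector<std::array<double, 3>>& points)
{
	std::array<double, 3> sum{0.0, 0.0, 0.0};
	for (const auto& p : points)
		for (int axis = 0; axis < 3; axis++)
			sum[axis] += p[axis];
	for (int axis = 0; axis < 3; axis++)
		sum[axis] /= points.size(); // turn the sums into means
	return sum;
}
```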
&lt;br /&gt;
&lt;br /&gt;
=Feature estimation=&lt;br /&gt;
&lt;br /&gt;
A ''feature'' of a point is some characteristic that can help us to tell it apart from a different point. You could say that the XYZ coordinates are characteristic enough, but when matching the data of two different, yet corresponding, clouds, they become useless. We need something better: a value that will be similar when computed for the same spot of a surface in both sets.&lt;br /&gt;
&lt;br /&gt;
Normals are an example of a very simple feature. 3D [http://pointclouds.org/documentation/tutorials/how_features_work.php features] will be important later in our tutorials, when we talk about [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]] (more detailed &amp;quot;signatures&amp;quot; of points used for close matching).&lt;br /&gt;
&lt;br /&gt;
==Normal estimation==&lt;br /&gt;
&lt;br /&gt;
As you may remember from geometry class, the [http://en.wikipedia.org/wiki/Surface_normal normal] of a plane is a unit vector that is perpendicular to it. The normal of a surface at a point is defined as the vector perpendicular to the plane that is tangent to the surface at that point. Surface normals can be calculated for the points of a cloud, too. They are considered a feature, albeit not a very discriminative one.&lt;br /&gt;
&lt;br /&gt;
I will not go into detail about the math of the estimation method; you just have to know that it uses the nearest neighbors (the points that are closest to the one we are calculating the normal for) to find the tangent plane and the normal vector. You can customize the method with the search radius (think of a sphere of that radius, centered on the point; all neighboring points that lie within it will be used for the computation) and the viewpoint (by default, the output normals are directionless; by assuming that all vectors must point towards the camera - because otherwise they would belong to surfaces that are not visible from the sensor - they can all be re-oriented consistently).&lt;br /&gt;
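As a refresher on the underlying geometry: the normal of a plane spanned by two non-parallel vectors is their normalized cross product. The real estimator fits the tangent plane to all neighbors through covariance analysis rather than picking two of them, but this standalone sketch (illustrative helpers, no PCL) shows the basic operation:

```cpp
#include <array>
#include <cmath>

// Cross product of two 3D vectors: a vector perpendicular to both,
// i.e. the normal of the plane they span.
std::array<double, 3> cross(const std::array<double, 3>& a, const std::array<double, 3>& b)
{
	return {a[1] * b[2] - a[2] * b[1],
	        a[2] * b[0] - a[0] * b[2],
	        a[0] * b[1] - a[1] * b[0]};
}

// Scale to unit length, since surface normals are unit vectors.
std::array<double, 3> normalize(std::array<double, 3> v)
{
	double len = std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
	for (double& c : v)
		c /= len;
	return v;
}
```

For the XY plane (spanned by the X and Y axes), this yields the Z axis as the normal, as expected.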
&lt;br /&gt;
Normals are also important because they give us information about the ''curvature'' of the surface at some point, which can be used to our advantage. Many of PCL's algorithms will require us to provide the normals of the input cloud. To estimate them, you can use the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/normal_3d.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::NormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// For every point, use all neighbors in a radius of 3cm.&lt;br /&gt;
	normalEstimation.setRadiusSearch(0.03);&lt;br /&gt;
	// A kd-tree is a data structure that makes searches efficient. More about it later.&lt;br /&gt;
	// The normal estimation object will use it to find nearest neighbors.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	normalEstimation.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, normals are stored in &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PointCloud&amp;quot;&amp;lt;/span&amp;gt; objects too, instantiated to the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Normal&amp;quot;&amp;lt;/span&amp;gt; type. If you ever extract some indices from a cloud before feeding them to an algorithm, be sure to do the same with the normals. I have not made use of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setViewPoint()&amp;quot;&amp;lt;/span&amp;gt; function, but you get the idea.&lt;br /&gt;
&lt;br /&gt;
Instead of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setRadiusSearch()&amp;quot;&amp;lt;/span&amp;gt;, you can use &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setKSearch()&amp;quot;&amp;lt;/span&amp;gt;, which takes an integer, ''K'', and takes a point's ''K'' nearest neighbors into account to compute the normal. Mind that this can produce odd results for points in sparse areas, as some of the &amp;quot;neighbors&amp;quot; will be too far from them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Normal_estimation.png|thumb|center|600px|Visualization of the normals computed for a point cloud.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search method, Radius | Neighbor count, [Indices], [Viewpoint]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation.php Estimating Surface Normals in a Point Cloud]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_normal_estimation.html pcl::NormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Integral images===&lt;br /&gt;
&lt;br /&gt;
Integral images are a method for normal estimation on organized clouds. The algorithm treats the cloud as a depth image and creates rectangular areas over which the normals are computed, taking into account the relationship between neighboring &amp;quot;pixels&amp;quot; (points), without the need to run lookups on a search structure like a tree. Because of this, it is very efficient, and the normals are usually computed in a split second. If you are performing real-time computations on clouds taken from an RGB-D sensor, this is the approach you should use, because the previous method will probably take several seconds to finish.&lt;br /&gt;
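The speed comes from the classic summed-area table trick: after one linear pass over the image, the sum of any rectangular region costs only four lookups, regardless of its size. A standalone sketch of that data structure (illustrative code, not PCL's implementation):

```cpp
#include <cstddef>
#include <vector>

// Build the integral image of a 2D grid of values: cell (r, c) of the
// result holds the sum of every value above and to the left of (r, c).
// An extra row and column of zeros simplifies the lookups. Assumes a
// non-empty, rectangular input.
std::vector<std::vector<double>> integralImage(const std::vector<std::vector<double>>& img)
{
	std::size_t rows = img.size(), cols = img[0].size();
	std::vector<std::vector<double>> sum(rows + 1, std::vector<double>(cols + 1, 0.0));
	for (std::size_t r = 0; r < rows; r++)
		for (std::size_t c = 0; c < cols; c++)
			sum[r + 1][c + 1] = img[r][c] + sum[r][c + 1] + sum[r + 1][c] - sum[r][c];
	return sum;
}

// Sum of the rectangle of rows [r0, r1) and columns [c0, c1): four lookups.
double rectSum(const std::vector<std::vector<double>>& sum,
               std::size_t r0, std::size_t c0, std::size_t r1, std::size_t c1)
{
	return sum[r1][c1] - sum[r0][c1] - sum[r1][c0] + sum[r0][c0];
}
```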
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/features/integral_image_normal.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// Object for storing the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;::Ptr normals(new pcl::PointCloud&amp;lt;pcl::Normal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Object for normal estimation.&lt;br /&gt;
	pcl::IntegralImageNormalEstimation&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt; normalEstimation;&lt;br /&gt;
	normalEstimation.setInputCloud(cloud);&lt;br /&gt;
	// Other estimation methods: COVARIANCE_MATRIX, AVERAGE_DEPTH_CHANGE, SIMPLE_3D_GRADIENT.&lt;br /&gt;
	// They determine the smoothness of the result, and the running time.&lt;br /&gt;
	normalEstimation.setNormalEstimationMethod(normalEstimation.AVERAGE_3D_GRADIENT);&lt;br /&gt;
	// Depth threshold for computing object borders based on depth changes, in meters.&lt;br /&gt;
	normalEstimation.setMaxDepthChangeFactor(0.02f);&lt;br /&gt;
	// Factor that influences the size of the area used to smooth the normals.&lt;br /&gt;
	normalEstimation.setNormalSmoothingSize(10.0f);&lt;br /&gt;
&lt;br /&gt;
	// Calculate the normals.&lt;br /&gt;
	normalEstimation.compute(*normals);&lt;br /&gt;
&lt;br /&gt;
	// Visualize them.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Normals&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(cloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	// Display one normal out of 20, as a line of length 3cm.&lt;br /&gt;
	viewer-&amp;gt;addPointCloudNormals&amp;lt;pcl::PointXYZ, pcl::Normal&amp;gt;(cloud, normals, 20, 0.03, &amp;quot;normals&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Estimation method, [Smoothing factor], [Depth threshold], [Rectangle size], [Border policy], [Depth dependent smoothing]&lt;br /&gt;
* '''Output''': Normals&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/normal_estimation_using_integral_images.php Normal Estimation Using Integral Images]&lt;br /&gt;
* '''Publications''':&lt;br /&gt;
** [http://islab.ulsan.ac.kr/files/announcement/411/Adaptive%20Neighborhood%20Selection%20for%20Real-Time%20Surface%20Normal.pdf Adaptive Neighborhood Selection for Real-Time Surface Normal Estimation from Organized Point Cloud Data Using Integral Images] (S. Holzer et al., 2012)&lt;br /&gt;
** [https://www.willowgarage.com/sites/default/files/holz_rgbd_seg.pdf Real-Time Plane Segmentation using RGB-D Cameras] (Dirk Holz et al., 2012)&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_integral_image_normal_estimation.html pcl::IntegralImageNormalEstimation]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_estimate_normals_integral.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Decomposition=&lt;br /&gt;
&lt;br /&gt;
When I talk about ''decomposition'', I mean the process of creating an ordered, organized data structure from the points of the cloud, with some purpose in mind, like making further analysis easier or applying filters.&lt;br /&gt;
&lt;br /&gt;
==''k''-d tree==&lt;br /&gt;
&lt;br /&gt;
A [http://en.wikipedia.org/wiki/K-d_tree ''k''-d tree] (''k''-dimensional tree) is a data structure that organizes a set of points in a ''k''-dimensional space, in a way that makes range search operations very efficient (for example, finding the nearest neighbor of a point, which is the point that is closest to it in space; or finding all neighbors within a radius).&lt;br /&gt;
&lt;br /&gt;
It is a binary tree, that is, every non-leaf node has two child nodes, the &amp;quot;left&amp;quot; and &amp;quot;right&amp;quot; ones. Each level splits space along a specific dimension. For example, in 3-dimensional space, the root node (first level) splits its children on the first dimension, X: points with greater X coordinates go to the right subtree, points with lesser ones to the left. At the second level (the nodes we just created), the split is done on the Y axis, following the same criterion. At the third level (the grandchildren), we use the Z axis. At the fourth level, we get back to the X axis, and so on. Usually, the median point is chosen as the root of each subtree.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Kdtree.png|thumb|center|700px|Example of a 2D ''k''-d tree (image from Wikimedia Commons).]]&lt;br /&gt;
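The axis-cycling, median-split construction described above can be sketched in plain C++. This is a hypothetical illustration of the idea, not the PCL implementation; the names ("KdNode", "buildKdTree") are made up for the example:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <memory>
#include <vector>

// A point in 3-dimensional space.
using Point = std::array<float, 3>;

struct KdNode
{
	Point point;                   // Median point stored at this node.
	std::unique_ptr<KdNode> left;  // Points with a smaller coordinate on the split axis.
	std::unique_ptr<KdNode> right; // Points with a greater coordinate on the split axis.
};

// Build a 3-d tree: at depth d, split the points on axis d % 3 around the median.
std::unique_ptr<KdNode> buildKdTree(std::vector<Point> points, std::size_t depth = 0)
{
	if (points.empty())
		return nullptr;

	std::size_t axis = depth % 3;            // Cycle X (0), Y (1), Z (2).
	std::size_t median = points.size() / 2;
	// Partially sort so the median on the current axis ends up in the middle.
	std::nth_element(points.begin(), points.begin() + median, points.end(),
	                 [axis](const Point& a, const Point& b) { return a[axis] < b[axis]; });

	auto node = std::make_unique<KdNode>();
	node->point = points[median];
	node->left = buildKdTree({points.begin(), points.begin() + median}, depth + 1);
	node->right = buildKdTree({points.begin() + median + 1, points.end()}, depth + 1);
	return node;
}
```

Note that choosing the median at every split (here with "std::nth_element") is what keeps the tree balanced, which is what makes range searches efficient.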
&lt;br /&gt;
&lt;br /&gt;
In PCL, creating a ''k''-d tree from a cloud is a straightforward and fast operation.&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
The following code snippet shows you how to use a ''k''-d tree to perform efficient searches:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/search/kdtree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// kd-tree object.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt; kdtree;&lt;br /&gt;
	kdtree.setInputCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of this point&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices(5);&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances(5);&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (kdtree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	if (kdtree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative would be &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;KdTreeFLANN&amp;quot;&amp;lt;/span&amp;gt;, which makes use of [http://www.cs.ubc.ca/research/flann/ FLANN] (Fast Library for Approximate Nearest Neighbors).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/kdtree_search.php How to use a KdTree to search]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_kd_tree.html pcl::search::KdTree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1_kd_tree_f_l_a_n_n.html pcl::KdTreeFLANN]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_kdtree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Octree==&lt;br /&gt;
&lt;br /&gt;
Like the ''k''-d tree, the [http://en.wikipedia.org/wiki/Octree octree] is a hierarchical tree data structure useful for searches, but also for compression or downsampling. Each octree node (called a ''voxel'', a term that will sound familiar to users of 3D engines, and which could be described as a &amp;quot;3D pixel&amp;quot;, that is, a cube) has either eight children or none. The root node describes a cubic bounding box which encapsulates all points. At every level, it is subdivided by a factor of 2, which results in an increased voxel resolution. That way, we can selectively give more resolution to certain zones.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Octree.png|thumb|center|700px|Depiction of an octree (image from Wikimedia Commons).]]&lt;br /&gt;
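The halving scheme can be captured with two small helper functions. These are hypothetical illustrations of the geometry, not part of PCL: with a cubic root bounding box of side ''rootSide'', level ''L'' has voxels of side ''rootSide'' / 2&lt;sup&gt;''L''&lt;/sup&gt;, and up to 8&lt;sup&gt;''L''&lt;/sup&gt; voxels in total.

```cpp
#include <cmath>

// Side length of a voxel at the given octree level:
// each level halves the side of the previous one.
double voxelSideAtLevel(double rootSide, unsigned level)
{
	return rootSide / std::pow(2.0, level);
}

// Maximum number of voxels at the given level: every subdivision
// replaces one voxel with 8 children, so the count grows as 8^level.
unsigned long long maxVoxelsAtLevel(unsigned level)
{
	return 1ULL << (3 * level); // 8^level
}
```

For example, an 8 m bounding box reaches 1 m voxels after 3 subdivisions, at which point the octree can hold up to 512 voxels at that level.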
&lt;br /&gt;
&lt;br /&gt;
===Searching===&lt;br /&gt;
&lt;br /&gt;
Downsampling will be covered in a different section, so now I will show you how to perform searches with an octree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree object, with a voxel resolution of 128&lt;br /&gt;
	// (side length of the voxels at the lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudSearch&amp;lt;pcl::PointXYZ&amp;gt; octree(128);&lt;br /&gt;
	octree.setInputCloud(cloud);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	// We will find all neighbors within the same voxel as this point.&lt;br /&gt;
	// (it does not have to be one of the cloud's, we can use any coordinate).&lt;br /&gt;
	pcl::PointXYZ point;&lt;br /&gt;
	point.x = 0.0524343;&lt;br /&gt;
	point.y = -0.58016;&lt;br /&gt;
	point.z = 1.776;&lt;br /&gt;
	// This vector will store the output neighbors.&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; pointIndices;&lt;br /&gt;
	// Perform the search, and print out results.&lt;br /&gt;
	if (octree.voxelSearch(point, pointIndices))&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors in the same voxel:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We will find the 5 nearest neighbors of the point.&lt;br /&gt;
	// This vector will store their squared distances to the search point.&lt;br /&gt;
	std::vector&amp;lt;float&amp;gt; squaredDistances;&lt;br /&gt;
	if (octree.nearestKSearch(point, 5, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;5 nearest neighbors of the point:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Now we find all neighbors within 3cm of the point&lt;br /&gt;
	// (inside a sphere of radius 3cm centered at the point).&lt;br /&gt;
	// The point DOES have to belong to the cloud.&lt;br /&gt;
	if (octree.radiusSearch(point, 0.03, pointIndices, squaredDistances) &amp;gt; 0)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Neighbors within 3cm:&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
		for (size_t i = 0; i &amp;lt; pointIndices.size(); ++i)&lt;br /&gt;
			std::cout &amp;lt;&amp;lt; &amp;quot;\t&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].x&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].y&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; &amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points[pointIndices[i]].z&lt;br /&gt;
					  &amp;lt;&amp;lt; &amp;quot; (squared distance: &amp;quot; &amp;lt;&amp;lt; squaredDistances[i] &amp;lt;&amp;lt; &amp;quot;)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, the code is mostly interchangeable with that of the ''k''-d tree.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius | Neighbor count&lt;br /&gt;
* '''Output''': Point indices, Squared distances&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree.php Spatial Partitioning and Search Operations with Octrees]&lt;br /&gt;
* '''API''':&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1search_1_1_octree.html pcl::search::Octree]&lt;br /&gt;
** [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_search.html pcl::octree::OctreePointCloudSearch]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_search.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Compressing/Decompressing===&lt;br /&gt;
&lt;br /&gt;
This is the code needed to first compress and then decompress a point cloud with PCL:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/compression/octree_pointcloud_compression.h&amp;gt;&lt;br /&gt;
#include &amp;lt;boost/thread/thread.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/pcl_visualizer.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr decompressedCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Octree compressor object.&lt;br /&gt;
	// Check /usr/include/pcl-&amp;lt;version&amp;gt;/pcl/compression/compression_profiles.h for more profiles.&lt;br /&gt;
	// The second parameter enables the output of information.&lt;br /&gt;
	pcl::io::OctreePointCloudCompression&amp;lt;pcl::PointXYZ&amp;gt; octreeCompression(pcl::io::MED_RES_ONLINE_COMPRESSION_WITHOUT_COLOR, true);&lt;br /&gt;
	// Stringstream that will hold the compressed cloud.&lt;br /&gt;
	std::stringstream compressedData;&lt;br /&gt;
&lt;br /&gt;
	// Compress the cloud (you would save the stream to disk).&lt;br /&gt;
	octreeCompression.encodePointCloud(cloud, compressedData);&lt;br /&gt;
&lt;br /&gt;
	// Decompress the cloud.&lt;br /&gt;
	octreeCompression.decodePointCloud(compressedData, decompressedCloud);&lt;br /&gt;
&lt;br /&gt;
	// Display the decompressed cloud.&lt;br /&gt;
	boost::shared_ptr&amp;lt;pcl::visualization::PCLVisualizer&amp;gt; viewer(new pcl::visualization::PCLVisualizer(&amp;quot;Octree compression&amp;quot;));&lt;br /&gt;
	viewer-&amp;gt;addPointCloud&amp;lt;pcl::PointXYZ&amp;gt;(decompressedCloud, &amp;quot;cloud&amp;quot;);&lt;br /&gt;
	while (!viewer-&amp;gt;wasStopped())&lt;br /&gt;
	{&lt;br /&gt;
		viewer-&amp;gt;spinOnce(100);&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeCompression.png | Original point cloud.&lt;br /&gt;
File:AfterCompression.png | Point cloud retrieved after octree compression.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the process has decreased the number of points in the cloud. The default parameters of the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;OctreePointCloudCompression&amp;quot;&amp;lt;/span&amp;gt; object establish an octree resolution (at the lowest level) of 1cm, and a coordinate precision of 1mm. These values can of course be changed to suit your needs; simplifying the cloud in this way is a useful operation that we will cover later.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Resolution, Compression profile&lt;br /&gt;
* '''Output''': Compressed stream&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/compression.php Point Cloud Compression]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1io_1_1_octree_point_cloud_compression.html pcl::io::OctreePointCloudCompression]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_octree_compress.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Detecting changes===&lt;br /&gt;
&lt;br /&gt;
PCL has an octree-based object that allows you to detect changes between two point clouds (for example, two clouds taken one after another with an OpenNI device). This object stores the trees of both clouds at the same time, and compares them, returning a list of indices of the points that are new in the second cloud (they did not exist in the first):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/octree/octree.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudA(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloudB(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read two PCD files from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloudA) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[2], *cloudB) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Change detector object, with a resolution of 128&lt;br /&gt;
	// (resolution at lowest octree level).&lt;br /&gt;
	pcl::octree::OctreePointCloudChangeDetector&amp;lt;pcl::PointXYZ&amp;gt; octree(128.0f);&lt;br /&gt;
&lt;br /&gt;
	// Add cloudA to the octree.&lt;br /&gt;
	octree.setInputCloud(cloudA);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
	// The change detector object is able to store two clouds at the same time;&lt;br /&gt;
	// with this we can reset the buffer but keep the previous tree saved.&lt;br /&gt;
	octree.switchBuffers();&lt;br /&gt;
	// Now, add cloudB to the octree like we did before.&lt;br /&gt;
	octree.setInputCloud(cloudB);&lt;br /&gt;
	octree.addPointsFromInputCloud();&lt;br /&gt;
&lt;br /&gt;
	std::vector&amp;lt;int&amp;gt; newPoints;&lt;br /&gt;
	// Get a vector with the indices of all points that are new in cloud B,&lt;br /&gt;
	// when compared with the ones that existed in cloud A.&lt;br /&gt;
	octree.getPointIndicesFromNewVoxels(newPoints);&lt;br /&gt;
	for (size_t i = 0; i &amp;lt; newPoints.size(); ++i)&lt;br /&gt;
	{&lt;br /&gt;
		std::cout &amp;lt;&amp;lt; &amp;quot;Point (&amp;quot; &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].x &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].y &amp;lt;&amp;lt; &amp;quot;, &amp;quot;&lt;br /&gt;
				  &amp;lt;&amp;lt; cloudB-&amp;gt;points[newPoints[i]].z&lt;br /&gt;
				  &amp;lt;&amp;lt; &amp;quot;) was not in cloud A but is in cloud B&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	}&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; newPoints.size() &amp;lt;&amp;lt; &amp;quot; new points in total.&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
} &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': First cloud, Second cloud, Resolution&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/octree_change.php Spatial change detection on unorganized point cloud data]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1octree_1_1_octree_point_cloud_change_detector.html pcl::octree::OctreePointCloudChangeDetector]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_change_detection.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Filtering=&lt;br /&gt;
&lt;br /&gt;
The raw point cloud, as returned from the sensor, usually needs some kind of polishing before being handed to processing algorithms. Normally, this is due to the low precision of the sensor, which leads to noise, incorrect measurements, holes in surfaces... Sometimes it is just the opposite: the cloud has so many points that processing it would take forever. Luckily, several filtering methods exist to &amp;quot;fix&amp;quot; many recurring errors in point clouds.&lt;br /&gt;
&lt;br /&gt;
==Passthrough filter==&lt;br /&gt;
&lt;br /&gt;
A passthrough filter will remove any point from the cloud whose chosen field does not fall within a user-given range. For example, if you wanted to discard all points farther away than, say, 3 meters, you would run the filter on the Z coordinate with a range of [0,3]. This filter can be useful to discard unneeded objects from the cloud, but you may have to adopt a different reference frame if the default one (relative to the sensor) is not suitable. For example, filtering on the Y value to remove all points not sitting on a table will yield unwanted results if the camera was at an odd angle.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/passthrough.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::PassThrough&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Filter out all points with Z values not in the [0-2] range.&lt;br /&gt;
	filter.setFilterFieldName(&amp;quot;z&amp;quot;);&lt;br /&gt;
	filter.setFilterLimits(0.0, 2.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforePassthrough.png | Original point cloud.&lt;br /&gt;
File:AfterPassthrough.png | Point cloud after applying a passthrough filter with Z in [0,2].&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setFilterLimitsNegative()&amp;quot;&amp;lt;/span&amp;gt; function (not seen in the example) will tell the filter object whether to return the points inside the range (&amp;quot;false&amp;quot;, the default value) or outside it (&amp;quot;true&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Filter field, Filter limits, [Negative mode]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/passthrough.php Filtering a PointCloud using a PassThrough filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_pass_through.html pcl::PassThrough]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_passthrough_filter.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conditional removal==&lt;br /&gt;
&lt;br /&gt;
Another way of getting exactly the same result as in the previous example is to use a conditional removal filter. We can build any kind of condition we want on the values of the point:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/conditional_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// We must build a condition.&lt;br /&gt;
	// An &amp;quot;And&amp;quot; condition requires all tests to be true; &amp;quot;Or&amp;quot; conditions are also available.&lt;br /&gt;
	pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;::Ptr condition(new pcl::ConditionAnd&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// First test, the point's Z value must be greater than (GT) 0.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::GT, 0.0)));&lt;br /&gt;
	// Second test, the point's Z value must be less than (LT) 2.&lt;br /&gt;
	condition-&amp;gt;addComparison(pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;::ConstPtr(new pcl::FieldComparison&amp;lt;pcl::PointXYZ&amp;gt;(&amp;quot;z&amp;quot;, pcl::ComparisonOps::LT, 2.0)));&lt;br /&gt;
	// Checks available: GT, GE, LT, LE, EQ.&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::ConditionalRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setCondition(condition);&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// If true, points that do not pass the filter will be set to a certain value (default NaN).&lt;br /&gt;
	// If false, they will be just removed, but that could break the structure of the cloud&lt;br /&gt;
	// (organized clouds are clouds taken from camera-like sensors that return a matrix-like image).&lt;br /&gt;
	filter.setKeepOrganized(true);&lt;br /&gt;
	// If keep organized was set true, points that failed the test will have their Z value set to this.&lt;br /&gt;
	filter.setUserFilterValue(0.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Condition, [Keep organized]&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_conditional_removal.html pcl::ConditionalRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_conditional_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Outlier removal==&lt;br /&gt;
&lt;br /&gt;
Outliers are lonely points scattered here and there in the cloud, like annoying mosquitoes. They are the product of the sensor's inaccuracy, which registers measurements where there should not be any. Outliers are considered undesired noise because they can introduce errors in calculations, for example in normal estimation. Hence, trimming them out of the cloud will not only make computations faster (because the fewer points there are, the less time they take, as we will see with downsampling) but also help us get more precise values.&lt;br /&gt;
&lt;br /&gt;
===Radius-based===&lt;br /&gt;
&lt;br /&gt;
The radius-based outlier removal is the simplest method of all. You must specify a search radius and the minimum number of neighbors that a point must have to avoid being labelled an outlier. The algorithm will then iterate through all points (which can be slow if the cloud is big) and perform the check: if fewer than that number of neighbors are found within the radius, the point is removed.&lt;br /&gt;
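As a hypothetical, brute-force illustration of that check (the actual PCL filter uses a spatial index to avoid the quadratic cost; the name "radiusOutlierRemoval" here is made up for the example):

```cpp
#include <array>
#include <cstddef>
#include <vector>

// A point in 3-dimensional space.
using Point3 = std::array<float, 3>;

// Keep a point only if at least minNeighbors other points
// lie within the given radius of it.
std::vector<Point3> radiusOutlierRemoval(const std::vector<Point3>& cloud,
                                         float radius, std::size_t minNeighbors)
{
	std::vector<Point3> kept;
	for (std::size_t i = 0; i < cloud.size(); ++i)
	{
		std::size_t neighbors = 0;
		for (std::size_t j = 0; j < cloud.size(); ++j)
		{
			if (i == j)
				continue;
			float dx = cloud[i][0] - cloud[j][0];
			float dy = cloud[i][1] - cloud[j][1];
			float dz = cloud[i][2] - cloud[j][2];
			// Compare squared distances to avoid a square root.
			if (dx * dx + dy * dy + dz * dz <= radius * radius)
				++neighbors;
		}
		if (neighbors >= minNeighbors)
			kept.push_back(cloud[i]);
	}
	return kept;
}
```

A point in a tight cluster easily collects the required neighbors, while an isolated measurement far from everything else is dropped.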
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/radius_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::RadiusOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Every point must have 10 neighbors within 15cm, or it will be removed.&lt;br /&gt;
	filter.setRadiusSearch(0.15);&lt;br /&gt;
	filter.setMinNeighborsInRadius(10);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All PCL classes that inherit from &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::FilterIndices&amp;quot;&amp;lt;/span&amp;gt; (like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::RadiusOutlierRemoval&amp;quot;&amp;lt;/span&amp;gt;) provide the function &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt;, which, if set to true, makes the filter return the points that did NOT pass it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Radius, Neighbor count&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/remove_outliers.php Removing outliers using a Conditional or RadiusOutlier removal]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_radius_outlier_removal.html pcl::RadiusOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_radius_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Statistical===&lt;br /&gt;
&lt;br /&gt;
The statistical outlier removal process is a bit more refined. First, for every point, the mean distance to its ''K'' nearest neighbors is computed. Then, if we assume that the result follows a [http://en.wikipedia.org/wiki/Normal_distribution normal (gaussian) distribution] with a mean ''μ'' and a standard deviation ''σ'', we can deem it safe to remove all points whose mean distances fall outside the global mean plus a multiple of the standard deviation. Basically, it runs a statistical analysis of the distances between neighboring points, and trims all that are not considered &amp;quot;normal&amp;quot; (you define what &amp;quot;normal&amp;quot; is with the parameters of the algorithm).&lt;br /&gt;
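The thresholding step can be sketched as follows. This is a hypothetical illustration of the math, not the PCL implementation; the made-up "flagOutliers()" takes each point's precomputed mean distance to its ''K'' neighbors:

```cpp
#include <cmath>
#include <vector>

// Flag as outliers all points whose mean neighbor distance exceeds
// the global mean plus stddevMult standard deviations.
std::vector<bool> flagOutliers(const std::vector<double>& meanDistances, double stddevMult)
{
	// Global mean of the per-point mean distances.
	double mean = 0.0;
	for (double d : meanDistances)
		mean += d;
	mean /= meanDistances.size();

	// Global standard deviation of the same values.
	double variance = 0.0;
	for (double d : meanDistances)
		variance += (d - mean) * (d - mean);
	variance /= meanDistances.size();
	double threshold = mean + stddevMult * std::sqrt(variance);

	// Everything beyond the threshold is considered an outlier.
	std::vector<bool> isOutlier;
	for (double d : meanDistances)
		isOutlier.push_back(d > threshold);
	return isOutlier;
}
```

A smaller multiplier makes the filter stricter (more points flagged), a larger one more permissive.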
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/statistical_outlier_removal.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::StatisticalOutlierRemoval&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Set number of neighbors to consider to 50.&lt;br /&gt;
	filter.setMeanK(50);&lt;br /&gt;
	// Set standard deviation multiplier to 1.&lt;br /&gt;
	// Points with a mean distance larger than 1 standard deviation above the global mean will be considered outliers.&lt;br /&gt;
	filter.setStddevMulThresh(1.0);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can make use of &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;setNegative()&amp;quot;&amp;lt;/span&amp;gt; with this filter too.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Neighbor count, Deviation multiplier&lt;br /&gt;
* '''Output''': Filtered points&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/statistical_outlier.php Removing outliers using a StatisticalOutlierRemoval filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_statistical_outlier_removal.html pcl::StatisticalOutlierRemoval]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_statistical_outlier_removal.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Resampling==&lt;br /&gt;
&lt;br /&gt;
Resampling means changing the number of points of the cloud, either by reducing (downsampling) or increasing (upsampling). Both have a purpose, as we will see.&lt;br /&gt;
&lt;br /&gt;
===Downsampling===&lt;br /&gt;
&lt;br /&gt;
A sensor like Kinect or Xtion produces a cloud with 307200 (640x480) points. Believe it or not, that is a lot of information, and the new Kinect is already out there, working at a higher resolution. Keep in mind that, if we were to perform a simple operation on every point of a cloud, it would be ''O(n)'', with n being the number of points. If we had to compare every point with its ''k'' nearest neighbors, it would be ''O(nk)''. You may have noticed by now that some of the algorithms I showed you take several seconds to run on your PC; now you know why.&lt;br /&gt;
&lt;br /&gt;
Optimization techniques are being implemented as I write this to allow PCL to perform the computations on the computer's GPU (graphics card) using [https://developer.nvidia.com/about-cuda CUDA]. Many classes already provide an alternative in the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;pcl::gpu&amp;quot;&amp;lt;/span&amp;gt; namespace. This offers dramatic speed improvements, but it is not always a possibility.&lt;br /&gt;
&lt;br /&gt;
Another option is to reduce the complexity of the cloud, that is, to remove all points that are not needed for the purpose. There are several ways to do this. You could trim all outliers from the cloud, then perform segmentation to extract only the parts that interest you. A common operation is also downsampling, which will give you back a cloud that is, for all intents and purposes, equivalent to the original one, despite having fewer points.&lt;br /&gt;
&lt;br /&gt;
Downsampling is done via ''voxelization''. I already explained what voxels are in the [[#Octree | octree section]]. The cloud is divided into multiple cube-shaped regions with the desired resolution. Then, all points inside every voxel are processed so that only one remains. The simplest way would be to randomly select one of them, but a more accurate approach is to compute the [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Computing_the_centroid|centroid]], which is the point whose coordinates are the mean values of all the points that belong to the voxel.&lt;br /&gt;
&lt;br /&gt;
If done with sensible parameters, downsampling will yield results that are precise enough to work with, while requiring much less CPU power.&lt;br /&gt;
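The centroid-per-voxel idea can be sketched in a few lines of plain C++ (no PCL; the names and structure here are my own, for illustration only):&lt;br /&gt;
&lt;br /&gt;
```cpp
#include <cmath>
#include <map>
#include <tuple>
#include <utility>
#include <vector>

struct Point { float x, y, z; };

// Minimal voxel-grid downsampling: bucket every point into a cube of side
// "leafSize", then replace the contents of each cube with their centroid.
std::vector<Point> voxelDownsample(const std::vector<Point>& cloud,
                                   float leafSize)
{
    // Key: integer voxel coordinates; value: running coordinate sum and count.
    std::map<std::tuple<int, int, int>, std::pair<Point, int>> voxels;
    for (const Point& p : cloud)
    {
        std::tuple<int, int, int> key(
            static_cast<int>(std::floor(p.x / leafSize)),
            static_cast<int>(std::floor(p.y / leafSize)),
            static_cast<int>(std::floor(p.z / leafSize)));
        auto& bucket = voxels[key];  // value-initialized to zeros on first use
        bucket.first.x += p.x;
        bucket.first.y += p.y;
        bucket.first.z += p.z;
        bucket.second += 1;
    }
    // Emit one centroid per occupied voxel.
    std::vector<Point> result;
    for (const auto& v : voxels)
    {
        const Point& sum = v.second.first;
        int count = v.second.second;
        result.push_back({sum.x / count, sum.y / count, sum.z / count});
    }
    return result;
}
```
&lt;br /&gt;
The PCL filter shown in the next section implements this same idea, only much more efficiently.&lt;br /&gt;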
&lt;br /&gt;
====Voxel grid====&lt;br /&gt;
&lt;br /&gt;
A straightforward way of downsampling a cloud:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/filters/voxel_grid.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filter object.&lt;br /&gt;
	pcl::VoxelGrid&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// We set the size of every voxel to be 1x1x1cm&lt;br /&gt;
	// (only one point per every cubic centimeter will survive).&lt;br /&gt;
	filter.setLeafSize(0.01f, 0.01f, 0.01f);&lt;br /&gt;
&lt;br /&gt;
	filter.filter(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:BeforeDownsampling.png | Original point cloud.&lt;br /&gt;
File:AfterDownsampling.png | Point cloud after being downsampled with a leaf size of 1cm.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Downsampled cloud&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/voxel_grid.php Downsampling a PointCloud using a VoxelGrid filter]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_voxel_grid.html pcl::VoxelGrid]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Uniform sampling====&lt;br /&gt;
&lt;br /&gt;
This class does essentially the same thing, but it outputs the indices of the points that survived the process. It is a common way of choosing keypoints when computing [[PCL/OpenNI_tutorial_4:_3D_object_recognition_%28descriptors%29|descriptors]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/keypoints/uniform_sampling.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Uniform sampling object.&lt;br /&gt;
	pcl::UniformSampling&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Set the radius of the search sphere to 1cm&lt;br /&gt;
	// (only one point per sphere will survive).&lt;br /&gt;
	filter.setRadiusSearch(0.01f);&lt;br /&gt;
	// We need an additional object to store the indices of surviving points.&lt;br /&gt;
	pcl::PointCloud&amp;lt;int&amp;gt; keypointIndices;&lt;br /&gt;
&lt;br /&gt;
	filter.compute(keypointIndices);&lt;br /&gt;
	pcl::copyPointCloud(*cloud, keypointIndices.points, *filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Voxel size&lt;br /&gt;
* '''Output''': Point indices&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_uniform_sampling.html pcl::UniformSampling]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_downsample_uniform.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Upsampling===&lt;br /&gt;
&lt;br /&gt;
Upsampling is a form of [[PCL/OpenNI_tutorial_2:_Cloud_processing_%28basic%29#Reconstruction| surface reconstruction]], but I will cover it here. When you have fewer points than you would like, upsampling can help you recover the original surface(s) by [http://en.wikipedia.org/wiki/Upsampling interpolating] the points you currently have, which is just a sophisticated guess. The result will never be a hundred percent accurate, but sometimes it is the only choice. So, before downsampling your clouds, be sure to keep a copy of the original data!&lt;br /&gt;
&lt;br /&gt;
PCL uses the [http://en.wikipedia.org/wiki/Moving_least_squares Moving Least Squares (MLS)] algorithm for this. The code looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Objects for storing the point clouds.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr filteredCloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Filtering object.&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Object for searching.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// Upsampling method. Other possibilities are DISTINCT_CLOUD, RANDOM_UNIFORM_DENSITY&lt;br /&gt;
	// and VOXEL_GRID_DILATION. NONE disables upsampling. Check the API for details.&lt;br /&gt;
	filter.setUpsamplingMethod(pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointXYZ&amp;gt;::SAMPLE_LOCAL_PLANE);&lt;br /&gt;
	// Radius around each point, where the local plane will be sampled.&lt;br /&gt;
	filter.setUpsamplingRadius(0.03);&lt;br /&gt;
	// Sampling step size. Bigger values will yield fewer (if any) new points.&lt;br /&gt;
	filter.setUpsamplingStepSize(0.02);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*filteredCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:PCL_upsampling_original.png | Original point cloud (coffee mug on a table).&lt;br /&gt;
File:PCL_upsampling_downsampled.png | Downsampled with a leaf size of 1cm.&lt;br /&gt;
File:PCL_upsampling_local_plane.png | Upsampled using SAMPLE_LOCAL_PLANE (upsampling radius 3cm, step size 2cm).&lt;br /&gt;
File:PCL_upsampling_uniform_density.png | Upsampled using RANDOM_UNIFORM_DENSITY (target point density of 300 per 3cm radius). &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html#ac29ad97b98353d64ce64e2ff924f7d20 setUpsamplingMethod()] API to see all upsampling methods available, and the parameters you need to set for them to work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, Upsampling method, [Upsampling parameters]&lt;br /&gt;
* '''Output''': Upsampled cloud&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_upsample.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Reconstruction=&lt;br /&gt;
&lt;br /&gt;
==Surface smoothing==&lt;br /&gt;
&lt;br /&gt;
As stated, depth sensors are not very accurate, and the resulting clouds have measurement errors, outliers, holes in surfaces, etc. Surfaces can be reconstructed by means of an algorithm that iterates through all points and interpolates the data, trying to guess what the original surface looked like. Like with upsampling, PCL uses the [http://en.wikipedia.org/wiki/Moving_least_squares MLS] algorithm and class. Performing this step is important, because the resulting cloud's normals will be more accurate.&lt;br /&gt;
&lt;br /&gt;
Surface smoothing is easily done with the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/surface/mls.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	// Object for storing the point cloud.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	// The output will also contain the normals.&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;::Ptr smoothedCloud(new pcl::PointCloud&amp;lt;pcl::PointNormal&amp;gt;);&lt;br /&gt;
&lt;br /&gt;
	// Read a PCD file from disk.&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZ&amp;gt;(argv[1], *cloud) != 0)&lt;br /&gt;
	{&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Smoothing object (we choose what point types we want as input and output).&lt;br /&gt;
	pcl::MovingLeastSquares&amp;lt;pcl::PointXYZ, pcl::PointNormal&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(cloud);&lt;br /&gt;
	// Use all neighbors in a radius of 3cm.&lt;br /&gt;
	filter.setSearchRadius(0.03);&lt;br /&gt;
	// If true, the surface and normal are approximated using a polynomial estimation&lt;br /&gt;
	// (if false, only a tangent one).&lt;br /&gt;
	filter.setPolynomialFit(true);&lt;br /&gt;
	// We can tell the algorithm to also compute smoothed normals (optional).&lt;br /&gt;
	filter.setComputeNormals(true);&lt;br /&gt;
	// kd-tree object for performing searches.&lt;br /&gt;
	pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;::Ptr kdtree(new pcl::search::KdTree&amp;lt;pcl::PointXYZ&amp;gt;);&lt;br /&gt;
	filter.setSearchMethod(kdtree);&lt;br /&gt;
&lt;br /&gt;
	filter.process(*smoothedCloud);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output cloud will also contain the improved normals of the smoothed cloud.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;background-color: #F8F8F8; border-style: dotted;&amp;quot;&amp;gt;&lt;br /&gt;
* '''Input''': Points, Search radius, [Polynomial fit], [Compute normals]&lt;br /&gt;
* '''Output''': Smoothed cloud, [Smoothed normals]&lt;br /&gt;
* '''Tutorial''': [http://pointclouds.org/documentation/tutorials/resampling.php Smoothing and normal estimation based on polynomial reconstruction]&lt;br /&gt;
* '''API''': [http://docs.pointclouds.org/trunk/classpcl_1_1_moving_least_squares.html pcl::MovingLeastSquares]&lt;br /&gt;
* [http://robotica.unileon.es/~victorm/PCL_surface_smoothing.tar.gz Download]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_1:_Installing_and_testing&amp;diff=4735</id>
		<title>PCL/OpenNI tutorial 1: Installing and testing</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_1:_Installing_and_testing&amp;diff=4735"/>
				<updated>2015-11-01T22:59:22Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Kinect.jpg|thumb|right|200px|Microsoft Kinect device.]]&lt;br /&gt;
&lt;br /&gt;
This series of tutorials will explain the usage of a depth camera like [https://en.wikipedia.org/wiki/Kinect Kinect] for &amp;quot;serious&amp;quot; research purposes. As you may know, Kinect is in fact an affordable depth sensor, developed with technology from [https://en.wikipedia.org/wiki/PrimeSense PrimeSense], based on the infrared structured light method. It also has a regular camera (which makes it an RGB-D device), a microphone and a motorized pivot. Its use is not limited to playing with an Xbox 360 console; you can plug it into a computer and use it like any other sensor. Many open-source drivers and frameworks are available.&lt;br /&gt;
&lt;br /&gt;
Since its release in November 2010, it has gained a lot of popularity, especially among the scientific community. Many researchers have procured one because, despite the low cost (about $100), it has proven to be a powerful solution for depth sensing projects. Current investigations focus on real-time surface mapping, object recognition and tracking, and localization. Impressive results (like the [http://research.microsoft.com/en-us/projects/surfacerecon/ KinectFusion] project from Microsoft) are already possible.&lt;br /&gt;
&lt;br /&gt;
The new [https://en.wikipedia.org/wiki/Xbox_One Xbox One] ships with an upgraded version, [https://en.wikipedia.org/wiki/Kinect_for_Xbox_One Kinect v2], with enhanced resolution, that is able to detect your facial expression, measure your heart rate, and track every one of your fingers. A PC development-ready version ([https://en.wikipedia.org/wiki/Kinect_for_Xbox_One#Kinect_for_Windows Kinect for Windows v2]) was released in July 2014, but it could only be used with the official Windows SDK (open source support [https://github.com/OpenKinect/libfreenect2 exists] but is still young). In October 2014 an adapter that allows plugging the standard Kinect v2 into a PC was released, so the development version of the sensor was discontinued in April 2015. Now you can just buy the [http://www.microsoftstore.com/store/msusa/en_US/pdp/Kinect-for-Xbox-One/productID.307499400 standard one] for the console plus the [http://www.microsoftstore.com/store/msusa/en_US/pdp/Kinect-Adapter-for-Windows/productID.308803600 adapter].&lt;br /&gt;
&lt;br /&gt;
I will explain the installation and usage of one of the &amp;quot;legacy&amp;quot; Kinect 1.0 devices with a common PC, and the possibilities it offers. I will do it in an easy-to-understand way, intended for students who have just acquired the device and want to start from scratch.&lt;br /&gt;
&lt;br /&gt;
Keep in mind that the software we are going to use (Point Cloud Library and OpenNI drivers) will also let you use any other device like the [https://www.asus.com/3D-Sensor/Xtion_PRO/ Xtion PRO] or [https://www.asus.com/3D-Sensor/Xtion_PRO_LIVE/ Xtion PRO LIVE] from ASUS (the PRO only has a depth sensor, the PRO LIVE is an RGB-D camera) without changing a line of code.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: The tutorials are written for Linux platforms. Also, 64-bit versions seem to work better than 32-bit.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Requirements=&lt;br /&gt;
&lt;br /&gt;
You will need the following:&lt;br /&gt;
&lt;br /&gt;
* A common Kinect device, out of the box. You can buy it in your local electronics shop, or online. It also includes a free copy of ''[https://en.wikipedia.org/wiki/Kinect_Adventures! Kinect Adventures!]'', which is useless if you do not own the console. Microsoft has released a ''Kinect for Windows'' device, a normal-looking Kinect that is no longer compatible with the Xbox 360 and will only work with their official SDK, intended for developers only. Also, as I stated earlier, you can use an ASUS Xtion interchangeably.&lt;br /&gt;
* A computer running Linux (Debian or Ubuntu preferably).&lt;br /&gt;
* A medium-sized room. Kinect has some limitations for depth measurement: 40cm minimum, 8m maximum (make it 6).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: ''Kinect for Windows'' may have problems working with open source drivers on Linux.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Connecting everything=&lt;br /&gt;
&lt;br /&gt;
Kinect does not work with a common USB port. Its power consumption is a bit higher because of the motor, so Microsoft came up with a connector that combines USB and power supply. Old Xbox 360 models needed a special adapter; the newer S models already have this new port. Luckily, Kinect comes with the official adapter out of the box (otherwise you would have to [http://www.amazon.com/Power-Supply-Cable-Kinect-Xbox-360/dp/B004S7GA46/ref=sr_1_1?ie=UTF8&amp;amp;qid=1378288327&amp;amp;sr=8-1 buy one]).&lt;br /&gt;
&lt;br /&gt;
Just plug the adapter into any power socket, and the USB into your computer. Let's check that it works by typing this in a terminal:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;lsusb&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output should list the following devices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;Bus 001 Device 005: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor&lt;br /&gt;
Bus 001 Device 006: ID 045e:02ad Microsoft Corp. Xbox NUI Audio&lt;br /&gt;
Bus 001 Device 007: ID 045e:02ae Microsoft Corp. Xbox NUI Camera&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are using an Xtion, you should see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;Bus 001 Device 004: ID 1d27:0601 ASUS&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;0601&amp;quot; identifies the new Xtion model, &amp;quot;0600&amp;quot; the older one. Both should work the same. But try to avoid [[PCL/OpenNI_troubleshooting#ASUS_Xtion_PRO|USB 3.0 ports]]!&lt;br /&gt;
&lt;br /&gt;
=Installing the software=&lt;br /&gt;
&lt;br /&gt;
There is more than one way to get your Kinect working on your PC, and start developing applications for it:&lt;br /&gt;
&lt;br /&gt;
* [https://www.microsoft.com/en-us/download/details.aspx?id=40278 Kinect for Windows SDK] and [https://www.microsoft.com/en-us/download/details.aspx?id=40276 Developer Toolkit]: released on June 16, 2011 as a non-commercial SDK intended for application development. Version 1.8, the last there will ever be now that Kinect v2 is out, was released in September 2013. Because it comes from Microsoft, it is obviously the easiest way to get everything working. Sadly, there is no Linux version.&lt;br /&gt;
* [https://github.com/OpenKinect/libfreenect libfreenect] library: from the [http://openkinect.org/wiki/Main_Page OpenKinect] project, it is intended to be a free and open source alternative to the official drivers. libfreenect is used by projects like [https://github.com/ofTheo/ofxKinect ofxKinect], an addon for the [http://www.openframeworks.cc/ openFrameworks] toolkit (and as of version 0.8, included in the core package) that runs on Linux and OS X. ofxKinect packs a nice example application to show the RGB and point cloud taken from Kinect.&lt;br /&gt;
* [https://github.com/PrimeSense/Sensor PrimeSense drivers]: they were released as open source after the [https://en.wikipedia.org/wiki/OpenNI OpenNI] project was created, along with the motion tracking middleware, ''NITE'', and the SDK. NI stands for Natural Interaction, and the project tried to enforce a common standard for human input using Kinect-like sensors. These official drivers are used by [http://wiki.ros.org/openni_kinect ROS] (the Robot Operating System, a massive collection of libraries and tools for robotics researchers) and [http://pointclouds.org/ PCL] (the Point Cloud Library, with everything needed for 3D point cloud processing). Sadly, version 2.0 of the OpenNI SDK dropped support for Kinect on Linux due to licensing issues, so for now PCL relies on legacy 1.x versions. Also, Apple bought PrimeSense in November 2013, and in April 2014 OpenNI's webpage was closed. The source is now being maintained by [http://structure.io/openni a third party].&lt;br /&gt;
* [https://github.com/avin2/SensorKinect SensorKinect]: a modified version of the official PrimeSense drivers, used for example by [https://github.com/gameoverhack/ofxOpenNI ofxOpenNI] (another openFrameworks addon). Last updated in 2012.&lt;br /&gt;
&lt;br /&gt;
For this tutorial, we are going to use PCL with the OpenNI drivers, so owners of an Xtion can also get it to work.&lt;br /&gt;
&lt;br /&gt;
==Precompiled PCL for Ubuntu==&lt;br /&gt;
&lt;br /&gt;
If you have a valid [http://wiki.ros.org/ROS/Installation installation of ROS] (through their repository), you do not have to install anything. Both OpenNI and PrimeSense drivers, as well as PCL, should be already installed. You can check it with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install libpcl-1.7-all libpcl-1.7-all-dev libopenni-dev libopenni-sensor-primesense-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The previous command should state that all packages are already installed (change the PCL version number as needed), and will install them if not. If you get an error about overwriting some file, check [[PCL/OpenNI_troubleshooting#Trying_to_overwrite_X.2C_which_is_also_in_package_Y|this]].&lt;br /&gt;
&lt;br /&gt;
In case you do not want ROS, there is a [https://launchpad.net/~v-launchpad-jochen-sprickerhof-de/+archive/ubuntu/pcl PPA] (Personal Package Archive, an unofficial user repository) which has everything we need. Add it to your sources, and install everything:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo add-apt-repository ppa:v-launchpad-jochen-sprickerhof-de/pcl&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install build-essential cmake libpcl1.7 libpcl-dev pcl-tools&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Trying to mix ROS and PCL repositories and packages can cause some errors, so choose one of them and stick with it. Check the [[PCL/OpenNI troubleshooting]] page because your Kinect may not work by default on 32-bit systems.&lt;br /&gt;
&lt;br /&gt;
==Compiling PCL from source==&lt;br /&gt;
&lt;br /&gt;
For Linux distributions without a precompiled version of PCL, you will need to compile it yourself. This actually has an advantage: you can customize the build options and choose what you want. Also, the resulting binaries and libraries should be a bit faster. And you always get the latest version! The instructions are [http://pointclouds.org/documentation/tutorials/compiling_pcl_posix.php here], but the steps are easy, so I will show them to you.&lt;br /&gt;
&lt;br /&gt;
===Installing the dependencies===&lt;br /&gt;
&lt;br /&gt;
Some of PCL's dependencies can be installed via the package manager. Others will require additional work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash enclose=&amp;quot;div&amp;quot;&amp;gt;sudo apt-get install build-essential cmake cmake-curses-gui libboost-all-dev libeigen3-dev libflann-dev libvtk5-dev libvtk5-qt4-dev libglew-dev libxmu-dev libsuitesparse-dev libqhull-dev libpcap-dev libxmu-dev libxi-dev libgtest-dev libqt4-opengl-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The trunk version of PCL uses [https://en.wikipedia.org/wiki/VTK VTK] 6 and [https://en.wikipedia.org/wiki/Qt_%28software%29 Qt] 5, so if you intend to compile it, you must install the following packages (say yes if you are asked to remove VTK 5 and Qt 4 packages):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash enclose=&amp;quot;div&amp;quot;&amp;gt;sudo apt-get install libvtk6-dev libqt5opengl5-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====JDK====&lt;br /&gt;
&lt;br /&gt;
OpenNI requires Sun's official JDK (Java Development Kit), which is no longer available in the official apt repositories. You can use [http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html unofficial ones], or do it manually. For the latter, go to the [http://www.oracle.com/technetwork/java/javase/downloads/index.html Java SE downloads] page (SE means Standard Edition) and download the latest version (e.g., &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''jdk-8u66-linux-x64.tar.gz''&amp;lt;/span&amp;gt;). Extract it, then move the contents to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/lib/jvm/''&amp;lt;/span&amp;gt; so it is available system-wide:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo mkdir -p /usr/lib/jvm/&lt;br /&gt;
sudo cp -r jdk1.8.0_66/ /usr/lib/jvm/&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, make it the default choice to compile and run Java programs. Remember to change the version number as needed!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo update-alternatives --install &amp;quot;/usr/bin/java&amp;quot; &amp;quot;java&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/java&amp;quot; 1&lt;br /&gt;
sudo update-alternatives --install &amp;quot;/usr/bin/javac&amp;quot; &amp;quot;javac&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/javac&amp;quot; 1&lt;br /&gt;
sudo update-alternatives --install &amp;quot;/usr/bin/jar&amp;quot; &amp;quot;jar&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/jar&amp;quot; 1&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To be sure, use the following commands to manually select the correct option, in case there is more than one choice:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo update-alternatives --config java&lt;br /&gt;
sudo update-alternatives --config javac&lt;br /&gt;
sudo update-alternatives --config jar&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are still not sure, run them and display the version, making sure it is the one you installed:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;java -version&lt;br /&gt;
javac -version&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sun's JDK is now installed.&lt;br /&gt;
&lt;br /&gt;
====OpenNI====&lt;br /&gt;
&lt;br /&gt;
PCL uses OpenNI and the PrimeSense drivers to get data from devices like Kinect or Xtion. It is optional, but it would not make much sense not to install it, would it? If you are using Ubuntu, add the [[#Precompiled PCL for Ubuntu | PPA]] and install &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-dev''&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-sensor-primesense-dev''&amp;lt;/span&amp;gt;, which you should have done already. Otherwise, fetch the [https://github.com/OpenNI/OpenNI OpenNI] and [https://github.com/PrimeSense/Sensor PrimeSense Sensor] sources from GitHub (download them as .zip; the link is on the right). Extract them, and install the dependencies:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install python libusb-1.0-0-dev freeglut3-dev doxygen graphviz&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Go to the directory where you extracted OpenNI (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''OpenNI-master/''&amp;lt;/span&amp;gt; for me), and open a terminal in the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/CreateRedist/''&amp;lt;/span&amp;gt; subdirectory. Issue:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./RedistMaker&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When it finishes, and if there are no errors (check the [[PCL/OpenNI troubleshooting]] page if you get any), go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/Redist/OpenNI-Bin-Dev-Linux-x64-v1.5.7.10/''&amp;lt;/span&amp;gt; (or your equivalent), and install (the script must run as root, hence the sudo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ./install.sh&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, go to the directory where you extracted the PrimeSense drivers (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Sensor-master/''&amp;lt;/span&amp;gt; for me), and repeat the exact same procedure (go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/CreateRedist/''&amp;lt;/span&amp;gt;, issue &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''./RedistMaker''&amp;lt;/span&amp;gt;, go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/Redist/Sensor-Bin-Linux-x64-v5.1.6.6/''&amp;lt;/span&amp;gt;, issue &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''sudo ./install.sh''&amp;lt;/span&amp;gt;). Congratulations, you have now installed OpenNI.&lt;br /&gt;
&lt;br /&gt;
====CUDA====&lt;br /&gt;
&lt;br /&gt;
Like OpenNI, [https://en.wikipedia.org/wiki/CUDA nVidia CUDA] is an optional dependency that will allow PCL to use your GPU (Graphics Processing Unit, that is, your graphics card) for certain computations. It is mandatory for tools like KinFu (do not bother unless you have at least a 400-series card with 1.5 GB of VRAM).&lt;br /&gt;
&lt;br /&gt;
Some distributions provide packages for CUDA in the repositories. For example, in Ubuntu:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install nvidia-cuda-dev nvidia-cuda-toolkit&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to install it manually (which is incompatible with the previous method), go to the [https://developer.nvidia.com/cuda-downloads CUDA downloads] page, which is self-explanatory, and get either the small .deb file or the huge .run toolkit/SDK installer file for your system. You should have installed the nVidia drivers already, but the installer can also do it for you if needed.&lt;br /&gt;
&lt;br /&gt;
If you chose the .deb file, install it using the method of your choice, such as the Gdebi package manager, or through the console:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo dpkg -i &amp;lt;package.deb&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The .deb does not contain CUDA itself; it just adds nVidia's repository to your software sources. Now you must install everything:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get update&lt;br /&gt;
sudo apt-get install cuda&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If, on the other hand, you downloaded the .run, give it execution permissions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;chmod +x cuda_7.0.28_linux.run&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And install it. You can use the default options, although if you have a working nVidia graphics driver for your card, you may want to say &amp;quot;no&amp;quot; when the installer offers to install it for you:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ./cuda_7.0.28_linux.run&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The environment variables need to be changed so your system can find CUDA's libraries and binaries. Open &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''/etc/ld.so.conf''&amp;lt;/span&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo nano /etc/ld.so.conf&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And append one of these two lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;/usr/local/cuda/lib64 # Add this on 64-bit only.&lt;br /&gt;
/usr/local/cuda/lib # Add this on 32-bit only.&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+O'''&amp;lt;/span&amp;gt; and Enter, exit with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+X'''&amp;lt;/span&amp;gt;. Reload the cache of the dynamic linker with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ldconfig&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
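&lt;br /&gt;
If you prefer to skip the editor, the line can also be appended non-interactively with ''tee''. The sketch below demonstrates the technique on a temporary file; on the real system the target would be ''/etc/ld.so.conf'' (via ''sudo tee -a''), followed by ''sudo ldconfig'' as above:&lt;br /&gt;
&lt;br /&gt;
```shell
# Append the 64-bit CUDA library path to a config file without an editor.
# A temp file stands in for /etc/ld.so.conf here; use sudo tee -a on the real file.
conf=$(mktemp)
echo "/usr/local/cuda/lib64" | tee -a "$conf"
tail -1 "$conf"   # prints the appended line, confirming it is last
```
&lt;br /&gt;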
&lt;br /&gt;
Now, append CUDA's bin directory to your PATH. Do this by editing your local &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''.bashrc''&amp;lt;/span&amp;gt; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;nano ~/.bashrc&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And append this line:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;export PATH=$PATH:/usr/local/cuda/bin&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
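&lt;br /&gt;
The effect of that export can be checked in the current shell. A minimal sketch (''/usr/local/cuda/bin'' is the example path from above):&lt;br /&gt;
&lt;br /&gt;
```shell
# Append CUDA's bin directory to PATH for the current shell and verify
# that it is now the last entry. /usr/local/cuda/bin is the example path.
export PATH="$PATH:/usr/local/cuda/bin"
last_entry=$(printf '%s' "$PATH" | tr ':' '\n' | tail -1)
echo "$last_entry"   # → /usr/local/cuda/bin
```
&lt;br /&gt;
Remember that ''.bashrc'' only affects new shells; run ''source ~/.bashrc'' or open a new terminal for the change to apply.&lt;br /&gt;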
&lt;br /&gt;
CUDA is now installed.&lt;br /&gt;
&lt;br /&gt;
===Getting the source===&lt;br /&gt;
&lt;br /&gt;
Every dependency is installed. Time to download PCL's source code. First, you must choose whether to install the stable or the experimental branch of PCL. The stable branch is the latest official release and is guaranteed to work without problems. The experimental branch may occasionally fail to compile, but it offers interesting features that stable users will have to wait some months for. Apart from that, both are built the same way.&lt;br /&gt;
&lt;br /&gt;
To get the stable version, go to the [https://github.com/PointCloudLibrary/pcl/releases downloads] page, get &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''pcl-pcl-1.7.2.tar.gz''&amp;lt;/span&amp;gt; or whatever the latest release is, and extract it somewhere. For the experimental version, use Git:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install git&lt;br /&gt;
git clone https://github.com/PointCloudLibrary/pcl PCL-trunk-Source&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The compiled trunk version of PCL will take up more than 8 GB, so make sure you put the source folder in a partition with enough free space!&lt;br /&gt;
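&lt;br /&gt;
A quick way to check the available space before cloning, run from the directory where the source will live (the 10 GB threshold is just a suggested safety margin, not an official figure):&lt;br /&gt;
&lt;br /&gt;
```shell
# Print the free space (in GB) on the partition holding the current directory
# and warn if it is below a 10 GB threshold (an arbitrary safety margin).
avail_gb=$(df --output=avail -BG . | tail -1 | tr -dc '0-9')
echo "Free space: ${avail_gb} GB"
if [ "$avail_gb" -lt 10 ]; then
    echo "Warning: less than 10 GB free, consider another partition"
fi
```
&lt;br /&gt;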
&lt;br /&gt;
===Compiling===&lt;br /&gt;
&lt;br /&gt;
Go to the PCL source directory (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''pcl-pcl-1.7.2/''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''PCL-trunk-Source/''&amp;lt;/span&amp;gt;), and create a new subdirectory to keep the build files in:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;mkdir build&lt;br /&gt;
cd build&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now it is time to configure the project using CMake. We will tell it to build in Release (fully optimized, no debug capabilities) mode now, and customize the rest of the options later:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake -DCMAKE_BUILD_TYPE=Release ..&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CMake should be able to find every dependency, and thus build every subsystem except for the ones marked as &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Disabled by default&amp;quot;''&amp;lt;/span&amp;gt;. If you are happy with that, you can build now; otherwise, invoke CMake's curses interface to change a couple of things (mind the final dot):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;ccmake .&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Ccmake_GUI_PCL.png|thumb|center|900px|Interface of ''ccmake''.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here you can change the build options. Usage instructions can be found at the bottom of the screen. Try turning all functionality &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;ON&amp;quot;&amp;lt;/span&amp;gt;. The most important thing, in case you want to use CUDA, is to enable it and give CMake the path to your SDK. Go to the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CUDA_SDK_ROOT_DIR&amp;quot;&amp;lt;/span&amp;gt; option and enter the correct path (probably &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/local/cuda/''&amp;lt;/span&amp;gt; or similar).&lt;br /&gt;
&lt;br /&gt;
When you are done, press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; to configure and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''G'''&amp;lt;/span&amp;gt; to generate and exit the tool. Sometimes, the options you change can activate previously omitted parameters, or prompt some warning text. Just press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''E'''&amp;lt;/span&amp;gt; when you are finished reading the message, and keep pressing &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; until it lets you generate (new parameters will be marked with an asterisk, so you can check them and decide whether or not you want further customization).&lt;br /&gt;
&lt;br /&gt;
If you are done configuring, it is time to build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;make&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: Additionally, you can append the parameter -jX to speed up the compilation, X being the number of cores or processors of your PC, plus one.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
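&lt;br /&gt;
The core count does not have to be typed by hand; ''nproc'' (from GNU coreutils) reports it, so the note above can be scripted. A dry run that only prints the resulting command:&lt;br /&gt;
&lt;br /&gt;
```shell
# Compute "number of cores + 1" and show the make invocation it yields.
# Replace the echo with the actual make call when building for real.
cores=$(nproc)
jobs=$((cores + 1))
echo "make -j${jobs}"
```
&lt;br /&gt;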
&lt;br /&gt;
Remember that, at any time, you can manually force the project to be reconfigured and built from scratch by emptying the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''build/''&amp;lt;/span&amp;gt; directory with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;rm -rf ./*&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Installing===&lt;br /&gt;
&lt;br /&gt;
It will take some time to compile PCL (up to a few hours if your PC is not powerful enough). When it is finished, install it system-wide with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo make install&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reboot, then proceed to the next section to check whether your computer now recognizes (and uses) your Kinect device.&lt;br /&gt;
&lt;br /&gt;
=Testing (OpenNI viewer)=&lt;br /&gt;
&lt;br /&gt;
We are going to write a simple example program that will fetch data from the Kinect or Xtion and present it to the user, using the PCL library. It will also let you save the current frame (as a point cloud) to disk. If you feel lazy, you can download it [http://robotica.unileon.es/~victorm/PCL_openni_viewer.tar.gz here], and skip the next two sections. Otherwise, create a new directory anywhere on your hard disk and proceed.&lt;br /&gt;
&lt;br /&gt;
==CMakeLists.txt==&lt;br /&gt;
&lt;br /&gt;
Inside that directory, create a new text file named &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''CMakeLists.txt''&amp;lt;/span&amp;gt;. PCL-based programs use the CMake build system, too. Open it with any editor and paste the following content:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CMake&amp;gt;cmake_minimum_required(VERSION 2.8 FATAL_ERROR)&lt;br /&gt;
 &lt;br /&gt;
project(PCL_openni_viewer)&lt;br /&gt;
 &lt;br /&gt;
find_package(PCL 1.7 REQUIRED)&lt;br /&gt;
 &lt;br /&gt;
include_directories(${PCL_INCLUDE_DIRS})&lt;br /&gt;
link_directories(${PCL_LIBRARY_DIRS})&lt;br /&gt;
add_definitions(${PCL_DEFINITIONS})&lt;br /&gt;
 &lt;br /&gt;
set(PCL_BUILD_TYPE Release)&lt;br /&gt;
 &lt;br /&gt;
file(GLOB PCL_openni_viewer_SRC&lt;br /&gt;
    &amp;quot;src/*.h&amp;quot;&lt;br /&gt;
    &amp;quot;src/*.cpp&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
add_executable(openniViewer ${PCL_openni_viewer_SRC})&lt;br /&gt;
 &lt;br /&gt;
target_link_libraries (openniViewer ${PCL_LIBRARIES})&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CMake syntax is quite self-explanatory. We ask for a CMake installation, version 2.8 minimum. We declare a new project named &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCL_openni_viewer&amp;quot;&amp;lt;/span&amp;gt;. We tell CMake to check for the presence of PCL library development files, version 1.7. If our system cannot meet the CMake and PCL version requirements, the process will fail.&lt;br /&gt;
&lt;br /&gt;
Next, we feed the compiler and linker the directories where PCL includes and libraries can be found, and the defined symbols. We tell CMake to use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Release&amp;quot;&amp;lt;/span&amp;gt; build type, which will activate certain optimizations depending on the compiler we use. Other build types are available, like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Debug&amp;quot;&amp;lt;/span&amp;gt;, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;MinSizeRel&amp;quot;&amp;lt;/span&amp;gt;, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;RelWithDebInfo&amp;quot;&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Finally, we create a variable, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCL_openni_viewer_SRC&amp;quot;&amp;lt;/span&amp;gt;, that will store the list of files to be compiled (though we will only have one). We create a new binary to be compiled from these source files, and we link it with the PCL library.&lt;br /&gt;
&lt;br /&gt;
Check the [https://cmake.org/Wiki/CMake_Useful_Variables CMake help] for more interesting options.&lt;br /&gt;
&lt;br /&gt;
==main.cpp==&lt;br /&gt;
&lt;br /&gt;
We told CMake it could find the source files in a &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''src/''&amp;lt;/span&amp;gt; subdirectory, so let's keep to our word and create it. Then, add a new &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''main.cpp''&amp;lt;/span&amp;gt; file inside and paste the following lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Original code by Geoffrey Biggs, taken from the PCL tutorial in&lt;br /&gt;
// http://pointclouds.org/documentation/tutorials/pcl_visualizer.php&lt;br /&gt;
&lt;br /&gt;
// Simple OpenNI viewer that also allows to write the current scene to a .pcd&lt;br /&gt;
// when pressing SPACE.&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;pcl/io/openni_grabber.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/console/parse.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
using namespace std;&lt;br /&gt;
using namespace pcl;&lt;br /&gt;
&lt;br /&gt;
PointCloud&amp;lt;PointXYZRGBA&amp;gt;::Ptr cloudptr(new PointCloud&amp;lt;PointXYZRGBA&amp;gt;); // A cloud that will store color info.&lt;br /&gt;
PointCloud&amp;lt;PointXYZ&amp;gt;::Ptr fallbackCloud(new PointCloud&amp;lt;PointXYZ&amp;gt;);    // A fallback cloud with just depth data.&lt;br /&gt;
boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt; viewer;                 // Point cloud viewer object.&lt;br /&gt;
Grabber* openniGrabber;                                               // OpenNI grabber that takes data from the device.&lt;br /&gt;
unsigned int filesSaved = 0;                                          // For the numbering of the clouds saved to disk.&lt;br /&gt;
bool saveCloud(false), noColor(false);                                // Program control.&lt;br /&gt;
&lt;br /&gt;
void&lt;br /&gt;
printUsage(const char* programName)&lt;br /&gt;
{&lt;br /&gt;
	cout &amp;lt;&amp;lt; &amp;quot;Usage: &amp;quot; &amp;lt;&amp;lt; programName &amp;lt;&amp;lt; &amp;quot; [options]&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;Options:\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t&amp;lt;none&amp;gt;     start capturing from an OpenNI device.\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t-v FILE    visualize the given .pcd file.\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t-h         shows this help.\n&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// This function is called every time the device has new data.&lt;br /&gt;
void&lt;br /&gt;
grabberCallback(const PointCloud&amp;lt;PointXYZRGBA&amp;gt;::ConstPtr&amp;amp; cloud)&lt;br /&gt;
{&lt;br /&gt;
	if (! viewer-&amp;gt;wasStopped())&lt;br /&gt;
		viewer-&amp;gt;showCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	if (saveCloud)&lt;br /&gt;
	{&lt;br /&gt;
		stringstream stream;&lt;br /&gt;
		stream &amp;lt;&amp;lt; &amp;quot;inputCloud&amp;quot; &amp;lt;&amp;lt; filesSaved &amp;lt;&amp;lt; &amp;quot;.pcd&amp;quot;;&lt;br /&gt;
		string filename = stream.str();&lt;br /&gt;
		if (io::savePCDFile(filename, *cloud, true) == 0)&lt;br /&gt;
		{&lt;br /&gt;
			filesSaved++;&lt;br /&gt;
			cout &amp;lt;&amp;lt; &amp;quot;Saved &amp;quot; &amp;lt;&amp;lt; filename &amp;lt;&amp;lt; &amp;quot;.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		}&lt;br /&gt;
		else PCL_ERROR(&amp;quot;Problem saving %s.\n&amp;quot;, filename.c_str());&lt;br /&gt;
&lt;br /&gt;
		saveCloud = false;&lt;br /&gt;
	}&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// For detecting when SPACE is pressed.&lt;br /&gt;
void&lt;br /&gt;
keyboardEventOccurred(const visualization::KeyboardEvent&amp;amp; event,&lt;br /&gt;
					  void* nothing)&lt;br /&gt;
{&lt;br /&gt;
	if (event.getKeySym() == &amp;quot;space&amp;quot; &amp;amp;&amp;amp; event.keyDown())&lt;br /&gt;
		saveCloud = true;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Creates, initializes and returns a new viewer.&lt;br /&gt;
boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt;&lt;br /&gt;
createViewer()&lt;br /&gt;
{&lt;br /&gt;
	boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt; v&lt;br /&gt;
	(new visualization::CloudViewer(&amp;quot;OpenNI viewer&amp;quot;));&lt;br /&gt;
	v-&amp;gt;registerKeyboardCallback(keyboardEventOccurred);&lt;br /&gt;
&lt;br /&gt;
	return (v);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	if (console::find_argument(argc, argv, &amp;quot;-h&amp;quot;) &amp;gt;= 0)&lt;br /&gt;
	{&lt;br /&gt;
		printUsage(argv[0]);&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	bool justVisualize(false);&lt;br /&gt;
	string filename;&lt;br /&gt;
	if (console::find_argument(argc, argv, &amp;quot;-v&amp;quot;) &amp;gt;= 0)&lt;br /&gt;
	{&lt;br /&gt;
		if (argc != 3)&lt;br /&gt;
		{&lt;br /&gt;
			printUsage(argv[0]);&lt;br /&gt;
			return -1;&lt;br /&gt;
		}&lt;br /&gt;
&lt;br /&gt;
		filename = argv[2];&lt;br /&gt;
		justVisualize = true;&lt;br /&gt;
	}&lt;br /&gt;
	else if (argc != 1)&lt;br /&gt;
	{&lt;br /&gt;
		printUsage(argv[0]);&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// First mode, open and show a cloud from disk.&lt;br /&gt;
	if (justVisualize)&lt;br /&gt;
	{&lt;br /&gt;
		// Try with color information...&lt;br /&gt;
		try&lt;br /&gt;
		{&lt;br /&gt;
			io::loadPCDFile&amp;lt;PointXYZRGBA&amp;gt;(filename.c_str(), *cloudptr);&lt;br /&gt;
		}&lt;br /&gt;
		catch (const PCLException&amp;amp; e1)&lt;br /&gt;
		{&lt;br /&gt;
			try&lt;br /&gt;
			{&lt;br /&gt;
				// ...and if it fails, fall back to just depth.&lt;br /&gt;
				io::loadPCDFile&amp;lt;PointXYZ&amp;gt;(filename.c_str(), *fallbackCloud);&lt;br /&gt;
			}&lt;br /&gt;
			catch (const PCLException&amp;amp; e2)&lt;br /&gt;
			{&lt;br /&gt;
				return -1;&lt;br /&gt;
			}&lt;br /&gt;
&lt;br /&gt;
			noColor = true;&lt;br /&gt;
		}&lt;br /&gt;
&lt;br /&gt;
		cout &amp;lt;&amp;lt; &amp;quot;Loaded &amp;quot; &amp;lt;&amp;lt; filename &amp;lt;&amp;lt; &amp;quot;.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		if (noColor)&lt;br /&gt;
			cout &amp;lt;&amp;lt; &amp;quot;This cloud has no RGBA color information present.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		else cout &amp;lt;&amp;lt; &amp;quot;This cloud has RGBA color information present.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
	}&lt;br /&gt;
	// Second mode, start fetching and displaying frames from the device.&lt;br /&gt;
	else&lt;br /&gt;
	{&lt;br /&gt;
		openniGrabber = new OpenNIGrabber();&lt;br /&gt;
		if (openniGrabber == 0)&lt;br /&gt;
			return -1;&lt;br /&gt;
		boost::function&amp;lt;void (const PointCloud&amp;lt;PointXYZRGBA&amp;gt;::ConstPtr&amp;amp;)&amp;gt; f =&lt;br /&gt;
			boost::bind(&amp;amp;grabberCallback, _1);&lt;br /&gt;
		openniGrabber-&amp;gt;registerCallback(f);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	viewer = createViewer();&lt;br /&gt;
&lt;br /&gt;
	if (justVisualize)&lt;br /&gt;
	{&lt;br /&gt;
		if (noColor)&lt;br /&gt;
			viewer-&amp;gt;showCloud(fallbackCloud);&lt;br /&gt;
		else viewer-&amp;gt;showCloud(cloudptr);&lt;br /&gt;
	}&lt;br /&gt;
	else openniGrabber-&amp;gt;start();&lt;br /&gt;
&lt;br /&gt;
	// Main loop.&lt;br /&gt;
	while (! viewer-&amp;gt;wasStopped())&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::seconds(1));&lt;br /&gt;
&lt;br /&gt;
	if (! justVisualize)&lt;br /&gt;
		openniGrabber-&amp;gt;stop();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save and close.&lt;br /&gt;
&lt;br /&gt;
==Compiling==&lt;br /&gt;
&lt;br /&gt;
Follow the same steps you used to build PCL. That is, create a new &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''build/''&amp;lt;/span&amp;gt; subdirectory next to the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''src/''&amp;lt;/span&amp;gt; one. Open a terminal there and issue:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake -DCMAKE_BUILD_TYPE=Release ..&lt;br /&gt;
make&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Executing==&lt;br /&gt;
&lt;br /&gt;
Still from the same terminal, run the compiled example program:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./openniViewer&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a few seconds, the main window will appear and the application will start grabbing frames from the device. You can inspect the current point cloud using the mouse, holding the left button to rotate, the right one (or the mouse wheel) to zoom, and the middle one to pan the camera around. At first, you may see only a black screen, or some big colored axes, but no cloud; try zooming out to see the whole scene. Another useful key is &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''R'''&amp;lt;/span&amp;gt;, which resets the camera parameters when pressed. Use it whenever you notice that zooming has become slow after some camera movement, or if you still cannot see the cloud. See the [http://pointclouds.org/documentation/tutorials/pcl_visualizer.php PCLVisualizer] tutorial for additional controls and features.&lt;br /&gt;
&lt;br /&gt;
Whenever you feel ready, press the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''SPACE'''&amp;lt;/span&amp;gt; key. The program will pause for a fraction of a second and the output &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Saved inputCloud0.pcd.&amp;quot;''&amp;lt;/span&amp;gt; will appear on the console. Check the current folder to see that file &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''inputCloud0.pcd''&amp;lt;/span&amp;gt; has indeed been written. You can now close the program with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Next, run it again giving the following parameter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./openniViewer -v inputCloud0.pcd&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells the program not to take data from the device, but from the saved point cloud file instead. After it loads, you will see the same scene you saved to disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: PCD data is saved relative to the sensor. No matter how much you have manipulated the view, it will reset to default when you load the file.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
At this point, your Kinect device should be working and getting depth data for you. There is a collection of excellent [http://pointclouds.org/documentation/tutorials/ tutorials] for PCL on the official webpage. I encourage you to finish them all before proceeding with your experiments with the Kinect sensor. You can also find a good introduction/tutorial to the PCL library [http://www.pointclouds.org/assets/uploads/PCL-IAS13_Introduction.pdf here].&lt;br /&gt;
&lt;br /&gt;
If you use an ASUS Xtion PRO device instead, you should have gotten everything to work without problems or additional steps (except maybe for [[PCL/OpenNI_troubleshooting#ASUS_Xtion_PRO | this one]]).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_1:_Installing_and_testing&amp;diff=4734</id>
		<title>PCL/OpenNI tutorial 1: Installing and testing</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_1:_Installing_and_testing&amp;diff=4734"/>
				<updated>2015-11-01T22:57:28Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Kinect.jpg|thumb|right|200px|Microsoft Kinect device.]]&lt;br /&gt;
&lt;br /&gt;
This series of tutorials will explain the usage of a depth camera like [https://en.wikipedia.org/wiki/Kinect Kinect] for &amp;quot;serious&amp;quot; research purposes. As you may know, Kinect is in fact an affordable depth sensor, developed with technology from [https://en.wikipedia.org/wiki/PrimeSense PrimeSense] and based on the infrared structured-light method. It also has a common camera (which makes it an RGB-D device), a microphone, and a motorized pivot. Its use is not limited to playing with an Xbox 360 console; you can plug it into a computer and use it like any other sensor. Many open-source drivers and frameworks are available.&lt;br /&gt;
&lt;br /&gt;
Since its release in November 2010, it has gained a lot of popularity, especially among the scientific community. Many researchers have procured one because, despite its low cost (about $100), it has proven to be a powerful solution for depth sensing projects. Current investigations focus on real-time surface mapping, object recognition and tracking, and localization. Impressive results (like the [http://research.microsoft.com/en-us/projects/surfacerecon/ KinectFusion] project from Microsoft) are already possible.&lt;br /&gt;
&lt;br /&gt;
The new [https://en.wikipedia.org/wiki/Xbox_One Xbox One] ships with an upgraded version, [https://en.wikipedia.org/wiki/Kinect_for_Xbox_One Kinect v2], with enhanced resolution, which is able to detect your facial expression, measure your heart rate, and track every one of your fingers. A PC development-ready version ([https://en.wikipedia.org/wiki/Kinect_for_Xbox_One#Kinect_for_Windows Kinect for Windows v2]) was released in July 2014, but it could only be used with the official Windows SDK (open-source support [https://github.com/OpenKinect/libfreenect2 exists], but is still young). In October 2014, an adapter that allows you to plug the standard Kinect v2 into a PC was released, so the development version of the sensor was discontinued in April 2015. Now you can just buy the [http://www.microsoftstore.com/store/msusa/en_US/pdp/Kinect-for-Xbox-One/productID.307499400 standard one] for the console plus the [http://www.microsoftstore.com/store/msusa/en_US/pdp/Kinect-Adapter-for-Windows/productID.308803600 adapter].&lt;br /&gt;
&lt;br /&gt;
I will explain the installation and usage of one of the &amp;quot;legacy&amp;quot; Kinect 1.0 devices with a common PC, and the possibilities it offers. I will do it in an easy-to-understand way, intended for students who have just acquired the device and want to start from scratch.&lt;br /&gt;
&lt;br /&gt;
Keep in mind that the software we are going to use (Point Cloud Library and OpenNI drivers) will also let you use any other device like the [https://www.asus.com/3D-Sensor/Xtion_PRO/ Xtion PRO] or [https://www.asus.com/3D-Sensor/Xtion_PRO_LIVE/ Xtion PRO LIVE] from ASUS (the PRO only has a depth sensor, while the PRO LIVE is an RGB-D camera) without changing a line of code.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: The tutorials are written for Linux platforms. Also, 64-bit versions seem to work better than 32-bit.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Requirements=&lt;br /&gt;
&lt;br /&gt;
You will need the following:&lt;br /&gt;
&lt;br /&gt;
* A common Kinect device, out of the box. You can buy it in your local electronics shop, or online. It also includes a free copy of ''[https://en.wikipedia.org/wiki/Kinect_Adventures! Kinect Adventures!]'', which is useless if you do not own the console. Microsoft has released a ''Kinect for Windows'' device, a normal-looking Kinect no longer compatible with the Xbox 360, that will only work with their official SDK, intended for developers only. Also, as I stated earlier, you can use an ASUS Xtion interchangeably.&lt;br /&gt;
* A computer running Linux (Debian or Ubuntu preferably).&lt;br /&gt;
* A medium-sized room. Kinect has some limitations for depth measurement: 40 cm minimum, 8 m maximum (6 m to be safe).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: ''Kinect for Windows'' may have problems working with open source drivers on Linux.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Connecting everything=&lt;br /&gt;
&lt;br /&gt;
Kinect does not work with a common USB port. Its power consumption is a bit higher because of the motor, so Microsoft came up with a connector that combines USB and a power supply. Old Xbox 360 models needed a special adapter; newer S models already have this port. Luckily, Kinect comes with the official adapter out of the box (otherwise you would have to [http://www.amazon.com/Power-Supply-Cable-Kinect-Xbox-360/dp/B004S7GA46/ref=sr_1_1?ie=UTF8&amp;amp;qid=1378288327&amp;amp;sr=8-1 buy one]).&lt;br /&gt;
&lt;br /&gt;
Just plug the adapter into any power socket, and the USB into your computer. Let's check by typing this in a terminal:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;lsusb&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Output should list the following devices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;Bus 001 Device 005: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor&lt;br /&gt;
Bus 001 Device 006: ID 045e:02ad Microsoft Corp. Xbox NUI Audio&lt;br /&gt;
Bus 001 Device 007: ID 045e:02ae Microsoft Corp. Xbox NUI Camera&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are using an Xtion, you should see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;Bus 001 Device 004: ID 1d27:0601 ASUS&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;0601&amp;quot; identifies the new Xtion model, &amp;quot;0600&amp;quot; the older one. Both should work the same. But try to avoid [[PCL/OpenNI_troubleshooting#ASUS_Xtion_PRO|USB 3.0 ports]]!&lt;br /&gt;
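&lt;br /&gt;
If several USB devices are connected, it can help to pick the vendor:product ID out of the ''lsusb'' output programmatically (045e is Microsoft's vendor ID, 1d27 is ASUS's). A minimal sketch, using one of the sample lines shown above as input:&lt;br /&gt;

```shell
# Extract the vendor:product ID (field 6) from a line of lsusb output.
# The sample line is copied from the expected Kinect output above.
line='Bus 001 Device 005: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor'
id=$(printf '%s\n' "$line" | awk '{print $6}')

case "$id" in
  045e:*) echo "Microsoft Kinect component detected ($id)" ;;
  1d27:*) echo "ASUS Xtion detected ($id)" ;;
  *)      echo "No known depth sensor in this line" ;;
esac
# prints: Microsoft Kinect component detected (045e:02b0)
```

On a live system, piping the real output through a filter such as ''lsusb | grep -E '045e|1d27''' gives the same quick check.&lt;br /&gt;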
&lt;br /&gt;
=Installing the software=&lt;br /&gt;
&lt;br /&gt;
There is more than one way to get your Kinect working on your PC, and start developing applications for it:&lt;br /&gt;
&lt;br /&gt;
* [https://www.microsoft.com/en-us/download/details.aspx?id=40278 Kinect for Windows SDK] and [https://www.microsoft.com/en-us/download/details.aspx?id=40276 Developer Toolkit]: released on June 16, 2011 as a non-commercial SDK intended for application development. Version 1.8, the last there will ever be now that Kinect v2 is out, was released in September 2013. Because it comes from Microsoft, it is obviously the easiest way to get everything working. Sadly, there is no Linux version.&lt;br /&gt;
* [https://github.com/OpenKinect/libfreenect libfreenect] library: from the [http://openkinect.org/wiki/Main_Page OpenKinect] project, it is intended to be a free and open source alternative to the official drivers. libfreenect is used by projects like [https://github.com/ofTheo/ofxKinect ofxKinect], an addon for the [http://www.openframeworks.cc/ openFrameworks] toolkit (and as of version 0.8, included in the core package) that runs on Linux and OS X. ofxKinect packs a nice example application to show the RGB and point cloud taken from Kinect.&lt;br /&gt;
* [https://github.com/PrimeSense/Sensor PrimeSense drivers]: they were released as open source after the [https://en.wikipedia.org/wiki/OpenNI OpenNI] project was created, along with the motion tracking middleware, ''NITE'', and the SDK. NI stands for Natural Interaction, and the project tried to enforce a common standard for human input using Kinect-like sensors. These official drivers are used by [http://wiki.ros.org/openni_kinect ROS] (the Robot Operating System, a massive collection of libraries and tools for robotics researchers) and [http://pointclouds.org/ PCL] (the Point Cloud Library, with everything needed for 3D point cloud processing). Sadly, version 2.0 of the OpenNI SDK dropped support for Kinect on Linux due to licensing issues, so for now PCL relies on legacy 1.x versions. Also, Apple bought PrimeSense in November 2013, and in April 2014 OpenNI's webpage was closed. The source is now being maintained by [http://structure.io/openni a third party].&lt;br /&gt;
* [https://github.com/avin2/SensorKinect SensorKinect]: a modified version of the official PrimeSense drivers, used for example by [https://github.com/gameoverhack/ofxOpenNI ofxOpenNI] (another openFrameworks addon). Last updated in 2012.&lt;br /&gt;
&lt;br /&gt;
For this tutorial, we are going to use PCL with the OpenNI drivers, so owners of an Xtion can also get it to work.&lt;br /&gt;
&lt;br /&gt;
==Precompiled PCL for Ubuntu==&lt;br /&gt;
&lt;br /&gt;
If you have a valid [http://wiki.ros.org/ROS/Installation installation of ROS] (through their repository), you do not have to install anything. Both OpenNI and PrimeSense drivers, as well as PCL, should be already installed. You can check it with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install libpcl-1.7-all libpcl-1.7-all-dev libopenni-dev libopenni-sensor-primesense-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The previous command should state that all packages are already installed (change the PCL version number as needed), or install them if not. If you get an error about overwriting some file, check [[PCL/OpenNI_troubleshooting#Trying_to_overwrite_X.2C_which_is_also_in_package_Y|this]].&lt;br /&gt;
&lt;br /&gt;
In case you do not want ROS, there is a [https://launchpad.net/~v-launchpad-jochen-sprickerhof-de/+archive/ubuntu/pcl PPA] (Personal Package Archive, a private repository) which has everything we need. Add it to your sources, and install everything:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo add-apt-repository ppa:v-launchpad-jochen-sprickerhof-de/pcl&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install build-essential cmake libpcl1.7 libpcl-dev pcl-tools&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Trying to mix ROS and PCL repositories and packages can cause some errors, so choose one of them and stick with it. Check the [[PCL/OpenNI troubleshooting]] page, because your Kinect may not work by default on 32-bit systems.&lt;br /&gt;
&lt;br /&gt;
==Compiling PCL from source==&lt;br /&gt;
&lt;br /&gt;
For Linux distributions without a precompiled version of PCL, you will need to compile it yourself. This actually has an advantage: you can customize the build options and choose what you want. Also, the resulting binaries and libraries should be a bit faster. And you always get the latest version! The instructions are [http://pointclouds.org/documentation/tutorials/compiling_pcl_posix.php here], but the steps are easy, so I will walk you through them.&lt;br /&gt;
&lt;br /&gt;
===Installing the dependencies===&lt;br /&gt;
&lt;br /&gt;
Some of PCL's dependencies can be installed via the package manager. Others will require additional work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash enclose=&amp;quot;div&amp;quot;&amp;gt;sudo apt-get install build-essential cmake cmake-curses-gui libboost-all-dev libeigen3-dev libflann-dev libvtk5-dev libvtk5-qt4-dev libglew-dev libxmu-dev libsuitesparse-dev libqhull-dev libpcap-dev libxmu-dev libxi-dev libgtest-dev libqt4-opengl-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The trunk version of PCL uses [https://en.wikipedia.org/wiki/VTK VTK] 6 and [https://en.wikipedia.org/wiki/Qt_%28software%29 Qt] 5, so if you intend to compile it, you must install the following packages (say yes if you are asked to remove VTK 5 and Qt 4 packages):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash enclose=&amp;quot;div&amp;quot;&amp;gt;sudo apt-get install libvtk6-dev libqt5opengl5-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====JDK====&lt;br /&gt;
&lt;br /&gt;
OpenNI requires Sun's official JDK (Java Development Kit), which is no longer available on official apt repositories. You can use [http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html unofficial ones], or do it manually. For the last method, go to the [http://www.oracle.com/technetwork/java/javase/downloads/index.html Java SE downloads] page (SE means Standard Edition) and download the latest version (e.g., &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''jdk-8u66-linux-x64.tar.gz''&amp;lt;/span&amp;gt;). Extract it, then move the contents to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/lib/jvm/''&amp;lt;/span&amp;gt; so it is available system-wide:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo mkdir -p /usr/lib/jvm/&lt;br /&gt;
sudo cp -r jdk1.8.0_66/ /usr/lib/jvm/&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, make it the default choice to compile and run Java programs. Remember to change the version number as needed!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo update-alternatives --install &amp;quot;/usr/bin/java&amp;quot; &amp;quot;java&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/java&amp;quot; 1&lt;br /&gt;
sudo update-alternatives --install &amp;quot;/usr/bin/javac&amp;quot; &amp;quot;javac&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/javac&amp;quot; 1&lt;br /&gt;
sudo update-alternatives --install &amp;quot;/usr/bin/jar&amp;quot; &amp;quot;jar&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/jar&amp;quot; 1&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To be sure, use the following commands to manually select the correct option, in case there is more than one choice:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo update-alternatives --config java&lt;br /&gt;
sudo update-alternatives --config javac&lt;br /&gt;
sudo update-alternatives --config jar&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are still not sure, run them and display the version, making sure it is the one you installed:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;java -version&lt;br /&gt;
javac -version&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sun's JDK is now installed.&lt;br /&gt;
&lt;br /&gt;
====OpenNI====&lt;br /&gt;
&lt;br /&gt;
PCL uses OpenNI and the PrimeSense drivers to get data from devices like Kinect or Xtion. It is optional, but it would not make much sense not to install it, would it? If you are using Ubuntu, add the [[#Precompiled PCL for Ubuntu | PPA]] and install &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-dev''&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-sensor-primesense-dev''&amp;lt;/span&amp;gt;, which should be done already. Otherwise, fetch the [https://github.com/OpenNI/OpenNI OpenNI] and [https://github.com/PrimeSense/Sensor PrimeSense Sensor] sources from GitHub (download them as .zip, the link is on the right). Extract them, and install the dependencies:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install python libusb-1.0-0-dev freeglut3-dev doxygen graphviz&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Go to the directory where you extracted OpenNI (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''OpenNI-master/''&amp;lt;/span&amp;gt; for me), and open a terminal in the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/CreateRedist/''&amp;lt;/span&amp;gt; subdirectory. Issue:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./RedistMaker&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When it finishes, and if there are no errors (check the [[PCL/OpenNI troubleshooting]] page if you get any), go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/Redist/OpenNI-Bin-Dev-Linux-x64-v1.5.7.10/''&amp;lt;/span&amp;gt; (or your equivalent), and install (you must be root):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ./install.sh&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, go to the directory where you extracted the PrimeSense drivers (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Sensor-master/''&amp;lt;/span&amp;gt; for me), and repeat the exact same procedure (go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/CreateRedist/''&amp;lt;/span&amp;gt;, issue &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''./RedistMaker''&amp;lt;/span&amp;gt;, go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/Redist/Sensor-Bin-Linux-x64-v5.1.6.6/''&amp;lt;/span&amp;gt;, issue &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''sudo ./install.sh''&amp;lt;/span&amp;gt;). Congratulations, you have now installed OpenNI.&lt;br /&gt;
&lt;br /&gt;
====CUDA====&lt;br /&gt;
&lt;br /&gt;
Like OpenNI, [https://en.wikipedia.org/wiki/CUDA nVidia CUDA] is an optional dependency that will allow PCL to use your GPU (Graphics Processing Unit, that is, your graphics card) for certain computations. It is mandatory for tools like KinFu (do not bother unless you have at least a 400-series card with 1.5 GB of VRAM).&lt;br /&gt;
&lt;br /&gt;
Some distributions provide packages for CUDA in the repositories. For example, in Ubuntu:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install nvidia-cuda-dev nvidia-cuda-toolkit&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to install it manually (which is incompatible with the previous method), go to the [https://developer.nvidia.com/cuda-downloads CUDA downloads] page, which is self-explanatory, and get the small .deb file or the huge .run toolkit/SDK installer file for your system (you should have installed the nVidia drivers already, but the installer will also do it for you if needed).&lt;br /&gt;
&lt;br /&gt;
If you chose the .deb file, install it using the method of your choice, like the Gdebi package manager, or through the console:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo dpkg -i &amp;lt;package.deb&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The .deb does not contain CUDA itself; it just adds nVidia's repository to your software sources. Now you must install everything:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get update&lt;br /&gt;
sudo apt-get install cuda&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If, on the other hand, you downloaded the .run, give it execution permissions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;chmod +x cuda_7.0.28_linux.run&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And install it. You can use the default options, although if you have a working nVidia graphics driver for your card, you may want to say &amp;quot;no&amp;quot; when the installer offers to install it for you:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ./cuda_7.0.28_linux.run&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The environment variables need to be changed so your system can find CUDA's libraries and binaries. Open &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''/etc/ld.so.conf''&amp;lt;/span&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo nano /etc/ld.so.conf&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And append one of these two lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;/usr/local/cuda/lib64 # Add this on 64-bit only.&lt;br /&gt;
/usr/local/cuda/lib # Add this on 32-bit only.&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+O'''&amp;lt;/span&amp;gt; and Enter, exit with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+X'''&amp;lt;/span&amp;gt;. Reload the cache of the dynamic linker with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ldconfig&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, append CUDA's bin directory to your PATH. Do this by editing your local &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''.bashrc''&amp;lt;/span&amp;gt; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;nano ~/.bashrc&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And append this line:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;export PATH=$PATH:/usr/local/cuda/bin&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CUDA is now installed.&lt;br /&gt;
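&lt;br /&gt;
To verify that the PATH change took effect (open a new terminal, or re-read &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''.bashrc''&amp;lt;/span&amp;gt; with ''source ~/.bashrc'' first), here is a small sketch; it assumes the default ''/usr/local/cuda'' install prefix used above:&lt;br /&gt;

```shell
# Succeeds if the given PATH-style string already contains the CUDA bin directory.
has_cuda_path() {
  case ":$1:" in
    *:/usr/local/cuda/bin:*) return 0 ;;
    *) return 1 ;;
  esac
}

if has_cuda_path "$PATH"; then
  echo "CUDA bin directory is on the PATH"
else
  echo "CUDA bin directory is NOT on the PATH yet; re-read ~/.bashrc"
fi
```

Once the directory is on the PATH, ''nvcc --version'' should print the toolkit release you installed.&lt;br /&gt;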
&lt;br /&gt;
===Getting the source===&lt;br /&gt;
&lt;br /&gt;
Every dependency is installed. Time to download PCL's source code. First, you must choose whether to install the stable or the experimental branch of PCL. The stable branch is the latest official release and should work without problems. The experimental branch may occasionally give you a compilation error, but it has interesting features that stable users will have to wait some months for. Apart from that, both are built the same way.&lt;br /&gt;
&lt;br /&gt;
To get the stable version, go to the [https://github.com/PointCloudLibrary/pcl/releases downloads] page, get &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''pcl-pcl-1.7.2.tar.gz''&amp;lt;/span&amp;gt; or whatever the latest release is, and extract it somewhere. For the experimental version, use Git:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install git&lt;br /&gt;
git clone https://github.com/PointCloudLibrary/pcl PCL-trunk-Source&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The compiled trunk version of PCL will take up more than 8 GB, so make sure you put the source folder in a partition with enough free space!&lt;br /&gt;
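&lt;br /&gt;
You can check the available space from the shell before cloning. A quick sketch (the ''--output'' flag assumes a reasonably recent GNU coreutils):&lt;br /&gt;

```shell
# Free space, in whole gigabytes, on the partition holding the current directory.
avail_gb=$(df -BG --output=avail . | tail -n 1 | tr -dc '0-9')
echo "Available space: ${avail_gb} GB"

if [ "$avail_gb" -lt 10 ]; then
  echo "Warning: less than 10 GB free; a full trunk build may not fit." >&2
fi
```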
&lt;br /&gt;
===Compiling===&lt;br /&gt;
&lt;br /&gt;
Go to the PCL source directory (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''pcl-pcl-1.7.2/''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''PCL-trunk-Source/''&amp;lt;/span&amp;gt;), and create a new subdirectory to keep the build files in:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;mkdir build&lt;br /&gt;
cd build&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now it is time to configure the project using CMake. We will tell it to build in Release (fully optimized, no debug capabilities) mode now, and customize the rest of the options later:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake -DCMAKE_BUILD_TYPE=Release ..&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CMake should be able to find every dependency, and thus build every subsystem except for the ones marked as &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Disabled by default&amp;quot;''&amp;lt;/span&amp;gt;. If you are happy with the defaults, you can build now; otherwise, let's invoke CMake's curses interface to change a couple of things (mind the final dot):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;ccmake .&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Ccmake_GUI_PCL.png|thumb|center|900px|Interface of ''ccmake''.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here you can change the build options. The program usage can be found at the bottom of the screen. Try turning all functionality &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;ON&amp;quot;&amp;lt;/span&amp;gt;. The most important thing, in case you want to use CUDA, is to enable it and give CMake the path to your SDK. Go to the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CUDA_SDK_ROOT_DIR&amp;quot;&amp;lt;/span&amp;gt; option and enter the correct path (probably &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/local/cuda/''&amp;lt;/span&amp;gt; or similar).&lt;br /&gt;
&lt;br /&gt;
When you are done, press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; to configure and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''G'''&amp;lt;/span&amp;gt; to generate and exit the tool. Sometimes, the options you change can activate previously omitted parameters, or prompt some warning text. Just press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''E'''&amp;lt;/span&amp;gt; when you are finished reading the message, and keep pressing &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; until it lets you generate (new parameters will be marked with an asterisk, so you can check them and decide whether or not you want further customization).&lt;br /&gt;
&lt;br /&gt;
If you are done configuring, it is time to build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;make&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: Additionally, you can append the parameter -jX to the make command to speed up the compilation, X being the number of cores or processors of your PC plus one.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
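&lt;br /&gt;
On Linux, ''nproc'' reports the number of available cores, so the value of X from the note above can be computed instead of hard-coded:&lt;br /&gt;

```shell
# Parallel build: one job per core, plus one extra to keep the CPU busy.
jobs=$(($(nproc) + 1))
echo "Would build with: make -j${jobs}"
# make -j"$jobs"   # run this inside the PCL build/ directory
```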
&lt;br /&gt;
Remember that, at any time, you can manually force the project to be reconfigured and built from scratch by emptying the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''build/''&amp;lt;/span&amp;gt; directory with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;rm -rf ./*&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Installing===&lt;br /&gt;
&lt;br /&gt;
It will take some time to compile PCL (up to a few hours if your PC is not powerful enough). When it is finished, install it system-wide with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo make install&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now reboot and proceed to the next section, to see whether your computer recognizes (and uses) your Kinect device.&lt;br /&gt;
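&lt;br /&gt;
After installing, you can also check that the dynamic linker sees the new PCL libraries. A small sketch; it assumes the shared libraries keep their stock ''libpcl_*'' names:&lt;br /&gt;

```shell
# Look for libpcl_common in the dynamic linker cache (run sudo ldconfig first
# if you installed to a non-default prefix).
if ldconfig -p | grep -q 'libpcl_common'; then
  echo "PCL libraries are registered with the dynamic linker."
else
  echo "libpcl_common not found; check the install prefix and run: sudo ldconfig" >&2
fi
```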
&lt;br /&gt;
=Testing (OpenNI viewer)=&lt;br /&gt;
&lt;br /&gt;
We are going to write a simple example program that will fetch data from the Kinect or Xtion and present it to the user, using the PCL library. It will also let you save the current frame (as a point cloud) to disk. If you feel lazy, you can download it [http://robotica.unileon.es/~victorm/PCL_openni_viewer.tar.gz here], and skip the next two sections. Otherwise, create a new directory anywhere on your hard disk and proceed.&lt;br /&gt;
&lt;br /&gt;
==CMakeLists.txt==&lt;br /&gt;
&lt;br /&gt;
Inside that directory, create a new text file named &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''CMakeLists.txt''&amp;lt;/span&amp;gt;. PCL-based programs use the CMake build system, too. Open it with any editor and paste the following content:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CMake&amp;gt;cmake_minimum_required(VERSION 2.8 FATAL_ERROR)&lt;br /&gt;
 &lt;br /&gt;
project(PCL_openni_viewer)&lt;br /&gt;
 &lt;br /&gt;
find_package(PCL 1.7 REQUIRED)&lt;br /&gt;
 &lt;br /&gt;
include_directories(${PCL_INCLUDE_DIRS})&lt;br /&gt;
link_directories(${PCL_LIBRARY_DIRS})&lt;br /&gt;
add_definitions(${PCL_DEFINITIONS})&lt;br /&gt;
 &lt;br /&gt;
set(PCL_BUILD_TYPE Release)&lt;br /&gt;
 &lt;br /&gt;
file(GLOB PCL_openni_viewer_SRC&lt;br /&gt;
    &amp;quot;src/*.h&amp;quot;&lt;br /&gt;
    &amp;quot;src/*.cpp&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
add_executable(openniViewer ${PCL_openni_viewer_SRC})&lt;br /&gt;
 &lt;br /&gt;
target_link_libraries (openniViewer ${PCL_LIBRARIES})&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CMake syntax is quite self-explanatory. We ask for a CMake installation of version 2.8, minimum. We declare a new project named &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCL_openni_viewer&amp;quot;&amp;lt;/span&amp;gt;. We tell CMake to check for the presence of PCL library development files, version 1.7. If our system cannot meet the CMake and PCL version requirements, the process will fail.&lt;br /&gt;
&lt;br /&gt;
Next, we feed the compiler and linker the directories where PCL includes and libraries can be found, and the defined symbols. We tell CMake to use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Release&amp;quot;&amp;lt;/span&amp;gt; build type, which will activate certain optimizations depending on the compiler we use. Other build types are available, like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Debug&amp;quot;&amp;lt;/span&amp;gt;, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;MinSizeRel&amp;quot;&amp;lt;/span&amp;gt;, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;RelWithDebInfo&amp;quot;&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Finally, we create a variable, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCL_openni_viewer_SRC&amp;quot;&amp;lt;/span&amp;gt;, that will store the list of files to be compiled (though we will only have one). We create a new binary to be compiled from these source files, and we link it with the PCL library.&lt;br /&gt;
&lt;br /&gt;
Check the [http://www.cmake.org/Wiki/CMake_Useful_Variables CMake help] for more interesting options.&lt;br /&gt;
&lt;br /&gt;
==main.cpp==&lt;br /&gt;
&lt;br /&gt;
We told CMake it could find the source files in a &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''src/''&amp;lt;/span&amp;gt; subdirectory, so let's keep our word and create it. Then, add a new &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''main.cpp''&amp;lt;/span&amp;gt; file inside and paste the following lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Original code by Geoffrey Biggs, taken from the PCL tutorial in&lt;br /&gt;
// http://pointclouds.org/documentation/tutorials/pcl_visualizer.php&lt;br /&gt;
&lt;br /&gt;
// Simple OpenNI viewer that also allows to write the current scene to a .pcd&lt;br /&gt;
// when pressing SPACE.&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;pcl/io/openni_grabber.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/console/parse.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
using namespace std;&lt;br /&gt;
using namespace pcl;&lt;br /&gt;
&lt;br /&gt;
PointCloud&amp;lt;PointXYZRGBA&amp;gt;::Ptr cloudptr(new PointCloud&amp;lt;PointXYZRGBA&amp;gt;); // A cloud that will store color info.&lt;br /&gt;
PointCloud&amp;lt;PointXYZ&amp;gt;::Ptr fallbackCloud(new PointCloud&amp;lt;PointXYZ&amp;gt;);    // A fallback cloud with just depth data.&lt;br /&gt;
boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt; viewer;                 // Point cloud viewer object.&lt;br /&gt;
Grabber* openniGrabber;                                               // OpenNI grabber that takes data from the device.&lt;br /&gt;
unsigned int filesSaved = 0;                                          // For the numbering of the clouds saved to disk.&lt;br /&gt;
bool saveCloud(false), noColor(false);                                // Program control.&lt;br /&gt;
&lt;br /&gt;
void&lt;br /&gt;
printUsage(const char* programName)&lt;br /&gt;
{&lt;br /&gt;
	cout &amp;lt;&amp;lt; &amp;quot;Usage: &amp;quot; &amp;lt;&amp;lt; programName &amp;lt;&amp;lt; &amp;quot; [options]&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;Options:\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t&amp;lt;none&amp;gt;     start capturing from an OpenNI device.\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t-v FILE    visualize the given .pcd file.\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t-h         shows this help.\n&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// This function is called every time the device has new data.&lt;br /&gt;
void&lt;br /&gt;
grabberCallback(const PointCloud&amp;lt;PointXYZRGBA&amp;gt;::ConstPtr&amp;amp; cloud)&lt;br /&gt;
{&lt;br /&gt;
	if (! viewer-&amp;gt;wasStopped())&lt;br /&gt;
		viewer-&amp;gt;showCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	if (saveCloud)&lt;br /&gt;
	{&lt;br /&gt;
		stringstream stream;&lt;br /&gt;
		stream &amp;lt;&amp;lt; &amp;quot;inputCloud&amp;quot; &amp;lt;&amp;lt; filesSaved &amp;lt;&amp;lt; &amp;quot;.pcd&amp;quot;;&lt;br /&gt;
		string filename = stream.str();&lt;br /&gt;
		if (io::savePCDFile(filename, *cloud, true) == 0)&lt;br /&gt;
		{&lt;br /&gt;
			filesSaved++;&lt;br /&gt;
			cout &amp;lt;&amp;lt; &amp;quot;Saved &amp;quot; &amp;lt;&amp;lt; filename &amp;lt;&amp;lt; &amp;quot;.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		}&lt;br /&gt;
		else PCL_ERROR(&amp;quot;Problem saving %s.\n&amp;quot;, filename.c_str());&lt;br /&gt;
&lt;br /&gt;
		saveCloud = false;&lt;br /&gt;
	}&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// For detecting when SPACE is pressed.&lt;br /&gt;
void&lt;br /&gt;
keyboardEventOccurred(const visualization::KeyboardEvent&amp;amp; event,&lt;br /&gt;
					  void* nothing)&lt;br /&gt;
{&lt;br /&gt;
	if (event.getKeySym() == &amp;quot;space&amp;quot; &amp;amp;&amp;amp; event.keyDown())&lt;br /&gt;
		saveCloud = true;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Creates, initializes and returns a new viewer.&lt;br /&gt;
boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt;&lt;br /&gt;
createViewer()&lt;br /&gt;
{&lt;br /&gt;
	boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt; v&lt;br /&gt;
	(new visualization::CloudViewer(&amp;quot;OpenNI viewer&amp;quot;));&lt;br /&gt;
	v-&amp;gt;registerKeyboardCallback(keyboardEventOccurred);&lt;br /&gt;
&lt;br /&gt;
	return (v);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	if (console::find_argument(argc, argv, &amp;quot;-h&amp;quot;) &amp;gt;= 0)&lt;br /&gt;
	{&lt;br /&gt;
		printUsage(argv[0]);&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	bool justVisualize(false);&lt;br /&gt;
	string filename;&lt;br /&gt;
	if (console::find_argument(argc, argv, &amp;quot;-v&amp;quot;) &amp;gt;= 0)&lt;br /&gt;
	{&lt;br /&gt;
		if (argc != 3)&lt;br /&gt;
		{&lt;br /&gt;
			printUsage(argv[0]);&lt;br /&gt;
			return -1;&lt;br /&gt;
		}&lt;br /&gt;
&lt;br /&gt;
		filename = argv[2];&lt;br /&gt;
		justVisualize = true;&lt;br /&gt;
	}&lt;br /&gt;
	else if (argc != 1)&lt;br /&gt;
	{&lt;br /&gt;
		printUsage(argv[0]);&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// First mode, open and show a cloud from disk.&lt;br /&gt;
	if (justVisualize)&lt;br /&gt;
	{&lt;br /&gt;
		// Try with color information...&lt;br /&gt;
		try&lt;br /&gt;
		{&lt;br /&gt;
			io::loadPCDFile&amp;lt;PointXYZRGBA&amp;gt;(filename.c_str(), *cloudptr);&lt;br /&gt;
		}&lt;br /&gt;
		catch (const PCLException&amp;amp; e1)&lt;br /&gt;
		{&lt;br /&gt;
			try&lt;br /&gt;
			{&lt;br /&gt;
				// ...and if it fails, fall back to just depth.&lt;br /&gt;
				io::loadPCDFile&amp;lt;PointXYZ&amp;gt;(filename.c_str(), *fallbackCloud);&lt;br /&gt;
			}&lt;br /&gt;
			catch (const PCLException&amp;amp; e2)&lt;br /&gt;
			{&lt;br /&gt;
				return -1;&lt;br /&gt;
			}&lt;br /&gt;
&lt;br /&gt;
			noColor = true;&lt;br /&gt;
		}&lt;br /&gt;
&lt;br /&gt;
		cout &amp;lt;&amp;lt; &amp;quot;Loaded &amp;quot; &amp;lt;&amp;lt; filename &amp;lt;&amp;lt; &amp;quot;.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		if (noColor)&lt;br /&gt;
			cout &amp;lt;&amp;lt; &amp;quot;This cloud has no RGBA color information present.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		else cout &amp;lt;&amp;lt; &amp;quot;This cloud has RGBA color information present.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
	}&lt;br /&gt;
	// Second mode, start fetching and displaying frames from the device.&lt;br /&gt;
	else&lt;br /&gt;
	{&lt;br /&gt;
		openniGrabber = new OpenNIGrabber();&lt;br /&gt;
		if (openniGrabber == 0)&lt;br /&gt;
			return -1;&lt;br /&gt;
		boost::function&amp;lt;void (const PointCloud&amp;lt;PointXYZRGBA&amp;gt;::ConstPtr&amp;amp;)&amp;gt; f =&lt;br /&gt;
			boost::bind(&amp;amp;grabberCallback, _1);&lt;br /&gt;
		openniGrabber-&amp;gt;registerCallback(f);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	viewer = createViewer();&lt;br /&gt;
&lt;br /&gt;
	if (justVisualize)&lt;br /&gt;
	{&lt;br /&gt;
		if (noColor)&lt;br /&gt;
			viewer-&amp;gt;showCloud(fallbackCloud);&lt;br /&gt;
		else viewer-&amp;gt;showCloud(cloudptr);&lt;br /&gt;
	}&lt;br /&gt;
	else openniGrabber-&amp;gt;start();&lt;br /&gt;
&lt;br /&gt;
	// Main loop.&lt;br /&gt;
	while (! viewer-&amp;gt;wasStopped())&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::seconds(1));&lt;br /&gt;
&lt;br /&gt;
	if (! justVisualize)&lt;br /&gt;
		openniGrabber-&amp;gt;stop();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save and close.&lt;br /&gt;
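&lt;br /&gt;
The ''main()'' above relies on a per-frame callback that the full listing defines near the top of the file. As a reminder of its role, here is a sketch (not necessarily the tutorial's exact code; ''saveCloud'' and ''filesSaved'' are assumed to be globals set elsewhere, e.g. by a keyboard handler): it receives one new cloud per frame, displays it, and saves it to disk when asked to.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Sketch of the per-frame callback registered with the OpenNI grabber.&lt;br /&gt;
void grabberCallback(const PointCloud&amp;lt;PointXYZRGBA&amp;gt;::ConstPtr&amp;amp; cloud)&lt;br /&gt;
{&lt;br /&gt;
	if (! viewer-&amp;gt;wasStopped())&lt;br /&gt;
		viewer-&amp;gt;showCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	if (saveCloud) // Flag set elsewhere, e.g. by a keyboard handler.&lt;br /&gt;
	{&lt;br /&gt;
		stringstream stream;&lt;br /&gt;
		stream &amp;lt;&amp;lt; &amp;quot;inputCloud&amp;quot; &amp;lt;&amp;lt; filesSaved &amp;lt;&amp;lt; &amp;quot;.pcd&amp;quot;;&lt;br /&gt;
		io::savePCDFileASCII(stream.str(), *cloud);&lt;br /&gt;
		filesSaved++;&lt;br /&gt;
		saveCloud = false;&lt;br /&gt;
	}&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;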
&lt;br /&gt;
==Compiling==&lt;br /&gt;
&lt;br /&gt;
Follow the same steps you used to build PCL. That is, create a new &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''build/''&amp;lt;/span&amp;gt; subdirectory next to the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''src/''&amp;lt;/span&amp;gt; one. Open a terminal there and issue:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake -DCMAKE_BUILD_TYPE=Release ..&lt;br /&gt;
make&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Executing==&lt;br /&gt;
&lt;br /&gt;
Still from the same terminal, run the compiled example program:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./openniViewer&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a few seconds, the main window will appear and the application will start grabbing frames from the device. You can inspect the current point cloud using the mouse, holding the left button to rotate, the right one (or the mouse wheel) to zoom, and the middle one to pan the camera around. At first, you may see only a black screen, or some big colored axes, but no cloud. Try zooming out to see the whole scene. Another useful key is &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''R'''&amp;lt;/span&amp;gt;, which will reset the camera parameters when pressed. Use it whenever you notice that zooming has gotten slow after some camera movement, or if you still cannot see the cloud. See the [http://pointclouds.org/documentation/tutorials/pcl_visualizer.php PCLVisualizer] tutorial for additional controls and features.&lt;br /&gt;
&lt;br /&gt;
Whenever you feel ready, press the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''SPACE'''&amp;lt;/span&amp;gt; key. The program will pause for a fraction of a second and the output &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Saved inputCloud0.pcd.&amp;quot;''&amp;lt;/span&amp;gt; will appear on the console. Check the current folder to see that file &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''inputCloud0.pcd''&amp;lt;/span&amp;gt; has indeed been written. You can now close the program with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Next, run it again giving the following parameter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./openniViewer -v inputCloud0.pcd&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells the program to read data not from the device, but from the saved point cloud file instead. After it loads, you will see the same scene you saved to disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: PCD data is saved relative to the sensor. No matter how much you have manipulated the view, it will reset to default when you load the file.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
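&lt;br /&gt;
The ''-v'' mode is essentially ''loadPCDFile'' plus the viewer. If you only want to inspect a saved cloud without displaying it, a few lines suffice. This standalone sketch (the hardcoded filename is just an example) assumes the cloud was saved with color information, as in the first mode:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Sketch: load a saved cloud and print its basic layout.&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main()&lt;br /&gt;
{&lt;br /&gt;
	pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;::Ptr cloud(new pcl::PointCloud&amp;lt;pcl::PointXYZRGBA&amp;gt;);&lt;br /&gt;
	if (pcl::io::loadPCDFile&amp;lt;pcl::PointXYZRGBA&amp;gt;(&amp;quot;inputCloud0.pcd&amp;quot;, *cloud) != 0)&lt;br /&gt;
		return -1;&lt;br /&gt;
&lt;br /&gt;
	// Organized clouds keep the sensor's 2D layout (height greater than 1).&lt;br /&gt;
	std::cout &amp;lt;&amp;lt; cloud-&amp;gt;width &amp;lt;&amp;lt; &amp;quot;x&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;height&lt;br /&gt;
		&amp;lt;&amp;lt; &amp;quot; (&amp;quot; &amp;lt;&amp;lt; cloud-&amp;gt;points.size() &amp;lt;&amp;lt; &amp;quot; points)&amp;quot; &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
	return 0;&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;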
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
At this point, your Kinect device should be working and getting depth data for you. There is a collection of excellent [http://pointclouds.org/documentation/tutorials/ tutorials] for PCL on the official webpage. I encourage you to finish them all before continuing your experiments with the Kinect sensor. You can also find a good introduction/tutorial to the PCL library [http://www.pointclouds.org/assets/uploads/PCL-IAS13_Introduction.pdf here].&lt;br /&gt;
&lt;br /&gt;
If you use an ASUS Xtion PRO device instead, you should have gotten everything to work without problems or additional steps (except maybe for [[PCL/OpenNI_troubleshooting#ASUS_Xtion_PRO | this one]]).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_1:_Installing_and_testing&amp;diff=4733</id>
		<title>PCL/OpenNI tutorial 1: Installing and testing</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_1:_Installing_and_testing&amp;diff=4733"/>
				<updated>2015-11-01T22:55:47Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Kinect.jpg|thumb|right|200px|Microsoft Kinect device.]]&lt;br /&gt;
&lt;br /&gt;
This series of tutorials will explain the usage of a depth camera like [https://en.wikipedia.org/wiki/Kinect Kinect] for &amp;quot;serious&amp;quot; research purposes. As you may know, Kinect is in fact an affordable depth sensor, developed with technology from [https://en.wikipedia.org/wiki/PrimeSense PrimeSense] and based on the infrared structured light method. It also has a common camera (which makes it an RGB-D device), a microphone and a motorized pivot. Its use is not limited to playing with an Xbox 360 console: you can plug it into a computer and use it like any other sensor. Many open-source drivers and frameworks are available.&lt;br /&gt;
&lt;br /&gt;
Since its release in November 2010, it has gained a lot of popularity, especially among the scientific community. Many researchers have acquired one because, despite its low cost (about $100), it has proven to be a powerful solution for depth sensing projects. Current research focuses on real-time surface mapping, object recognition and tracking, and localization. Impressive results (like the [http://research.microsoft.com/en-us/projects/surfacerecon/ KinectFusion] project from Microsoft) are already possible.&lt;br /&gt;
&lt;br /&gt;
The new [https://en.wikipedia.org/wiki/Xbox_One Xbox One] ships with an upgraded version, [https://en.wikipedia.org/wiki/Kinect_for_Xbox_One Kinect v2], with enhanced resolution that is able to detect your facial expression, measure your heart rate, and track every one of your fingers. A PC development-ready version ([https://en.wikipedia.org/wiki/Kinect_for_Xbox_One#Kinect_for_Windows Kinect for Windows v2]) was released in July 2014, but it could only be used with the official Windows SDK (open source support [https://github.com/OpenKinect/libfreenect2 exists] but is still young). In October 2014, an adapter that allows you to plug the standard Kinect v2 into a PC was released, so the development version of the sensor was discontinued in April 2015. Now you can just buy the [http://www.microsoftstore.com/store/msusa/en_US/pdp/Kinect-for-Xbox-One/productID.307499400 standard one] for the console plus the [http://www.microsoftstore.com/store/msusa/en_US/pdp/Kinect-Adapter-for-Windows/productID.308803600 adapter].&lt;br /&gt;
&lt;br /&gt;
I will explain the installation and usage of one of the &amp;quot;legacy&amp;quot; Kinect 1.0 devices with a common PC, and the possibilities it offers. I will do it in an easy-to-understand way, intended for students who have just acquired one and want to start from scratch.&lt;br /&gt;
&lt;br /&gt;
Keep in mind that the software that we are going to use (Point Cloud Library and OpenNI drivers) will also let you use any other device like the [https://www.asus.com/3D-Sensor/Xtion_PRO/ Xtion PRO] or [https://www.asus.com/3D-Sensor/Xtion_PRO_LIVE/ Xtion PRO LIVE] from ASUS (the PRO only has a depth sensor, the PRO LIVE is an RGB-D camera) without changing a line of code.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: The tutorials are written for Linux platforms. Also, 64-bit versions seem to work better than 32-bit.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Requirements=&lt;br /&gt;
&lt;br /&gt;
You will need the following:&lt;br /&gt;
&lt;br /&gt;
* A common Kinect device, out of the box. You can buy it in your local electronics shop, or online. It also includes a free copy of ''[https://en.wikipedia.org/wiki/Kinect_Adventures! Kinect Adventures!]'', which is useless if you do not own the console. Microsoft has released a ''Kinect for Windows'' device, a normal-looking Kinect no longer compatible with the Xbox 360, that will only work with their official SDK and is intended for developers only. Also, as I stated earlier, you can use an ASUS Xtion interchangeably.&lt;br /&gt;
* A computer running Linux (Debian or Ubuntu preferably).&lt;br /&gt;
* A medium-sized room. Kinect has some limitations for depth measurement: 40 cm minimum, 8 m maximum (in practice, count on 6 m).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: ''Kinect for Windows'' may have problems working with open source drivers on Linux.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
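&lt;br /&gt;
Those depth limits also matter later in code: once PCL is installed, measurements outside the reliable range are usually discarded before processing. As a sketch (assuming depth runs along the ''z'' axis of the sensor frame, in meters), PCL's PassThrough filter does exactly this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Sketch: keep only points within Kinect's reliable depth range.&lt;br /&gt;
#include &amp;lt;pcl/filters/passthrough.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/point_types.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
void clampDepth(pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;::Ptr input,&lt;br /&gt;
		pcl::PointCloud&amp;lt;pcl::PointXYZ&amp;gt;&amp;amp; output)&lt;br /&gt;
{&lt;br /&gt;
	pcl::PassThrough&amp;lt;pcl::PointXYZ&amp;gt; filter;&lt;br /&gt;
	filter.setInputCloud(input);&lt;br /&gt;
	filter.setFilterFieldName(&amp;quot;z&amp;quot;);     // Depth axis in the sensor frame.&lt;br /&gt;
	filter.setFilterLimits(0.4f, 6.0f); // 40 cm to 6 m, per the limits above.&lt;br /&gt;
	filter.filter(output);&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;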
&lt;br /&gt;
=Connecting everything=&lt;br /&gt;
&lt;br /&gt;
Kinect does not work with a common USB port. Its power consumption is a bit higher because of the motor, so Microsoft came up with a connector that combines USB and a power supply. Old Xbox 360 models needed a special adapter; the newer S models already have this port. Luckily, Kinect comes with the official adapter out of the box (otherwise you will have to [http://www.amazon.com/Power-Supply-Cable-Kinect-Xbox-360/dp/B004S7GA46/ref=sr_1_1?ie=UTF8&amp;amp;qid=1378288327&amp;amp;sr=8-1 buy one]).&lt;br /&gt;
&lt;br /&gt;
Just plug the adapter into any power socket, and the USB cable into your computer. Let's check that the device is detected by typing this in a terminal:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;lsusb&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output should list the following devices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;Bus 001 Device 005: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor&lt;br /&gt;
Bus 001 Device 006: ID 045e:02ad Microsoft Corp. Xbox NUI Audio&lt;br /&gt;
Bus 001 Device 007: ID 045e:02ae Microsoft Corp. Xbox NUI Camera&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are using an Xtion, you should see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;Bus 001 Device 004: ID 1d27:0601 ASUS&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;0601&amp;quot; identifies the new Xtion model, &amp;quot;0600&amp;quot; the older one. Both should work the same. But try to avoid [[PCL/OpenNI_troubleshooting#ASUS_Xtion_PRO|USB 3.0 ports]]!&lt;br /&gt;
&lt;br /&gt;
=Installing the software=&lt;br /&gt;
&lt;br /&gt;
There is more than one way to get your Kinect working on your PC, and start developing applications for it:&lt;br /&gt;
&lt;br /&gt;
* [https://www.microsoft.com/en-us/download/details.aspx?id=40278 Kinect for Windows SDK] and [https://www.microsoft.com/en-us/download/details.aspx?id=40276 Developer Toolkit]: released on June 16, 2011 as a non-commercial SDK intended for application development. Version 1.8, the last there will ever be now that Kinect v2 is out, was released in September 2013. Because it comes from Microsoft, it is obviously the easiest way to get everything working. Sadly, there is no Linux version.&lt;br /&gt;
* [https://github.com/OpenKinect/libfreenect libfreenect] library: from the [http://openkinect.org/wiki/Main_Page OpenKinect] project, it is intended to be a free and open source alternative to the official drivers. libfreenect is used by projects like [https://github.com/ofTheo/ofxKinect ofxKinect], an addon for the [http://www.openframeworks.cc/ openFrameworks] toolkit (and as of version 0.8, included in the core package) that runs on Linux and OS X. ofxKinect packs a nice example application to show the RGB and point cloud taken from Kinect.&lt;br /&gt;
* [https://github.com/PrimeSense/Sensor PrimeSense drivers]: they were released as open source after the [https://en.wikipedia.org/wiki/OpenNI OpenNI] project was created, along with the motion tracking middleware, ''NITE'', and the SDK. NI stands for Natural Interaction, and the project tried to enforce a common standard for human input using Kinect-like sensors. These official drivers are used by [http://wiki.ros.org/openni_kinect ROS] (the Robot Operating System, a massive collection of libraries and tools for robotics researchers) and [http://pointclouds.org/ PCL] (the Point Cloud Library, with everything needed for 3D point cloud processing). Sadly, version 2.0 of the OpenNI SDK dropped support for Kinect on Linux due to licensing issues, so for now PCL relies on legacy 1.x versions. Also, Apple bought PrimeSense in November 2013, and in April 2014 OpenNI's webpage was closed. The source is now being maintained by [http://structure.io/openni a third party].&lt;br /&gt;
* [https://github.com/avin2/SensorKinect SensorKinect]: a modified version of the official PrimeSense drivers, used for example by [https://github.com/gameoverhack/ofxOpenNI ofxOpenNI] (another openFrameworks addon). Last updated in 2012.&lt;br /&gt;
&lt;br /&gt;
For this tutorial, we are going to use PCL with the OpenNI drivers, so owners of an Xtion can also get it to work.&lt;br /&gt;
&lt;br /&gt;
==Precompiled PCL for Ubuntu==&lt;br /&gt;
&lt;br /&gt;
If you have a valid installation of [http://wiki.ros.org/indigo/Installation/Ubuntu ROS] (through their repository), you do not have to install anything. Both OpenNI and PrimeSense drivers, as well as PCL, should be already installed. You can check it with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install libpcl-1.7-all libpcl-1.7-all-dev libopenni-dev libopenni-sensor-primesense-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The previous command should report that all the packages are already installed (change the PCL version number as needed); if any are missing, it will install them. If you get an error about overwriting some file, check [[PCL/OpenNI_troubleshooting#Trying_to_overwrite_X.2C_which_is_also_in_package_Y|this]].&lt;br /&gt;
&lt;br /&gt;
In case you do not want ROS, there is a [https://launchpad.net/~v-launchpad-jochen-sprickerhof-de/+archive/ubuntu/pcl PPA] (Personal Package Archive, a private repository) which has everything we need. Add it to your sources, and install everything:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo add-apt-repository ppa:v-launchpad-jochen-sprickerhof-de/pcl&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install build-essential cmake libpcl1.7 libpcl-dev pcl-tools&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mixing ROS and PCL repositories and packages can cause errors, so choose one of them and stick with it. Check the [[PCL/OpenNI troubleshooting]] page, because your Kinect may not work by default on 32-bit systems.&lt;br /&gt;
&lt;br /&gt;
==Compiling PCL from source==&lt;br /&gt;
&lt;br /&gt;
For Linux distributions without a precompiled version of PCL, you will need to compile it yourself. This actually has advantages: you can customize the build options and choose what you want, the resulting binaries and libraries should be a bit faster, and you always get the latest version! The instructions are [http://pointclouds.org/documentation/tutorials/compiling_pcl_posix.php here], but the steps are easy so I will show them to you.&lt;br /&gt;
&lt;br /&gt;
===Installing the dependencies===&lt;br /&gt;
&lt;br /&gt;
Some of PCL's dependencies can be installed via the package manager. Others will require additional work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash enclose=&amp;quot;div&amp;quot;&amp;gt;sudo apt-get install build-essential cmake cmake-curses-gui libboost-all-dev libeigen3-dev libflann-dev libvtk5-dev libvtk5-qt4-dev libglew-dev libxmu-dev libsuitesparse-dev libqhull-dev libpcap-dev libxmu-dev libxi-dev libgtest-dev libqt4-opengl-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The trunk version of PCL uses [https://en.wikipedia.org/wiki/VTK VTK] 6 and [https://en.wikipedia.org/wiki/Qt_%28software%29 Qt] 5, so if you intend to compile it, you must install the following packages (say yes if you are asked to remove VTK 5 and Qt 4 packages):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash enclose=&amp;quot;div&amp;quot;&amp;gt;sudo apt-get install libvtk6-dev libqt5opengl5-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====JDK====&lt;br /&gt;
&lt;br /&gt;
OpenNI requires Sun's official JDK (Java Development Kit), which is no longer available in the official apt repositories. You can use [http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html unofficial ones], or do it manually. For the latter, go to the [http://www.oracle.com/technetwork/java/javase/downloads/index.html Java SE downloads] page (SE means Standard Edition) and download the latest version (e.g., &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''jdk-8u66-linux-x64.tar.gz''&amp;lt;/span&amp;gt;). Extract it, then move the contents to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/lib/jvm/''&amp;lt;/span&amp;gt; so it is available system-wide:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo mkdir -p /usr/lib/jvm/&lt;br /&gt;
sudo cp -r jdk1.8.0_66/ /usr/lib/jvm/&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, make it the default choice to compile and run Java programs. Remember to change the version number as needed!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo update-alternatives --install &amp;quot;/usr/bin/java&amp;quot; &amp;quot;java&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/java&amp;quot; 1&lt;br /&gt;
sudo update-alternatives --install &amp;quot;/usr/bin/javac&amp;quot; &amp;quot;javac&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/javac&amp;quot; 1&lt;br /&gt;
sudo update-alternatives --install &amp;quot;/usr/bin/jar&amp;quot; &amp;quot;jar&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/jar&amp;quot; 1&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To be sure, use the following commands to manually select the correct option, in case there is more than one choice:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo update-alternatives --config java&lt;br /&gt;
sudo update-alternatives --config javac&lt;br /&gt;
sudo update-alternatives --config jar&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are still not sure, run them and display the version, making sure it is the one you installed:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;java -version&lt;br /&gt;
javac -version&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sun's JDK is now installed.&lt;br /&gt;
&lt;br /&gt;
====OpenNI====&lt;br /&gt;
&lt;br /&gt;
PCL uses OpenNI and the PrimeSense drivers to get data from devices like Kinect or Xtion. It is optional, but it would not make much sense not to install it, would it? If you are using Ubuntu, add the [[#Precompiled PCL for Ubuntu | PPA]] and install &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-dev''&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-sensor-primesense-dev''&amp;lt;/span&amp;gt;, which should already be the case. Otherwise, fetch the [https://github.com/OpenNI/OpenNI OpenNI] and [https://github.com/PrimeSense/Sensor PrimeSense Sensor] sources from GitHub (download them as .zip; the link is on the right). Extract them and install the dependencies:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install python libusb-1.0-0-dev freeglut3-dev doxygen graphviz&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Go to the directory where you extracted OpenNI (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''OpenNI-master/''&amp;lt;/span&amp;gt; for me), and open a terminal in the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/CreateRedist/''&amp;lt;/span&amp;gt; subdirectory. Issue:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./RedistMaker&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When it finishes, and if there are no errors (check the [[PCL/OpenNI troubleshooting]] page if you get any), go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/Redist/OpenNI-Bin-Dev-Linux-x64-v1.5.7.10/''&amp;lt;/span&amp;gt; (or your equivalent), and install (you must be root):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ./install.sh&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, go to the directory where you extracted the PrimeSense drivers (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Sensor-master/''&amp;lt;/span&amp;gt; for me), and repeat the exact same procedure (go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/CreateRedist/''&amp;lt;/span&amp;gt;, issue &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''./RedistMaker''&amp;lt;/span&amp;gt;, go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/Redist/Sensor-Bin-Linux-x64-v5.1.6.6/''&amp;lt;/span&amp;gt;, issue &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''sudo ./install.sh''&amp;lt;/span&amp;gt;). Congratulations, you have now installed OpenNI.&lt;br /&gt;
&lt;br /&gt;
====CUDA====&lt;br /&gt;
&lt;br /&gt;
Like OpenNI, [https://en.wikipedia.org/wiki/CUDA nVidia CUDA] is an optional dependency that will allow PCL to use your GPU (Graphics Processing Unit, that is, your graphics card) for certain computations. This is mandatory for tools like KinFu (do not bother unless you have at least a series 400 card with 1.5 GB of VRAM).&lt;br /&gt;
&lt;br /&gt;
Some distributions provide packages for CUDA in the repositories. For example, in Ubuntu:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install nvidia-cuda-dev nvidia-cuda-toolkit&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to install it manually (which is incompatible with the previous method), go to the [https://developer.nvidia.com/cuda-downloads CUDA downloads] page, which is self-explanatory, and get the small .deb file or the huge .run toolkit/SDK installer file for your system (you should have installed the nVidia drivers already, but the installer will also do it for you if needed).&lt;br /&gt;
&lt;br /&gt;
If you chose the .deb file, install it using the method of your choice, like Gdebi package manager, or through console:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo dpkg -i &amp;lt;package.deb&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The .deb does not contain all the CUDA files; it just adds NVIDIA's repository to your software lists. Now you must install everything:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get update&lt;br /&gt;
sudo apt-get install cuda&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If, on the other hand, you downloaded the .run, give it execution permissions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;chmod +x cuda_7.0.28_linux.run&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And install it. You can use the default options, although if you have a working nVidia graphics driver for your card, you may want to say &amp;quot;no&amp;quot; when the installer offers to install it for you:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ./cuda_7.0.28_linux.run&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The environment variables need to be changed so your system can find CUDA's libraries and binaries. Open &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''/etc/ld.so.conf''&amp;lt;/span&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo nano /etc/ld.so.conf&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And append one of these two lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;/usr/local/cuda/lib64 # Add this on 64-bit only.&lt;br /&gt;
/usr/local/cuda/lib # Add this on 32-bit only.&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+O'''&amp;lt;/span&amp;gt; and Enter, exit with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+X'''&amp;lt;/span&amp;gt;. Reload the cache of the dynamic linker with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ldconfig&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, append CUDA's bin directory to your PATH. Do this by editing your local &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''.bashrc''&amp;lt;/span&amp;gt; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;nano ~/.bashrc&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And append this line:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;export PATH=$PATH:/usr/local/cuda/bin&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CUDA is now installed.&lt;br /&gt;
&lt;br /&gt;
===Getting the source===&lt;br /&gt;
&lt;br /&gt;
Every dependency is installed. Time to download PCL's source code. First, you must choose between the stable and the experimental branch of PCL. The stable branch is the latest official release and is guaranteed to work without problems. The experimental branch may occasionally give you a compilation error, but it also has interesting features that stable users will have to wait some months for. Apart from that, both are built the same way.&lt;br /&gt;
&lt;br /&gt;
To get the stable version, go to the [https://github.com/PointCloudLibrary/pcl/releases downloads] page, get &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''pcl-pcl-1.7.2.tar.gz''&amp;lt;/span&amp;gt; or whatever the latest release is, and extract it somewhere. For the experimental version, use Git:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install git&lt;br /&gt;
git clone https://github.com/PointCloudLibrary/pcl PCL-trunk-Source&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The compiled trunk version of PCL will take up more than 8 GB, so make sure you put the source folder in a partition with enough free space!&lt;br /&gt;
&lt;br /&gt;
===Compiling===&lt;br /&gt;
&lt;br /&gt;
Go to the PCL source directory (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''pcl-pcl-1.7.2/''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''PCL-trunk-Source/''&amp;lt;/span&amp;gt;), and create a new subdirectory to keep the build files in:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;mkdir build&lt;br /&gt;
cd build&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now it is time to configure the project using CMake. We will tell it to build in Release (fully optimized, no debug capabilities) mode now, and customize the rest of the options later:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake -DCMAKE_BUILD_TYPE=Release ..&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CMake should be able to find every dependency, thus being able to build every subsystem except for the ones marked as &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Disabled by default&amp;quot;''&amp;lt;/span&amp;gt;. If you are happy with the defaults, you can build now; otherwise, let's invoke CMake's curses interface to change a couple of things (mind the final dot):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;ccmake .&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Ccmake_GUI_PCL.png|thumb|center|900px|Interface of ''ccmake''.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here you can change the build options. The program usage can be found at the bottom of the screen. Try turning all functionality &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;ON&amp;quot;&amp;lt;/span&amp;gt;. The most important thing, in case you want to use CUDA, is to enable it and give CMake the path to your SDK. Go to the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CUDA_SDK_ROOT_DIR&amp;quot;&amp;lt;/span&amp;gt; option and enter the correct path (probably &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/local/cuda/''&amp;lt;/span&amp;gt; or similar).&lt;br /&gt;
&lt;br /&gt;
When you are done, press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; to configure and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''G'''&amp;lt;/span&amp;gt; to generate and exit the tool. Sometimes, the options you change can activate previously omitted parameters, or prompt some warning text. Just press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''E'''&amp;lt;/span&amp;gt; when you are finished reading the message, and keep pressing &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; until it lets you generate (new parameters will be marked with an asterisk, so you can check them and decide whether or not you want further customization).&lt;br /&gt;
&lt;br /&gt;
If you are done configuring, it is time to build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;make&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: Additionally, you can append the parameter -jX to speed up the compilation, X being the number of cores or processors of your PC, plus one.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Remember that, at any time, you can manually force the project to be reconfigured and built from scratch by emptying the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''build/''&amp;lt;/span&amp;gt; directory with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;rm -rf ./*&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Installing===&lt;br /&gt;
&lt;br /&gt;
It will take some time to compile PCL (up to a few hours if your PC is not powerful enough). When it is finished, install it system-wide with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo make install&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
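&lt;br /&gt;
Before rebooting, you can confirm that the install step copied the libraries into place. A sketch, assuming the default CMake install prefix of &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/local/''&amp;lt;/span&amp;gt; (yours may differ; the fallback message is only illustrative):&lt;br /&gt;
&lt;br /&gt;
```shell
# PCL's shared libraries should now live under the install prefix.
ls /usr/local/lib/libpcl_* 2>/dev/null \
    || echo "no PCL libraries under /usr/local/lib (different prefix?)"
```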
&lt;br /&gt;
And you should reboot and proceed to the next section, to see if your computer now recognizes (and uses) your Kinect device.&lt;br /&gt;
&lt;br /&gt;
=Testing (OpenNI viewer)=&lt;br /&gt;
&lt;br /&gt;
We are going to write a simple example program that will fetch data from the Kinect or Xtion and present it to the user, using the PCL library. It will also allow you to save the current frame (as a point cloud) to disk. If you feel lazy, you can download it [http://robotica.unileon.es/~victorm/PCL_openni_viewer.tar.gz here], and skip the next two sections. Otherwise, create a new directory anywhere on your hard disk and proceed.&lt;br /&gt;
&lt;br /&gt;
==CMakeLists.txt==&lt;br /&gt;
&lt;br /&gt;
Inside that directory, create a new text file named &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''CMakeLists.txt''&amp;lt;/span&amp;gt;. PCL-based programs use the CMake build system, too. Open it with any editor and paste the following content:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CMake&amp;gt;cmake_minimum_required(VERSION 2.8 FATAL_ERROR)&lt;br /&gt;
 &lt;br /&gt;
project(PCL_openni_viewer)&lt;br /&gt;
 &lt;br /&gt;
find_package(PCL 1.7 REQUIRED)&lt;br /&gt;
 &lt;br /&gt;
include_directories(${PCL_INCLUDE_DIRS})&lt;br /&gt;
link_directories(${PCL_LIBRARY_DIRS})&lt;br /&gt;
add_definitions(${PCL_DEFINITIONS})&lt;br /&gt;
 &lt;br /&gt;
set(PCL_BUILD_TYPE Release)&lt;br /&gt;
 &lt;br /&gt;
file(GLOB PCL_openni_viewer_SRC&lt;br /&gt;
    &amp;quot;src/*.h&amp;quot;&lt;br /&gt;
    &amp;quot;src/*.cpp&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
add_executable(openniViewer ${PCL_openni_viewer_SRC})&lt;br /&gt;
 &lt;br /&gt;
target_link_libraries (openniViewer ${PCL_LIBRARIES})&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CMake syntax is quite self-explanatory. We ask for CMake version 2.8, minimum. We declare a new project named &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCL_openni_viewer&amp;quot;&amp;lt;/span&amp;gt;. We tell CMake to check for the presence of PCL library development files, version 1.7. If our system cannot meet the CMake and PCL version requirements, the process will fail.&lt;br /&gt;
&lt;br /&gt;
Next, we feed the compiler and linker the directories where PCL includes and libraries can be found, and the defined symbols. We tell CMake to use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Release&amp;quot;&amp;lt;/span&amp;gt; build type, which will activate certain optimizations depending on the compiler we use. Other build types are available, like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Debug&amp;quot;&amp;lt;/span&amp;gt;, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;MinSizeRel&amp;quot;&amp;lt;/span&amp;gt;, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;RelWithDebInfo&amp;quot;&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Finally, we create a variable, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCL_openni_viewer_SRC&amp;quot;&amp;lt;/span&amp;gt;, that will store a list of files to be compiled (though we will only have one). We create a new binary to be compiled from these source files, and we link it with the PCL library.&lt;br /&gt;
&lt;br /&gt;
Check the [http://www.cmake.org/Wiki/CMake_Useful_Variables CMake help] for more interesting options.&lt;br /&gt;
&lt;br /&gt;
==main.cpp==&lt;br /&gt;
&lt;br /&gt;
We told CMake it could find the source files in a &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''src/''&amp;lt;/span&amp;gt; subdirectory, so let's keep our word and create it. Then, add a new &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''main.cpp''&amp;lt;/span&amp;gt; file inside and paste the following lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Original code by Geoffrey Biggs, taken from the PCL tutorial in&lt;br /&gt;
// http://pointclouds.org/documentation/tutorials/pcl_visualizer.php&lt;br /&gt;
&lt;br /&gt;
// Simple OpenNI viewer that also allows to write the current scene to a .pcd&lt;br /&gt;
// when pressing SPACE.&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;pcl/io/openni_grabber.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/console/parse.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
using namespace std;&lt;br /&gt;
using namespace pcl;&lt;br /&gt;
&lt;br /&gt;
PointCloud&amp;lt;PointXYZRGBA&amp;gt;::Ptr cloudptr(new PointCloud&amp;lt;PointXYZRGBA&amp;gt;); // A cloud that will store color info.&lt;br /&gt;
PointCloud&amp;lt;PointXYZ&amp;gt;::Ptr fallbackCloud(new PointCloud&amp;lt;PointXYZ&amp;gt;);    // A fallback cloud with just depth data.&lt;br /&gt;
boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt; viewer;                 // Point cloud viewer object.&lt;br /&gt;
Grabber* openniGrabber;                                               // OpenNI grabber that takes data from the device.&lt;br /&gt;
unsigned int filesSaved = 0;                                          // For the numbering of the clouds saved to disk.&lt;br /&gt;
bool saveCloud(false), noColor(false);                                // Program control.&lt;br /&gt;
&lt;br /&gt;
void&lt;br /&gt;
printUsage(const char* programName)&lt;br /&gt;
{&lt;br /&gt;
	cout &amp;lt;&amp;lt; &amp;quot;Usage: &amp;quot; &amp;lt;&amp;lt; programName &amp;lt;&amp;lt; &amp;quot; [options]&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;Options:\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t&amp;lt;none&amp;gt;     start capturing from an OpenNI device.\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t-v FILE    visualize the given .pcd file.\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t-h         shows this help.\n&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// This function is called every time the device has new data.&lt;br /&gt;
void&lt;br /&gt;
grabberCallback(const PointCloud&amp;lt;PointXYZRGBA&amp;gt;::ConstPtr&amp;amp; cloud)&lt;br /&gt;
{&lt;br /&gt;
	if (! viewer-&amp;gt;wasStopped())&lt;br /&gt;
		viewer-&amp;gt;showCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	if (saveCloud)&lt;br /&gt;
	{&lt;br /&gt;
		stringstream stream;&lt;br /&gt;
		stream &amp;lt;&amp;lt; &amp;quot;inputCloud&amp;quot; &amp;lt;&amp;lt; filesSaved &amp;lt;&amp;lt; &amp;quot;.pcd&amp;quot;;&lt;br /&gt;
		string filename = stream.str();&lt;br /&gt;
		if (io::savePCDFile(filename, *cloud, true) == 0)&lt;br /&gt;
		{&lt;br /&gt;
			filesSaved++;&lt;br /&gt;
			cout &amp;lt;&amp;lt; &amp;quot;Saved &amp;quot; &amp;lt;&amp;lt; filename &amp;lt;&amp;lt; &amp;quot;.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		}&lt;br /&gt;
		else PCL_ERROR(&amp;quot;Problem saving %s.\n&amp;quot;, filename.c_str());&lt;br /&gt;
&lt;br /&gt;
		saveCloud = false;&lt;br /&gt;
	}&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// For detecting when SPACE is pressed.&lt;br /&gt;
void&lt;br /&gt;
keyboardEventOccurred(const visualization::KeyboardEvent&amp;amp; event,&lt;br /&gt;
					  void* nothing)&lt;br /&gt;
{&lt;br /&gt;
	if (event.getKeySym() == &amp;quot;space&amp;quot; &amp;amp;&amp;amp; event.keyDown())&lt;br /&gt;
		saveCloud = true;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Creates, initializes and returns a new viewer.&lt;br /&gt;
boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt;&lt;br /&gt;
createViewer()&lt;br /&gt;
{&lt;br /&gt;
	boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt; v&lt;br /&gt;
	(new visualization::CloudViewer(&amp;quot;OpenNI viewer&amp;quot;));&lt;br /&gt;
	v-&amp;gt;registerKeyboardCallback(keyboardEventOccurred);&lt;br /&gt;
&lt;br /&gt;
	return (v);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	if (console::find_argument(argc, argv, &amp;quot;-h&amp;quot;) &amp;gt;= 0)&lt;br /&gt;
	{&lt;br /&gt;
		printUsage(argv[0]);&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	bool justVisualize(false);&lt;br /&gt;
	string filename;&lt;br /&gt;
	if (console::find_argument(argc, argv, &amp;quot;-v&amp;quot;) &amp;gt;= 0)&lt;br /&gt;
	{&lt;br /&gt;
		if (argc != 3)&lt;br /&gt;
		{&lt;br /&gt;
			printUsage(argv[0]);&lt;br /&gt;
			return -1;&lt;br /&gt;
		}&lt;br /&gt;
&lt;br /&gt;
		filename = argv[2];&lt;br /&gt;
		justVisualize = true;&lt;br /&gt;
	}&lt;br /&gt;
	else if (argc != 1)&lt;br /&gt;
	{&lt;br /&gt;
		printUsage(argv[0]);&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// First mode, open and show a cloud from disk.&lt;br /&gt;
	if (justVisualize)&lt;br /&gt;
	{&lt;br /&gt;
		// Try with color information...&lt;br /&gt;
		try&lt;br /&gt;
		{&lt;br /&gt;
			io::loadPCDFile&amp;lt;PointXYZRGBA&amp;gt;(filename.c_str(), *cloudptr);&lt;br /&gt;
		}&lt;br /&gt;
		catch (const PCLException&amp;amp; e1)&lt;br /&gt;
		{&lt;br /&gt;
			try&lt;br /&gt;
			{&lt;br /&gt;
				// ...and if it fails, fall back to just depth.&lt;br /&gt;
				io::loadPCDFile&amp;lt;PointXYZ&amp;gt;(filename.c_str(), *fallbackCloud);&lt;br /&gt;
			}&lt;br /&gt;
			catch (const PCLException&amp;amp; e2)&lt;br /&gt;
			{&lt;br /&gt;
				return -1;&lt;br /&gt;
			}&lt;br /&gt;
&lt;br /&gt;
			noColor = true;&lt;br /&gt;
		}&lt;br /&gt;
&lt;br /&gt;
		cout &amp;lt;&amp;lt; &amp;quot;Loaded &amp;quot; &amp;lt;&amp;lt; filename &amp;lt;&amp;lt; &amp;quot;.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		if (noColor)&lt;br /&gt;
			cout &amp;lt;&amp;lt; &amp;quot;This cloud has no RGBA color information present.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		else cout &amp;lt;&amp;lt; &amp;quot;This cloud has RGBA color information present.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
	}&lt;br /&gt;
	// Second mode, start fetching and displaying frames from the device.&lt;br /&gt;
	else&lt;br /&gt;
	{&lt;br /&gt;
		openniGrabber = new OpenNIGrabber();&lt;br /&gt;
		if (openniGrabber == 0)&lt;br /&gt;
			return -1;&lt;br /&gt;
		boost::function&amp;lt;void (const PointCloud&amp;lt;PointXYZRGBA&amp;gt;::ConstPtr&amp;amp;)&amp;gt; f =&lt;br /&gt;
			boost::bind(&amp;amp;grabberCallback, _1);&lt;br /&gt;
		openniGrabber-&amp;gt;registerCallback(f);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	viewer = createViewer();&lt;br /&gt;
&lt;br /&gt;
	if (justVisualize)&lt;br /&gt;
	{&lt;br /&gt;
		if (noColor)&lt;br /&gt;
			viewer-&amp;gt;showCloud(fallbackCloud);&lt;br /&gt;
		else viewer-&amp;gt;showCloud(cloudptr);&lt;br /&gt;
	}&lt;br /&gt;
	else openniGrabber-&amp;gt;start();&lt;br /&gt;
&lt;br /&gt;
	// Main loop.&lt;br /&gt;
	while (! viewer-&amp;gt;wasStopped())&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::seconds(1));&lt;br /&gt;
&lt;br /&gt;
	if (! justVisualize)&lt;br /&gt;
		openniGrabber-&amp;gt;stop();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save and close.&lt;br /&gt;
&lt;br /&gt;
==Compiling==&lt;br /&gt;
&lt;br /&gt;
Follow the same steps you used to build PCL. That is, create a new &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''build/''&amp;lt;/span&amp;gt; subdirectory next to the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''src/''&amp;lt;/span&amp;gt; one. Open a terminal there and issue:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake -DCMAKE_BUILD_TYPE=Release ..&lt;br /&gt;
make&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Executing==&lt;br /&gt;
&lt;br /&gt;
Still from the same terminal, run the compiled example program:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./openniViewer&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a few seconds, the main window will appear and the application will start grabbing frames from the device. You can inspect the current point cloud using the mouse, holding the left button to rotate, the right one (or the mouse wheel) to zoom, and the middle one to pan the camera around. At first, you may see only a black screen, or some big colored axes, but no cloud. Try zooming out, to see the whole scene. Another useful key is &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''R'''&amp;lt;/span&amp;gt;, which will reset the camera parameters when pressed. Use it whenever you notice that zooming has gotten slow after some camera movement, or if you still cannot see the cloud. See the [http://pointclouds.org/documentation/tutorials/pcl_visualizer.php PCLVisualizer] tutorial for additional controls and features.&lt;br /&gt;
&lt;br /&gt;
Whenever you feel ready, press the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''SPACE'''&amp;lt;/span&amp;gt; key. The program will pause for a fraction of a second and the output &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Saved inputCloud0.pcd.&amp;quot;''&amp;lt;/span&amp;gt; will appear on the console. Check the current folder to see that file &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''inputCloud0.pcd''&amp;lt;/span&amp;gt; has indeed been written. You can now close the program with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt;.&lt;br /&gt;
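&lt;br /&gt;
Each press of the key writes a new numbered file, so you can list everything captured so far from the same terminal. A sketch (the fallback message is only illustrative):&lt;br /&gt;
&lt;br /&gt;
```shell
# List the clouds saved so far, or print a note if there are none yet.
ls inputCloud*.pcd 2>/dev/null || echo "no saved clouds yet"
```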
&lt;br /&gt;
Next, run it again, passing the following parameter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./openniViewer -v inputCloud0.pcd&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells the program to take data not from the device, but from the saved point cloud file instead. After it loads, you will see the same scene you saved to disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: PCD data is saved relative to the sensor. No matter how much you have manipulated the view, it will reset to default when you load the file.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
At this point, your Kinect device should be working and getting depth data for you. There is a collection of excellent [http://pointclouds.org/documentation/tutorials/ tutorials] for PCL on the official webpage. I encourage you to finish them all before proceeding with your experiments with the Kinect sensor. You can also find a good introduction/tutorial to the PCL library [http://www.pointclouds.org/assets/uploads/PCL-IAS13_Introduction.pdf here].&lt;br /&gt;
&lt;br /&gt;
If you use an ASUS Xtion PRO device instead, you should have gotten everything to work without problems or additional steps (except maybe for [[PCL/OpenNI_troubleshooting#ASUS_Xtion_PRO | this one]]).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_1:_Installing_and_testing&amp;diff=4732</id>
		<title>PCL/OpenNI tutorial 1: Installing and testing</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_1:_Installing_and_testing&amp;diff=4732"/>
				<updated>2015-11-01T22:53:30Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Kinect.jpg|thumb|right|200px|Microsoft Kinect device.]]&lt;br /&gt;
&lt;br /&gt;
This series of tutorials will explain the usage of a depth camera like [https://en.wikipedia.org/wiki/Kinect Kinect] for &amp;quot;serious&amp;quot; research purposes. As you may know, Kinect is in fact an affordable depth sensor, developed with technology from [https://en.wikipedia.org/wiki/PrimeSense PrimeSense] and based on an infrared structured-light method. It also has a common camera (which makes it an RGB-D device), a microphone and a motorized pivot. Its use is not limited to playing with an Xbox 360 console: you can plug it into a computer and use it like any other sensor. Many open-source drivers and frameworks are available.&lt;br /&gt;
&lt;br /&gt;
Since its release in November 2010, it has gained a lot of popularity, especially among the scientific community. Many researchers have acquired one because, despite the low cost (about $100), it has proven to be a powerful solution for depth sensing projects. Current investigations focus on real-time surface mapping, object recognition and tracking, and localization. Impressive results (like the [http://research.microsoft.com/en-us/projects/surfacerecon/ KinectFusion] project from Microsoft) are already possible.&lt;br /&gt;
&lt;br /&gt;
The new [https://en.wikipedia.org/wiki/Xbox_One Xbox One] ships with an upgraded version, [https://en.wikipedia.org/wiki/Kinect_for_Xbox_One Kinect v2], with enhanced resolution that is able to detect your facial expression, measure your heart rate, and track every one of your fingers. A PC development-ready version ([https://en.wikipedia.org/wiki/Kinect_for_Xbox_One#Kinect_for_Windows Kinect for Windows v2]) was released in July 2014, but it could only be used with the official Windows SDK (open source support [https://github.com/OpenKinect/libfreenect2 exists] but is still young). In October 2014 an adapter that allows plugging the standard Kinect v2 into a PC was released, so the development version of the sensor was discontinued in April 2015. Now you can just buy the [http://www.microsoftstore.com/store/msusa/en_US/pdp/Kinect-for-Xbox-One/productID.307499400 standard one] for the console plus the [http://www.microsoftstore.com/store/msusa/en_US/pdp/Kinect-Adapter-for-Windows/productID.308803600 adapter].&lt;br /&gt;
&lt;br /&gt;
I will explain the installation and usage of one of the &amp;quot;legacy&amp;quot; Kinect 1.0 devices with a common PC, and the possibilities it offers. I will do it in an easy-to-understand way, intended for students who have just acquired one and want to start from scratch.&lt;br /&gt;
&lt;br /&gt;
Keep in mind that the software that we are going to use (Point Cloud Library and OpenNI drivers) will also let you use any other device like the [https://www.asus.com/3D-Sensor/Xtion_PRO/ Xtion PRO] or [https://www.asus.com/3D-Sensor/Xtion_PRO_LIVE/ Xtion PRO LIVE] from ASUS (the PRO only has a depth sensor; the PRO LIVE is an RGB-D camera) without changing a line of code.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: The tutorials are written for Linux platforms. Also, 64-bit versions seem to work better than 32-bit.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Requirements=&lt;br /&gt;
&lt;br /&gt;
You will need the following:&lt;br /&gt;
&lt;br /&gt;
* A common Kinect device, out of the box. You can buy it in your local electronics shop, or online. It also includes a free copy of ''[https://en.wikipedia.org/wiki/Kinect_Adventures! Kinect Adventures!]'', which is useless if you do not own the console. Microsoft has released a ''Kinect for Windows'' device, which is a normal-looking Kinect, no longer compatible with the Xbox 360, that will only work with their official SDK and is intended for developers only. Also, as I stated earlier, you can use an ASUS Xtion interchangeably.&lt;br /&gt;
* A computer running Linux (Debian or Ubuntu preferably).&lt;br /&gt;
* A medium-sized room. Kinect has some limitations for depth measurement: 40 cm minimum, 8 m maximum (6 m in practice).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: ''Kinect for Windows'' may have problems working with open source drivers on Linux.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Connecting everything=&lt;br /&gt;
&lt;br /&gt;
Kinect does not work with a common USB port. Its power consumption is a bit higher because of the motor, so Microsoft came up with a connector that combines USB and power supply. Old Xbox 360 models needed a special adapter; newer S models already have this port. Luckily, Kinect comes with the official adapter out of the box (otherwise you will have to [http://www.amazon.com/Power-Supply-Cable-Kinect-Xbox-360/dp/B004S7GA46/ref=sr_1_1?ie=UTF8&amp;amp;qid=1378288327&amp;amp;sr=8-1 buy one]).&lt;br /&gt;
&lt;br /&gt;
Just plug the adapter into any power socket, and the USB connector into your computer. Let's check by typing this in a terminal:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;lsusb&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output should list the following devices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;Bus 001 Device 005: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor&lt;br /&gt;
Bus 001 Device 006: ID 045e:02ad Microsoft Corp. Xbox NUI Audio&lt;br /&gt;
Bus 001 Device 007: ID 045e:02ae Microsoft Corp. Xbox NUI Camera&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are using a Xtion, you should see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;Bus 001 Device 004: ID 1d27:0601 ASUS&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;0601&amp;quot; identifies the new Xtion model, &amp;quot;0600&amp;quot; the older one. Both should work the same. But try to avoid [[PCL/OpenNI_troubleshooting#ASUS_Xtion_PRO|USB 3.0 ports]]!&lt;br /&gt;
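&lt;br /&gt;
To avoid scanning the output by eye, you can grep for the vendor and product IDs quoted above. A sketch (the ID list matches the devices shown in this section; the fallback message is only illustrative):&lt;br /&gt;
&lt;br /&gt;
```shell
# Look for Kinect (045e:02ad/02ae/02b0) or Xtion (1d27:0600/0601) IDs.
lsusb 2>/dev/null | grep -E '045e:02(ad|ae|b0)|1d27:060[01]' \
    || echo "No Kinect or Xtion found on the USB bus"
```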
&lt;br /&gt;
=Installing the software=&lt;br /&gt;
&lt;br /&gt;
There is more than one way to get your Kinect working on your PC, and start developing applications for it:&lt;br /&gt;
&lt;br /&gt;
* [http://www.microsoft.com/en-us/download/details.aspx?id=40278 Kinect for Windows SDK] and [http://www.microsoft.com/en-us/download/details.aspx?id=40276 Developer Toolkit]: released on June 16, 2011 as a non-commercial SDK intended for application development. Version 1.8, the last there will ever be now that Kinect v2 is out, was released on September 2013. Because it comes from Microsoft, it is obviously the easiest way to get everything working. Sadly, there is no Linux version.&lt;br /&gt;
* [https://github.com/OpenKinect/libfreenect libfreenect] library: from the [http://openkinect.org/wiki/Main_Page OpenKinect] project, it is intended to be a free and open source alternative to the official drivers. libfreenect is used by projects like [https://github.com/ofTheo/ofxKinect ofxKinect], an addon for the [http://www.openframeworks.cc/ openFrameworks] toolkit (and as of version 0.8, included in the core package) that runs on Linux and OS X. ofxKinect packs a nice example application to show the RGB and point cloud taken from Kinect.&lt;br /&gt;
* [https://github.com/PrimeSense/Sensor PrimeSense drivers]: they were released as open source after the [http://en.wikipedia.org/wiki/OpenNI OpenNI] project was created, along with the motion tracking middleware, ''NITE'', and the SDK. NI stands for Natural Interaction, and the project tried to establish a common standard for human input using Kinect-like sensors. These official drivers are used by [http://wiki.ros.org/openni_kinect ROS] (the Robot Operating System, a massive collection of libraries and tools for robotics researchers) and [http://pointclouds.org/ PCL] (the Point Cloud Library, with everything needed for 3D point cloud processing). Sadly, version 2.0 of the OpenNI SDK dropped support for Kinect on Linux due to licensing issues, so for now PCL relies on legacy 1.x versions. Also, Apple bought PrimeSense in November 2013, and in April 2014 OpenNI's webpage was closed. The source is now being maintained by [http://structure.io/openni a third party].&lt;br /&gt;
* [https://github.com/avin2/SensorKinect SensorKinect]: a modified version of the official PrimeSense drivers, used for example by [https://github.com/gameoverhack/ofxOpenNI ofxOpenNI] (another openFrameworks addon). Last updated on 2012.&lt;br /&gt;
&lt;br /&gt;
For this tutorial, we are going to use PCL with the OpenNI drivers, so owners of a Xtion can also get it to work.&lt;br /&gt;
&lt;br /&gt;
==Precompiled PCL for Ubuntu==&lt;br /&gt;
&lt;br /&gt;
If you have a valid installation of [http://wiki.ros.org/indigo/Installation/Ubuntu ROS] (through their repository), you do not have to install anything. Both OpenNI and PrimeSense drivers, as well as PCL, should be already installed. You can check it with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install libpcl-1.7-all libpcl-1.7-all-dev libopenni-dev libopenni-sensor-primesense-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The previous command should state that all packages are already installed (change the PCL version number as needed); if they are not, it will install them. If you get an error about overwriting some file, check [[PCL/OpenNI_troubleshooting#Trying_to_overwrite_X.2C_which_is_also_in_package_Y|this]].&lt;br /&gt;
&lt;br /&gt;
In case you do not want ROS, there is a [https://launchpad.net/~v-launchpad-jochen-sprickerhof-de/+archive/ubuntu/pcl PPA] (Personal Package Archive, a private repository) which has everything we need. Add it to your sources, and install everything:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo add-apt-repository ppa:v-launchpad-jochen-sprickerhof-de/pcl&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install build-essential cmake libpcl1.7 libpcl-dev pcl-tools&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
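&lt;br /&gt;
Afterwards, you can ask pkg-config which PCL version was installed. A sketch; the module name ''pcl_common-1.7'' is an assumption and may differ between packagings, and the fallback message is only illustrative:&lt;br /&gt;
&lt;br /&gt;
```shell
# Query the installed PCL version; fall back to a note if the
# pcl_common-1.7 pkg-config module is not present on this system.
pkg-config --modversion pcl_common-1.7 2>/dev/null \
    || echo "pcl_common-1.7 not visible to pkg-config"
```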
&lt;br /&gt;
Trying to mix ROS and PCL repositories and packages can cause some errors, so choose one of them and stick with it. Check the [[PCL/OpenNI troubleshooting]] page because your Kinect may not work by default on 32-bit systems.&lt;br /&gt;
&lt;br /&gt;
==Compiling PCL from source==&lt;br /&gt;
&lt;br /&gt;
For Linux distributions without a precompiled version of PCL, you will need to compile it yourself. This has an advantage, actually: you can customize the build options and choose what you want. Also, the resulting binaries and libraries should be a bit faster. And you always get the latest version! The instructions are [http://pointclouds.org/documentation/tutorials/compiling_pcl_posix.php here], but the steps are easy so I will show them to you.&lt;br /&gt;
&lt;br /&gt;
===Installing the dependencies===&lt;br /&gt;
&lt;br /&gt;
Some of PCL's dependencies can be installed via the package manager. Others will require additional work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash enclose=&amp;quot;div&amp;quot;&amp;gt;sudo apt-get install build-essential cmake cmake-curses-gui libboost-all-dev libeigen3-dev libflann-dev libvtk5-dev libvtk5-qt4-dev libglew-dev libxmu-dev libsuitesparse-dev libqhull-dev libpcap-dev libxmu-dev libxi-dev libgtest-dev libqt4-opengl-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The trunk version of PCL uses [http://en.wikipedia.org/wiki/VTK VTK] 6 and [http://en.wikipedia.org/wiki/Qt_%28software%29 Qt] 5, so if you intend to compile it, you must install the following packages (say yes if you are asked to remove VTK 5 and Qt 4 packages):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash enclose=&amp;quot;div&amp;quot;&amp;gt;sudo apt-get install libvtk6-dev libqt5opengl5-dev&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====JDK====&lt;br /&gt;
&lt;br /&gt;
OpenNI requires Sun's official JDK (Java Development Kit), which is no longer available in the official apt repositories. You can use [http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html unofficial ones], or do it manually. For the latter, go to the [http://www.oracle.com/technetwork/java/javase/downloads/index.html Java SE downloads] page (SE means Standard Edition) and download the latest version (e.g., &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''jdk-8u66-linux-x64.tar.gz''&amp;lt;/span&amp;gt;). Extract it, then move the contents to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/lib/jvm/''&amp;lt;/span&amp;gt; so it is available system-wide:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo mkdir -p /usr/lib/jvm/&lt;br /&gt;
sudo cp -r jdk1.8.0_66/ /usr/lib/jvm/&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, make it the default choice to compile and run Java programs. Remember to change the version number as needed!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo update-alternatives --install &amp;quot;/usr/bin/java&amp;quot; &amp;quot;java&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/java&amp;quot; 1&lt;br /&gt;
sudo update-alternatives --install &amp;quot;/usr/bin/javac&amp;quot; &amp;quot;javac&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/javac&amp;quot; 1&lt;br /&gt;
sudo update-alternatives --install &amp;quot;/usr/bin/jar&amp;quot; &amp;quot;jar&amp;quot; &amp;quot;/usr/lib/jvm/jdk1.8.0_66/bin/jar&amp;quot; 1&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
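&lt;br /&gt;
Some build scripts locate the JDK through the ''JAVA_HOME'' variable rather than the alternatives system; you can export it as well. A sketch, using the same install path as above (adjust the version number as needed):&lt;br /&gt;
&lt;br /&gt;
```shell
# Point JAVA_HOME at the JDK copied to /usr/lib/jvm/ and put its
# tools first on the PATH (adjust the version number as needed).
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_66
export PATH="$JAVA_HOME/bin:$PATH"
echo "JAVA_HOME is $JAVA_HOME"
```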
&lt;br /&gt;
To be sure, use the following commands to manually select the correct option, in case there is more than one choice:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo update-alternatives --config java&lt;br /&gt;
sudo update-alternatives --config javac&lt;br /&gt;
sudo update-alternatives --config jar&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are still not sure, run them and display the version, making sure it is the one you installed:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;java -version&lt;br /&gt;
javac -version&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sun's JDK is now installed.&lt;br /&gt;
&lt;br /&gt;
====OpenNI====&lt;br /&gt;
&lt;br /&gt;
PCL uses OpenNI and the PrimeSense drivers to get data from devices like Kinect or Xtion. It is optional, but it would not make much sense not to install it, would it? If you are using Ubuntu, add the [[#Precompiled PCL for Ubuntu | PPA]] and install &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-dev''&amp;lt;/span&amp;gt; and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''libopenni-sensor-primesense-dev''&amp;lt;/span&amp;gt;, which should be done already. Otherwise, fetch the [https://github.com/OpenNI/OpenNI OpenNI] and [https://github.com/PrimeSense/Sensor PrimeSense Sensor] sources from GitHub (download them as .zip, the link is on the right). Extract them, and install the dependencies:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install python libusb-1.0-0-dev freeglut3-dev doxygen graphviz&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Go to the directory where you extracted OpenNI (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''OpenNI-master/''&amp;lt;/span&amp;gt; for me), and open a terminal in the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/CreateRedist/''&amp;lt;/span&amp;gt; subdirectory. Issue:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./RedistMaker&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When it finishes, and if there are no errors (check the [[PCL/OpenNI troubleshooting]] page if you get any), go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/Redist/OpenNI-Bin-Dev-Linux-x64-v1.5.7.10/''&amp;lt;/span&amp;gt; (or your equivalent), and install (you must be root):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ./install.sh&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, go to the directory where you extracted the PrimeSense drivers (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Sensor-master/''&amp;lt;/span&amp;gt; for me), and repeat the exact same procedure (go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/CreateRedist/''&amp;lt;/span&amp;gt;, issue &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''./RedistMaker''&amp;lt;/span&amp;gt;, go to &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''Platform/Linux/Redist/Sensor-Bin-Linux-x64-v5.1.6.6/''&amp;lt;/span&amp;gt;, issue &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;''sudo ./install.sh''&amp;lt;/span&amp;gt;). Congratulations, you have now installed OpenNI.&lt;br /&gt;
&lt;br /&gt;
====CUDA====&lt;br /&gt;
&lt;br /&gt;
Like OpenNI, [http://en.wikipedia.org/wiki/CUDA nVidia CUDA] is an optional dependency that allows PCL to use your GPU (Graphics Processing Unit, that is, your graphics card) for certain computations. It is mandatory for tools like KinFu (do not bother unless you have at least a GeForce 400 series card with 1.5 GB of VRAM).&lt;br /&gt;
&lt;br /&gt;
Some distributions provide packages for CUDA in the repositories. For example, in Ubuntu:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install nvidia-cuda-dev nvidia-cuda-toolkit&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you prefer to install it manually (which is incompatible with the previous method), go to the [https://developer.nvidia.com/cuda-downloads CUDA downloads] page, which is self-explanatory, and get either the small .deb file or the huge .run toolkit/SDK installer for your system (you should already have the nVidia drivers installed, but the installer can also do that for you if needed).&lt;br /&gt;
&lt;br /&gt;
If you chose the .deb file, install it using the method of your choice, like Gdebi package manager, or through console:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo dpkg -i &amp;lt;package.deb&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The .deb does not contain CUDA itself; it just adds nVidia's repository to your software sources. You still have to install everything:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get update&lt;br /&gt;
sudo apt-get install cuda&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If, on the other hand, you downloaded the .run, give it execution permissions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;chmod +x cuda_7.0.28_linux.run&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And install it. You can use the default options, although if you have a working nVidia graphics driver for your card, you may want to say &amp;quot;no&amp;quot; when the installer offers to install it for you:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ./cuda_7.0.28_linux.run&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The environment variables need to be changed so your system can find CUDA's libraries and binaries. Open &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''/etc/ld.so.conf''&amp;lt;/span&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo nano /etc/ld.so.conf&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And append one of these two lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;/usr/local/cuda/lib64 # Add this on 64-bit only.&lt;br /&gt;
/usr/local/cuda/lib # Add this on 32-bit only.&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+O'''&amp;lt;/span&amp;gt; and Enter, exit with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Ctrl+X'''&amp;lt;/span&amp;gt;. Reload the cache of the dynamic linker with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo ldconfig&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, append CUDA's bin directory to your PATH. Do this by editing your local &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''.bashrc''&amp;lt;/span&amp;gt; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;nano ~/.bashrc&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And append this line:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;export PATH=$PATH:/usr/local/cuda/bin&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CUDA is now installed.&lt;br /&gt;
&lt;br /&gt;
===Getting the source===&lt;br /&gt;
&lt;br /&gt;
Every dependency is installed. Time to download PCL's source code. First, you must choose whether to install the stable or the experimental branch of PCL. The stable branch is the latest official release and should work without problems. The experimental branch may occasionally fail to compile, but it offers interesting features that stable users will have to wait some months for. Apart from that, both are built the same way.&lt;br /&gt;
&lt;br /&gt;
To get the stable version, go to the [https://github.com/PointCloudLibrary/pcl/releases downloads] page, get &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''pcl-pcl-1.7.2.tar.gz''&amp;lt;/span&amp;gt; or whatever the latest release is, and extract it somewhere. For the experimental version, use Git:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo apt-get install git&lt;br /&gt;
git clone https://github.com/PointCloudLibrary/pcl PCL-trunk-Source&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The compiled trunk version of PCL will take up more than 8 GB, so make sure you put the source folder in a partition with enough free space!&lt;br /&gt;
&lt;br /&gt;
===Compiling===&lt;br /&gt;
&lt;br /&gt;
Go to the PCL source directory (&amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''pcl-pcl-1.7.2/''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''PCL-trunk-Source/''&amp;lt;/span&amp;gt;), and create a new subdirectory to keep the build files in:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;mkdir build&lt;br /&gt;
cd build&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now it is time to configure the project using CMake. We will tell it to build in Release (fully optimized, no debug capabilities) mode now, and customize the rest of the options later:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake -DCMAKE_BUILD_TYPE=Release ..&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CMake should be able to find every dependency, thus being able to build every subsystem except for the ones marked as &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Disabled by default&amp;quot;''&amp;lt;/span&amp;gt;. If you are happy with the defaults, you can build now; otherwise, let's invoke CMake's curses interface to change a couple of things (mind the final dot):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;ccmake .&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Ccmake_GUI_PCL.png|thumb|center|900px|Interface of ''ccmake''.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here you can change the build options. The program usage can be found at the bottom of the screen. Try turning all functionality &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;ON&amp;quot;&amp;lt;/span&amp;gt;. The most important thing, in case you want to use CUDA, is to enable it and give CMake the path to your SDK. Go to the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;CUDA_SDK_ROOT_DIR&amp;quot;&amp;lt;/span&amp;gt; option and enter the correct path (probably &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''/usr/local/cuda/''&amp;lt;/span&amp;gt; or similar).&lt;br /&gt;
&lt;br /&gt;
When you are done, press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; to configure and &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''G'''&amp;lt;/span&amp;gt; to generate and exit the tool. Sometimes, the options you change can activate previously omitted parameters, or prompt some warning text. Just press &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''E'''&amp;lt;/span&amp;gt; when you are finished reading the message, and keep pressing &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''C'''&amp;lt;/span&amp;gt; until it lets you generate (new parameters will be marked with an asterisk, so you can check them and decide whether or not you want further customization).&lt;br /&gt;
&lt;br /&gt;
If you are done configuring, it is time to build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;make&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: Additionally, you can append the parameter -jX to speed up the compilation, X being the number of cores or processors of your PC, plus one.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Remember that, at any time, you can manually force the project to be reconfigured and built from scratch by emptying the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''build/''&amp;lt;/span&amp;gt; directory with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;rm -rf ./*&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Installing===&lt;br /&gt;
&lt;br /&gt;
It will take some time to compile PCL (up to a few hours if your PC is not powerful enough). When it is finished, install it system-wide with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;sudo make install&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reboot, then proceed to the next section to check whether your computer now recognizes (and uses) your Kinect device.&lt;br /&gt;
&lt;br /&gt;
=Testing (OpenNI viewer)=&lt;br /&gt;
&lt;br /&gt;
We are going to write a simple example program that will fetch data from the Kinect or Xtion and present it to the user, using the PCL library. It will also let you save the current frame (as a point cloud) to disk. If you feel lazy, you can download it [http://robotica.unileon.es/~victorm/PCL_openni_viewer.tar.gz here], and skip the next two sections. Otherwise, create a new directory anywhere in your hard disk and proceed.&lt;br /&gt;
&lt;br /&gt;
==CMakeLists.txt==&lt;br /&gt;
&lt;br /&gt;
Inside that directory, create a new text file named &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''CMakeLists.txt''&amp;lt;/span&amp;gt;. PCL-based programs use the CMake build system, too. Open it with any editor and paste the following content:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CMake&amp;gt;cmake_minimum_required(VERSION 2.8 FATAL_ERROR)&lt;br /&gt;
 &lt;br /&gt;
project(PCL_openni_viewer)&lt;br /&gt;
 &lt;br /&gt;
find_package(PCL 1.7 REQUIRED)&lt;br /&gt;
 &lt;br /&gt;
include_directories(${PCL_INCLUDE_DIRS})&lt;br /&gt;
link_directories(${PCL_LIBRARY_DIRS})&lt;br /&gt;
add_definitions(${PCL_DEFINITIONS})&lt;br /&gt;
 &lt;br /&gt;
set(PCL_BUILD_TYPE Release)&lt;br /&gt;
 &lt;br /&gt;
file(GLOB PCL_openni_viewer_SRC&lt;br /&gt;
    &amp;quot;src/*.h&amp;quot;&lt;br /&gt;
    &amp;quot;src/*.cpp&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
add_executable(openniViewer ${PCL_openni_viewer_SRC})&lt;br /&gt;
 &lt;br /&gt;
target_link_libraries (openniViewer ${PCL_LIBRARIES})&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CMake syntax is quite self-explanatory. We ask for a CMake version 2.8 installation, minimum. We declare a new project named &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCL_openni_viewer&amp;quot;&amp;lt;/span&amp;gt;. We tell CMake to check for the presence of PCL library development files, version 1.7. If our system cannot meet the CMake and PCL version requirements, the process will fail.&lt;br /&gt;
&lt;br /&gt;
Next, we feed the compiler and linker the directories where PCL includes and libraries can be found, and the defined symbols. We tell CMake to use the &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Release&amp;quot;&amp;lt;/span&amp;gt; build type, which will activate certain optimizations depending on the compiler we use. Other build types are available, like &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;Debug&amp;quot;&amp;lt;/span&amp;gt;, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;MinSizeRel&amp;quot;&amp;lt;/span&amp;gt;, and &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;RelWithDebInfo&amp;quot;&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Finally, we create a variable, &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;&amp;quot;PCL_openni_viewer_SRC&amp;quot;&amp;lt;/span&amp;gt;, that will store the list of files to be compiled (though we will only have one). We create a new binary to be compiled from these source files, and we link it with the PCL library.&lt;br /&gt;
&lt;br /&gt;
Check the [http://www.cmake.org/Wiki/CMake_Useful_Variables CMake help] for more interesting options.&lt;br /&gt;
&lt;br /&gt;
==main.cpp==&lt;br /&gt;
&lt;br /&gt;
We told CMake it could find the source files in a &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''src/''&amp;lt;/span&amp;gt; subdirectory, so let's keep to our word and create it. Then, add a new &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''main.cpp''&amp;lt;/span&amp;gt; file inside and paste the following lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=CPP&amp;gt;// Original code by Geoffrey Biggs, taken from the PCL tutorial in&lt;br /&gt;
// http://pointclouds.org/documentation/tutorials/pcl_visualizer.php&lt;br /&gt;
&lt;br /&gt;
// Simple OpenNI viewer that also allows to write the current scene to a .pcd&lt;br /&gt;
// when pressing SPACE.&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;pcl/io/openni_grabber.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/io/pcd_io.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/visualization/cloud_viewer.h&amp;gt;&lt;br /&gt;
#include &amp;lt;pcl/console/parse.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
using namespace std;&lt;br /&gt;
using namespace pcl;&lt;br /&gt;
&lt;br /&gt;
PointCloud&amp;lt;PointXYZRGBA&amp;gt;::Ptr cloudptr(new PointCloud&amp;lt;PointXYZRGBA&amp;gt;); // A cloud that will store color info.&lt;br /&gt;
PointCloud&amp;lt;PointXYZ&amp;gt;::Ptr fallbackCloud(new PointCloud&amp;lt;PointXYZ&amp;gt;);    // A fallback cloud with just depth data.&lt;br /&gt;
boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt; viewer;                 // Point cloud viewer object.&lt;br /&gt;
Grabber* openniGrabber;                                               // OpenNI grabber that takes data from the device.&lt;br /&gt;
unsigned int filesSaved = 0;                                          // For the numbering of the clouds saved to disk.&lt;br /&gt;
bool saveCloud(false), noColor(false);                                // Program control.&lt;br /&gt;
&lt;br /&gt;
void&lt;br /&gt;
printUsage(const char* programName)&lt;br /&gt;
{&lt;br /&gt;
	cout &amp;lt;&amp;lt; &amp;quot;Usage: &amp;quot; &amp;lt;&amp;lt; programName &amp;lt;&amp;lt; &amp;quot; [options]&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;Options:\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; endl&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t&amp;lt;none&amp;gt;     start capturing from an OpenNI device.\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t-v FILE    visualize the given .pcd file.\n&amp;quot;&lt;br /&gt;
		 &amp;lt;&amp;lt; &amp;quot;\t-h         shows this help.\n&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// This function is called every time the device has new data.&lt;br /&gt;
void&lt;br /&gt;
grabberCallback(const PointCloud&amp;lt;PointXYZRGBA&amp;gt;::ConstPtr&amp;amp; cloud)&lt;br /&gt;
{&lt;br /&gt;
	if (! viewer-&amp;gt;wasStopped())&lt;br /&gt;
		viewer-&amp;gt;showCloud(cloud);&lt;br /&gt;
&lt;br /&gt;
	if (saveCloud)&lt;br /&gt;
	{&lt;br /&gt;
		stringstream stream;&lt;br /&gt;
		stream &amp;lt;&amp;lt; &amp;quot;inputCloud&amp;quot; &amp;lt;&amp;lt; filesSaved &amp;lt;&amp;lt; &amp;quot;.pcd&amp;quot;;&lt;br /&gt;
		string filename = stream.str();&lt;br /&gt;
		if (io::savePCDFile(filename, *cloud, true) == 0)&lt;br /&gt;
		{&lt;br /&gt;
			filesSaved++;&lt;br /&gt;
			cout &amp;lt;&amp;lt; &amp;quot;Saved &amp;quot; &amp;lt;&amp;lt; filename &amp;lt;&amp;lt; &amp;quot;.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		}&lt;br /&gt;
		else PCL_ERROR(&amp;quot;Problem saving %s.\n&amp;quot;, filename.c_str());&lt;br /&gt;
&lt;br /&gt;
		saveCloud = false;&lt;br /&gt;
	}&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// For detecting when SPACE is pressed.&lt;br /&gt;
void&lt;br /&gt;
keyboardEventOccurred(const visualization::KeyboardEvent&amp;amp; event,&lt;br /&gt;
					  void* nothing)&lt;br /&gt;
{&lt;br /&gt;
	if (event.getKeySym() == &amp;quot;space&amp;quot; &amp;amp;&amp;amp; event.keyDown())&lt;br /&gt;
		saveCloud = true;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Creates, initializes and returns a new viewer.&lt;br /&gt;
boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt;&lt;br /&gt;
createViewer()&lt;br /&gt;
{&lt;br /&gt;
	boost::shared_ptr&amp;lt;visualization::CloudViewer&amp;gt; v&lt;br /&gt;
	(new visualization::CloudViewer(&amp;quot;OpenNI viewer&amp;quot;));&lt;br /&gt;
	v-&amp;gt;registerKeyboardCallback(keyboardEventOccurred);&lt;br /&gt;
&lt;br /&gt;
	return (v);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int&lt;br /&gt;
main(int argc, char** argv)&lt;br /&gt;
{&lt;br /&gt;
	if (console::find_argument(argc, argv, &amp;quot;-h&amp;quot;) &amp;gt;= 0)&lt;br /&gt;
	{&lt;br /&gt;
		printUsage(argv[0]);&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	bool justVisualize(false);&lt;br /&gt;
	string filename;&lt;br /&gt;
	if (console::find_argument(argc, argv, &amp;quot;-v&amp;quot;) &amp;gt;= 0)&lt;br /&gt;
	{&lt;br /&gt;
		if (argc != 3)&lt;br /&gt;
		{&lt;br /&gt;
			printUsage(argv[0]);&lt;br /&gt;
			return -1;&lt;br /&gt;
		}&lt;br /&gt;
&lt;br /&gt;
		filename = argv[2];&lt;br /&gt;
		justVisualize = true;&lt;br /&gt;
	}&lt;br /&gt;
	else if (argc != 1)&lt;br /&gt;
	{&lt;br /&gt;
		printUsage(argv[0]);&lt;br /&gt;
		return -1;&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// First mode, open and show a cloud from disk.&lt;br /&gt;
	if (justVisualize)&lt;br /&gt;
	{&lt;br /&gt;
		// Try with color information...&lt;br /&gt;
		try&lt;br /&gt;
		{&lt;br /&gt;
			io::loadPCDFile&amp;lt;PointXYZRGBA&amp;gt;(filename.c_str(), *cloudptr);&lt;br /&gt;
		}&lt;br /&gt;
		catch (const PCLException&amp;amp; e1)&lt;br /&gt;
		{&lt;br /&gt;
			try&lt;br /&gt;
			{&lt;br /&gt;
				// ...and if it fails, fall back to just depth.&lt;br /&gt;
				io::loadPCDFile&amp;lt;PointXYZ&amp;gt;(filename.c_str(), *fallbackCloud);&lt;br /&gt;
			}&lt;br /&gt;
			catch (const PCLException&amp;amp; e2)&lt;br /&gt;
			{&lt;br /&gt;
				return -1;&lt;br /&gt;
			}&lt;br /&gt;
&lt;br /&gt;
			noColor = true;&lt;br /&gt;
		}&lt;br /&gt;
&lt;br /&gt;
		cout &amp;lt;&amp;lt; &amp;quot;Loaded &amp;quot; &amp;lt;&amp;lt; filename &amp;lt;&amp;lt; &amp;quot;.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		if (noColor)&lt;br /&gt;
			cout &amp;lt;&amp;lt; &amp;quot;This cloud has no RGBA color information present.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
		else cout &amp;lt;&amp;lt; &amp;quot;This cloud has RGBA color information present.&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
	}&lt;br /&gt;
	// Second mode, start fetching and displaying frames from the device.&lt;br /&gt;
	else&lt;br /&gt;
	{&lt;br /&gt;
		openniGrabber = new OpenNIGrabber();&lt;br /&gt;
		if (openniGrabber == 0)&lt;br /&gt;
			return -1;&lt;br /&gt;
		boost::function&amp;lt;void (const PointCloud&amp;lt;PointXYZRGBA&amp;gt;::ConstPtr&amp;amp;)&amp;gt; f =&lt;br /&gt;
			boost::bind(&amp;amp;grabberCallback, _1);&lt;br /&gt;
		openniGrabber-&amp;gt;registerCallback(f);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	viewer = createViewer();&lt;br /&gt;
&lt;br /&gt;
	if (justVisualize)&lt;br /&gt;
	{&lt;br /&gt;
		if (noColor)&lt;br /&gt;
			viewer-&amp;gt;showCloud(fallbackCloud);&lt;br /&gt;
		else viewer-&amp;gt;showCloud(cloudptr);&lt;br /&gt;
	}&lt;br /&gt;
	else openniGrabber-&amp;gt;start();&lt;br /&gt;
&lt;br /&gt;
	// Main loop.&lt;br /&gt;
	while (! viewer-&amp;gt;wasStopped())&lt;br /&gt;
		boost::this_thread::sleep(boost::posix_time::seconds(1));&lt;br /&gt;
&lt;br /&gt;
	if (! justVisualize)&lt;br /&gt;
		openniGrabber-&amp;gt;stop();&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save and close.&lt;br /&gt;
&lt;br /&gt;
==Compiling==&lt;br /&gt;
&lt;br /&gt;
Follow the same steps you used to build PCL. That is, create a new &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''build/''&amp;lt;/span&amp;gt; subdirectory next to the &amp;lt;span style=&amp;quot;color:#FF8C00&amp;quot;&amp;gt;''src/''&amp;lt;/span&amp;gt; one. Open a terminal there and issue:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;cmake -DCMAKE_BUILD_TYPE=Release ..&lt;br /&gt;
make&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Executing==&lt;br /&gt;
&lt;br /&gt;
Still from the same terminal, run the compiled example program:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./openniViewer&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After some seconds, the main window will appear and the application will start grabbing frames from the device. You can inspect the current point cloud using the mouse, holding the left button to rotate, the right one (or the mouse wheel) to zoom, and the middle one to pan the camera around. At first, you may see only a black screen, or some big colored axes, but no cloud. Try zooming out to see the whole scene. Another useful key is &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''R'''&amp;lt;/span&amp;gt;, which will reset the camera parameters when pressed. Use it whenever you notice that zooming has gotten slow after some camera movement, or if you still cannot see the cloud. See the [http://pointclouds.org/documentation/tutorials/pcl_visualizer.php PCLVisualizer] tutorial for additional controls and features.&lt;br /&gt;
&lt;br /&gt;
Whenever you feel ready, press the &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''SPACE'''&amp;lt;/span&amp;gt; key. The program will pause for a fraction of a second and the output &amp;lt;span style=&amp;quot;color:#FF1493&amp;quot;&amp;gt;''&amp;quot;Saved inputCloud0.pcd.&amp;quot;''&amp;lt;/span&amp;gt; will appear on the console. Check the current folder to see that file &amp;lt;span style=&amp;quot;color:#1E90FF&amp;quot;&amp;gt;''inputCloud0.pcd''&amp;lt;/span&amp;gt; has indeed been written. You can now close the program with &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Q'''&amp;lt;/span&amp;gt; or &amp;lt;span style=&amp;quot;color:#228B22&amp;quot;&amp;gt;'''Alt+F4'''&amp;lt;/span&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Next, run it again giving the following parameter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=Bash&amp;gt;./openniViewer -v inputCloud0.pcd&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells the program to take data not from the device, but from the saved point cloud file. After it loads, you will see the same scene you saved to disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#606060&amp;quot;&amp;gt;'''''NOTE: PCD data is saved relative to the sensor. No matter how much you have manipulated the view, it will reset to default when you load the file.'''''&amp;lt;/span&amp;gt;&lt;br /&gt;
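&lt;br /&gt;
For reference, a .pcd file starts with a short ASCII header describing the cloud; the sensor-relative pose mentioned in the note above is stored in the VIEWPOINT field. A typical header for a 640×480 RGBA cloud saved in binary mode looks like this (the exact FIELDS list depends on the point type used):&lt;br /&gt;
&lt;br /&gt;
```text
# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z rgba
SIZE 4 4 4 4
TYPE F F F U
COUNT 1 1 1 1
WIDTH 640
HEIGHT 480
VIEWPOINT 0 0 0 1 0 0 0
POINTS 307200
DATA binary
```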
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
At this point, your Kinect device should be working and getting depth data for you. There is a collection of excellent [http://pointclouds.org/documentation/tutorials/ tutorials] for PCL on the official webpage. I encourage you to finish them all before proceeding with your experiments with the Kinect sensor. You can also find a good introduction/tutorial to the PCL library [http://www.pointclouds.org/assets/uploads/PCL-IAS13_Introduction.pdf here].&lt;br /&gt;
&lt;br /&gt;
If you use an ASUS Xtion PRO device instead, you should have gotten everything to work without problems or additional steps (except maybe for [[PCL/OpenNI_troubleshooting#ASUS_Xtion_PRO | this one]]).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_0:_The_very_basics&amp;diff=4731</id>
		<title>PCL/OpenNI tutorial 0: The very basics</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_0:_The_very_basics&amp;diff=4731"/>
				<updated>2015-11-01T22:50:37Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the most important fields of robotics is [https://en.wikipedia.org/wiki/Computer_vision computer vision]. A machine that is able to &amp;quot;see&amp;quot; its environment can have a lot of applications. For example, think of a robot in a production line that grabs some parts and moves them somewhere else. Or a surveillance system that is able to recognize how many people are there in a room. Or a biped robot making its way through a room, evading obstacles such as tables, chairs or bystanders.&lt;br /&gt;
&lt;br /&gt;
For many years, the most common sensors for computer vision were 2D cameras, which retrieve an RGB image of the scene (like all the digital cameras that are so common nowadays in our laptops or smartphones). Algorithms exist that are able to find an object in a picture, even if it is rotated or scaled, or that can retrieve motion data from a stream of video, and even perform 3D analysis to track the camera's position. All these years' worth of research is now available in libraries like [https://en.wikipedia.org/wiki/OpenCV OpenCV] (Open Source Computer Vision Library).&lt;br /&gt;
&lt;br /&gt;
3D sensors are available, too. Their biggest advantage is that it is trivial to get measurements of distance and motion, but the added dimension makes calculations expensive. Working with the data they retrieve is a lot different than working with a 2D image, and texture information is rarely used.&lt;br /&gt;
&lt;br /&gt;
During the next tutorials I will explain how to get a common depth sensor working with a 3D processing library.&lt;br /&gt;
&lt;br /&gt;
=Depth sensors=&lt;br /&gt;
&lt;br /&gt;
3D or depth sensors give you precise information about the distance to the &amp;quot;points&amp;quot; in the scene (a point would be the 3D equivalent of a pixel). There are several types of depth sensors, each working with a different technique. Some sensors are slow, some are fast. Some give accurate, high-res measurements, some are noisy and low-res. Some are expensive, some can be bought for a hundred bucks. There is no &amp;quot;perfect&amp;quot; sensor and which one you choose will depend on your budget and the project that you want to implement.&lt;br /&gt;
&lt;br /&gt;
==Stereo cameras==&lt;br /&gt;
&lt;br /&gt;
[https://en.wikipedia.org/wiki/Computer_stereo_vision Stereo cameras] are the only passive measurement device on the list. They are essentially two identical cameras assembled together (some centimeters apart) that capture slightly different views of the scene. By computing the differences between both views, it is possible to infer depth information for the points in each image.&lt;br /&gt;
&lt;br /&gt;
A stereo pair is cheap, but perhaps the least accurate sensor. Ideally, it would require perfect calibration of both cameras, which is unfeasible in practice. Bad light conditions will render it useless. Also, because of the way the algorithm works (matching corresponding points of interest in both images), stereo pairs give poor results with empty scenes or objects that have plain textures, with few interest points. Some models circumvent this by projecting a grid or texture of light onto the scene (active stereo vision).&lt;br /&gt;
&lt;br /&gt;
I will not go into detail about the math involved with the triangulation process, as you can find it on the internet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Stereo.png|thumb|center|200px|Example of stereo camera.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Time-of-flight==&lt;br /&gt;
&lt;br /&gt;
[https://en.wikipedia.org/wiki/Time_of_flight Time-of-flight] (ToF) sensors work by measuring the time it has taken a ray or pulse of light to travel a certain distance. Because the speed of light is a known constant, a simple formula can be used to obtain the range to the object. These sensors are not affected by light conditions and have the potential to be very precise.&lt;br /&gt;
&lt;br /&gt;
===LIDAR===&lt;br /&gt;
&lt;br /&gt;
A [https://en.wikipedia.org/wiki/Lidar LIDAR] (light+radar) is just a common laser range finder mounted on a platform that is able to rotate very fast, scanning the scene point by point. They are very precise sensors, but also expensive, and they do not retrieve texture information. They have been used for decades in many different fields like meteorology, archaeology or astronomy. LIDAR devices can be mounted on satellites, planes or mobile robots. The data retrieved by a LIDAR has very high resolution, so some processing is needed in order to use it for real-time applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:LIDAR.jpg|thumb|center|200px|LIDAR mounted on a mobile robot (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===ToF cameras===&lt;br /&gt;
&lt;br /&gt;
A [https://en.wikipedia.org/wiki/Time-of-flight_camera time-of-flight camera] does not perform point-by-point scans like a LIDAR does. Instead, it employs a single pulse of light to capture the whole scene, once per frame. Thanks to that, they can work a lot faster, with some models topping 100 Hz. The price to pay is a low resolution of 320×240 or even less. Depth measurements have an accuracy of about 1 cm. ToF cameras cost less than LIDAR sensors, but we are still talking about 4000 $ or so. Color information is not retrieved.&lt;br /&gt;
&lt;br /&gt;
The new [https://dev.windows.com/en-us/kinect Kinect v2] is a ToF sensor that works at 512×424 internally, and it includes a 1920×1080 RGB camera.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:ToFcam.png | Time-of-flight camera (image from Wikimedia Commons).&lt;br /&gt;
File:Kinect2.png | Microsoft Kinect v2.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Structured light==&lt;br /&gt;
&lt;br /&gt;
[https://en.wikipedia.org/wiki/Structured-light_3D_scanner Structured light] sensors (like Kinect and Xtion) work by projecting a pattern of infrared light (for example, a grid of lines, or a &amp;quot;constellation&amp;quot; of points) onto the scene's objects. The pattern appears distorted when viewed from a perspective different from the projector's. By analysing this distortion, depth information can be retrieved and the surface(s) reconstructed.&lt;br /&gt;
&lt;br /&gt;
The resolution and speed of these sensors are similar to those of common VGA cameras, usually 640×480 at 30 FPS (effectively less because of the way this type of sensor works; Kinect uses 320×240 internally, and the rest of the data is extrapolated to &amp;quot;fill the gaps&amp;quot;). Precision is similar to that of ToF cameras, roughly 1 cm, with a maximum range of 3-6 m, but they tend to have trouble seeing small objects. They are much cheaper than the other sensors, with a first-generation Kinect now costing around $100, and hence they have become very popular in recent years. In the next tutorial, I will explain the installation and usage of one of these cameras.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:Kinect.jpg | Microsoft Kinect.&lt;br /&gt;
File:Xtion.jpg | ASUS Xtion PRO (the PRO LIVE model includes an RGB camera).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Structured_light.png|thumb|center|200px|Infrared light pattern (dots) projected by Kinect.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Point clouds=&lt;br /&gt;
&lt;br /&gt;
[[Image:Point_cloud.png|thumb|right|200px|Example of a point cloud.]]&lt;br /&gt;
&lt;br /&gt;
Depth sensors return 3D data in the form of a [https://en.wikipedia.org/wiki/Point_cloud ''point cloud'']. A point cloud is a set of points in three-dimensional space, each with its own XYZ coordinates. In the case of stereo, ToF or structured light cameras, every point corresponds to exactly one pixel of the captured image. Optionally, points can store additional information, like color (if the sensor has an RGB camera that it uses to texture them). If you have a background in 3D modelling software, you may know that point clouds can be converted to triangular meshes with certain algorithms, but this is rarely done; cloud processing is usually performed on the original set of data.&lt;br /&gt;
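As a toy illustration of this structure (plain Python, not PCL; the camera parameters are made up), each pixel of a depth image can be back-projected into one XYZ point, giving exactly one point per pixel:&lt;br /&gt;

```python
# Minimal sketch of an organized point cloud: one XYZ point per depth pixel.
from collections import namedtuple

Point = namedtuple("Point", "x y z rgb")

def cloud_from_depth(depth_rows, fx, fy, cx, cy):
    """Back-project a depth image (meters) through a pin-hole camera model."""
    cloud = []
    for v, row in enumerate(depth_rows):
        for u, z in enumerate(row):
            x = (u - cx) * z / fx   # pixel column to metric X
            y = (v - cy) * z / fy   # pixel row to metric Y
            cloud.append(Point(x, y, z, None))  # rgb left empty here
    return cloud

# A fake 2x2 depth image where every pixel is 1 m away.
cloud = cloud_from_depth([[1.0, 1.0], [1.0, 1.0]], fx=500, fy=500, cx=0.5, cy=0.5)
print(len(cloud))  # one point per pixel: prints 4
```

In PCL the equivalent structure would be a `pcl::PointCloud` of `pcl::PointXYZ` (or `pcl::PointXYZRGB` when color is stored), but the idea is the same.&lt;br /&gt;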
&lt;br /&gt;
==Point Cloud Library==&lt;br /&gt;
&lt;br /&gt;
[https://en.wikipedia.org/wiki/Point_Cloud_Library Point Cloud Library (PCL)] is a project started in early 2010 by [https://en.wikipedia.org/wiki/Willow_Garage Willow Garage], the same robotics research company behind the [https://en.wikipedia.org/wiki/Robot_Operating_System Robot Operating System (ROS)] and OpenCV (they also sell [http://www.willowgarage.com/robot/overview robots] like TurtleBot). The first version was [http://www.pointclouds.org/assets/pdf/pcl_icra2011.pdf fully introduced] in 2011, and it has been actively maintained ever since.&lt;br /&gt;
&lt;br /&gt;
PCL aims to be a one-for-all solution for point cloud and 3D processing. It is an open-source, multiplatform library divided into many [http://pointclouds.org/documentation/tutorials/walkthrough.php submodules] for different tasks, like visualization, filtering, segmentation, registration, searching, feature estimation... Additionally, it can be used to retrieve data from a list of compatible sensors, so it is an obvious choice for our tutorials, because you will not have to compile or install dozens of separate libraries.&lt;br /&gt;
&lt;br /&gt;
Apart from Willow Garage, PCL is backed by [http://www.openperception.org/ Open Perception], a non-profit organization founded by Radu Bogdan Rusu, one of the most active developers of the library. I recommend reading his very interesting [http://files.rbrusu.com/publications/RusuPhDThesis.pdf dissertation], written in 2009, which introduces and details many concepts and techniques that have been implemented in PCL.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
Over the course of the next tutorials I will tell you how to install PCL in a Linux environment, how to retrieve point clouds from a sensor like Kinect or Xtion, and how to work with them for tasks like object recognition. I will keep it nice and easy, just like I want tutorials to be, and leave the complex stuff aside (you can always look up the original paper if you want to learn more about some technique).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_0:_The_very_basics&amp;diff=4730</id>
		<title>PCL/OpenNI tutorial 0: The very basics</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_0:_The_very_basics&amp;diff=4730"/>
				<updated>2015-11-01T22:48:34Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the most important fields of robotics is [https://en.wikipedia.org/wiki/Computer_vision computer vision]. A machine that is able to &amp;quot;see&amp;quot; its environment can have many applications. For example, think of a robot in a production line that grabs parts and moves them somewhere else. Or a surveillance system that is able to recognize how many people are in a room. Or a biped robot making its way through a room, avoiding obstacles such as tables, chairs or bystanders.&lt;br /&gt;
&lt;br /&gt;
For many years, the most common sensors for computer vision were 2D cameras, which retrieve an RGB image of the scene (like the digital cameras that are so common nowadays in our laptops or smartphones). Algorithms exist that are able to find an object in a picture, even if it is rotated or scaled, that can retrieve motion data from a stream of video, and that can even perform 3D analysis to track the camera's position. All these years' worth of research is now available in libraries like [https://en.wikipedia.org/wiki/OpenCV OpenCV] (Open Source Computer Vision Library).&lt;br /&gt;
&lt;br /&gt;
3D sensors are available, too. The biggest advantage they offer is that it is trivial to get measurements of distances and motion, but the addition of a new dimension makes calculations expensive. Working with the data they retrieve is very different from working with a 2D image, and texture information is rarely used.&lt;br /&gt;
&lt;br /&gt;
During the next tutorials I will explain how to get a common depth sensor working with a 3D processing library.&lt;br /&gt;
&lt;br /&gt;
=Depth sensors=&lt;br /&gt;
&lt;br /&gt;
3D or depth sensors give you precise information about the distance to the &amp;quot;points&amp;quot; in the scene (a point would be the 3D equivalent of a pixel). There are several types of depth sensors, each working with a different technique. Some sensors are slow, some are fast. Some give accurate, high-res measurements, some are noisy and low-res. Some are expensive, some can be bought for a hundred bucks. There is no &amp;quot;perfect&amp;quot; sensor and which one you choose will depend on your budget and the project that you want to implement.&lt;br /&gt;
&lt;br /&gt;
==Stereo cameras==&lt;br /&gt;
&lt;br /&gt;
[https://en.wikipedia.org/wiki/Computer_stereo_vision Stereo cameras] are the only passive measurement devices on this list. They are essentially two identical cameras assembled together (some centimeters apart) that capture slightly different views of the scene. By computing the differences between both views, it is possible to infer depth information for the points in each image.&lt;br /&gt;
&lt;br /&gt;
A stereo pair is cheap, but perhaps the least accurate sensor. Ideally, it would require perfect calibration of both cameras, which is unfeasible in practice. Bad lighting conditions will render it useless. Also, because of the way the algorithm works (detecting corresponding points of interest in both images), stereo cameras give poor results with empty scenes or objects that have plain textures, with few interest points. Some models circumvent this by projecting a grid or texture of light onto the scene (active stereo vision).&lt;br /&gt;
&lt;br /&gt;
I will not go into detail about the math involved with the triangulation process, as you can find it on the internet.&lt;br /&gt;
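Still, the core triangulation formula is short enough to show. A toy sketch with made-up numbers (the names and values are illustrative, not from any real camera): depth is focal length times baseline divided by disparity.&lt;br /&gt;

```python
# Toy stereo triangulation: Z = f * B / d, where f is the focal length in
# pixels, B the baseline between the two cameras in meters, and d the
# disparity (horizontal pixel offset of the same feature in both images).
def stereo_depth(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

# A feature seen 25 px apart by two cameras 10 cm apart (f = 500 px)
# lies about 2 m away; halving the disparity doubles the depth.
print(stereo_depth(500.0, 0.10, 25.0))  # prints 2.0
```

The formula also shows why distant objects are measured poorly: disparity shrinks with distance, so a one-pixel matching error causes a large depth error far away.&lt;br /&gt;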
&lt;br /&gt;
&lt;br /&gt;
[[Image:Stereo.png|thumb|center|200px|Example of stereo camera.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Time-of-flight==&lt;br /&gt;
&lt;br /&gt;
[https://en.wikipedia.org/wiki/Time_of_flight Time-of-flight] (ToF) sensors work by measuring the time a ray or pulse of light takes to travel a certain distance. Because the speed of light is a known constant, a simple formula can be used to obtain the range to the object. These sensors are not affected by lighting conditions and have the potential to be very precise.&lt;br /&gt;
&lt;br /&gt;
===LIDAR===&lt;br /&gt;
&lt;br /&gt;
A [https://en.wikipedia.org/wiki/Lidar LIDAR] (a portmanteau of ''light'' and ''radar'') is just a common laser range finder mounted on a platform that rotates very fast, scanning the scene point by point. LIDARs are very precise sensors, but also expensive, and they do not retrieve texture information. They have been used for decades in many different fields, like meteorology, archaeology or astronomy, and they can be mounted on satellites, planes or mobile robots. The data retrieved by a LIDAR has very high resolution, so some processing is needed before it can be used in real-time applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:LIDAR.jpg|thumb|center|200px|LIDAR mounted on a mobile robot (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===ToF cameras===&lt;br /&gt;
&lt;br /&gt;
A [https://en.wikipedia.org/wiki/Time-of-flight_camera time-of-flight camera] does not perform point-by-point scans like a LIDAR does. Instead, it uses a single pulse of light to capture the whole scene, once per frame. Thanks to that, ToF cameras can work much faster, with some models topping 100 Hz. The price to pay is a low resolution of 320×240 or even less. Depth measurements have an accuracy of about 1 cm. ToF cameras cost less than LIDAR sensors, but we are still talking about $4000 or so. Color information is not retrieved.&lt;br /&gt;
&lt;br /&gt;
The new [https://dev.windows.com/en-us/kinect Kinect v2] is a ToF sensor that works at 512×424 internally, and it includes a 1920×1080 RGB camera.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:ToFcam.png | Time-of-flight camera (image from Wikimedia Commons).&lt;br /&gt;
File:Kinect2.png | Microsoft Kinect v2.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Structured light==&lt;br /&gt;
&lt;br /&gt;
[https://en.wikipedia.org/wiki/Structured-light_3D_scanner Structured light] sensors (like Kinect and Xtion) work by projecting a pattern of infrared light (for example, a grid of lines, or a &amp;quot;constellation&amp;quot; of points) onto the scene's objects. The pattern appears distorted when viewed from a perspective different from the projector's. By analysing this distortion, depth information can be retrieved and the surface(s) reconstructed.&lt;br /&gt;
&lt;br /&gt;
The resolution and speed of these sensors are similar to those of common VGA cameras, usually 640×480 at 30 FPS (effectively less because of the way this type of sensor works; Kinect uses 320×240 internally, and the rest of the data is extrapolated to &amp;quot;fill the gaps&amp;quot;). Precision is similar to that of ToF cameras, roughly 1 cm, with a maximum range of 3-6 m, but they tend to have trouble seeing small objects. They are much cheaper than the other sensors, with a first-generation Kinect now costing around $100, and hence they have become very popular in recent years. In the next tutorial, I will explain the installation and usage of one of these cameras.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:Kinect.jpg | Microsoft Kinect.&lt;br /&gt;
File:Xtion.jpg | ASUS Xtion PRO (the PRO LIVE model includes an RGB camera).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Structured_light.png|thumb|center|200px|Infrared light pattern (dots) projected by Kinect.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Point clouds=&lt;br /&gt;
&lt;br /&gt;
[[Image:Point_cloud.png|thumb|right|200px|Example of a point cloud.]]&lt;br /&gt;
&lt;br /&gt;
Depth sensors return 3D data in the form of a [http://en.wikipedia.org/wiki/Point_cloud ''point cloud'']. A point cloud is a set of points in three-dimensional space, each with its own XYZ coordinates. In the case of stereo, ToF or structured light cameras, every point corresponds to exactly one pixel of the captured image. Optionally, points can store additional information, like color (if the sensor has an RGB camera that it uses to texture them). If you have a background in 3D modelling software, you may know that point clouds can be converted to triangular meshes with certain algorithms, but this is rarely done; cloud processing is usually performed on the original set of data.&lt;br /&gt;
&lt;br /&gt;
==Point Cloud Library==&lt;br /&gt;
&lt;br /&gt;
[http://www.pointclouds.org/ Point Cloud Library (PCL)] is a project started in early 2010 by [http://www.willowgarage.com/ Willow Garage], the same robotics research company behind the [http://www.ros.org/ Robot Operating System (ROS)] and OpenCV (they also sell [http://www.willowgarage.com/robot/overview robots] like TurtleBot). The first version was [http://www.pointclouds.org/assets/pdf/pcl_icra2011.pdf fully introduced] in 2011, and it has been actively maintained ever since.&lt;br /&gt;
&lt;br /&gt;
PCL aims to be a one-for-all solution for point cloud and 3D processing. It is an open-source, multiplatform library divided into many [http://pointclouds.org/documentation/tutorials/walkthrough.php submodules] for different tasks, like visualization, filtering, segmentation, registration, searching, feature estimation... Additionally, it can be used to retrieve data from a list of compatible sensors, so it is an obvious choice for our tutorials, because you will not have to compile or install dozens of separate libraries.&lt;br /&gt;
&lt;br /&gt;
Apart from Willow Garage, PCL is backed by [http://www.openperception.org/ Open Perception], a non-profit organization founded by Radu Bogdan Rusu, one of the most active developers of the library. I recommend reading his very interesting [http://files.rbrusu.com/publications/RusuPhDThesis.pdf dissertation], written in 2009, which introduces and details many concepts and techniques that have been implemented in PCL.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
Over the course of the next tutorials I will tell you how to install PCL in a Linux environment, how to retrieve point clouds from a sensor like Kinect or Xtion, and how to work with them for tasks like object recognition. I will keep it nice and easy, just like I want tutorials to be, and leave the complex stuff aside (you can always look up the original paper if you want to learn more about some technique).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	<entry>
		<id>https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_0:_The_very_basics&amp;diff=4729</id>
		<title>PCL/OpenNI tutorial 0: The very basics</title>
		<link rel="alternate" type="text/html" href="https://robotica.unileon.es/index.php?title=PCL/OpenNI_tutorial_0:_The_very_basics&amp;diff=4729"/>
				<updated>2015-11-01T22:45:05Z</updated>
		
		<summary type="html">&lt;p&gt;Victorm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the most important fields of robotics is [https://en.wikipedia.org/wiki/Computer_vision computer vision]. A machine that is able to &amp;quot;see&amp;quot; its environment can have many applications. For example, think of a robot in a production line that grabs parts and moves them somewhere else. Or a surveillance system that is able to recognize how many people are in a room. Or a biped robot making its way through a room, avoiding obstacles such as tables, chairs or bystanders.&lt;br /&gt;
&lt;br /&gt;
For many years, the most common sensors for computer vision were 2D cameras, which retrieve an RGB image of the scene (like the digital cameras that are so common nowadays in our laptops or smartphones). Algorithms exist that are able to find an object in a picture, even if it is rotated or scaled, that can retrieve motion data from a stream of video, and that can even perform 3D analysis to track the camera's position. All these years' worth of research is now available in libraries like [https://en.wikipedia.org/wiki/OpenCV OpenCV] (Open Source Computer Vision Library).&lt;br /&gt;
&lt;br /&gt;
3D sensors are available, too. The biggest advantage they offer is that it is trivial to get measurements of distances and motion, but the addition of a new dimension makes calculations expensive. Working with the data they retrieve is very different from working with a 2D image, and texture information is rarely used.&lt;br /&gt;
&lt;br /&gt;
During the next tutorials I will explain how to get a common depth sensor working with a 3D processing library.&lt;br /&gt;
&lt;br /&gt;
=Depth sensors=&lt;br /&gt;
&lt;br /&gt;
3D or depth sensors give you precise information about the distance to the &amp;quot;points&amp;quot; in the scene (a point would be the 3D equivalent of a pixel). There are several types of depth sensors, each working with a different technique. Some sensors are slow, some are fast. Some give accurate, high-res measurements, some are noisy and low-res. Some are expensive, some can be bought for a hundred bucks. There is no &amp;quot;perfect&amp;quot; sensor and which one you choose will depend on your budget and the project that you want to implement.&lt;br /&gt;
&lt;br /&gt;
==Stereo cameras==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Computer_stereo_vision Stereo cameras] are the only passive measurement devices on this list. They are essentially two identical cameras assembled together (some centimeters apart) that capture slightly different views of the scene. By computing the differences between both views, it is possible to infer depth information for the points in each image.&lt;br /&gt;
&lt;br /&gt;
A stereo pair is cheap, but perhaps the least accurate sensor. Ideally, it would require perfect calibration of both cameras, which is unfeasible in practice. Bad lighting conditions will render it useless. Also, because of the way the algorithm works (detecting corresponding points of interest in both images), stereo cameras give poor results with empty scenes or objects that have plain textures, with few interest points. Some models circumvent this by projecting a grid or texture of light onto the scene (active stereo vision).&lt;br /&gt;
&lt;br /&gt;
I will not go into detail about the math involved with the triangulation process, as you can find it on the internet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Stereo.png|thumb|center|200px|Example of stereo camera.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Time-of-flight==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Time_of_flight Time-of-flight] (ToF) sensors work by measuring the time a ray or pulse of light takes to travel a certain distance. Because the speed of light is a known constant, a simple formula can be used to obtain the range to the object. These sensors are not affected by lighting conditions and have the potential to be very precise.&lt;br /&gt;
&lt;br /&gt;
===LIDAR===&lt;br /&gt;
&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Lidar LIDAR] (a portmanteau of ''light'' and ''radar'') is just a common laser range finder mounted on a platform that rotates very fast, scanning the scene point by point. LIDARs are very precise sensors, but also expensive, and they do not retrieve texture information. They have been used for decades in many different fields, like meteorology, archaeology or astronomy, and they can be mounted on satellites, planes or mobile robots. The data retrieved by a LIDAR has very high resolution, so some processing is needed before it can be used in real-time applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:LIDAR.jpg|thumb|center|200px|LIDAR mounted on a mobile robot (image from Wikimedia Commons).]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===ToF cameras===&lt;br /&gt;
&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Time-of-flight_camera time-of-flight camera] does not perform point-by-point scans like a LIDAR does. Instead, it uses a single pulse of light to capture the whole scene, once per frame. Thanks to that, ToF cameras can work much faster, with some models topping 100 Hz. The price to pay is a low resolution of 320×240 or even less. Depth measurements have an accuracy of about 1 cm. ToF cameras cost less than LIDAR sensors, but we are still talking about $4000 or so. Color information is not retrieved.&lt;br /&gt;
&lt;br /&gt;
The new [https://dev.windows.com/en-us/kinect Kinect v2] is a ToF sensor that works at 512×424 internally, and it includes a 1920×1080 RGB camera.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:ToFcam.png | Time-of-flight camera (image from Wikimedia Commons).&lt;br /&gt;
File:Kinect2.png | Microsoft Kinect v2.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Structured light==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Structured-light_3D_scanner Structured light] sensors (like Kinect and Xtion) work by projecting a pattern of infrared light (for example, a grid of lines, or a &amp;quot;constellation&amp;quot; of points) onto the scene's objects. The pattern appears distorted when viewed from a perspective different from the projector's. By analysing this distortion, depth information can be retrieved and the surface(s) reconstructed.&lt;br /&gt;
&lt;br /&gt;
The resolution and speed of these sensors are similar to those of common VGA cameras, usually 640×480 at 30 FPS (effectively less because of the way this type of sensor works; Kinect uses 320×240 internally, and the rest of the data is extrapolated to &amp;quot;fill the gaps&amp;quot;). Precision is similar to that of ToF cameras, roughly 1 cm, with a maximum range of 3-6 m, but they tend to have trouble seeing small objects. They are much cheaper than the other sensors, with a first-generation Kinect now costing around $100, and hence they have become very popular in recent years. In the next tutorial, I will explain the installation and usage of one of these cameras.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;gallery widths=300px&amp;gt;&lt;br /&gt;
File:Kinect.jpg | Microsoft Kinect.&lt;br /&gt;
File:Xtion.jpg | ASUS Xtion PRO (the PRO LIVE model includes an RGB camera).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Structured_light.png|thumb|center|200px|Infrared light pattern (dots) projected by Kinect.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Point clouds=&lt;br /&gt;
&lt;br /&gt;
[[Image:Point_cloud.png|thumb|right|200px|Example of a point cloud.]]&lt;br /&gt;
&lt;br /&gt;
Depth sensors return 3D data in the form of a [http://en.wikipedia.org/wiki/Point_cloud ''point cloud'']. A point cloud is a set of points in three-dimensional space, each with its own XYZ coordinates. In the case of stereo, ToF or structured light cameras, every point corresponds to exactly one pixel of the captured image. Optionally, points can store additional information, like color (if the sensor has an RGB camera that it uses to texture them). If you have a background in 3D modelling software, you may know that point clouds can be converted to triangular meshes with certain algorithms, but this is rarely done; cloud processing is usually performed on the original set of data.&lt;br /&gt;
&lt;br /&gt;
==Point Cloud Library==&lt;br /&gt;
&lt;br /&gt;
[http://www.pointclouds.org/ Point Cloud Library (PCL)] is a project started in early 2010 by [http://www.willowgarage.com/ Willow Garage], the same robotics research company behind the [http://www.ros.org/ Robot Operating System (ROS)] and OpenCV (they also sell [http://www.willowgarage.com/robot/overview robots] like TurtleBot). The first version was [http://www.pointclouds.org/assets/pdf/pcl_icra2011.pdf fully introduced] in 2011, and it has been actively maintained ever since.&lt;br /&gt;
&lt;br /&gt;
PCL aims to be a one-for-all solution for point cloud and 3D processing. It is an open-source, multiplatform library divided into many [http://pointclouds.org/documentation/tutorials/walkthrough.php submodules] for different tasks, like visualization, filtering, segmentation, registration, searching, feature estimation... Additionally, it can be used to retrieve data from a list of compatible sensors, so it is an obvious choice for our tutorials, because you will not have to compile or install dozens of separate libraries.&lt;br /&gt;
&lt;br /&gt;
Apart from Willow Garage, PCL is backed by [http://www.openperception.org/ Open Perception], a non-profit organization founded by Radu Bogdan Rusu, one of the most active developers of the library. I recommend reading his very interesting [http://files.rbrusu.com/publications/RusuPhDThesis.pdf dissertation], written in 2009, which introduces and details many concepts and techniques that have been implemented in PCL.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
Over the course of the next tutorials I will tell you how to install PCL in a Linux environment, how to retrieve point clouds from a sensor like Kinect or Xtion, and how to work with them for tasks like object recognition. I will keep it nice and easy, just like I want tutorials to be, and leave the complex stuff aside (you can always look up the original paper if you want to learn more about some technique).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Go to root: [[PhD-3D-Object-Tracking]]&lt;br /&gt;
&lt;br /&gt;
Links to articles:&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 0: The very basics]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 1: Installing and testing]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 2: Cloud processing (basic)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 3: Cloud processing (advanced)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 4: 3D object recognition (descriptors)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI tutorial 5: 3D object recognition (pipeline)]]&lt;br /&gt;
&lt;br /&gt;
[[PCL/OpenNI troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Victorm</name></author>	</entry>

	</feed>