PCL/OpenNI tutorial 1: Installing and testing


Go to root: PhD-3D-Object-Tracking




Microsoft Kinect device.

This series of tutorials will explain the usage of a depth camera like Kinect for "serious" research purposes. As you may know, Kinect is in fact an affordable depth sensor, developed with technology from PrimeSense and based on the infrared structured light method. It also has a common camera (which makes it an RGB-D device), a microphone and a motorized pivot. Its use is not limited to playing with an Xbox 360 console: you can plug it into a computer and use it like any other sensor. Many open-source drivers and frameworks are available.

Since its release in November 2010, it has gained a lot of popularity, especially among the scientific community. Many researchers have procured one because, despite the low cost (about $100), it has proven to be a powerful solution for depth sensing projects. Current investigations focus on real-time surface mapping, object recognition and tracking, and localization. Impressive results (like the KinectFusion project from Microsoft) are already possible.

The new Xbox One ships with an upgraded version, Kinect v2, with enhanced resolution, that is able to detect your facial expression, measure your heart rate, and track every one of your fingers. A PC development-ready version (Kinect for Windows v2) was released in July 2014, but it could only be used with the official Windows SDK (open source support exists but is still young). In October 2014 an adapter that allows you to plug the standard Kinect v2 into a PC was released, so the development version of the sensor was discontinued in April 2015. Now you can just buy the standard one for the console plus the adapter.

I will explain the installation and usage of one of the "legacy" Kinect 1.0 devices with a common PC, and the possibilities it offers. I will do it in an easy-to-understand way, intended for students who have just acquired one and want to start from scratch.

Keep in mind that the software that we are going to use (the Point Cloud Library and the OpenNI drivers) will also let you use any other device like the Xtion PRO or Xtion PRO LIVE from ASUS (the PRO only has a depth sensor, the PRO LIVE is an RGB-D camera) without changing a line of code.

NOTE: The tutorials are written for Linux platforms. Also, 64-bit versions seem to work better than 32-bit.

Requirements

You will need the following:

  • A common Kinect device, out of the box. You can buy it in your local electronics shop, or online. It also includes a free copy of Kinect Adventures!, which is useless if you do not own the console. Microsoft has released a Kinect for Windows device, which is a normal-looking Kinect no longer compatible with Xbox 360 that will only work with their official SDK, intended for developers only. Also, as I stated earlier, you can use an ASUS Xtion interchangeably.
  • A computer running Linux (Debian or Ubuntu preferably).
  • A medium-sized room. Kinect has some limitations for depth measurement: 40 cm minimum, 8 m maximum (in practice, count on 6 m).

NOTE: Kinect for Windows may have problems working with open source drivers on Linux.

Connecting everything

Kinect does not work with a common USB port. Its power consumption is a bit higher because of the motor, so Microsoft came up with a connector that combines USB and power supply. Old Xbox 360 models needed a special adapter, while newer S models already have this port. Luckily, Kinect comes with the official adapter out of the box (otherwise you would have to buy one).

Just plug the adapter into any power socket, and the USB into your computer. Let's check by typing this in a terminal:

lsusb

Output should list the following devices:

Bus 001 Device 005: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor
Bus 001 Device 006: ID 045e:02ad Microsoft Corp. Xbox NUI Audio
Bus 001 Device 007: ID 045e:02ae Microsoft Corp. Xbox NUI Camera

If you are using an Xtion, you should see:

Bus 001 Device 004: ID 1d27:0601 ASUS

"0601" identifies the new Xtion model, "0600" the older one. Both should work the same. But try to avoid USB 3.0 ports!

Installing the software

There is more than one way to get your Kinect working on your PC, and start developing applications for it:

  • Kinect for Windows SDK and Developer Toolkit: released on June 16, 2011 as a non-commercial SDK intended for application development. Version 1.8, the last there will ever be now that Kinect v2 is out, was released in September 2013. Because it comes from Microsoft, it is obviously the easiest way to get everything working. Sadly, there is no Linux version.
  • libfreenect library: from the OpenKinect project, it is intended to be a free and open source alternative to the official drivers. libfreenect is used by projects like ofxKinect, an addon for the openFrameworks toolkit (and as of version 0.8, included in the core package) that runs on Linux and OS X. ofxKinect packs a nice example application to show the RGB and point cloud taken from Kinect.
  • PrimeSense drivers: they were released as open source after the OpenNI project was created, along with the motion tracking middleware, NITE, and the SDK. NI stands for Natural Interaction, and the project tried to enforce a common standard for human input using Kinect-like sensors. These official drivers are used by ROS (the Robot Operating System, a massive collection of libraries and tools for robotics researchers) and PCL (the Point Cloud Library, with everything needed for 3D point cloud processing). Sadly, version 2.0 of the OpenNI SDK dropped support for Kinect on Linux due to licensing issues, so for now PCL relies on legacy 1.x versions. Also, Apple bought PrimeSense in November 2013, and in April 2014 OpenNI's webpage was closed. The source is now being maintained by a third party.
  • SensorKinect: a modified version of the official PrimeSense drivers, used for example by ofxOpenNI (another openFrameworks addon). Last updated in 2012.

For this tutorial, we are going to use PCL with the OpenNI drivers, so owners of an Xtion can also get it to work.

Precompiled PCL for Ubuntu

If you have a valid installation of ROS (through their repository), you do not have to install anything. Both the OpenNI and PrimeSense drivers, as well as PCL, should already be installed. You can check it with:

sudo apt-get install libpcl-1.7-all libpcl-1.7-all-dev libopenni-dev libopenni-sensor-primesense-dev

The previous command should state that all packages are already installed (change the PCL version number as needed), or install them if not. If you get an error about overwriting some file, check this.
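If you prefer to check what is installed without touching anything, a generic dpkg query works too (the exact package names depend on your PCL and OpenNI versions):

dpkg -l | grep -E "libpcl|openni"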

If you do not want ROS, there is a PPA (Personal Package Archive, a private repository) that has everything we need. Add it to your sources, and install everything:

sudo add-apt-repository ppa:v-launchpad-jochen-sprickerhof-de/pcl
sudo apt-get update
sudo apt-get install build-essential cmake libpcl1.7 libpcl-dev pcl-tools

Trying to mix ROS and PCL repositories and packages can cause some errors, so choose one of them and stick with it. Check the PCL/OpenNI troubleshooting page because your Kinect may not work by default on 32-bit systems.
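As a quick sanity check after installing from the PPA, you can run one of the utilities shipped in the pcl-tools package; called without arguments, pcl_viewer should simply print its usage text:

pcl_viewer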

Compiling PCL from source

For Linux distributions without a precompiled version of PCL, you will need to compile it yourself. This actually has an advantage: you can customize the build options and choose what you want. Also, the resulting binaries and libraries should be a bit faster. And you always get the latest version! The instructions are here, but the steps are easy, so I will show them to you.

Installing the dependencies

Some of PCL's dependencies can be installed via the package manager. Others will require additional work.

sudo apt-get install build-essential cmake cmake-curses-gui libboost-all-dev libeigen3-dev libflann-dev libvtk5-dev libvtk5-qt4-dev libglew-dev libxmu-dev libsuitesparse-dev libqhull-dev libpcap-dev libxi-dev libgtest-dev libqt4-opengl-dev

The trunk (1.8) version of PCL uses VTK 6 and Qt 5, so if you intend to compile it, you must install the following packages (say yes if you are asked to remove VTK 5 and Qt 4 packages):

sudo apt-get install libvtk6-dev libqt5opengl5-dev

JDK

OpenNI requires Sun's official JDK (Java Development Kit), which is no longer available in the official apt repositories. You can use unofficial ones, or do it manually. For the latter method, go to the Java SE downloads page (SE means Standard Edition) and download the latest version (e.g., jdk-8u66-linux-x64.tar.gz). Extract it, then move the contents to /usr/lib/jvm/ so it is available system-wide:

sudo mkdir -p /usr/lib/jvm/
sudo cp -r jdk1.8.0_66/ /usr/lib/jvm/

Then, make it the default choice to compile and run Java programs. Remember to change the version number as needed!

sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.8.0_66/bin/java" 1
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.8.0_66/bin/javac" 1
sudo update-alternatives --install "/usr/bin/jar" "jar" "/usr/lib/jvm/jdk1.8.0_66/bin/jar" 1

To be sure, use the following commands to manually select the correct option, in case there is more than one choice:

sudo update-alternatives --config java
sudo update-alternatives --config javac
sudo update-alternatives --config jar

If you are still not sure, run them and display the version, making sure it is the one you installed:

java -version
javac -version

Sun's JDK is now installed.
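Some build scripts also look for the JAVA_HOME environment variable. Setting it is usually not required, but it does no harm; a minimal sketch, assuming the same version number as above:

echo 'export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_66' >> ~/.bashrc
source ~/.bashrc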

OpenNI

PCL uses OpenNI and the PrimeSense drivers to get data from devices like Kinect or Xtion. It is optional, but it would not make much sense not to install it, would it? If you are using Ubuntu, add the PPA and install libopenni-dev and libopenni-sensor-primesense-dev, which should already be done. Otherwise, fetch the OpenNI and PrimeSense Sensor sources from GitHub (download them as .zip; the link is on the right). Extract them, and install the dependencies:

sudo apt-get install python libusb-1.0-0-dev freeglut3-dev doxygen graphviz

Go to the directory where you extracted OpenNI (OpenNI-master/ for me), and open a terminal in the Platform/Linux/CreateRedist/ subdirectory. Issue:

./RedistMaker

When it finishes, and if there are no errors (check the PCL/OpenNI troubleshooting page if you get any), go to Platform/Linux/Redist/OpenNI-Bin-Dev-Linux-x64-v1.5.7.10/ (or your equivalent), and install (you must be root):

sudo ./install.sh

Now, go to the directory where you extracted the PrimeSense drivers (Sensor-master/ for me), and repeat the exact same procedure (go to Platform/Linux/CreateRedist/, issue ./RedistMaker, go to Platform/Linux/Redist/Sensor-Bin-Linux-x64-v5.1.6.6/, issue sudo ./install.sh). Congratulations, you have now installed OpenNI.
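As a quick sanity check (paths and tool names may vary slightly between OpenNI releases), you can verify that the core library was copied to the system library directory and, if the niReg tool was installed, list the registered modules:

ls /usr/lib/libOpenNI*
niReg -l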

CUDA

Like OpenNI, nVidia CUDA is an optional dependency that will allow PCL to use your GPU (Graphics Processing Unit, that is, your graphics card) for certain computations. It is mandatory for tools like KinFu (do not bother unless you have at least a series 400 card with 1.5 GB of VRAM).

Some distributions provide packages for CUDA in the repositories. For example, in Ubuntu:

sudo apt-get install nvidia-cuda-dev nvidia-cuda-toolkit

If you want to install it manually (which is incompatible with the previous method), go to the CUDA downloads page, which is self-explanatory, and get the small .deb file or the huge .run toolkit/SDK installer file for your system (you should have installed the nVidia drivers already, but the installer will also do it for you if needed).

If you chose the .deb file, install it using the method of your choice, such as the Gdebi package manager, or through the console:

sudo dpkg -i <package.deb>

The .deb does not contain the whole CUDA toolkit; it just adds nVidia's repository to your software lists. Now you must install everything:

sudo apt-get update
sudo apt-get install cuda

If, on the other hand, you downloaded the .run, give it execution permissions:

chmod +x cuda_7.0.28_linux.run

And install it. You can use the default options, although if you have a working nVidia graphics driver for your card, you may want to say "no" when the installer offers to install it for you:

sudo ./cuda_7.0.28_linux.run

The environment variables need to be changed so your system can find CUDA's libraries and binaries. Open /etc/ld.so.conf:

sudo nano /etc/ld.so.conf

And append one of these two lines:

/usr/local/cuda/lib64 # Add this on 64-bit only.
/usr/local/cuda/lib # Add this on 32-bit only.

Save with Ctrl+O and Enter, exit with Ctrl+X. Reload the cache of the dynamic linker with:

sudo ldconfig

Now, append CUDA's bin directory to your PATH. Do this by editing your local .bashrc file:

nano ~/.bashrc

And append this line:

export PATH=$PATH:/usr/local/cuda/bin

CUDA is now installed.
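Open a new terminal (or run source ~/.bashrc) so the updated PATH takes effect, then verify that the CUDA compiler and the driver are reachable (nvidia-smi needs a working nVidia driver):

nvcc --version
nvidia-smi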

Getting the source

Every dependency is installed. Time to download PCL's source code. First, you must choose whether to install the stable or the experimental branch of PCL. The stable branch is the latest official release and is guaranteed to work without problems. The experimental branch may occasionally give you a compilation error, but it has some interesting features that stable users will have to wait some months for. Apart from that, both are built the same way.

To get the stable version, go to the downloads page, get pcl-pcl-1.7.2.tar.gz or whatever the latest release is, and extract it somewhere. For the experimental version, use Git:

sudo apt-get install git
git clone https://github.com/PointCloudLibrary/pcl PCL-trunk-Source

The compiled trunk version of PCL will take up more than 8 GB, so make sure you put the source folder in a partition with enough free space!
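Before cloning or extracting, you can check how much space is left on the target partition with:

df -h .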

Compiling

Go to the PCL source directory (pcl-pcl-1.7.2/ or PCL-trunk-Source/), and create a new subdirectory to keep the build files in:

mkdir build
cd build

Now it is time to configure the project using CMake. We will tell it to build in Release (fully optimized, no debug capabilities) mode now, and customize the rest of the options later:

cmake -DCMAKE_BUILD_TYPE=Release ..

CMake should be able to find every dependency, and thus to build every subsystem except for the ones marked as "Disabled by default". If you are happy with the defaults, you can build now; otherwise, let's invoke CMake's curses interface to change a couple of things (mind the final dot):

ccmake .


Interface of ccmake.


Here you can change the build options. The program usage can be found at the bottom of the screen. Try turning all functionality "ON". The most important thing, in case you want to use CUDA, is to enable it and give CMake the path to your SDK. Go to the "CUDA_SDK_ROOT_DIR" option and enter the correct path (probably /usr/local/cuda/ or similar).

When you are done, press C to configure and G to generate and exit the tool. Sometimes, the options you change can activate previously omitted parameters, or prompt some warning text. Just press E when you are finished reading the message, and keep pressing C until it lets you generate (new parameters will be marked with an asterisk, so you can check them and decide whether or not you want further customization).
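If you prefer to skip the curses interface, the same options can be passed directly on the CMake command line with -D flags. The exact option names can change between PCL versions (check them in ccmake first), but a sketch for a CUDA-enabled build could look like this:

cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_CUDA=ON -DBUILD_GPU=ON -DCUDA_SDK_ROOT_DIR=/usr/local/cuda ..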

If you are done configuring, it is time to build:

make

NOTE: Additionally, you can append the parameter -jX to speed up the compilation, X being the number of cores or processors of your PC, plus one.
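For example, on a quad-core machine you would run make -j5. You can also let the shell work it out from the number of available cores:

make -j$(($(nproc) + 1))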

Remember that, at any time, you can manually force the project to be reconfigured and built from scratch by emptying the build/ directory with:

rm -rf ./*

Installing

It will take some time to compile PCL (up to a few hours if your PC is not powerful enough). When it is finished, install it system-wide with:

sudo make install

You should then reboot and proceed to the next section, to see if your computer now recognizes (and uses) your Kinect device.
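If the freshly installed libraries are not picked up when you later build or run programs, refreshing the dynamic linker cache before rebooting usually helps:

sudo ldconfig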

Testing (OpenNI viewer)

We are going to write a simple example program that will fetch data from the Kinect or Xtion and present it to the user, using the PCL library. It will also allow you to save the current frame (as a point cloud) to disk. If you feel lazy, you can download it here, and skip the next two sections. Otherwise, create a new directory anywhere on your hard disk and proceed.

CMakeLists.txt

Inside that directory, create a new text file named CMakeLists.txt. PCL-based programs use the CMake build system, too. Open it with any editor and paste the following content:

cmake_minimum_required(VERSION 2.8 FATAL_ERROR)
 
project(PCL_openni_viewer)
 
find_package(PCL 1.7 REQUIRED)
 
include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})
 
set(PCL_BUILD_TYPE Release)
 
file(GLOB PCL_openni_viewer_SRC
    "src/*.h"
    "src/*.cpp"
)
add_executable(openniViewer ${PCL_openni_viewer_SRC})
 
target_link_libraries (openniViewer ${PCL_LIBRARIES})

CMake syntax is quite self-explanatory. We ask for a CMake installation, version 2.8 minimum. We declare a new project named "PCL_openni_viewer". We tell CMake to check for the presence of the PCL library development files, version 1.7. If our system cannot meet the CMake and PCL version requirements, the process will fail.

Next, we feed the compiler and linker the directories where PCL includes and libraries can be found, and the defined symbols. We tell CMake to use the "Release" build type, which will activate certain optimizations depending on the compiler we use. Other build types are available, like "Debug", "MinSizeRel", and "RelWithDebInfo".

Finally, we create a variable, "PCL_openni_viewer_SRC", that will store the list of files to be compiled (though we will only have one). We create a new binary to be compiled from these source files, and we link it with the PCL library.

Check the CMake help for more interesting options.

main.cpp

We told CMake it could find the source files in a src/ subdirectory, so let's keep to our word and create it. Then, add a new main.cpp file inside and paste the following lines:

// Original code by Geoffrey Biggs, taken from the PCL tutorial in
// http://pointclouds.org/documentation/tutorials/pcl_visualizer.php

// Simple OpenNI viewer that also allows to write the current scene to a .pcd
// when pressing SPACE.

#include <pcl/io/openni_grabber.h>
#include <pcl/io/pcd_io.h>
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/console/parse.h>

#include <iostream>
#include <sstream>

using namespace std;
using namespace pcl;

PointCloud<PointXYZRGBA>::Ptr cloudptr(new PointCloud<PointXYZRGBA>); // A cloud that will store color info.
PointCloud<PointXYZ>::Ptr fallbackCloud(new PointCloud<PointXYZ>);    // A fallback cloud with just depth data.
boost::shared_ptr<visualization::CloudViewer> viewer;                 // Point cloud viewer object.
Grabber* openniGrabber;                                               // OpenNI grabber that takes data from the device.
unsigned int filesSaved = 0;                                          // For the numbering of the clouds saved to disk.
bool saveCloud(false), noColor(false);                                // Program control.

void
printUsage(const char* programName)
{
	cout << "Usage: " << programName << " [options]"
		 << endl
		 << endl
		 << "Options:\n"
		 << endl
		 << "\t<none>     start capturing from an OpenNI device.\n"
		 << "\t-v FILE    visualize the given .pcd file.\n"
		 << "\t-h         shows this help.\n";
}

// This function is called every time the device has new data.
void
grabberCallback(const PointCloud<PointXYZRGBA>::ConstPtr& cloud)
{
	if (! viewer->wasStopped())
		viewer->showCloud(cloud);

	if (saveCloud)
	{
		stringstream stream;
		stream << "inputCloud" << filesSaved << ".pcd";
		string filename = stream.str();
		if (io::savePCDFile(filename, *cloud, true) == 0)
		{
			filesSaved++;
			cout << "Saved " << filename << "." << endl;
		}
		else PCL_ERROR("Problem saving %s.\n", filename.c_str());

		saveCloud = false;
	}
}

// For detecting when SPACE is pressed.
void
keyboardEventOccurred(const visualization::KeyboardEvent& event,
					  void* nothing)
{
	if (event.getKeySym() == "space" && event.keyDown())
		saveCloud = true;
}

// Creates, initializes and returns a new viewer.
boost::shared_ptr<visualization::CloudViewer>
createViewer()
{
	boost::shared_ptr<visualization::CloudViewer> v
	(new visualization::CloudViewer("OpenNI viewer"));
	v->registerKeyboardCallback(keyboardEventOccurred);

	return (v);
}

int
main(int argc, char** argv)
{
	if (console::find_argument(argc, argv, "-h") >= 0)
	{
		printUsage(argv[0]);
		return -1;
	}

	bool justVisualize(false);
	string filename;
	if (console::find_argument(argc, argv, "-v") >= 0)
	{
		if (argc != 3)
		{
			printUsage(argv[0]);
			return -1;
		}

		filename = argv[2];
		justVisualize = true;
	}
	else if (argc != 1)
	{
		printUsage(argv[0]);
		return -1;
	}

	// First mode, open and show a cloud from disk.
	if (justVisualize)
	{
		// Try with color information...
		try
		{
			io::loadPCDFile<PointXYZRGBA>(filename.c_str(), *cloudptr);
		}
		catch (PCLException e1)
		{
			try
			{
				// ...and if it fails, fall back to just depth.
				io::loadPCDFile<PointXYZ>(filename.c_str(), *fallbackCloud);
			}
			catch (PCLException e2)
			{
				return -1;
			}

			noColor = true;
		}

		cout << "Loaded " << filename << "." << endl;
		if (noColor)
			cout << "This cloud has no RGBA color information present." << endl;
		else cout << "This cloud has RGBA color information present." << endl;
	}
	// Second mode, start fetching and displaying frames from the device.
	else
	{
		openniGrabber = new OpenNIGrabber();
		if (openniGrabber == 0)
			return -1;
		boost::function<void (const PointCloud<PointXYZRGBA>::ConstPtr&)> f =
			boost::bind(&grabberCallback, _1);
		openniGrabber->registerCallback(f);
	}

	viewer = createViewer();

	if (justVisualize)
	{
		if (noColor)
			viewer->showCloud(fallbackCloud);
		else viewer->showCloud(cloudptr);
	}
	else openniGrabber->start();

	// Main loop.
	while (! viewer->wasStopped())
		boost::this_thread::sleep(boost::posix_time::seconds(1));

	if (! justVisualize)
		openniGrabber->stop();
}

Save and close.

Compiling

Follow the same steps you used to build PCL. That is, create a new build/ subdirectory next to the src/ one. Open a terminal there and issue:

cmake -DCMAKE_BUILD_TYPE=Release ..
make

Executing

Still from the same terminal, run the compiled example program:

./openniViewer

After some seconds, the main window will appear and the application will start grabbing frames from the device. You can inspect the current point cloud using the mouse: hold the left button to rotate, the right one (or the mouse wheel) to zoom, and the middle one to pan the camera around. At first, you may see only a black screen, or some big colored axes, but no cloud. Try zooming out to see the whole scene. Another useful key is R, which resets the camera parameters when pressed. Use it whenever you notice that zooming has gotten slow after some camera movement, or if you still cannot see the cloud. See the PCLVisualizer tutorial for additional controls and features.

Whenever you feel ready, press the SPACE key. The program will pause for a fraction of a second and the output "Saved inputCloud0.pcd." will appear on the console. Check the current folder to see that the file inputCloud0.pcd has indeed been written. You can now close the program with Q or Alt+F4.

Next, run it again giving the following parameter:

./openniViewer -v inputCloud0.pcd

This will tell the program not to take data from the device, but from the saved point cloud file instead. After it loads, you will see the same scene you saved to disk.

NOTE: PCD data is saved relative to the sensor. No matter how much you have manipulated the view, it will reset to default when you load the file.
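You can also inspect the saved cloud without any code of your own, using the stand-alone viewer that ships with PCL (pcl_viewer, from the pcl-tools package or your own build):

pcl_viewer inputCloud0.pcd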

Conclusion

At this point, your Kinect device should be working and getting depth data for you. There is a collection of excellent tutorials for PCL on the official webpage. I encourage you to finish them all before proceeding with your experiments with the Kinect sensor. You can also find a good introduction/tutorial to the PCL library here.

If you use an ASUS Xtion PRO device instead, you should have gotten everything to work without problems or additional steps (except maybe for this one).




Go to root: PhD-3D-Object-Tracking

Links to articles:

PCL/OpenNI tutorial 0: The very basics

PCL/OpenNI tutorial 1: Installing and testing

PCL/OpenNI tutorial 2: Cloud processing (basic)

PCL/OpenNI tutorial 3: Cloud processing (advanced)

PCL/OpenNI tutorial 4: 3D object recognition (descriptors)

PCL/OpenNI tutorial 5: 3D object recognition (pipeline)

PCL/OpenNI troubleshooting