PCL/OpenNI tutorial 1: Installing and testing

Microsoft Kinect device.

This series of tutorials will explain the usage of a Kinect device for "serious" research purposes. As you may know, Kinect is in fact an affordable depth sensor, developed with technology from PrimeSense and based on the infrared structured light method. It also has a common camera (which makes it an RGB-D device), a microphone and a motorized pivot. Its use is not limited to playing with an Xbox 360 console: you can plug it into a computer and use it like any other sensor. Many open-source drivers and frameworks are available.

Since its release in November 2010, it has gained a lot of popularity, especially among the scientific community. Many researchers have procured one because, despite its low cost (about 150 €), it has proven to be a powerful solution for depth-sensing projects. Current research focuses on real-time surface mapping, object recognition and tracking, and localization. Impressive results (like Microsoft's KinectFusion project) are already possible.

I will explain the installation and usage of one of these Kinect devices with a common PC, and the possibilities it offers. I will do it in an easy-to-understand way, intended for students who have just acquired one and want to start from scratch.

NOTE: The tutorials are written for Linux platforms. Also, 64-bit versions seem to work better than 32-bit.

Requirements

You will need the following:

  • A common Kinect device, out of the box. You can buy it at your local electronics shop or online. It also includes a free copy of Kinect Adventures, which is useless if you do not own the console. Microsoft has released a Kinect for Windows device, a normal-looking Kinect that is no longer compatible with the Xbox 360 and will only work with their official SDK, intended for developers only.
  • A computer running Linux (Debian or Ubuntu preferably).
  • A medium-sized room. Kinect has some limitations for depth measurement: 40 cm minimum, 8 m maximum (in practice, count on 6 m).

NOTE: Kinect for Windows may have problems working with open-source drivers on Linux.

Connecting everything

Kinect does not work with a common USB port. Its power consumption is a bit higher because of the motor, so Microsoft came up with a connector that combines USB and a power supply. Old Xbox 360 models needed a special adapter; new ones already have this port. Luckily, Kinect comes with the official adapter in the box (otherwise you would have to buy one).

Just plug the adapter into any power socket, and the USB connector into your computer. Let's check that it is detected by typing this in a terminal:

<geshi lang=Bash lines=0>lsusb</geshi>

Output should list the following devices:

<geshi lang=Bash lines=0>Bus 001 Device 005: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor
Bus 001 Device 006: ID 045e:02ad Microsoft Corp. Xbox NUI Audio
Bus 001 Device 007: ID 045e:02ae Microsoft Corp. Xbox NUI Camera</geshi>
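
If the list is long, you can filter it (this just combines lsusb with grep, nothing Kinect-specific):

<geshi lang=Bash lines=0>lsusb | grep -i microsoft</geshi>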

Installing the software

There is more than one way to get your Kinect working on your PC, and start developing applications for it:

  • Kinect for Windows SDK: released on June 16, 2011 as a non-commercial SDK intended for application development. Version 1.5 was released on May 21, 2012. Because it comes from Microsoft, it is obviously the easiest way to get everything working. Sadly, there is no Linux version.
  • libfreenect library: from the OpenKinect project, it is intended to be a free and open source alternative to the official drivers. libfreenect is used by projects like ofxKinect, an addon for the openFrameworks toolkit that runs on Linux and OS X. ofxKinect packs a nice example application to show the RGB and point cloud taken from Kinect.
  • PrimeSense drivers: they were released as open source after the OpenNI project was created, along with the motion-tracking middleware, NITE. NI stands for Natural Interaction, and the project tries to establish a common standard for human input using Kinect-like sensors. These official drivers are used by ROS (the Robot Operating System, a massive collection of libraries and tools for robotics researchers) and PCL (the Point Cloud Library, with everything needed for 3D point cloud processing).
  • SensorKinect: a modified version of the official PrimeSense drivers, used for example by ofxOpenNI (another openFrameworks addon).

For this tutorial, we are going to use PCL.

Precompiled PCL for Ubuntu

There is a PPA (Personal Package Archive, a private repository) which has everything we need. Add it to your sources, and install everything:

<geshi lang=Bash lines=0>sudo add-apt-repository ppa:v-launchpad-jochen-sprickerhof-de/pcl
sudo apt-get update
sudo apt-get install build-essential libpcl-all libpcl-all-dev openni-dev ps-engine cmake -y</geshi>
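
To double-check that everything arrived, you can ask dpkg which of these packages ended up installed (exact package names may vary slightly between PPA releases):

<geshi lang=Bash lines=0>dpkg -l | grep -E "pcl|openni|ps-engine"</geshi>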

Compiling PCL from source

For Linux distributions without a precompiled version of PCL, you will need to compile it yourself. This actually has an advantage: you can customize the build options and choose what you want. Also, the resulting binaries and libraries should be a bit faster. The instructions are here, but the steps are easy, so I will show them to you.

First, you must choose whether to install the stable or the experimental branch of PCL. The stable branch is the latest official release and is guaranteed to work without problems. The experimental branch may occasionally give you a compilation error, but you can find some interesting features there that stable users will have to wait some months for. Apart from that, both are built the same way.

Installing the dependencies

Some of PCL's dependencies can be installed via the package manager. Others will require additional work.

<geshi lang=Bash lines=0>sudo apt-get install build-essential libboost-all-dev libeigen3-dev libflann-dev libvtk5-dev libvtk5-qt4-dev libglew-dev libxmu-dev libsuitesparse-dev libqhull-dev cmake cmake-curses-gui -y</geshi>

OpenNI

PCL uses OpenNI and the PrimeSense drivers to get data from the Kinect. It is optional, but it would not make much sense not to install it, would it? If you are using Ubuntu, add the PPA above and install openni-dev and ps-engine. Otherwise, go to the OpenNI download page and get the OpenNI and PrimeSense Sensor sources. Extract them, and install the dependencies:

<geshi lang=Bash lines=0>sudo apt-get install python libusb-1.0-0-dev freeglut3-dev doxygen graphviz -y</geshi>

You are not done yet. OpenNI requires Sun's official JDK (Java Development Kit), which is no longer available in the apt repositories. Go to the Java SE downloads page (SE means Standard Edition) and download the latest version (e.g., jdk-7u7-linux-x64.tar.gz). Extract it, then move the contents to /usr/lib/jvm/ so it is available system-wide:

<geshi lang=Bash lines=0>sudo mkdir -p /usr/lib/jvm/
sudo cp -r jdk1.7.0_07/ /usr/lib/jvm/</geshi>

Then, make it the default choice to compile and run Java programs:

<geshi lang=Bash lines=0>sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.7.0_07/bin/java" 1
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.7.0_07/bin/javac" 1
sudo update-alternatives --install "/usr/bin/jar" "jar" "/usr/lib/jvm/jdk1.7.0_07/bin/jar" 1</geshi>

To check which alternative is selected, use:

<geshi lang=Bash lines=0>sudo update-alternatives --config java
sudo update-alternatives --config javac
sudo update-alternatives --config jar</geshi>
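
Both tools should now report the version you just installed (1.7.0_07 in this example); you can verify it with:

<geshi lang=Bash lines=0>java -version
javac -version</geshi>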

Sun's JDK is now installed. Go to the directory where you extracted OpenNI (OpenNI-OpenNI-3d355ac/ in my case), and open a terminal in the Platform/Linux/CreateRedist/ subdirectory. Issue:

<geshi lang=Bash lines=0>./RedistMaker</geshi>

When it finishes, and if there are no errors, go to Platform/Linux/Redist/OpenNI-Bin-Dev-Linux-x64-v1.5.2.23/ (or your equivalent), and install (you must be root):

<geshi lang=Bash lines=0>sudo ./install.sh</geshi>

Now, go to the directory where you extracted the PrimeSense drivers (PrimeSense-Sensor-fc51d0a/ for me), and repeat the exact same procedure (go to Platform/Linux/CreateRedist/, issue ./RedistMaker, go to Platform/Linux/Redist/Sensor-Bin-Linux-x64-v5.1.0.41/, issue sudo ./install.sh). Congratulations, you have now installed OpenNI.
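
If you want a quick sanity check before moving on: install.sh normally puts OpenNI's sample binaries on your PATH, so (assuming the samples were built) running the NiViewer sample with the Kinect plugged in should show the live depth stream:

<geshi lang=Bash lines=0>NiViewer</geshi>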

CUDA

Like OpenNI, nVidia CUDA is an optional dependency that allows PCL to use your GPU (Graphics Processing Unit, that is, your graphics card) for certain computations. It is mandatory for tools like KinFu (do not bother unless you have at least a GeForce 400 series card with 1.5 GB of memory).

Go to the CUDA download page, which is self-explanatory, and get the toolkit and the SDK for your system (the drivers you already have installed, right?). Give them execute permissions:

<geshi lang=Bash lines=0>chmod +x cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
chmod +x gpucomputingsdk_4.2.9_linux.run</geshi>

And install them. You can use the default options:

<geshi lang=Bash lines=0>sudo ./cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
sudo ./gpucomputingsdk_4.2.9_linux.run</geshi>

Just as the installer output warns you, some additional steps are needed. Open /etc/ld.so.conf:

<geshi lang=Bash lines=0>sudo nano /etc/ld.so.conf</geshi>

And append these two lines:

<geshi lang=Bash lines=0>/usr/local/cuda/lib64 # For 64-bit only, comment it otherwise
/usr/local/cuda/lib</geshi>

Save with Ctrl+O and Enter, exit with Ctrl+X. Reload the cache of the dynamic linker with:

<geshi lang=Bash lines=0>sudo ldconfig</geshi>

Now, append CUDA's bin directory to your PATH. Do this by editing your local .bashrc file:

<geshi lang=Bash lines=0>nano ~/.bashrc</geshi>

And append this line:

<geshi lang=Bash lines=0>export PATH=$PATH:/usr/local/cuda/bin</geshi>

CUDA is now installed.
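
To make the new PATH entry available in your current terminal and check that the toolkit is reachable, you can reload .bashrc and ask nvcc (the CUDA compiler shipped with the toolkit) for its version:

<geshi lang=Bash lines=0>source ~/.bashrc
nvcc --version</geshi>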

Getting the source

To get the stable version, go to the downloads page, get PCL-1.6.0-Source.tar.bz2 or whatever the latest release is, and extract it somewhere. For the experimental version, use Subversion:

<geshi lang=Bash lines=0>sudo apt-get install subversion -y
svn co http://svn.pointclouds.org/pcl/trunk PCL-trunk-Source</geshi>

Compiling

Go to the PCL source directory (PCL-1.6.0-Source/ or PCL-trunk-Source/ in my case), and create a new subdirectory to keep the build files in:

<geshi lang=Bash lines=0>mkdir build
cd build</geshi>

Now it is time to configure the project using CMake. We will tell it to build in Release (fully optimized, no debug capabilities) mode now, and customize the rest of the options later:

<geshi lang=Bash lines=0>cmake -DCMAKE_BUILD_TYPE=Release ..</geshi>

CMake should be able to find every dependency and thus build every subsystem except the ones marked as "Disabled by default". If you are happy with that, you can build now; otherwise, let's invoke CMake's curses interface to change a couple of things (mind the final dot):

<geshi lang=Bash lines=0>ccmake .</geshi>

File:Ccmake GUI 2.png
Interface of ccmake.

Here you can change the build options. The program usage can be found at the bottom of the screen. Try turning all functionality "ON". The most important thing, in case you want to use CUDA, is to enable it and give CMake the path to your SDK. Go to the "CUDA_SDK_ROOT_DIR" option and enter the correct path (mine was /home/me/NVIDIA_GPU_Computing_SDK/).

When you are done, press C to configure and G to generate and exit the tool. Sometimes, the options you change can activate previously omitted parameters, or prompt some warning text. Just press E when you are finished reading the message, and keep pressing C until it lets you generate (new parameters will be marked with an asterisk, so you can check them and decide whether or not you want further customization).
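
If you already know which options you want, you can also pass them straight to cmake instead of going through ccmake. A sketch, assuming you want the CUDA/GPU modules (the exact option names, such as BUILD_CUDA and BUILD_GPU, may differ between PCL versions, so confirm them in ccmake first; CUDA_SDK_ROOT_DIR is the option described above):

<geshi lang=Bash lines=0>cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_CUDA=ON -DBUILD_GPU=ON -DCUDA_SDK_ROOT_DIR=$HOME/NVIDIA_GPU_Computing_SDK ..</geshi>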

If you are done configuring, it is time to build:

<geshi lang=Bash lines=0>make</geshi>

NOTE: Additionally, you can append the parameter -jX to speed up the compilation, X being the number of cores or processors of your PC, plus one.
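
For example, on a quad-core machine:

<geshi lang=Bash lines=0>make -j5</geshi>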

Remember that, at any time, you can manually force the project to be reconfigured and built from scratch by emptying the build/ directory with:

<geshi lang=Bash lines=0>rm -rf ./*</geshi>

Installing

It will take some time to compile PCL (up to a few hours if your PC is not powerful enough). When it is finished, install it system-wide with:

<geshi lang=Bash lines=0>sudo make install</geshi>

You should then reboot and proceed to the next section, to see whether your computer now recognizes (and uses) your Kinect device.
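
If you would rather test it before rebooting, refreshing the dynamic linker cache is usually enough for the freshly installed libraries to be found:

<geshi lang=Bash lines=0>sudo ldconfig</geshi>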

Testing

We are going to write a simple example program that fetches data from the Kinect and presents it to the user, using the PCL library. It will also allow you to save the current frame (as a point cloud) to disk. So, create a new directory anywhere on your hard disk.
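
For example (the name and location are up to you; I will use this one in what follows):

<geshi lang=Bash lines=0>mkdir ~/kinectPCLviewer
cd ~/kinectPCLviewer</geshi>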

CMakeLists.txt

Inside that directory, create a new text file named CMakeLists.txt. PCL-based programs use the CMake build system, too. Open it with any editor and paste the following content:

<geshi lang=CMake lines=0>cmake_minimum_required(VERSION 2.8 FATAL_ERROR)

project(kinect_PCL_viewer)

find_package(PCL 1.6 REQUIRED)

include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})

set(PCL_BUILD_TYPE Release)

file(GLOB kinectpclviewer_SRC
    "src/*.h"
    "src/*.cpp"
)

add_executable(kinectPCLviewer ${kinectpclviewer_SRC})

target_link_libraries(kinectPCLviewer ${PCL_LIBRARIES})</geshi>

CMake syntax is quite self-explanatory. We require a CMake installation of at least version 2.8. We declare a new project named "kinect_PCL_viewer". We tell CMake to check for the presence of the PCL development files, version 1.6. If our system cannot meet the CMake and PCL version requirements, the process will fail.

Next, we tell the compiler and linker where the PCL headers and libraries can be found, and which symbols are defined. We tell CMake to use the "Release" build type, which activates certain optimizations depending on the compiler. Other build types are available, like "Debug", "MinSizeRel" and "RelWithDebInfo".

Finally, we create a variable, "kinectpclviewer_SRC", that stores the list of files to be compiled (though we will only have one). We create a new binary to be compiled from these source files, and we link it against the PCL libraries.

Check the CMake help for more interesting options.

main.cpp

We told CMake it could find the source files in a src/ subdirectory, so let's keep to our word and create it. Then, add a new main.cpp file and paste the following lines:


<geshi lang=CPP lines=0>// Original code by Geoffrey Biggs, taken from the PCL tutorial in
// http://pointclouds.org/documentation/tutorials/pcl_visualizer.php

// Simple Kinect viewer that also allows writing the current scene to a .pcd
// file when SPACE is pressed.

#include <iostream>
#include <sstream>

#include <pcl/io/openni_grabber.h>
#include <pcl/io/pcd_io.h>
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/console/parse.h>

using namespace std;
using namespace pcl;

PointCloud<PointXYZRGBA>::Ptr cloudptr(new PointCloud<PointXYZRGBA>); // A cloud that will store colour info.
PointCloud<PointXYZ>::Ptr fallbackCloud(new PointCloud<PointXYZ>);    // A fallback cloud with just depth data.
boost::shared_ptr<visualization::CloudViewer> viewer;                 // Point cloud viewer object.
Grabber* kinectGrabber;                                               // OpenNI grabber that takes data from Kinect.
unsigned int filesSaved = 0;                                          // For the numbering of the clouds saved to disk.
bool saveCloud(false), noColour(false);                               // Program control.

void printUsage(const char* programName)
{
    cout << "Usage: " << programName << " [options]"
         << endl
         << endl
         << "Options:\n"
         << endl
         << "\t<none>     start capturing from a Kinect device.\n"
         << "\t-v NAME    visualize the given .pcd file.\n"
         << "\t-h         shows this help.\n";
}

// This function is called every time the Kinect has new data.
void grabberCallback(const PointCloud<PointXYZRGBA>::ConstPtr& cloud)
{
    if (! viewer->wasStopped())
        viewer->showCloud(cloud);

    if (saveCloud)
    {
        stringstream stream;
        stream << "inputCloud" << filesSaved << ".pcd";
        string filename = stream.str();
        if (io::savePCDFile(filename, *cloud, true) == 0)
        {
            filesSaved++;
            cout << "Saved " << filename << "." << endl;
        }
        else
            PCL_ERROR("Problem saving %s.\n", filename.c_str());

        saveCloud = false;
    }
}

// For detecting when SPACE is pressed.
void keyboardEventOccurred(const visualization::KeyboardEvent& event,
                           void* nothing)
{
    if (event.getKeySym() == "space" && event.keyDown())
        saveCloud = true;
}

// Creates, initializes and returns a new viewer.
boost::shared_ptr<visualization::CloudViewer> createViewer()
{
    boost::shared_ptr<visualization::CloudViewer> v
        (new visualization::CloudViewer("3D Viewer"));
    v->registerKeyboardCallback(keyboardEventOccurred);

    return (v);
}

int main(int argc, char** argv)
{
    if (console::find_argument(argc, argv, "-h") >= 0)
    {
        printUsage(argv[0]);
        return 0;
    }

    bool justVisualize(false);
    string filename;
    if (console::find_argument(argc, argv, "-v") >= 0)
    {
        if (argc != 3)
        {
            printUsage(argv[0]);
            return 0;
        }

        filename = argv[2];
        justVisualize = true;
    }
    else if (argc != 1)
    {
        printUsage(argv[0]);
        return 0;
    }

    // First mode, open and show a cloud from disk.
    if (justVisualize)
    {
        // Try with colour information...
        try
        {
            io::loadPCDFile<PointXYZRGBA>(filename.c_str(), *cloudptr);
        }
        catch (PCLException e1)
        {
            try
            {
                // ...and if it fails, fall back to just depth.
                io::loadPCDFile<PointXYZ>(filename.c_str(), *fallbackCloud);
            }
            catch (PCLException e2)
            {
                return -1;
            }

            noColour = true;
        }

        cout << "Loaded " << filename << "." << endl;
        if (noColour)
            cout << "This file has no RGBA colour information present." << endl;
    }
    // Second mode, start fetching and displaying frames from Kinect.
    else
    {
        kinectGrabber = new OpenNIGrabber();
        if (kinectGrabber == 0)
            return -1;
        boost::function<void (const PointCloud<PointXYZRGBA>::ConstPtr&)> f =
            boost::bind(&grabberCallback, _1);
        kinectGrabber->registerCallback(f);
    }

    viewer = createViewer();

    if (justVisualize)
    {
        if (noColour)
            viewer->showCloud(fallbackCloud);
        else
            viewer->showCloud(cloudptr);
    }
    else
        kinectGrabber->start();

    // Main loop.
    while (! viewer->wasStopped())
        boost::this_thread::sleep(boost::posix_time::seconds(1));

    if (! justVisualize)
        kinectGrabber->stop();

    return 0;
}</geshi>
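
The example is configured, built and run just like PCL itself. Assuming the layout described above (CMakeLists.txt at the top level, main.cpp inside src/), from the project directory:

<geshi lang=Bash lines=0>mkdir build
cd build
cmake ..
make
./kinectPCLviewer</geshi>

With the Kinect plugged in, a viewer window should appear showing the live point cloud. Pressing SPACE saves the current frame to a file named inputCloudN.pcd in the working directory, which you can load again later with ./kinectPCLviewer -v inputCloud0.pcd.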

Conclusions




FAQ: Kinect troubleshooting