Ruben-RV-ROS01

From robotica.unileon.es

  • Project name: Turtlebot robot indoor navigation
  • Dates: July 2013 -
  • Degree: Summer Research Residence
  • Authors: Rubén Rodríguez Fernández
  • Contact: runix404@gmail.com, rrodrf04@estudiantes.unileon.es
  • Tags: ROS, navigation, turtlebot
  • Technologies: Kinect, PCL, OpenNI, C++, CMake, ROS
  • Status: Ongoing


ROS Fuerte Installation on Ubuntu 12.04

Repository configuration

First, we need to add the ROS package repository:

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu precise main" > /etc/apt/sources.list.d/ros-latest.list'

and set up the repository keys:

wget http://packages.ros.org/ros.key -O - | sudo apt-key add -

Installation

Then we need to refresh the package manager's cache:

sudo apt-get update

In our case, we install ros-fuerte-desktop-full, which is the configuration recommended on the ROS website, but you can also install other variants as explained on the ROS wiki (http://wiki.ros.org/fuerte/Installation/Ubuntu):

sudo apt-get install ros-fuerte-desktop-full

Environment Configuration

Now we need to configure the environment variables:

echo "source /opt/ros/fuerte/setup.bash" >> ~/.bashrc

To refresh the environment variables of your terminal, you can restart it or type:

 . ~/.bashrc

Turtlebot installation

The Turtlebot installation is very simple; you only have to type in your terminal:

 sudo apt-get install ros-fuerte-turtlebot*

Navigation

Feasible Solutions

The feasible solutions are the Dynamic Window approach and the Potential Field approach:

1. Dynamic Window: a velocity-based local planner that calculates the optimal collision-free ('admissible') linear and angular velocities required for the robot to reach its goal.

2. Potential Field: in potential-field navigation, each obstacle has a 'force field' that repels the robot, and the goal has an attraction field (see the sketch below).
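
To make the potential-field idea more concrete, here is a minimal illustrative C++ sketch of how the attractive and repulsive contributions can be combined into a steering direction. It is not the code used in this project; the gains, the cutoff distance and all names are our own choices.

    #include <cmath>
    #include <vector>

    struct Point { double x, y; };

    // Illustrative potential-field step: the goal attracts the robot and every
    // obstacle inside a cutoff radius repels it. The gains and the cutoff are
    // arbitrary tuning values.
    Point potentialFieldDirection(const Point &robot, const Point &goal,
                                  const std::vector<Point> &obstacles,
                                  double k_att = 1.0, double k_rep = 0.5,
                                  double cutoff = 1.0) {
        // Attractive component: proportional to the vector towards the goal.
        Point force = { k_att * (goal.x - robot.x), k_att * (goal.y - robot.y) };

        // Repulsive component: grows quickly as the robot approaches an obstacle.
        for (std::size_t i = 0; i < obstacles.size(); ++i) {
            double dx = robot.x - obstacles[i].x;
            double dy = robot.y - obstacles[i].y;
            double d = std::sqrt(dx * dx + dy * dy);
            if (d > 1e-6 && d < cutoff) {
                double gain = k_rep * (1.0 / d - 1.0 / cutoff) / (d * d);
                force.x += gain * dx;
                force.y += gain * dy;
            }
        }
        return force;  // the robot steers along this vector at each step
    }

At each control step the robot moves along the returned vector; the well-known drawback of this approach is that attraction and repulsion can cancel out in local minima, where the robot gets stuck.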

Dynamic window algorithm

1. Calculate the desired velocities (linear and angular).

2. Select the allowable velocities (linear and angular), i.e. compute the dynamic window. After this step we have an allowable velocity range ([min linear, max linear], [min angular, max angular]).

3. For each tuple [v, w], determine the closest obstacle.

4. Determine whether the distance to the closest obstacle is within the robot's braking distance.

5. Calculate the score of the tuple; velocities closer to the desired velocities get a higher score.

6. Select the velocity tuple with the highest score.

7. If the score is higher than a minimum score, return this tuple; otherwise set the linear and angular velocities to 0.

In pseudocode (source: http://adrianboeing.blogspot.com.es/2012/05/dynamic-window-algorithm-motion.html):

  BEGIN DWA(robotPose,robotGoal,robotModel)
  	desiredV = calculateV(robotPose,robotGoal)
  	laserscan = readScanner()
  	allowable_v = generateWindow(robotV, robotModel)
  	allowable_w  = generateWindow(robotW, robotModel)
  	for each v in allowable_v
  	   for each w in allowable_w
  	   dist = find_dist(v,w,laserscan,robotModel)
  	   breakDist = calculateBreakingDistance(v)
  	   if (dist > breakDist)  //can stop in time
  	      heading = hDiff(robotPose,goalPose, v,w)
  	      clearance = (dist-breakDist)/(dmax - breakDist)
  	      cost = costFunction(heading,clearance, abs(desired_v - v))
  	      if (cost > optimal)
  	         best_v = v
  	         best_w = w
  	         optimal = cost
  	 set robot trajectory to best_v, best_w
   END
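
To complement the pseudocode, the following is a minimal C++ sketch of the same scoring loop. It is illustrative only, not the planner used in this project: the velocity grid resolution, the score weights and the two helper functions (stubs standing in for the real scan and goal computations) are all assumptions.

    #include <cmath>

    struct Velocity { double v, w; };

    // Placeholder: in the real planner this distance comes from the laser scan
    // and the simulated (v, w) trajectory.
    static double distanceToClosestObstacle(double /*v*/, double /*w*/) { return 2.0; }

    // Placeholder: how well the simulated trajectory points towards the goal.
    static double headingScore(double /*v*/, double /*w*/) { return 0.0; }

    // Sample (v, w) pairs inside the window reachable with the robot's
    // acceleration limits during dt, discard the ones that cannot stop before
    // the closest obstacle, and keep the admissible pair with the best score.
    Velocity dynamicWindowStep(double curV, double curW, double desiredV,
                               double maxAccV, double maxAccW, double dt) {
        Velocity best = { 0.0, 0.0 };
        double bestScore = 0.0;  // minimum acceptable score

        for (double v = curV - maxAccV * dt; v <= curV + maxAccV * dt; v += 0.05) {
            for (double w = curW - maxAccW * dt; w <= curW + maxAccW * dt; w += 0.05) {
                double dist = distanceToClosestObstacle(v, w);
                double brakeDist = (v * v) / (2.0 * maxAccV);
                if (dist <= brakeDist) {
                    continue;  // not admissible: the robot could not stop in time
                }
                // Prefer headings towards the goal, large clearance and a linear
                // velocity close to the desired one (weights are arbitrary here).
                double score = headingScore(v, w) + (dist - brakeDist)
                               - std::fabs(desiredV - v);
                if (score > bestScore) {
                    bestScore = score;
                    best.v = v;
                    best.w = w;
                }
            }
        }
        return best;  // stays (0, 0) if no admissible velocity beats the minimum
    }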

Experiments

In these experiments, the robot will try to reach the position (3,3) from the origin.

1. Simple scenario: Simple navigation (http://www.youtube.com/watch?v=_48PI78E-hg)

2. A wall scenario: Navigation with obstacle (http://www.youtube.com/watch?v=nSkOCBKolfQ)

3. A realistic scenario
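
For reference, a goal such as (3,3) can be sent to the navigation stack with a small actionlib client. This is the standard move_base client pattern, not the exact launch or test setup used in these experiments; the frame_id "map" is an assumption and may be "odom" in a map-less test.

    #include <ros/ros.h>
    #include <actionlib/client/simple_action_client.h>
    #include <move_base_msgs/MoveBaseAction.h>

    typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

    int main(int argc, char **argv) {
        ros::init(argc, argv, "send_goal_3_3");

        // Connect to the move_base action server.
        MoveBaseClient ac("move_base", true);
        ac.waitForServer();

        // Goal: position (3, 3), facing forward.
        move_base_msgs::MoveBaseGoal goal;
        goal.target_pose.header.frame_id = "map";
        goal.target_pose.header.stamp = ros::Time::now();
        goal.target_pose.pose.position.x = 3.0;
        goal.target_pose.pose.position.y = 3.0;
        goal.target_pose.pose.orientation.w = 1.0;

        ac.sendGoal(goal);
        ac.waitForResult();

        return 0;
    }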

Path Planning

Voronoi

ongoing

Graph SLAM

ongoing

Convert sensor_msgs/Range to sensor_msgs/LaserScan message

To be able to use ultrasonic data as a laser scan, we need to convert the sensor_msgs/Range messages into a sensor_msgs/LaserScan message. The two message types have two fields in common, and the remaining fields can be calculated using tf and the other data contained in the Range message.

The process is simple: in our case we need to convert 5 adjacent Range readings into a LaserScan message. The first part is to group the 5 messages, and then convert them into a LaserScan message. To achieve that, we cache the incoming messages and, once we have all 5 ranges, create the LaserScan message. The caching process is simple too, but we need to be careful because the messages might not arrive sequentially.

The conversion therefore has two parts: a caching process that groups the incoming messages, and a process that creates and publishes the LaserScan message from the cached data.

Caching process:

    // Fragment of the scan callback: 'message' is the incoming scan and
    // cache[0] / cache[1] hold one slot per subscribed ultrasonic sensor.
    ROS_DEBUG("Cache message");
    boost::mutex::scoped_lock lock(cache_mutex);
    bool justCached = false;
    int maxMessage = 0;

    // Find the first free slot in the current cache, or detect that this
    // sensor (frame_id) has already been cached in the current cycle.
    for (unsigned int i = 0; i < this->subscribers.size(); i++) {
        if (cache[0][i] == 0) {
            maxMessage = i;
            break;
        }
        ROS_DEBUG("cache %s with number %u ", message.header.frame_id.c_str(), i);
        if (cache[0][i]->header.frame_id.compare(message.header.frame_id) == 0) {
            justCached = true;
            break;
        }
    }

    if (!justCached) {
        // First reading from this sensor in the current cycle: copy it into
        // the free slot of the current cache.
        sensor_msgs::LaserScan * copy = new sensor_msgs::LaserScan;
        copy->header = message.header;
        copy->angle_min = message.angle_min;
        copy->angle_max = message.angle_max;
        copy->angle_increment = message.angle_increment;
        copy->time_increment = message.time_increment;
        copy->scan_time = message.scan_time;
        copy->range_min = message.range_min;
        copy->range_max = message.range_max;
        copy->ranges = message.ranges;
        copy->intensities = message.intensities;

        cache[0][maxMessage] = copy;

        // All sensors cached: build and publish the merged scan, clear the
        // current cache and swap it with the overflow cache.
        if (maxMessage == (int) this->subscribers.size() - 1) {
            this->proccessTrigger();
            for (unsigned int i = 0; i < this->subscribers.size(); i++) {
                delete cache[0][i];
                cache[0][i] = 0;
            }
            sensor_msgs::LaserScan ** temp = cache[1];
            cache[1] = cache[0];
            cache[0] = temp;
        }
    } else {
        // This sensor was already cached in the current cycle, so the message
        // belongs to the next cycle: store it in the overflow cache.
        maxMessage = 0;
        justCached = false;
        sensor_msgs::LaserScan * copy = new sensor_msgs::LaserScan;
        copy->header = message.header;
        copy->angle_min = message.angle_min;
        copy->angle_max = message.angle_max;
        copy->angle_increment = message.angle_increment;
        copy->time_increment = message.time_increment;
        copy->scan_time = message.scan_time;
        copy->range_min = message.range_min;
        copy->range_max = message.range_max;
        copy->ranges = message.ranges;
        copy->intensities = message.intensities;

        for (unsigned int i = 0; i < this->subscribers.size(); i++) {
            if (cache[1][i] == 0) {
                maxMessage = i;
                break;
            }
            if (cache[1][i]->header.frame_id.compare(message.header.frame_id) == 0) {
                delete cache[1][i];   // free the scan being replaced
                cache[1][i] = copy;
                justCached = true;
            }
        }
        if (!justCached) {
            cache[1][maxMessage] = copy;
        }
    }

Finally, we need to merge the cached messages and publish the LaserScan message. This is done in createMessageFromRanges:


    void UltrasonicFakeLaserscan::createMessageFromRanges(sensor_msgs::LaserScan ** lc,
                                                          int size) {
        ROS_DEBUG("Create new message with stamp %d ", sequence);

        // Aggregate limits, timestamps and number of points of the cached scans.
        float minAngle = lc[0]->angle_min;
        float maxAngle = lc[0]->angle_max;
        float minRange = lc[0]->range_min;
        float maxRange = lc[0]->range_max;
        int indexMin = 0;          // scan with the smallest starting angle
        long nanoseconds = 0;
        long seconds = 0;
        int pointsSize = 0;
        for (int i = 0; i < size; i++) {
            if (minAngle > lc[i]->angle_min) {
                indexMin = i;
                minAngle = lc[i]->angle_min;
            }
            if (maxAngle < lc[i]->angle_max) {
                maxAngle = lc[i]->angle_max;
            }
            if (minRange > lc[i]->range_min) {
                minRange = lc[i]->range_min;
            }
            if (maxRange < lc[i]->range_max) {
                maxRange = lc[i]->range_max;
            }
            nanoseconds = nanoseconds + lc[i]->header.stamp.nsec;
            seconds = seconds + lc[i]->header.stamp.sec;
            pointsSize = pointsSize + lc[i]->ranges.size();
        }

        // Approximate average timestamp of the cached scans.
        nanoseconds = nanoseconds / size;
        seconds = seconds / size;

        // Create the merged laserscan message.
        sensor_msgs::LaserScan scan;
        scan.header.seq = sequence++;
        scan.header.stamp = ros::Time(seconds, nanoseconds);
        scan.header.frame_id = UltrasonicFakeLaserscan::CAMERA_LINK;
        scan.angle_min = minAngle;
        scan.angle_max = maxAngle;
        scan.angle_increment = fabs(scan.angle_max - scan.angle_min) / pointsSize;
        scan.time_increment = 0;
        scan.range_min = minRange;
        scan.range_max = maxRange;
        scan.ranges.resize(pointsSize);

        // Copy the scan with the smallest starting angle first and move it to
        // the front of the array.
        int count = 0;
        for (unsigned int i = 0; i < lc[indexMin]->ranges.size(); i++) {
            scan.ranges[count++] = lc[indexMin]->ranges[i];
        }
        if (indexMin != 0) {
            sensor_msgs::LaserScan * temp = lc[indexMin];
            lc[indexMin] = lc[0];
            lc[0] = temp;
        }

        // Selection sort of the remaining scans by angle_min, appending the
        // ranges of each one in angular order.
        for (int i = 1; i < size; i++) {
            int index = i;
            double min = lc[i]->angle_min;
            for (int j = i; j < size; j++) {
                if (min > lc[j]->angle_min) {
                    index = j;
                    min = lc[j]->angle_min;
                }
            }
            if (index != i) {
                sensor_msgs::LaserScan * temp = lc[index];
                lc[index] = lc[i];
                lc[i] = temp;
            }
            for (unsigned int k = 0; k < lc[i]->ranges.size(); k++) {
                scan.ranges[count++] = lc[i]->ranges[k];
            }
        }

        ROS_DEBUG("Sent");
        this->scan_pub.publish(scan);
    }

Stop and resume actual navigation in ROS navigation stack

The main goal of this chapter is to stop and resume the current navigation; this can be useful when the robot needs to wait, for example, for a person.

To achieve that, we need to modify the move_base package and introduce a few lines. These lines only check whether the navigation is stopped and, in that case, do not send the velocity command. To modify the state of the navigation, we create a new topic that changes this value.

When you want to start the navigation stack, you must launch the custom navigation stack that introduces these modifications. Afterwards, you can change this value using the library libmb_sendskip.so, which includes the class MB_robot::SendState and allows changing this parameter with the method sendState(bool state).
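
As a usage sketch, pausing and resuming the navigation from another node could look like the following. Only the library (libmb_sendskip.so), the class MB_robot::SendState and the method sendState(bool state) are documented above; the include path, the default constructor and the meaning of the boolean flag are assumptions.

    #include <ros/ros.h>
    // Assumed include path for the class shipped in libmb_sendskip.so.
    #include <mb_sendskip/send_state.h>

    int main(int argc, char **argv) {
        ros::init(argc, argv, "pause_navigation_example");

        // Assumed default constructor; only sendState(bool) is documented.
        MB_robot::SendState state_client;

        // Assuming 'true' means "navigation stopped" (the meaning of the flag
        // is not documented above).
        state_client.sendState(true);   // stop the current navigation
        ros::Duration(5.0).sleep();     // e.g. wait while a person passes by
        state_client.sendState(false);  // resume the navigation

        return 0;
    }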

Goals correction for navigation stack

One of the problems with robots is that they are normally smaller than a human. In some situations this isn't a problem, but if a human is following the robot it can become one: for example, the optimal path may go under a table or another object.

In ROS, the navigation stack uses costmaps to find valid paths. This costmap is 2D, so we only need to modify the maximum obstacle height property and set it to a value higher than the height of the robot.

If you take a look at the Costmap2D documentation (http://wiki.ros.org/costmap_2d), you can see a parameter called max_obstacle_height; the documentation describes it as follows:

The maximum height in meters of a sensor reading considered valid. This is usually set to be slightly higher than the height of the robot. Setting this parameter to a value greater than the global max_obstacle_height parameter has no effect. Setting this parameter to a value less than the global max_obstacle_height will filter out points from this sensor above that height.

So, if we set it to the height of a human, we solve this problem.
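
As an illustration, and assuming a standard move_base setup, the parameter could be raised to roughly human height as shown below. Normally the value would simply be edited in the costmap YAML configuration; the parameter paths used here are assumptions that depend on your launch files.

    #include <ros/ros.h>

    int main(int argc, char **argv) {
        ros::init(argc, argv, "raise_max_obstacle_height");
        ros::NodeHandle nh;

        // Raise the maximum obstacle height to roughly human height so that
        // tables and similar objects are still marked as obstacles. The
        // parameter paths are assumptions for a standard move_base setup and
        // must be set before the costmaps are created (normally the value is
        // simply edited in the costmap YAML files instead).
        nh.setParam("/move_base/local_costmap/max_obstacle_height", 1.9);
        nh.setParam("/move_base/global_costmap/max_obstacle_height", 1.9);

        return 0;
    }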