RoCKIn2014

From robotica.unileon.es

Revision as of 08:32, 21 October 2013

RoCKIn Camp 2014

  • Project Name:
LogoRockin.png

  • Official Web Page:
RoCKIn@home

  • Project Codename:
Watermelon :D

  • Advisor:
Vicente Matellán Olivera

  • Staff:
Technical software: Fernando Casado
Technical software: Víctor Rodríguez
Technical software: Francisco Lera
Technical hardware: Carlos Rodríguez

  • Other Information:
* Academic Year: 2013-2014
* SVN Repositories: soon ...
* Tags: Augmented Reality, Elderly People, Tele-Assistance
* Technology: ROS, PCL, C++, SVN, OpenCV, CMake, OpenGL, Qt, ArUco
* State: Development

Project Summary

This challenge focuses on domestic service robots. The project aims to develop robots with enhanced networking and cognitive abilities, able to perform socially useful tasks such as supporting the impaired and the elderly (one of the main goals of our group).

In the initial stages of the competition, individual robots will begin by completing basic tasks, such as navigating the rooms of a house, manipulating objects, or recognizing faces, and will then coordinate to handle several housekeeping tasks simultaneously, some of them in natural interaction with humans.

Robot

We want to take part in RoCKIn with the platform developed during the last two years in the Cátedra Telefónica-ULE.

MYRABot robot.

Robot Hardware

  1. iRobot Roomba 520
  2. Dynamixel arm (5x AX-12A servos)
  3. Wooden frame (yes, it is made of wood)
  4. Notebook (Atom processor; display and computer are separated)
  5. Kinect
  6. Arduino Mega

Robot Software

  1. ROS (robot control)
  2. MYRA (C/C++, ArUco, Qt, OpenCV)


Proposal

We want to equip this robot with the minimal functional abilities needed to take part in RoCKIn 2014.

  • Navigation
  • Mapping
  • Person recognition
  • Person tracking
  • Object recognition
  • Object manipulation
  • Speech recognition
  • Gesture recognition
  • Cognition

We are going to divide the development into four phases:

  1. Phase I: Initial Setup
  2. Phase II: Integration and architecture
  3. Phase III: Platform test
  4. Phase IV: Improvements and complex tasks
    1. Technical Challenge: Furniture-type Object perception
    2. Open Challenge: Present and demonstrate most important (scientific) achievements


Phase I: Initial Setup

Hardware Preparation


We need to carry out some initial tasks to complete the basic hardware setup, configuration, and customization of the robot.


Task 1
Get power from the Roomba brush (it will be used to power the arm)
BrushModification.jpg
<videoflash>KiNFuWWZwFs</videoflash>
Task 2
Emergency Stop Button
PictureChainButtonStop.jpg


Task 3
Start Button

ToDo

Software Preparation


Environment setup

We are going to define the basis of the system to be deployed.

Packages search

Since we use ROS, we can find at least one ready-made package for each ability. This task therefore involves searching for and testing each package, to evaluate whether we can deploy it on our platform.

 Navigation
   2D navigation stack
   Turtlebot Navigation
 Mapping
   SLAM
 Object recognition
   Simple Qt interface to try OpenCV implementations of SIFT, SURF, FAST, BRIEF and other feature detectors and descriptors
   find-object stack
 Speech recognition
   Speech Recognition and Text-to-Speech (TTS) on the π robot
   Packages: pocketsphinx and Festival
 Cognition
   To be done during stack integration
 Person recognition
 Person tracking
 Object manipulation
 Gesture recognition

ROS: Debugging Techniques

There are two ways to debug nodes in ROS.

Launch file

Following the roslaunch techniques, add one of the following launch-prefix values to the <node> tag in your launch file:

 
launch-prefix="xterm -e gdb --args" : run your node in a gdb in a separate xterm window, manually type run to start it

launch-prefix="gdb -ex run --args" : run your node in gdb in the same xterm as your launch without having to type run to start it

launch-prefix="valgrind" : run your node in valgrind

launch-prefix="xterm -e" : run your node in a separate xterm window

launch-prefix="nice" : nice your process to lower its CPU usage

launch-prefix="screen -d -m gdb --args" : useful if the node is being run on another machine; you can then ssh to that machine and do screen -D -R to see the gdb session

launch-prefix="xterm -e python -m pdb" : run your python node in a separate xterm window under pdb for debugging; manually type run to start it

Then you only have to run:

roslaunch <package> <launch>
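As a sketch, a minimal launch file using one of these prefixes could look like the following (the package and node names here are placeholders for illustration, not real names from our project):

```xml
<launch>
  <!-- Runs the node under gdb in the same terminal, auto-starting it
       with -ex run; "my_package" and "my_node" are hypothetical names. -->
  <node pkg="my_package" type="my_node" name="my_node"
        output="screen"
        launch-prefix="gdb -ex run --args" />
</launch>
```

Swapping the launch-prefix value is all it takes to move between gdb, valgrind, or a plain xterm.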
Running a single node

Following the command-line techniques, instead of

rosrun <package> <node>

use

roscd <package>
valgrind bin/<node>

or

gdb bin/<node>

or start gdb manually:
gdb 
      GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2.1) 7.4-2012.04
      Copyright (C) 2012 Free Software Foundation, Inc.
      License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
       and "show warranty" for details.
       This GDB was configured as "x86_64-linux-gnu".
      For bug reporting instructions, see:
      <http://bugs.launchpad.net/gdb-linaro/>.

      (gdb) file <route to node>
      (gdb) run 
 

Don't forget to set the Debug build type in CMakeLists.txt:

 
cmake_minimum_required(VERSION 2.4.6)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
set(ROS_BUILD_TYPE Debug)

#set(ROS_BUILD_TYPE Release)
rosbuild_init(node)

set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)

rosbuild_gensrv()
rosbuild_add_boost_directories()

add_subdirectory(src)
EXTRA - Core dumps

The easy way to get core dumps (see the roslaunch tutorial: http://wiki.ros.org/roslaunch/Tutorials/Roslaunch%20Nodes%20in%20Valgrind%20or%20GDB):

Set the core file size to unlimited (if it is not already set):

$ ulimit -a
core file size          (blocks, -c) 0
...< more info here >...

$ ulimit -c unlimited

$ ulimit -a
core file size          (blocks, -c) unlimited
...< more info here >...

To allow core dumps to be created (the Ubuntu way):

$ sudo -s
# echo 1 > /proc/sys/kernel/core_uses_pid
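To check that the limit took effect, you can crash a throwaway process on purpose: the default action for SIGSEGV is to terminate the process with a core dump, and the shell then reports exit status 128 + 11 = 139. This is only a sketch; whether a core file actually appears on disk also depends on /proc/sys/kernel/core_pattern on your system.

```shell
# Raise the soft core-file limit for this shell session only.
ulimit -c unlimited

# Crash a throwaway subshell with signal 11 (SIGSEGV).
sh -c 'kill -11 $$'

# 128 + 11 = 139 means the process died of SIGSEGV (with the core-dump action).
echo "exit status: $?"
```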

Phase II: Integration and Architecture

Non-Critical (but to-do)

  1. Android/iOS Teleoperation
  2. Desktop Qt interface
  3. Create robot model for Gazebo
  4. Create robot model for rviz (the same as Gazebo?)

Wishlist

  • Computer: i7 processor, 8 GB RAM, Nvidia GPU (1-2 GB)
  • ASUS Xtion Pro Live Color RGB Sensor
  • Roomba battery
  • Arduino Mega (x2)
  • Roomba base (520, 560)