=== Phase I: Initial Setup ===
[[RoCKIn2014PhaseI | Outline]]: Tasks developed in this phase
 
 
==== Hardware Preparation ====
 
----
 
We need to carry out some initial tasks to complete the basic hardware setup, configuration, and customization of the robot.
 
 
 
;Task 1
 
: Take power from the Roomba brush connector (it will be used to power the arm)
 
 
[[Image:BrushModification.jpg|center|400px]]
 
 
<center>  <videoflash>KiNFuWWZwFs</videoflash> </center>
 
 
;Task 2
 
: Emergency Stop Button
 
 
[[Image:PictureChainButtonStop.jpg|center|600px]]
 
 
 
 
;Task 3
 
: Start Button
 
 
ToDo
 
 
==== Software Preparation ====
 
----
 
 
==== Environment Setup ====
 
 
We are going to define the base system to be deployed on the robot; a condensed install sketch follows the list.
 
 
* Operating system: [http://www.ubuntu.com/download/desktop Ubuntu 12.04 LTS]

* ROS distribution (fixed requirement): [http://wiki.ros.org/fuerte/Installation/Ubuntu ROS Fuerte]

* Core drivers for the Roomba: [[HowToInstallRoombaPackage | How to install the roomba package]]
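
A minimal install sequence, condensed from the linked ROS Fuerte page (assuming a fresh Ubuntu 12.04 "precise" machine; the linked page remains the authoritative reference):

<pre>
# Add the ROS package repository for Ubuntu 12.04 (precise)
$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu precise main" > /etc/apt/sources.list.d/ros-latest.list'
$ wget http://packages.ros.org/ros.key -O - | sudo apt-key add -

# Install ROS Fuerte (desktop-full includes rviz, visualization tools, etc.)
$ sudo apt-get update
$ sudo apt-get install ros-fuerte-desktop-full

# Source the ROS environment in every new shell
$ echo "source /opt/ros/fuerte/setup.bash" >> ~/.bashrc
</pre>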
 
 
==== Package Search ====
 
 
We use ROS, so for each required ability we can expect to find at least one package ready to deploy on a robot. This task therefore involves searching for and testing each package to evaluate whether we can deploy it on our platform (a sketch of the install-and-test workflow follows the list below).
 
 
;Navigation
: [http://wiki.ros.org/navigation?distro=fuerte 2D navigation stack]
: [http://wiki.ros.org/turtlebot_navigation/Tutorials/Autonomously%20navigate%20in%20a%20known%20map Turtlebot Navigation]

;Mapping
: [http://wiki.ros.org/turtlebot_navigation/Tutorials/Build%20a%20map%20with%20SLAM SLAM]

;Object recognition
: [http://code.google.com/p/find-object/ find-object stack]: a simple Qt interface to try the OpenCV implementations of SIFT, SURF, FAST, BRIEF, and other feature detectors and descriptors

;Speech recognition
: [http://www.pirobot.org/blog/0022/ Speech Recognition and Text-to-Speech (TTS) in the Pi Robot], using the pocketsphinx and Festival packages

;Cognition
: To be addressed during stack integration
 
 
  <pre style="color:red">Person recognition</pre>
 
 
  <pre style="color:red">Person tracking</pre>
 
 
  <pre style="color: red">Object manipulation</pre>
 
 
  <pre style="color: red">Gesture recognition</pre>
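
As an illustration of this install-and-test workflow, a minimal sketch using the navigation stack as the candidate (package names follow the Fuerte Debian naming scheme):

<pre>
# Install a candidate stack from the ROS Fuerte repository
$ sudo apt-get install ros-fuerte-navigation

# Confirm that ROS can locate one of its packages
$ rospack find move_base

# Then run the stack's tutorial launch files on our platform and
# evaluate the results (see the tutorial links above)
</pre>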
 
 
==== ROS: Debugging Techniques ====
 
 
There are two ways to debug nodes in ROS:
 
 
; Launch file
 
 
Following the [http://wiki.ros.org/roslaunch/Tutorials/Roslaunch%20Nodes%20in%20Valgrind%20or%20GDB roslaunch techniques], add one of the following launch-prefix attributes to the node entry in your launch file:
 
 
<pre>
 
launch-prefix="xterm -e gdb --args" : run your node in a gdb in a separate xterm window, manually type run to start it
 
 
launch-prefix="gdb -ex run --args" : run your node in gdb in the same xterm as your launch without having to type run to start it
 
 
launch-prefix="valgrind" : run your node in valgrind
 
 
launch-prefix="xterm -e" : run your node in a separate xterm window
 
 
launch-prefix="nice" : nice your process to lower its CPU usage
 
 
launch-prefix="screen -d -m gdb --args" : useful if the node is being run on another machine; you can then ssh to that machine and do screen -D -R to see the gdb session
 
 
launch-prefix="xterm -e python -m pdb" : run your python node a separate xterm window in pdb for debugging; manually type run to start it
 
</pre>
 
 
Then you only have to run:
 
 
<pre>roslaunch <package> <launch></pre>
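
For instance, a minimal sketch (the names my_package and my_node are hypothetical placeholders) that writes a one-node debug launch file and starts the node under gdb in its own xterm window:

<pre>
$ cat > debug_example.launch <<'EOF'
<launch>
  <!-- Run the node under gdb in a separate xterm window;
       type "run" at the (gdb) prompt to start it -->
  <node pkg="my_package" type="my_node" name="my_node"
        launch-prefix="xterm -e gdb --args" />
</launch>
EOF

$ roslaunch debug_example.launch
</pre>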
 
 
; Running a single node 
 
 
Following the [http://projects.csail.mit.edu/pr2/wiki/index.php?title=GDB,_Valgrind_and_ROS command-line techniques], instead of

<pre>rosrun <package> <node></pre>

change to the package directory and run the binary directly under the tool:

<pre>roscd <package></pre>

then <pre>valgrind bin/<node></pre> or <pre>gdb bin/<node></pre> or load the node from inside a gdb session:
 
<pre>gdb
 
      GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2.1) 7.4-2012.04
 
      Copyright (C) 2012 Free Software Foundation, Inc.
 
      License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
 
      This is free software: you are free to change and redistribute it.
 
      There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
 
      and "show warranty" for details.
 
      This GDB was configured as "x86_64-linux-gnu".
 
      For bug reporting instructions, please see:

      <http://bugs.launchpad.net/gdb-linaro/>.
 
 
      (gdb) file <route to node>
 
      (gdb) run
 
</pre>
 
 
Don't forget to set the Debug build type in CMakeLists.txt:
 
 
<syntaxhighlight lang=CMake>
cmake_minimum_required(VERSION 2.4.6)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)

# Build with debug symbols so gdb/valgrind can resolve source lines
set(ROS_BUILD_TYPE Debug)
#set(ROS_BUILD_TYPE Release)

rosbuild_init()

# Place the compiled binaries under <package>/bin
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)

rosbuild_gensrv()
rosbuild_add_boost_directories()

add_subdirectory(src)
</syntaxhighlight>
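
After changing the build type, rebuild and check that the binary actually carries debug symbols (rosbuild workflow; <package> and <node> are placeholders as above):

<pre>
$ roscd <package>
$ rosmake

# "not stripped" in the output indicates debug symbols are present
$ file bin/<node>
</pre>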
 
 
; EXTRA - Core dumps
 
 
The easy way to get [http://wiki.ros.org/roslaunch/Tutorials/Roslaunch%20Nodes%20in%20Valgrind%20or%20GDB core dumps]
 
 
Set the core file size to unlimited (if it is not already set):
 
 
<pre>
 
$ ulimit -a
 
core file size          (blocks, -c) 0
 
...< more info here >...
 
 
$ ulimit -c unlimited
 
 
$ ulimit -a
 
core file size          (blocks, -c) unlimited
 
...< more info here >...
 
</pre>
 
 
To allow core dumps to be created (the Ubuntu way):
 
 
<pre>
 
$ sudo -s
 
# echo 1 > /proc/sys/kernel/core_uses_pid
 
</pre>
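
Once a node crashes and leaves a core file (named core.<pid> with the setting above, and assuming the node was run from the package directory), load it into gdb for a post-mortem inspection; a minimal sketch using the same <package> and <node> placeholders:

<pre>
$ roscd <package>

# Load the binary together with the core file it produced
$ gdb bin/<node> core.<pid>

# Inside gdb, print the stack trace at the moment of the crash
(gdb) bt
</pre>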
 
  
 

== RoCKIn Camp 2014 ==

;Project Name
: [[Image:LogoRockin.png]]

;Official Web Page
: RoCKIn@home

;Project Codename
: Watermelon :D

;Advisor
: Vicente Matellán Olivera

;Staff
: Technical software: Fernando Casado
: Technical software: Víctor Rodríguez
: Technical software: Francisco Lera
: Technical hardware: Carlos Rodríguez

;Other Information
: Academic Year: 2013-2014
: SVN Repositories: soon ...
: Tags: Augmented Reality, Elderly People, Tele-Assistance
: Technology: ROS, PCL, C++, SVN, OpenCV, CMake, OpenGL, Qt, ArUco
: State: Development

== Project Summary ==

This challenge focuses on domestic service robots. The project aims at robots with enhanced networking and cognitive abilities, able to perform socially useful tasks such as supporting the impaired and the elderly (one of the main goals of our group).

In the initial stages of the competition, individual robots will begin by carrying out basic individual tasks, such as navigating through the rooms of a house, manipulating objects, or recognizing faces, and will then coordinate to handle house-keeping tasks simultaneously, some of them in natural interaction with humans.

== Robot ==

We want to take part in RoCKIn with the platform developed during the last two years in the Cátedra Telefónica-ULE: the MYRABot robot.

=== Robot Hardware ===

# iRobot Roomba 520
# Dynamixel arm (5 × AX-12A servos)
# Wooden frame (yes, it is made of wood)
# Notebook (Atom processor; display and computer are separated)
# Kinect
# Arduino Mega

=== Robot Software ===

# ROS (robot control)
# MYRA (C/C++, ArUco, Qt, OpenCV)


== Proposal ==

We want to deploy on this robot the minimal functional abilities needed to take part in RoCKIn 2014:

* Navigation
* Mapping
* Person recognition
* Person tracking
* Object recognition
* Object manipulation
* Speech recognition
* Gesture recognition
* Cognition

We are going to split the development into four phases:

# Phase I: Initial Setup
# Phase II: Integration and architecture
# Phase III: Platform test
# Phase IV: Improvements and complex tasks
## Technical Challenge: Furniture-type object perception
## Open Challenge: Present and demonstrate the most important (scientific) achievements


=== Phase II: Integration and Architecture ===

==== Non-Critical (but to-do) ====

# Android/iOS teleoperation
# Desktop Qt interface
# Create a robot model for Gazebo
# Create a robot model for rviz (the same as for Gazebo?)

==== Wishlist ====

* Computer: i7 processor, 8 GB RAM, Nvidia GPU (1-2 GB)
* ASUS Xtion Pro Live RGB-D sensor
* Roomba battery
* Arduino Mega (×2)
* Roomba base (520, 560)