RoCKIn2014


RoCKIn Camp 2014

  • Project Name:

LogoRockin.png

  • Official Web Page
RoCKIn@home 


  • Project Codename
Watermelon :D


  • Staff:
Technical software: Fernando Casado
Technical software: Víctor Rodríguez 
Technical software: Francisco Lera
Technical hardware: Carlos Rodríguez
  • Other Information:
* Academic Year: 2013-2014
* SVN Repositories: soon	... 
* Tags: Augmented Reality, Elderly people, Tele-Assistance
* Technology: ROS, PCL, C++, SVN, OpenCV, CMake, OpenGL, Qt, ArUco
* State: Development

Project Summary

This challenge focuses on domestic service robots. The project aims to develop robots with enhanced networking and cognitive abilities, able to perform socially useful tasks such as supporting the impaired and the elderly (one of the main goals of our group).

In the initial stages of the competition, individual robots will begin by overcoming basic individual tasks, such as navigating through the rooms of a house, manipulating objects or recognizing faces, and will then coordinate to handle house-keeping tasks simultaneously, some of them in natural interaction with humans.

Robot

We want to take part in RoCKIn with the platform developed during the last two years in the Cátedra Telefónica-ULE.

MYRABot robot.

Robot Hardware

  1. iRobot Roomba 520
  2. Dynamixel arm (5 × AX-12A servos)
  3. Wooden frame (yes, it is made of wood)
  4. Notebook (Atom processor; display and computer are separated)
  5. Kinect
  6. Arduino Mega

Robot Software

  1. ROS (robot control)
  2. MYRA (C/C++, ArUco, Qt, OpenCV)
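
As a reference for the vision side of MYRA, the following is a minimal sketch of marker detection with the standalone ArUco library and OpenCV. It is an illustrative example, not MYRA code: the image file name is a placeholder and the ArUco include path may differ between installations.

  #include <iostream>
  #include <opencv2/highgui/highgui.hpp>
  #include <aruco/aruco.h>

  int main(int argc, char **argv)
  {
      // Load a test image (hypothetical file name)
      cv::Mat image = cv::imread("test_marker.jpg");
      if (image.empty())
          return -1;

      // Detect all ArUco markers visible in the image
      aruco::MarkerDetector detector;
      std::vector<aruco::Marker> markers;
      detector.detect(image, markers);

      // Print the id of each detected marker and draw it on the image
      for (size_t i = 0; i < markers.size(); ++i)
      {
          std::cout << "Detected marker id: " << markers[i].id << std::endl;
          markers[i].draw(image, cv::Scalar(0, 0, 255), 2);
      }

      cv::imshow("markers", image);
      cv::waitKey(0);
      return 0;
  }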


Proposal

We want to deploy on this robot the minimal functional abilities needed to take part in RoCKIn 2014; a minimal navigation example is sketched after the list below.

  • Navigation
  • Mapping
  • Person recognition
  • Person tracking
  • Object recognition
  • Object manipulation
  • Speech recognition
  • Gesture recognition
  • Cognition
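
For the navigation ability, a minimal sketch using the standard ROS navigation stack (move_base through actionlib) could look like the following; the goal coordinates are just placeholder values.

  #include <ros/ros.h>
  #include <move_base_msgs/MoveBaseAction.h>
  #include <actionlib/client/simple_action_client.h>

  typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

  int main(int argc, char **argv)
  {
      ros::init(argc, argv, "simple_navigation_goal");

      // Connect to the move_base action server provided by the navigation stack
      MoveBaseClient ac("move_base", true);
      while (!ac.waitForServer(ros::Duration(5.0)))
          ROS_INFO("Waiting for the move_base action server...");

      // Placeholder goal: 1 m forward in the map frame, no rotation
      move_base_msgs::MoveBaseGoal goal;
      goal.target_pose.header.frame_id = "map";
      goal.target_pose.header.stamp = ros::Time::now();
      goal.target_pose.pose.position.x = 1.0;
      goal.target_pose.pose.orientation.w = 1.0;

      ac.sendGoal(goal);
      ac.waitForResult();

      if (ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
          ROS_INFO("The robot reached the goal");
      else
          ROS_INFO("The robot failed to reach the goal");
      return 0;
  }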

We are going to separate the development into four phases:

  1. Phase I: Initial Setup
  2. Phase II: Integration and architecture
  3. Phase III: Platform test
  4. Phase IV: Improvements and complex tasks
    1. Technical Challenge: Furniture-type object perception (see the PCL sketch after this list)
    2. Open Challenge: Present and demonstrate most important (scientific) achievements
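
For the furniture-type object perception challenge, a plausible starting point is plane extraction with PCL (already part of our technology list). The sketch below fits the dominant plane, e.g. a table top, in a point cloud loaded from a hypothetical PCD file; it is only an assumption about how we might approach the challenge.

  #include <iostream>
  #include <pcl/io/pcd_io.h>
  #include <pcl/point_types.h>
  #include <pcl/ModelCoefficients.h>
  #include <pcl/PointIndices.h>
  #include <pcl/segmentation/sac_segmentation.h>

  int main(int argc, char **argv)
  {
      pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);

      // Hypothetical input file, e.g. a Kinect snapshot of a table
      if (pcl::io::loadPCDFile<pcl::PointXYZ>("table_scene.pcd", *cloud) == -1)
          return -1;

      // Fit the dominant plane (table top, shelf, ...) with RANSAC
      pcl::SACSegmentation<pcl::PointXYZ> seg;
      seg.setModelType(pcl::SACMODEL_PLANE);
      seg.setMethodType(pcl::SAC_RANSAC);
      seg.setDistanceThreshold(0.02);   // 2 cm inlier tolerance
      seg.setInputCloud(cloud);

      pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
      pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
      seg.segment(*inliers, *coefficients);

      std::cout << "Plane inliers: " << inliers->indices.size() << std::endl;
      return 0;
  }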


Phase I: Initial Setup

Hardware Preparation


We need to carry out some initial tasks to complete the basic hardware setup, configuration, and customization of the robot.


Task 1
Get power from the Roomba brush (it will be used to power the arm)
BrushModification.jpg
<videoflash>KiNFuWWZwFs</videoflash>


Task 2
Emergency Stop Button
PictureChainButtonStop.jpg


Task 3
Start Button

ToDo

Software Preparation


Environment setup

We are going to define the basis of the system to be deployed.

Packages search

We use ROS, so we can find at least one ready-made package for each of these abilities. This task therefore involves searching for and testing each package to evaluate whether we can deploy it on our platform; a small check for the mapping package is sketched after the list below.

  • Navigation
  • Mapping
  • Person recognition
  • Person tracking
  • Object recognition
  • Object manipulation
  • Speech recognition
  • Gesture recognition
  • Cognition
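
As one concrete example of this kind of test, a small node like the one below can verify that a mapping package is actually publishing a map on our platform. The node and topic names are the conventional ones used by ROS mapping packages such as gmapping; they are an assumption, not a description of our final setup.

  #include <ros/ros.h>
  #include <nav_msgs/OccupancyGrid.h>

  // Log basic information about each map published by the mapping package
  void mapCallback(const nav_msgs::OccupancyGrid::ConstPtr &map)
  {
      ROS_INFO("Received map: %u x %u cells, resolution %.3f m/cell",
               map->info.width, map->info.height, map->info.resolution);
  }

  int main(int argc, char **argv)
  {
      ros::init(argc, argv, "map_check");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("map", 1, mapCallback);
      ros::spin();
      return 0;
  }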

ROS: Debugging Techniques

Phase II: Integration and Architecture

Non-Critical (but to-do)

  1. Android/iOS Teleoperation
  2. Desktop Qt interface
  3. Create robot model for Gazebo
  4. Create robot model for rviz (the same as Gazebo?)

Wishlist

  • Computer with i7 processor, 8 GB RAM, NVIDIA graphics card (1-2 GB)
  • ASUS Xtion Pro Live Color RGB Sensor