RoCKIn Camp 2014
- Project Name:
RoCKIn 2014
- Codename
Watermelon :D
- Official Web Page
RoCKIn@home
- Staff:
Technical software: Fernando Casado
Technical software: Víctor Rodríguez
Technical software: Francisco Lera
Technical hardware: Carlos Rodríguez
- Other Information:
* Academic Year: 2013-2014
* SVN Repositories: soon ...
* Tags: Augmented Reality, Elderly people, Tele-Assistance
* Technology: ROS, PCL, C++, SVN, OpenCV, CMake, OpenGL, Qt, ArUco
* State: Development
Project Summary
This challenge focuses on domestic service robots. The project aims to develop robots with enhanced networking and cognitive abilities. They will be able to perform socially useful tasks such as supporting the impaired and the elderly (one of the main goals of our group).
In the initial stages of the competition, individual robots will first tackle basic tasks, such as navigating through the rooms of a house, manipulating objects, or recognizing faces, and will then coordinate to handle housekeeping tasks simultaneously, some of them in natural interaction with humans.
Robot
We want to take part in RoCKIn with the platform developed during the last two years at the Cátedra Telefónica-ULE.
Robot Hardware
- iRobot Roomba 520
- Dynamixel arm (5 × AX-12A servos)
- Wooden frame (yes, it is made of wood)
- Notebook (Atom processor; display and computer are separate)
- Kinect
- Arduino Mega
Robot Software
- ROS (robot control)
- MYRA (C/C++, ArUco, Qt, OpenCV); a marker-detection sketch follows this list
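MYRA relies on ArUco markers for its augmented-reality interface. As a rough illustration, a minimal detection loop could look like the sketch below; it assumes the standalone ArUco 1.x library and a generic camera index 0, and the exact headers and detector API may differ between ArUco versions.

```cpp
// Minimal ArUco marker detection loop (sketch; assumes the standalone ArUco 1.x API).
#include <aruco/aruco.h>
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);                 // placeholder: the Kinect RGB stream could be used instead
    if (!cap.isOpened()) return 1;

    aruco::MarkerDetector detector;
    std::vector<aruco::Marker> markers;

    cv::Mat frame;
    while (cap.read(frame)) {
        detector.detect(frame, markers);     // find all markers in the current frame
        for (size_t i = 0; i < markers.size(); ++i)
            markers[i].draw(frame, cv::Scalar(0, 0, 255), 2);  // overlay border and marker id
        cv::imshow("MYRA markers", frame);
        if (cv::waitKey(10) == 27) break;    // ESC to quit
    }
    return 0;
}
```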
Proposal
We want to deploy on this robot the minimal set of functional abilities needed to take part in RoCKIn 2014.
- Navigation
- Mapping
- Person recognition
- Person tracking
- Object recognition
- Object manipulation
- Speech recognition
- Gesture recognition
- Cognition
Phase I: Initial Setup
Hardware Preparation
- Get power from the Roomba's main brush connector (to power the arm)
- Emergency Stop Button
- Start Button (a button-handling sketch for the Arduino Mega follows this list)
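Since the robot already carries an Arduino Mega, one option is to let it watch both buttons and report state changes to the PC over serial. The sketch below is only a hedged outline: the pin numbers, baud rate, and message strings are placeholders, not the final wiring.

```cpp
// Hypothetical Arduino Mega sketch: poll the emergency-stop and start buttons
// and report state changes over serial. Pin numbers and messages are placeholders.
const int EMERGENCY_STOP_PIN = 2;   // assumed wiring: normally-open push button to GND
const int START_PIN          = 3;

void setup() {
  pinMode(EMERGENCY_STOP_PIN, INPUT_PULLUP);  // pressed button pulls the pin to GND
  pinMode(START_PIN, INPUT_PULLUP);
  Serial.begin(57600);
}

void loop() {
  static bool lastStop = false, lastStart = false;
  bool stopPressed  = (digitalRead(EMERGENCY_STOP_PIN) == LOW);
  bool startPressed = (digitalRead(START_PIN) == LOW);

  if (stopPressed != lastStop) {
    Serial.println(stopPressed ? "EMERGENCY_STOP" : "EMERGENCY_CLEARED");
    lastStop = stopPressed;
  }
  if (startPressed != lastStart) {
    if (startPressed) Serial.println("START");
    lastStart = startPressed;
  }
  delay(20);  // crude debounce
}
```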
Software Preparation
Since we are using ROS, we expect to find an existing module for each ability, ready to deploy on the robot. We will therefore search for and test each module to evaluate whether we can deploy it on our robot.
Restriction: ROS Fuerte
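As a reference for this evaluation, each candidate ability would be wrapped in a small roscpp node so it can be wired into the rest of the system later. A minimal skeleton of such a node is sketched below (rosbuild-era layout, since we are restricted to ROS Fuerte); the node and topic names are placeholders.

```cpp
// Minimal roscpp node skeleton (ROS Fuerte / rosbuild era).
// Node and topic names are placeholders used only for this sketch.
#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char **argv) {
  ros::init(argc, argv, "ability_test_node");
  ros::NodeHandle nh;

  // Each candidate module would publish its result on a well-known topic
  // so the rest of the architecture can be connected later.
  ros::Publisher pub = nh.advertise<std_msgs::String>("ability/status", 10);

  ros::Rate rate(1.0);
  while (ros::ok()) {
    std_msgs::String msg;
    msg.data = "module under evaluation: OK";
    pub.publish(msg);
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}
```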
Software Search
- Navigation (a navigation-goal sketch follows this list)
- Mapping
- Person recognition
- Person tracking
- Object recognition
- Object manipulation
- Speech recognition
- Gesture recognition
- Cognition
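For navigation, the standard ROS candidate is the move_base node from the navigation stack. A hedged sketch of how a single goal could be sent to it through actionlib is shown below; the frame and coordinates are arbitrary example values.

```cpp
// Send one navigation goal to move_base via actionlib (sketch; the goal
// pose and frame are arbitrary example values).
#include <ros/ros.h>
#include <move_base_msgs/MoveBaseAction.h>
#include <actionlib/client/simple_action_client.h>

typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

int main(int argc, char **argv) {
  ros::init(argc, argv, "send_nav_goal");

  MoveBaseClient client("move_base", true);   // true: spin a thread for the client
  client.waitForServer();

  move_base_msgs::MoveBaseGoal goal;
  goal.target_pose.header.frame_id = "map";
  goal.target_pose.header.stamp = ros::Time::now();
  goal.target_pose.pose.position.x = 1.0;     // example coordinates in the map frame
  goal.target_pose.pose.position.y = 0.5;
  goal.target_pose.pose.orientation.w = 1.0;  // facing along +x

  client.sendGoal(goal);
  client.waitForResult();

  if (client.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
    ROS_INFO("Goal reached");
  else
    ROS_WARN("move_base failed to reach the goal");
  return 0;
}
```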
Environment setup
ROS: Debugging Techniques
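One basic technique is rosconsole logging at different verbosity levels, which can be filtered per node at runtime (for example with rxconsole in Fuerte) instead of sprinkling printf calls. A small illustrative snippet:

```cpp
// rosconsole logging levels: DEBUG is hidden by default and can be enabled
// per node without recompiling (e.g. via rxconsole or a rosconsole config file).
#include <ros/ros.h>

int main(int argc, char **argv) {
  ros::init(argc, argv, "debug_demo");
  ros::NodeHandle nh;

  int value = 42;  // placeholder value, just to have something to print
  ROS_DEBUG("fine-grained detail, value = %d", value);
  ROS_INFO("normal progress message");
  ROS_WARN("something looks suspicious");
  ROS_ERROR("something went wrong");

  ros::spinOnce();
  return 0;
}
```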
Phase II: Integration and Architecture
Non-Critical (but to-do)
- Android/iOS Teleoperation
- Desktop Qt interface
- Create robot model for Gazebo
- Create robot model for rviz (the same as Gazebo?)
Wishlist
- Computer with an i7 processor, 8 GB RAM, and an Nvidia GPU (1-2 GB)
- ASUS Xtion Pro Live Color RGB Sensor