Benchmark dataset for evaluation of range-based people tracker classifiers in mobile robots

From robotica.unileon.es
SciCrunch! reference: Range-based people tracker classifiers Benchmark Dataset, RRID:SCR_015743.

This data report summarizes a benchmark dataset that can be used to evaluate the performance of different approaches to detecting and tracking people with LIDAR sensors. The information contained in the dataset is especially suitable as training data for neural-network-based classifiers.

Further information is available in Álvarez-Aparicio et al. (2017).

Materials and methods

The following section describes the materials (shown in Figure 1) used to gather data, which include: a certified study area, an autonomous robot with an on-board LIDAR sensor, and a real-time location system (RTLS) to obtain ground-truth data about person location. Recorded data include location estimates calculated by two people trackers, LD and PeTra, also described below. Finally, the recording procedure used to build the dataset is explained.

Fig. 1: Materials used in the experiments. (A) Orbi-One robot. (B) Leon@Home Testbed plan. (C) Floor plan of the apartment. Red dots show the location of KIO anchors. Black numbered dots denote Orbi-One locations during the data gathering. (D) KIO anchors. (E) General view of the apartment. (F) Furniture in mock-up apartment.

Leon@Home Testbed

Data have been gathered at Leon@Home Testbed, a Certified Testbed of the European Robotics League (ERL). Its main purpose is to benchmark service robots in a realistic home environment. Our testbed is made up of four parts, shown in Figure 1B: a mock-up apartment, a control zone with direct vision (glass wall) into the apartment, a small workshop, and a larger development zone, where researchers work.

Leon@Home Testbed is located on the second floor of the Módulo de Investigación en Cibernética (Building for Research in Cybernetics) on the Vegazana Campus of the University of León (Spain). The apartment is a single bedroom mock-up home built in an 8 m × 7 m space. Figure 1C shows a plan of the apartment. 60 cm high walls divide it into a kitchen, living room, bathroom, and bedroom. The furniture (Figures 1E,F) has been chosen to test different robot abilities. For instance, the kitchen cabinets all have different types of handles.

Orbi-One

Orbi-One (Figure 1A) is an assistant robot manufactured by Robotnik. It has several sensors, including an RGBD camera, a LIDAR sensor, and an inertial unit. It can operate a manipulator arm attached to its torso and has a wheeled base for moving around the room. Orbi-One includes a wireless access point, which allows WiFi communication with other robots and computers.

The software to control the robot hardware is based on the ROS framework. ROS is essentially a set of libraries for robotics similar to operating system services, providing hardware abstraction for sensors and actuators, low-level device control, and inter-process communication. Computation takes place in processes named Nodes, which can receive and send Messages. Nodes publish Messages into information buffers called Topics.
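As an illustration of the publish/subscribe pattern described above, the toy Python sketch below mimics Nodes exchanging Messages through a Topic. This is not ROS code (real Nodes would use a ROS client library such as rospy); the `Bus` class and the topic name are purely illustrative.

```python
from collections import defaultdict

class Bus:
    """Toy in-process message bus mimicking the ROS topic pattern."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # A "Node" registers interest in a Topic.
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # A "Node" publishes a Message; every subscriber is notified.
        for callback in self.subscribers[topic]:
            callback(message)

bus = Bus()
received = []
bus.subscribe("/scan", received.append)           # subscriber "node"
bus.publish("/scan", {"ranges": [1.2, 1.3]})      # publisher "node"
print(received)  # [{'ranges': [1.2, 1.3]}]
```

In real ROS the bus is distributed across processes and machines, but the decoupling idea is the same: publishers and subscribers only share a topic name and a message type.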

KIO RTLS

In order to acquire ground-truth data about person location in the study area, we need an RTLS for indoor environments. The KIO RTLS commercial solution by Eliko has been used. KIO is a precise RTLS for tracking any object in 2- or 3-dimensional space. Its Ultra-Wideband technology makes it possible to micro-position objects through obstructions. KIO also works in non-line-of-sight conditions, both indoors and outdoors.

KIO comes in two main configurations. The Regular Cell configuration guarantees a reliable accuracy of ±30 cm, according to the manufacturer's specifications. The Small Cell configuration is designed for location-critical applications and provides a reliable ±5 cm accuracy, again according to the manufacturer's specifications. Calibration performed by the authors of this paper in the mock-up apartment shows that the error is higher in some areas and lower in others, but on average the manufacturer's claims are correct.

KIO calculates the position of a mobile transceiver, called a Tag. In order to do so, KIO uses radio beacons, called Anchors, distributed at known positions in the surroundings. Figure 1D shows a KIO anchor. KIO tags have the same dimensions as the anchors and must be carried by the tracked subject, in our case a person. The red dots in Figure 1C show the location of the six anchors used in these experiments; they are placed on the ceiling. The distribution of the anchors has been chosen following the method shown in Guerrero-Higueras et al. (2017).
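KIO's internal positioning algorithm is proprietary, but the principle of locating a Tag from ranges to Anchors at known positions can be sketched as a linearized least-squares multilateration. The anchor layout and tag position below are illustrative only, not taken from the dataset.

```python
import numpy as np

def trilaterate_2d(anchors, distances):
    """Least-squares 2-D position estimate from anchor distances.

    anchors: (N, 2) array of known anchor coordinates (N >= 3).
    distances: (N,) array of measured tag-to-anchor distances.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Linearize by subtracting the first range equation from the rest.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1)
         - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: a tag at (3, 2) in an 8 m x 7 m room with four corner anchors.
anchors = [(0, 0), (8, 0), (0, 7), (8, 7)]
tag = np.array([3.0, 2.0])
dists = [np.linalg.norm(tag - np.array(a)) for a in anchors]
print(trilaterate_2d(anchors, dists))  # close to [3. 2.]
```

With noisy ranges the least-squares solution averages the errors over all anchors, which is one reason anchor placement (as studied in Guerrero-Higueras et al., 2017) affects accuracy.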

Leg Detector (LD)

LD is a ROS package that takes messages published by a LIDAR sensor as input and uses a machine-learning classifier to detect groups of laser readings as possible legs. The code is available in a public repository, but is unsupported at this time.

LD publishes the location for the individual legs. It can also attempt to pair the legs and publish their average as an estimate of where the center of a person is. LD may optionally also publish visualization marker messages to indicate where detections happened.
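LD's actual pairing logic lives in the package's source; as a hypothetical sketch of the idea, the following greedily pairs nearby leg detections and averages each pair to estimate a person's center. The `max_sep` threshold is an assumption for illustration, not a documented LD parameter.

```python
import math

def pair_legs(legs, max_sep=0.8):
    """Greedily pair leg detections and return person-center estimates.

    legs: list of (x, y) leg positions.
    max_sep: assumed maximum distance (m) between two legs of one person.
    """
    legs = list(legs)
    centers = []
    while legs:
        a = legs.pop(0)
        # The nearest remaining leg, if close enough, is taken as the partner.
        best = min(legs, key=lambda b: math.dist(a, b), default=None)
        if best is not None and math.dist(a, best) <= max_sep:
            legs.remove(best)
            centers.append(((a[0] + best[0]) / 2, (a[1] + best[1]) / 2))
        else:
            centers.append(a)  # unpaired leg: report as-is
    return centers
```

For example, `pair_legs([(0.0, 0.0), (0.4, 0.0), (3.0, 3.0)])` merges the first two detections into one center and leaves the distant one unpaired.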

PeTra

PeTra is a tool for detecting and tracking people, developed by the Robotics Group at the University of León. The system is based on a Convolutional Neural Network (CNN) with a configuration based on the U-Net architecture by Ronneberger et al. (2015).

The system performs the following steps in real time:

1. First, the data provided by the LIDAR sensor are processed to build a two-dimensional occupancy map centered on the robot. This occupancy map is represented as a binary matrix, where 1s denote positions where the LIDAR scan found an obstacle, and 0s denote positions where the scan either passed through without detecting an obstacle or did not reach.

2. Then, the occupancy map is relayed to the network as input data. The network produces a second occupancy map representing the zones where legs have been detected.

3. Finally, center-of-mass calculations return the locations of the detected people. PeTra also publishes locations for the individual legs and Marker messages for visualization.
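The first and third steps above can be sketched in plain NumPy. The grid size and resolution below are illustrative choices, not PeTra's documented parameters.

```python
import numpy as np

def occupancy_map(ranges, angles, size=256, resolution=0.05):
    """Binary occupancy grid centered on the robot (step 1).

    ranges/angles: LIDAR readings in meters/radians.
    size: grid side length in cells; resolution: meters per cell.
    """
    ranges = np.asarray(ranges, dtype=float)
    angles = np.asarray(angles, dtype=float)
    grid = np.zeros((size, size), dtype=np.uint8)
    half = size // 2
    # Polar-to-Cartesian, then meters to grid cells.
    cols = (ranges * np.cos(angles) / resolution).astype(int) + half
    rows = (ranges * np.sin(angles) / resolution).astype(int) + half
    ok = (rows >= 0) & (rows < size) & (cols >= 0) & (cols < size)
    grid[rows[ok], cols[ok]] = 1
    return grid

def center_of_mass(mask, resolution=0.05):
    """Metric (x, y) of the 1-cells in a predicted leg mask (step 3)."""
    rows, cols = np.nonzero(mask)
    half = mask.shape[0] // 2
    return ((cols.mean() - half) * resolution,
            (rows.mean() - half) * resolution)

# Example: a single reading 1 m straight ahead of the robot.
grid = occupancy_map([1.0], [0.0])
x, y = center_of_mass(grid)
print(f"x={x:.2f}, y={y:.2f}")  # x=1.00, y=0.00
```

Step 2, the CNN itself, would map one such grid to a second grid marking leg cells; the sketch above only shows the geometry on either side of the network.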

The PeTra system has been trained using the following data: npy_train_test_globales.tar.gz

Recording procedure

The data were gathered in 14 different situations. In all of them, Orbi-One stood still while one or more people, each carrying a KIO tag, moved around it. Three different locations for Orbi-One were defined (see Figure 1C), resulting in 42 scenarios (14 situations × 3 Orbi-One locations). Figure 2 shows the 14 recognition scenarios recorded. These scenarios were chosen according to situations that may occur in robotics competitions such as the ERL or RoboCup.


Fig. 2: Recognition scenarios recorded, as shown in Álvarez-Aparicio et al. (2017).

Data

A rosbag file was created for each scenario, except for situations 3, 12, and 13, where three rosbag files were recorded; situation 4, where four were recorded; and situation 9, where five were recorded. Each file contains LIDAR sensor measurements, location estimates from PeTra and LD, locations from the KIO RTLS, and other useful data. Specifically, the following data were included in the rosbag files:

  • LIDAR sensor data. Data from LIDAR sensors are provided as ROS LaserScan Messages, which include, among other information, the following: acquisition time of the first ray in the scan, start/end angle of the scan, angular distance between measurements, and range data.
  • PeTra location estimates provided as ROS PointStamped Messages, which include a position [x, y, z] and a timestamp.
  • Location estimates calculated by LD, which publishes data for individual legs (as ROS PositionMeasurementArray Messages). LD also attempts to pair the legs and publishes their average, as a ROS PositionMeasurement Message, as an estimate of where the center of a person is.
  • Locations provided by KIO RTLS also provided as ROS PointStamped Messages.
  • Messages from /map, /odom, and /tf ROS topics, which include map information, odometry of the robot base, and transformation information, respectively.
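The LaserScan fields listed above (start angle, angular increment, and range data) are sufficient to reconstruct Cartesian points in the sensor frame. The sketch below uses a plain dictionary instead of a real ROS message, so it runs without a ROS installation; the field names mirror those of the LaserScan message.

```python
import math

def scan_to_points(scan):
    """Convert a LaserScan-like dict to (x, y) points in the sensor frame.

    Readings outside [range_min, range_max] are discarded, mirroring how
    ROS consumers typically filter invalid rays.
    """
    points = []
    for i, r in enumerate(scan["ranges"]):
        if not (scan["range_min"] <= r <= scan["range_max"]):
            continue
        angle = scan["angle_min"] + i * scan["angle_increment"]
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

# Example scan: three rays, the last one below range_min and dropped.
scan = {"angle_min": 0.0, "angle_increment": math.pi / 2,
        "range_min": 0.1, "range_max": 10.0,
        "ranges": [1.0, 2.0, 0.05]}
print(scan_to_points(scan))  # [(1.0, 0.0), (~0.0, 2.0)]
```

The same conversion, applied to the recorded LaserScan Messages, is the usual first step before comparing LIDAR readings against the KIO ground-truth locations.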

Different versions of the dataset are enumerated below.

v1.0 [Nov-2017]

As a result of applying the recording method explained above, a first version of the dataset has been released. It includes measurements for the scenarios defined in Figure 2:

Location 1. Kitchen
  • Situation 1 (1 file):
  1. test_01.bag: duration: 14:56s, size: 227.8 MB, start date/time: Jul 20, 2017 12:49:21.16
  • Situation 2 (1 file):
  1. test_02.bag: duration: 15:08s, size: 233.0 MB, start date/time: Jul 26, 2017 11:01:24.72
  • Situation 3 (3 files):
  1. test_03_1.bag: duration: 39.9s, size: 10.4 MB, start date/time: Jul 20, 2017 13:27:25.50
  2. test_03_2.bag: duration: 39.1s, size: 10.2 MB, start date/time: Jul 20, 2017 13:28:56.41
  3. test_03_3.bag: duration: 40.5s, size: 10.5 MB, start date/time: Jul 20, 2017 13:30:04.94
  • Situation 4 (4 files):
  1. test_04_1.bag: duration: 58.3s, size: 15.0 MB, start date/time: Jul 25, 2017 10:39:52.62
  2. test_04_2.bag: duration: 57.2s, size: 14.7 MB, start date/time: Jul 25, 2017 10:41:16.31
  3. test_04_3.bag: duration: 50.5s, size: 13.0 MB, start date/time: Jul 25, 2017 10:42:44.95
  4. test_04_4.bag: duration: 1:01s, size: 15.7 MB, start date/time: Jul 25, 2017 10:43:52.44
  • Situation 5 (1 file):
  1. test_05.bag: duration: 15:15s, size: 236.8 MB, start date/time: Jul 26, 2017 11:33:13.31
  • Situation 6 (1 file):
  1. test_06.bag: duration: 09:18s, size: 143.3 MB, start date/time: Jul 26, 2017 12:25:45.12
  • Situation 7 (1 file):
  1. test_07.bag: duration: 4:16s, size: 65.1 MB, start date/time: Jul 25, 2017 11:40:01.65
  • Situation 8 (1 file):
  1. test_08.bag: duration: 03:39s, size: 55.8 MB, start date/time: Jul 25, 2017 12:25:29.22
  • Situation 9 (5 files):
  1. test_09_1.bag: duration: 22.9s, size: 6.1 MB, start date/time: Jul 25, 2017 10:50:02.95
  2. test_09_2.bag: duration: 20.8s, size: 5.6 MB, start date/time: Jul 25, 2017 10:51:02.62
  3. test_09_3.bag: duration: 34.3s, size: 9.0 MB, start date/time: Jul 25, 2017 10:51:45.96
  4. test_09_4.bag: duration: 29.3s, size: 7.9 MB, start date/time: Jul 25, 2017 10:52:51.24
  5. test_09_5.bag: duration: 36.9s, size: 9.7 MB, start date/time: Jul 25, 2017 10:54:00.13
  • Situation 10 (1 file):
  1. test_10.bag: duration: 15:50s, size: 240.5 MB, start date/time: Jul 20, 2017 13:07:40.16
  • Situation 11 (1 file):
  1. test_11.bag: duration: 03:50s, size: 58.7 MB, start date/time: Jul 25, 2017 11:48:33.90
  • Situation 12 (3 files):
  1. test_12_1.bag: duration: 43.6s, size: 11.3 MB, start date/time: Jul 20, 2017 13:33:23.74
  2. test_12_2.bag: duration: 44.3s, size: 11.5 MB, start date/time: Jul 20, 2017 13:34:24.95
  3. test_12_3.bag: duration: 37.3s, size: 9.7 MB, start date/time: Jul 20, 2017 13:35:31.55
  • Situation 13 (3 files):
  1. test_13_1.bag: duration: 57.8s, size: 14.9 MB, start date/time: Jul 25, 2017 11:01:15.23
  2. test_13_2.bag: duration: 59.9s, size: 15.4 MB, start date/time: Jul 25, 2017 11:02:37.85
  3. test_13_3.bag: duration: 54.0s, size: 13.9 MB, start date/time: Jul 25, 2017 11:04:30.69
  • Situation 14 (1 file):
  1. test_14.bag: duration: 05:57s, size: 90.5 MB, start date/time: Jul 25, 2017 11:10:32.97
Location 2. Living room
  • Situation 1 (1 file):
  1. test_01.bag: duration: 15:37s, size: 239.7 MB, start date/time: Nov 16, 2017 12:20:19.65
  • Situation 2 (1 file):
  1. test_02.bag: duration: 15:05s, size: 231.4 MB, start date/time: Nov 27, 2017 18:32:24.35
  • Situation 3 (3 files):
  1. test_03_1.bag: duration: 55.2s, size: 14.5 MB, start date/time: Nov 16, 2017 12:43:58.29
  2. test_03_2.bag: duration: 44.4s, size: 10.2 MB, start date/time: Nov 16, 2017 12:46:31.99
  3. test_03_3.bag: duration: 51.1s, size: 13.4 MB, start date/time: Nov 16, 2017 12:47:29.66
  • Situation 4 (4 files):
  1. test_04_1.bag: duration: 01:23s, size: 21.6 MB, start date/time: Nov 16, 2017 12:50:22.18
  2. test_04_2.bag: duration: 01:20s, size: 21.0 MB, start date/time: Nov 16, 2017 12:52:18.44
  3. test_04_3.bag: duration: 01:09s, size: 18.1 MB, start date/time: Nov 16, 2017 12:53:51.33
  4. test_04_4.bag: duration: 1:21s, size: 21.3 MB, start date/time: Nov 16, 2017 12:55:31.81
  • Situation 5 (1 file):
  1. test_05.bag: duration: 15:07s, size: 232.4 MB, start date/time: Nov 28 2017 18:33:16.24
  • Situation 6 (1 file):
  1. test_06.bag: duration: 09:14s, size: 136.2 MB, start date/time: Nov 29 2017 13:12:42.61
  • Situation 7 (1 file):
  1. test_07.bag: duration: 4:39s, size: 71.5 MB, start date/time: Nov 22 2017 12:48:33.86
  • Situation 8 (1 file):
  1. test_08.bag: duration: 03:04s, size: 47.2 MB, start date/time: Nov 22 2017 12:59:08.58
  • Situation 9 (5 files):
  1. test_09_1.bag: duration: 28.8s, size: 7.7 MB, start date/time: Nov 28, 2017 17:17:46.49
  2. test_09_2.bag: duration: 31.9s, size: 8.4 MB, start date/time: Nov 28, 2017 17:20:48.26
  3. test_09_3.bag: duration: 34.3s, size: 9.0 MB, start date/time: Nov 28, 2017 17:20:05.96
  4. test_09_4.bag: duration: 29.9s, size: 7.9 MB, start date/time: Nov 28, 2017 17:19:26.36
  5. test_09_5.bag: duration: 30.8s, size: 8.1 MB, start date/time: Nov 28, 2017 17:21:20.55
  • Situation 10 (1 file):
  1. test_10.bag: duration: 16:05s, size: 246.6 MB, start date/time: Nov 22 2017 11:55:02.39
  • Situation 11 (1 file):
  1. test_11.bag: duration: 03:25s, size: 52.8 MB, start date/time: Nov 22 2017 13:08:11.99
  • Situation 12 (3 files):
  1. test_12_1.bag: duration: 01:06s, size: 17.4 MB, start date/time: Nov 16, 2017 12:55:26.25
  2. test_12_2.bag: duration: 1:06s, size: 17.4 MB, start date/time: Nov 16, 2017 12:56:52.47
  3. test_12_3.bag: duration: 50.2s, size: 13.1 MB, start date/time: Nov 16, 2017 11:44:00.10
  • Situation 13 (3 files):
  1. test_13_1.bag: duration: 58.2s, size: 15.1 MB, start date/time: Nov 22, 2017 11:48:35.33
  2. test_13_2.bag: duration: 01:05s, size: 16.9 MB, start date/time: Nov 22, 2017 11:49:48.18
  3. test_13_3.bag: duration: 01:14s, size: 19.2 MB, start date/time: Nov 22, 2017 11:51:22.68
  • Situation 14 (1 file):
  1. test_14.bag: duration: 06:17s, size: 96.3 MB, start date/time: Nov 22 2017 12:23:07.92
Location 3. Bedroom
  • Situation 1 (1 file):
  1. test_01.bag: duration: 15:17s, size: 234.6 MB, start date/time: Nov 21, 2017 18:56:54.87
  • Situation 2 (1 file):
  1. test_02.bag: duration: 15:08s, size: 233.0 MB, start date/time: Nov 28, 2017 17:37:34.08
  • Situation 3 (3 files):
  1. test_03_1.bag: duration: 44.7s, size: 11.7 MB, start date/time: Nov 21, 2017 13:16:51.65
  2. test_03_2.bag: duration: 39.9s, size: 10.5 MB, start date/time: Nov 21, 2017 19:19:06.27
  3. test_03_3.bag: duration: 41.2s, size: 10.8 MB, start date/time: Nov 21, 2017 19:18:06.86
  • Situation 4 (4 files):
  1. test_04_1.bag: duration: 01:01s, size: 16.0 MB, start date/time: Nov 21, 2017 19:21:30.99
  2. test_04_2.bag: duration: 01:01s, size: 15.9 MB, start date/time: Nov 21, 2017 19:25:38.00
  3. test_04_3.bag: duration: 58.0s, size: 15.1 MB, start date/time: Nov 21, 2017 19:27:16.74
  4. test_04_4.bag: duration: 1:00s, size: 15.8 MB, start date/time: Nov 21, 2017 19:29:39.28
  • Situation 5 (1 file):
  1. test_05.bag: duration: 15:11s, size: 233.0 MB, start date/time: Nov 28, 2017 19:05:20.19
  • Situation 6 (1 file):
  1. test_06.bag: duration: 09:03s, size: 133.6 MB, start date/time: Nov 26, 2017 13:24:25.64
  • Situation 7 (1 file):
  1. test_07.bag: duration: 4:16s, size: 65.1 MB, start date/time: Nov 27, 2017 17:34:50.22
  • Situation 8 (1 file):
  1. test_08.bag: duration: 03:47s, size: 58.3 MB, start date/time: Nov 27, 2017 17:41:48.32
  • Situation 9 (5 files):
  1. test_09_1.bag: duration: 31.3s, size: 8.3 MB, start date/time: Nov 28, 2017 17:23:58.31
  2. test_09_2.bag: duration: 30.4s, size: 8.0 MB, start date/time: Nov 28, 2017 17:24:41.09
  3. test_09_3.bag: duration: 31.2s, size: 8.3 MB, start date/time: Nov 28, 2017 17:25:22.22
  4. test_09_4.bag: duration: 31.4s, size: 8.3 MB, start date/time: Nov 28, 2017 17:26:06.98
  5. test_09_5.bag: duration: 33.2s, size: 8.8 MB, start date/time: Nov 28, 2017 17:26:51.32
  • Situation 10 (1 file):
  1. test_10.bag: duration: 10:00s, size: 153.7 MB, start date/time: Nov 22, 2017 18:15:29.53
  • Situation 11 (1 file):
  1. test_11.bag: duration: 03:50s, size: 59.1 MB, start date/time: Nov 22, 2017 18:25:54.55
  • Situation 12 (3 files):
  1. test_12_1.bag: duration: 41.3s, size: 10.8 MB, start date/time: Nov 22, 2017 18:35:11.74
  2. test_12_2.bag: duration: 41.8s, size: 10.9 MB, start date/time: Nov 22, 2017 18:36:12.11
  3. test_12_3.bag: duration: 39.8s, size: 10.5 MB, start date/time: Nov 22, 2017 18:37:05.06
  • Situation 13 (3 files):
  1. test_13_1.bag: duration: 01:01s, size: 14.9 MB, start date/time: Nov 22, 2017 18:40:13.36
  2. test_13_2.bag: duration: 01:00s, size: 15.7 MB, start date/time: Nov 22, 2017 18:41:28.33
  3. test_13_3.bag: duration: 01:00s, size: 15.7 MB, start date/time: Nov 22, 2017 18:42:39.52
  • Situation 14 (1 file):
  1. test_14.bag: duration: 06:12s, size: 95.1 MB, start date/time: Nov 23, 2017 12:53:06.48

v2.0 [Feb-2018]

As a result of applying the recording method explained above, a second version of the dataset has been released. It includes measurements for the scenarios defined in Figure 2:

Location 1. Kitchen Location 2. Living room Location 3. Bedroom