Benchmark dataset for evaluation of range-based people tracker classifiers in mobile robots

 
<blockquote style="background-color:#ffe;border:1px solid #fb0;padding:5px 10px">SciCrunch! reference: '''Range-based people tracker classifiers Benchmark Dataset, RRID:SCR_015743.'''</blockquote>
 
This data report summarizes a benchmark dataset which can be used to evaluate the performance of different approaches for detecting and tracking people using LIDAR sensors. The information contained in the dataset is especially suitable for use as training data for neural-network-based classifiers.

Further information is available in [https://www.frontiersin.org/articles/10.3389/fnbot.2017.00072/full Álvarez-Aparicio et al. (2018)].
  
 
== Materials ==
 
This section describes the materials (shown in Figure 1) used to gather the data: a certified study area, an autonomous robot with an on-board LIDAR sensor, and a real-time location system (RTLS) used to obtain ground-truth data about person location. The recorded data include location estimates calculated by two people trackers, LD and PeTra, which are also described below. Finally, the recording procedure used to build the dataset is explained.
=== Leon@Home Testbed ===
Data have been gathered at Leon@Home Testbed, a Certified Testbed of the European Robotics League (ERL). Its main purpose is to benchmark service robots in a realistic home environment. Our testbed is made up of four parts, shown in Figure 1B: a mock-up apartment, a control zone with direct vision (through a glass wall) into the apartment, a small workshop, and a larger development zone where researchers work.

Leon@Home Testbed is located on the second floor of the Módulo de Investigación en Cibernética (Building for Research in Cybernetics) on the Vegazana Campus of the University of León (Spain). The apartment is a single-bedroom mock-up home built in an 8 m × 7 m space. Figure 1C shows a plan of the apartment. Walls 60 cm high divide it into a kitchen, living room, bathroom, and bedroom. The furniture (Figures 1E,F) has been chosen to test different robot abilities; for instance, the kitchen cabinets all have different types of handles.
  
 
=== Orbi-One ===
 
Orbi-One (Figure 1A) is an assistant robot manufactured by [http://www.robotnik.es/manipuladores-roboticos-moviles/rb-one/ Robotnik]. It has several sensors, among them an RGBD camera, a LIDAR sensor, and an inertial measurement unit. It can operate a manipulator arm attached to its torso and has a wheeled base for moving around the room. Orbi-One also includes a wireless access point, which allows WiFi communication with other robots and computers.
  
[[File:PeopleTrackingFig1.png|frame|center|'''Fig. 1''': From left to right: (A) Orbi-One carrying a KIO tag, and a KIO anchor attached to the ceiling; (B) robotics mobile lab plan, red dots show the location of KIO anchors; (C) occupancy map generated using LIDAR sensor measurements; and (D) network output.]]
The software to control the robot hardware is based on the [http://www.ros.org/ ROS] framework. ROS is essentially a set of libraries for robotics, similar to operating-system services, providing hardware abstraction for sensors and actuators, low-level device control, and inter-process communication. Computation takes place in processes named Nodes, which can receive and send Messages. Nodes publish Messages into information buffers called Topics.
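As an illustration of these concepts, the following minimal sketch (Python with rospy, not part of the dataset tooling) shows a Node that subscribes to the ''/scan'' Topic on which the LIDAR data are published (see the Data section) and logs basic information about each incoming Message:

<syntaxhighlight lang="python">
#!/usr/bin/env python
# Minimal example of a ROS Node: subscribe to the /scan Topic and log
# basic information about every LaserScan Message received.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    # scan.ranges holds one distance reading per laser beam
    rospy.loginfo("%d readings, angles [%.2f, %.2f] rad",
                  len(scan.ranges), scan.angle_min, scan.angle_max)

if __name__ == "__main__":
    rospy.init_node("scan_listener")
    rospy.Subscriber("/scan", LaserScan, on_scan)
    rospy.spin()  # keep the node alive while callbacks are processed
</syntaxhighlight>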
  
 
=== KIO RTLS ===
 
In order to acquire ground-truth data about person location in the study area, we need an RTLS for indoor environments. The KIO RTLS commercial solution by [https://www.eliko.ee/products/kio-rtls/ Eliko] has been used. KIO is a precise RTLS for tracking any object in two- or three-dimensional space. Its Ultra-Wideband technology makes it possible to micro-position objects through obstructions; KIO also works in non-line-of-sight conditions, both indoors and outdoors.

KIO comes in two main configurations. According to the manufacturer's specifications, the Regular Cell configuration guarantees a reliable accuracy of ±30 cm, while the Small Cell configuration, designed for location-critical applications, provides a reliable accuracy of ±5 cm. Calibration carried out by the authors in the mock-up apartment shows that the error is higher in some areas and lower in others, but on average the manufacturer's claims hold.

KIO calculates the position of a mobile transceiver, called a Tag. To do so, KIO uses radio beacons, called Anchors, distributed at known positions in the surroundings. Figure 1D shows a KIO anchor. KIO tags are the same size and must be placed on the tracked subject, in our case a person. The red dots in Figure 1C show the location of the six anchors used in these experiments; they are placed on the ceiling. The distribution of the anchors has been chosen following the method described in Guerrero-Higueras et al. (2017).
=== Leg Detector (LD) ===

LD is a ROS package which takes messages published by a LIDAR sensor as input and uses a machine-learning-trained classifier to detect groups of laser readings as possible legs. The code is available in a public [http://wiki.ros.org/leg_detector repository], but is unsupported at this time.
LD publishes the location of the individual legs. It can also attempt to pair the legs and publish their average as an estimate of where the center of a person is. LD may optionally publish visualization Marker messages to indicate where detections happened.
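As a hedged sketch of how this output can be consumed (assuming the standard people_msgs message definitions and the ''/people_tracker_measurements'' topic listed in the Data section; depending on the leg_detector version, the topic may instead carry single PositionMeasurement messages):

<syntaxhighlight lang="python">
# Sketch of a listener for the LD output described above.
import rospy
from people_msgs.msg import PositionMeasurementArray

def on_people(msg):
    for person in msg.people:
        rospy.loginfo("Detection %s at (%.2f, %.2f), reliability %.2f",
                      person.object_id, person.pos.x, person.pos.y,
                      person.reliability)

rospy.init_node("ld_listener")
rospy.Subscriber("/people_tracker_measurements", PositionMeasurementArray, on_people)
rospy.spin()
</syntaxhighlight>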
  
 
=== PeTra ===
 
PeTra is a people-tracking tool for detecting and tracking people, developed by the Robotics Group at the University of León. The system is based on a Convolutional Neural Network (CNN) with a configuration based on the U-Net architecture by Ronneberger et al. (2015).

The system performs the following steps in real time:

1. First, the data provided by the LIDAR sensor are processed to build a two-dimensional occupancy map centered on the robot. This occupancy map is represented as a binary matrix, where 1s denote positions where the LIDAR scan found an obstacle, and 0s denote positions where the LIDAR scan either passed through without detecting any obstacle or did not reach (see the sketch after this list).

2. Then, the occupancy map is fed to the network as input data. The network produces a second occupancy map representing the zones where legs have been detected.

3. Finally, center-of-mass calculations return the location of each person. PeTra also publishes locations for the individual legs and Marker messages for visualization.
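The occupancy map of step 1 can be sketched as follows (an illustrative Python/NumPy fragment; the grid size and resolution are arbitrary example values, not PeTra's actual parameters):

<syntaxhighlight lang="python">
# Illustrative sketch of step 1: turn a LaserScan into a binary occupancy
# matrix centred on the robot. 1 marks a cell where a beam hit an obstacle.
import numpy as np

def scan_to_occupancy(ranges, angle_min, angle_increment,
                      grid_size=256, resolution=0.05):
    """Return a grid_size x grid_size binary matrix (uint8)."""
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    half = grid_size // 2
    for i, r in enumerate(ranges):
        if not np.isfinite(r):
            continue  # beam returned no measurement
        angle = angle_min + i * angle_increment
        # Obstacle position in the robot frame, converted to grid cells
        cx = half + int(round(r * np.cos(angle) / resolution))
        cy = half + int(round(r * np.sin(angle) / resolution))
        if 0 <= cx < grid_size and 0 <= cy < grid_size:
            grid[cy, cx] = 1
    return grid
</syntaxhighlight>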
  
=== Recording procedure ===
  
The data were gathered in 14 different situations. In all of them, Orbi-One stood still while one or more people, carrying a KIO tag, moved around it. Three different locations for Orbi-One were defined (see Figure 1C), resulting in 42 scenarios (14 situations × 3 Orbi-One locations). Figure 2 shows the 14 different situations recorded. These have been chosen according to situations that may occur in robotics competitions such as the [https://www.eu-robotics.net/robotics_league/ ERL] or [http://www.robocup.org/ RoboCup].
  
  
'''Fig. 2''': Recognition scenarios recorded.
 
== Data ==
 
A rosbag file was created for each scenario (except for situations 3, 12, and 13, where 3 rosbag files were recorded; situation 4, where 4 rosbag files were recorded; and situation 9, where 5 rosbag files were recorded), containing LIDAR sensor measurements, location estimates from PeTra and LD, locations from KIO RTLS, and other useful data. Specifically, the following data were included in the rosbag files (a short reading sketch follows the list):
  
* LIDAR sensor data, published at the ''/scan'' topic as ROS LaserScan Messages, which include, among other information, the acquisition time of the first ray in the scan, the start/end angles of the scan, the angular distance between measurements, and the range data.
* Location estimates calculated by PeTra, published at the ''/person'' topic as ROS PointStamped Messages, which include a position [x, y, z] and a timestamp.
* Location estimates calculated by LD, published at the ''/people_tracker_measurements'' topic. LD publishes data for the individual legs as ROS PositionMeasurementArray Messages; it also attempts to pair the legs and publishes their average, as a ROS PositionMeasurement Message, as an estimate of where the center of a person is.
* Locations provided by KIO RTLS, published at the ''/kio/PointStamped/4037/out'' topic, also as ROS PointStamped Messages.
* Messages from the ''/map'', ''/odom'', and ''/tf'' topics, which include map information, odometry of the robot base, and transform information, respectively.
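As a hedged sketch of how the bag files can be read back offline with the rosbag Python API (the topic names are those listed above, ''test_01.bag'' is an example file name from the listing below, and the corresponding message definitions must be installed):

<syntaxhighlight lang="python">
# Sketch: iterate over the messages stored in one bag file and print the
# PeTra estimates together with the KIO ground-truth locations.
import rosbag

TOPICS = ["/scan", "/person", "/people_tracker_measurements",
          "/kio/PointStamped/4037/out"]

with rosbag.Bag("test_01.bag") as bag:
    for topic, msg, t in bag.read_messages(topics=TOPICS):
        if topic == "/person":
            # PeTra estimate: a PointStamped with position and timestamp
            print("%.2f PeTra x=%.2f y=%.2f" % (t.to_sec(), msg.point.x, msg.point.y))
        elif topic == "/kio/PointStamped/4037/out":
            print("%.2f KIO   x=%.2f y=%.2f" % (t.to_sec(), msg.point.x, msg.point.y))
</syntaxhighlight>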
  
 
Different versions of the dataset are enumerated below.


=== v1.0 [Nov-2017] ===

As a result of applying the recording method explained above, a first version of the dataset has been released. It includes measurements for the scenarios defined in Fig. 2:

Orbi-One location 1: Kitchen

  • Scenario 01 (1 file):
  1. test_01.bag: duration: 14:56s, size: 227.8 MB, start date/time: Jul 20, 2017 12:49:21.16
  • Scenario 02 (1 file):
  1. test_02.bag: duration: 15:08s, size: 233.0 MB, start date/time: Jul 26, 2017 11:01:24.72
  • Scenario 03 (3 files):
  1. test_03_1.bag: duration: 39.9s, size: 10.4 MB, start date/time: Jul 20, 2017 13:27:25.50
  2. test_03_2.bag: duration: 39.1s, size: 10.2 MB, start date/time: Jul 20, 2017 13:28:56.41
  3. test_03_3.bag: duration: 40.5s, size: 10.5 MB, start date/time: Jul 20, 2017 13:30:04.94
  • Scenario 04 (4 files):
  1. test_04_1.bag: duration: 58.3s, size: 15.0 MB, start date/time: Jul 25, 2017 10:39:52.62
  2. test_04_2.bag: duration: 57.2s, size: 14.7 MB, start date/time: Jul 25, 2017 10:41:16.31
  3. test_04_3.bag: duration: 50.5s, size: 13.0 MB, start date/time: Jul 25, 2017 10:42:44.95
  4. test_04_4.bag: duration: 1:01s, size: 15.7 MB, start date/time: Jul 25, 2017 10:43:52.44
  • Scenario 05 (1 file):
  1. test_05.bag: duration: 15:15s, size: 236.8 MB, start date/time: Jul 26, 2017 11:33:13.31
  • Scenario 06 (1 file):
  1. test_06.bag: duration: 09:18s, size: 143.3 MB, start date/time: Jul 26, 2017 12:25:45.12
  • Scenario 07 (1 file):
  1. test_07.bag: duration: 4:16s, size: 65.1 MB, start date/time: Jul 25, 2017 11:40:01.65
  • Scenario 08 (1 file):
  1. test_08.bag: duration: 03:39s, size: 55.8 MB, start date/time: Jul 25, 2017 12:25:29.22
  • Scenario 09 (5 files):
  1. test_09_1.bag: duration: 22.9s, size: 6.1 MB, start date/time: Jul 25, 2017 10:50:02.95
  2. test_09_2.bag: duration: 20.8s, size: 5.6 MB, start date/time: Jul 25, 2017 10:51:02.62
  3. test_09_3.bag: duration: 34.3s, size: 9.0 MB, start date/time: Jul 25, 2017 10:51:45.96
  4. test_09_4.bag: duration: 29.3s, size: 7.9 MB, start date/time: Jul 25, 2017 10:52:51.24
  5. test_09_5.bag: duration: 36.9s, size: 9.7 MB, start date/time: Jul 25, 2017 10:54:00.13
  • Scenario 10 (1 file):
  1. test_10.bag: duration: 15:50s, size: 240.5 MB, start date/time: Jul 20, 2017 13:07:40.16
  • Scenario 11 (1 file):
  1. test_11.bag: duration: 03:50s, size: 58.7 MB, start date/time: Jul 25, 2017 11:48:33.90
  • Scenario 12 (3 files):
  1. test_12_1.bag: duration: 43.6s, size: 11.3 MB, start date/time: Jul 20, 2017 13:33:23.74
  2. test_12_2.bag: duration: 44.3s, size: 11.5 MB, start date/time: Jul 20, 2017 13:34:24.95
  3. test_12_3.bag: duration: 37.3s, size: 9.7 MB, start date/time: Jul 20, 2017 13:35:31.55
  • Scenario 13 (3 files):
  1. test_13_1.bag: duration: 57.8s, size: 14.9 MB, start date/time: Jul 25, 2017 11:01:15.23
  2. test_13_2.bag: duration: 59.9s, size: 15.4 MB, start date/time: Jul 25, 2017 11:02:37.85
  3. test_13_3.bag: duration: 54.0s, size: 13.9 MB, start date/time: Jul 25, 2017 11:04:30.69
  • Scenario 14 (1 file):
  1. test_14.bag: duration: 05:57s, size: 90.5 MB, start date/time: Jul 25, 2017 11:10:32.97


Orbi-One location 2: Living room

  • Scenario 01 (1 file):
  1. test_01.bag: duration: 15:37s, size: 239.7 MB, start date/time: Nov 16, 2017 12:20:19.65
  • Scenario 02 (1 file):
  1. test_02.bag: duration: 15:05s, size: 231.4 MB, start date/time: Nov 27, 2017 18:32:24.35
  • Scenario 03 (3 files):
  1. test_03_1.bag: duration: 55.2s, size: 14.5 MB, start date/time: Nov 16, 2017 12:43:58.29
  2. test_03_2.bag: duration: 44.4s, size: 10.2 MB, start date/time: Nov 16, 2017 12:46:31.99
  3. test_03_3.bag: duration: 51.1s, size: 13.4 MB, start date/time: Nov 16, 2017 12:47:29.66
  • Scenario 04 (4 files):
  1. test_04_1.bag: duration: 01:23s, size: 21.6 MB, start date/time: Nov 16, 2017 12:50:22.18
  2. test_04_2.bag: duration: 01:20s, size: 21.0 MB, start date/time: Nov 16, 2017 12:52:18.44
  3. test_04_3.bag: duration: 01:09s, size: 18.1 MB, start date/time: Nov 16, 2017 12:53:51.33
  4. test_04_4.bag: duration: 1:21s, size: 21.3 MB, start date/time: Nov 16, 2017 12:55:31.81
  • Scenario 05 (1 file):
  1. test_05.bag: duration: 15:07s, size: 232.4 MB, start date/time: Nov 28 2017 18:33:16.24
  • Scenario 06 (1 file):
  1. test_06.bag: duration: 09:14s, size: 136.2 MB, start date/time: Nov 29 2017 13:12:42.61
  • Scenario 07 (1 file):
  1. test_07.bag: duration: 4:39s, size: 71.5 MB, start date/time: Nov 22 2017 12:48:33.86
  • Scenario 08 (1 file):
  1. test_08.bag: duration: 03:04s, size: 47.2 MB, start date/time: Nov 22 2017 12:59:08.58
  • Scenario 09 (5 files):
  1. test_09_1.bag: duration: 28.8s, size: 7.7 MB, start date/time: Nov 28, 2017 17:17:46.49
  2. test_09_2.bag: duration: 31.9s, size: 8.4 MB, start date/time: Nov 28, 2017 17:20:48.26
  3. test_09_3.bag: duration: 34.3s, size: 9.0 MB, start date/time: Nov 28, 2017 17:20:05.96
  4. test_09_4.bag: duration: 29.9s, size: 7.9 MB, start date/time: Nov 28, 2017 17:19:26.36
  5. test_09_5.bag: duration: 30.8s, size: 8.1 MB, start date/time: Nov 28, 2017 17:21:20.55
  • Scenario 10 (1 file):
  1. test_10.bag: duration: 16:05s, size: 246.6 MB, start date/time: Nov 22 2017 11:55:02.39
  • Scenario 11 (1 file):
  1. test_11.bag: duration: 03:25s, size: 52.8 MB, start date/time: Nov 22 2017 13:08:11.99
  • Scenario 12 (3 files):
  1. test_12_1.bag: duration: 01:06s, size: 17.4 MB, start date/time: Nov 16, 2017 12:55:26.25
  2. test_12_2.bag: duration: 1:06s, size: 17.4 MB, start date/time: Nov 16, 2017 12:56:52.47
  3. test_12_3.bag: duration: 50.2s, size: 13.1 MB, start date/time: Nov 16, 2017 11:44:00.10
  • Scenario 13 (3 files):
  1. test_13_1.bag: duration: 58.2s, size: 15.1 MB, start date/time: Nov 22, 2017 11:48:35.33
  2. test_13_2.bag: duration: 01:05s, size: 16.9 MB, start date/time: Nov 22, 2017 11:49:48.18
  3. test_13_3.bag: duration: 01:14s, size: 19.2 MB, start date/time: Nov 22, 2017 11:51:22.68
  • Scenario 14 (1 file):
  1. test_14.bag: duration: 06:17s, size: 96.3 MB, start date/time: Nov 22 2017 12:23:07.92

Orbi-One location 3: Bedroom

  • Scenario 01 (1 file):
  1. test_01.bag: duration: 15:17s, size: 234.6 MB, start date/time: Nov 21, 2017 18:56:54.87
  • Scenario 02 (1 file):
  1. test_02.bag: duration: 15:08s, size: 233.0 MB, start date/time: Nov 28, 2017 17:37:34.08
  • Scenario 03 (3 files):
  1. test_03_1.bag: duration: 44.7s, size: 11.7 MB, start date/time: Nov 21, 2017 13:16:51.65
  2. test_03_2.bag: duration: 39.9s, size: 10.5 MB, start date/time: Nov 21, 2017 19:19:06.27
  3. test_03_3.bag: duration: 41.2s, size: 10.8 MB, start date/time: Nov 21, 2017 19:18:06.86
  • Scenario 04 (4 files):
  1. test_04_1.bag: duration: 01:01s, size: 16.0 MB, start date/time: Nov 21, 2017 19:21:30.99
  2. test_04_2.bag: duration: 01:01s, size: 15.9 MB, start date/time: Nov 21, 2017 19:25:38.00
  3. test_04_3.bag: duration: 58.0s, size: 15.1 MB, start date/time: Nov 21, 2017 19:27:16.74
  4. test_04_4.bag: duration: 1:00s, size: 15.8 MB, start date/time: Nov 21, 2017 19:29:39.28
  • Scenario 05 (1 file):
  1. test_05.bag: duration: 15:11s, size: 233.0 MB, start date/time: Nov 28, 2017 19:05:20.19
  • Scenario 06 (1 file):
  1. test_06.bag: duration: 09:03s, size: 133.6 MB, start date/time: Nov 26, 2017 13:24:25.64
  • Scenario 07 (1 file):
  1. test_07.bag: duration: 4:16s, size: 65.1 MB, start date/time: Nov 27, 2017 17:34:50.22
  • Scenario 08 (1 file):
  1. test_08.bag: duration: 03:47s, size: 58.3 MB, start date/time: Nov 27, 2017 17:41:48.32
  • Scenario 09 (5 files):
  1. test_09_1.bag: duration: 31.3s, size: 8.3 MB, start date/time: Nov 28, 2017 17:23:58.31
  2. test_09_2.bag: duration: 30.4s, size: 8.0 MB, start date/time: Nov 28, 2017 17:24:41.09
  3. test_09_3.bag: duration: 31.2s, size: 8.3 MB, start date/time: Nov 28, 2017 17:25:22.22
  4. test_09_4.bag: duration: 31.4s, size: 8.3 MB, start date/time: Nov 28, 2017 17:26:06.98
  5. test_09_5.bag: duration: 33.2s, size: 8.8 MB, start date/time: Nov 28, 2017 17:26:51.32
  • Scenario 10 (1 file):
  1. test_10.bag: duration: 10:00s, size: 153.7 MB, start date/time: Nov 22, 2017 18:15:29.53
  • Scenario 11 (1 file):
  1. test_11.bag: duration: 03:50s, size: 59.1 MB, start date/time: Nov 22, 2017 18:25:54.55
  • Scenario 12 (3 files):
  1. test_12_1.bag: duration: 41.3s, size: 10.8 MB, start date/time: Nov 22, 2017 18:35:11.74
  2. test_12_2.bag: duration: 41.8s, size: 10.9 MB, start date/time: Nov 22, 2017 18:36:12.11
  3. test_12_3.bag: duration: 39.8s, size: 10.5 MB, start date/time: Nov 22, 2017 18:37:05.06
  • Scenario 13 (3 files):
  1. test_13_1.bag: duration: 01:01s, size: 14.9 MB, start date/time: Nov 22, 2017 18:40:13.36
  2. test_13_2.bag: duration: 01:00s, size: 15.7 MB, start date/time: Nov 22, 2017 18:41:28.33
  3. test_13_3.bag: duration: 01:00s, size: 15.7 MB, start date/time: Nov 22, 2017 18:42:39.52
  • Scenario 14 (1 file):
  1. test_14.bag: duration: 06:12s, size: 95.1 MB, start date/time: Nov 23, 2017 12:53:06.48