TIPS

【 DevCase 】Thermal Sensors + Deep Learning for Object Recognition

To help illustrate this DevCase, we have included an example below.

―Object Recognition from Thermal Information by Deep Learning (Utilizing GPUs)―
The system detects objects in real time from the thermal information obtained by a thermal sensor.
[Image: TIPS1]

While developing this, there were many challenges and points to consider.

The first challenge: when a person appeared in the video stream, the system not only had to identify them correctly as a person among the 20 category labels it had learned, it had to do so from the characteristics of the human form even when the person was partially obscured or not fully in frame.
Employees volunteered to be photographed to create training data for Deep Learning, and those photographs of people alone yielded thousands of data points.
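
As an illustration of this kind of person-detection loop (not the original thermal model; the COCO-pretrained detector, camera index, and score threshold below are stand-in assumptions), a minimal sketch in Python:

```python
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Sketch: detect people in a video stream with a pretrained detector.
# The original system used a custom model trained on thermal data; the
# COCO-pretrained network here is an illustrative stand-in.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

cap = cv2.VideoCapture(0)  # assumed camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # BGR uint8 -> RGB float tensor in [0, 1], shape (3, H, W)
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        pred = model([tensor])[0]
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if label.item() == 1 and score > 0.5:  # COCO class 1 == "person"
            x1, y1, x2, y2 = box.int().tolist()
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
            cv2.putText(frame, "person", (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```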

The next difficulty was that GPU memory was insufficient for transfer learning, that is, for adapting the trained object-recognition model to thermal images. We were able to proceed by using cloud GPU instances.
Also, the thermal sensor we had prepared was Windows-only and not compatible with Linux (Ubuntu), so we had to build a driver ourselves.
Then another problem occurred: we had expected to obtain the sensor values as RGB via V4L, but the sensor did not support RGB mode, so we instead processed its raw range data directly.
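
A minimal sketch of reading raw (non-RGB) frames from a V4L2 device with OpenCV; the device index and the normalization for display are assumptions, not the actual sensor's format:

```python
import cv2
import numpy as np

# Sketch: grab raw (non-RGB) frames via V4L2 instead of letting OpenCV
# convert them. The device index and value range are assumptions.
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)  # request the raw buffer

ok, raw = cap.read()
if ok:
    # Normalize the raw range values to 8-bit so they can be viewed as
    # a grayscale thermal image; real scaling depends on the sensor.
    norm = cv2.normalize(raw.astype(np.float32), None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imwrite("thermal_frame.png", norm)
cap.release()
```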

For testing, we held frozen foods in front of the sensor, waved a lighter flame, brought over a cup of boiling water, and so on. Having confirmed that it worked reliably, we felt the mission was accomplished.

~ Reference video ~

For reference, here is the video we filmed.
“Person” is displayed at the upper left of the red bounding box, and “car” is displayed as well.

【 DevCase 】Model-Based Development + Deep Learning

To help you better understand this concept, we have outlined an example below.

―On the theme of “Collisionless Driving”, we combined MBD and Deep Learning to develop a self-driving system―

 

【Model-Based Development (MBD)】

 

・System and Model-Based Development

 
In recent years, the importance of system design has been increasing due to the growing complexity of systems themselves, the diversification of stakeholders, and changes in user needs.
In the system design phase, we aim to balance QCDSE (quality, cost, delivery, safety, environment).
Rather than depending on the separate domains of “mechanical”, “electrical”, “control logic”, and others, the goal is to be able to consider everything as one integrated and optimized system.
Expressing a system efficiently requires efficient notation and communication methods, and using models is the most effective way to examine the factors within a complex system.

・Merits of Model-Based Development

 
A “model” is a common language that facilitates communication between stakeholders; it eliminates the ambiguity of natural language and ensures a shared understanding.
[Image: 2018AW_1]
[Image: 2018AW_2]

【 Deep Learning 】

 

・Why is Deep Learning needed?

 
Currently, demand for Deep Learning and Artificial Intelligence is increasing exponentially.
One of the strengths of Deep Learning is “the ability to create its own functions”.
These functions may approximate existing algorithms, or even entirely unknown functions that would be too difficult for a person to design by hand.
Such self-created functions offer clear advantages (a minimal sketch of point 1 follows the list):

 1. Function approximation from data alone with a neural network
 2. Accurate abstraction of data
 3. Increased speed via massive parallelization
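
For point 1, a minimal sketch of fitting an unknown function purely from samples; the target function, network size, and training settings are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Sketch: approximate an "unknown" function purely from (x, y) samples.
# sin(x) stands in for the target; in practice the data would come from
# measurements or from an existing algorithm's outputs.
x = torch.linspace(-3.0, 3.0, 256).unsqueeze(1)
y = torch.sin(x)

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.5f}")  # small value -> good approximation
```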

 
[Image: 2018AW_3]

・Use cases of Deep Learning

 
Deep Learning is effective both for existing algorithms with enormous computational complexity and for hitherto unknown functions that people would find difficult to develop. For example, by approximating the enormous computations of an existing algorithm, Q-learning, by means of Deep Learning and Reinforcement Learning, we achieve real-time processing at very high speed. Another flexible feature of Deep Learning is that, even with image data as input, the estimation results can be treated directly as the Q-value of each action.
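
A minimal sketch of this image-in, Q-values-out idea (a DQN-style network; the input size, action set, and architecture are assumptions):

```python
import torch
import torch.nn as nn

# Sketch: a DQN-style network maps a raw image directly to one Q-value
# per action; the greedy action is simply the argmax.
class QNetwork(nn.Module):
    def __init__(self, n_actions=3):  # e.g. left / straight / right
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, x):
        return self.head(self.conv(x))

net = QNetwork()
frame = torch.rand(1, 3, 84, 84)        # dummy 84x84 camera frame
q_values = net(frame)                   # shape (1, n_actions)
action = q_values.argmax(dim=1).item()  # greedy action
```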
[Image: 2018AW_4]

・In-Vehicle Configuration

 
In actual development, it is desirable to use autonomous vehicle (AV) development platforms such as NVIDIA DRIVE PX 2 and Intel Go, and SoCs such as the NVIDIA Jetson TX2 and the Intel Cyclone V FPGA.
In this demonstration, radio communication is used between the recognition, judgment, and control stages to suit the dimensions of the demonstration booth.
Recognition is carried out by a Raspberry Pi mounted on the RC car, and judgment is based on a model trained by deep reinforcement learning on DRIVE PX 2.
The RC car is operated indirectly by controlling a variable resistor attached to its remote controller.
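
As a sketch of the recognition-to-judgment link over radio (the transport, message format, and addresses here are assumptions; the demonstration's actual protocol is not specified):

```python
import json
import socket

# Sketch: the Raspberry Pi (recognition) sends each detection to the
# judgment node over UDP. Address, port, and message layout are
# assumptions for illustration only.
JUDGE_ADDR = ("192.168.0.10", 5005)  # hypothetical judgment node

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
detection = {"label": "obstacle", "x": 0.42, "y": 0.13, "t": 1234.5}
sock.sendto(json.dumps(detection).encode("utf-8"), JUDGE_ADDR)
```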
 
[Image: 2018AW_5]

【 DevCase 】Inverse Reinforcement Learning

Autonomous RC car demonstration


What is inverse reinforcement learning?


First, reinforcement learning is a method in which a model learns to repeat actions that produce maximal reward and to avoid those that lead to negative outcomes.
Inverse reinforcement learning instead estimates how rewards should be allocated, based on sample behavior provided by experts.
Using probabilistic models, maximum entropy, and other techniques, it can generate reward functions from relatively little data, in ways that would be difficult for a human designer to specify by hand.
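
A minimal sketch of a maximum-entropy IRL update under a linear reward model (the feature map and the learner's expectation helper are assumptions):

```python
import numpy as np

# Sketch: maximum-entropy IRL with a linear reward r(s) = w . phi(s).
# The gradient of the demonstration log-likelihood is the gap between
# the experts' empirical feature counts and the learner's expected
# feature counts under the current reward.
def maxent_irl_step(w, phi_expert, expected_feature_counts, lr=0.1):
    # expected_feature_counts is a hypothetical helper that evaluates
    # the soft-optimal policy under weights w (e.g. by soft value
    # iteration) and returns its expected feature counts.
    grad = phi_expert - expected_feature_counts(w)
    return w + lr * grad
```

Iterating this update raises the likelihood of the expert demonstrations while otherwise keeping the policy maximally uncommitted, which is what lets a usable reward function emerge from relatively little data.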


[Image: TIPS_AW2019_DL]



Demonstration overview


This demonstration uses inverse reinforcement learning to control an RC car.
Once the goal, waypoints, and obstacles have been identified, the system uses what it has previously learned from human controllers to traverse the best route to the goal.
Try rearranging the obstacles and waypoints and see how the system responds.
[Image: TIPS_AW2019_DL-2]



Why use inverse reinforcement learning?


By analyzing the characteristics of the training data, it is possible to extract qualities that cannot be articulated verbally.
The learned behavior is also more robust and versatile.
Among others, inverse reinforcement learning can be applied to the following:
●Reproduce an expert’s level of control of heavy machinery.
●Reproduce the nuances of driving styles in autonomous vehicles (aggressive, mild, etc.)



【 DevCase 】Jetson AGX Xavier – Caffe vs TensorRT –

Sorry, this entry is only available in Japanese.

【 DevCase 】Vissim×CarSim Co-simulation

Autonomous driving in traffic


What kinds of emergent behavior appear when autonomous vehicles and human-driven vehicles share the road at the same time?
We are developing a co-simulation system for evaluating a model's accuracy and robustness in traffic simulations, V2V, V2X, and more.


[Image: aw2019_pro.png]



Vissim × CarSim Co-simulation Overview
The road model and other data from PTV Vissim are transferred to CarSim so that the same environment is simulated in both tools. The simulators then run synchronously, exchanging data at each time step. In this demonstration, PTV Vissim and CarSim are linked and share information on vehicle positions. Co-simulation with Simulink will also be possible.
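
A minimal sketch of such a lock-step loop (the Vissim calls are from PTV's documented COM automation interface; the CarSim stub and the exact data exchanged are assumptions):

```python
import win32com.client

class CarSimStub:
    """Hypothetical stand-in for a CarSim solver wrapper (illustration only)."""
    def set_traffic_vehicle(self, vid, x, y):
        pass  # would place a traffic vehicle in the CarSim world
    def step(self):
        return (0.0, 0.0)  # would advance the ego vehicle and return its pose

vissim = win32com.client.Dispatch("Vissim.Vissim")  # Vissim COM interface
vissim.LoadNet(r"C:\models\demo.inpx")  # hypothetical network file
carsim = CarSimStub()

for _ in range(10_000):
    vissim.Simulation.RunSingleStep()  # advance the traffic by one step
    for veh in vissim.Net.Vehicles:
        # push each Vissim vehicle's front-end position into CarSim
        carsim.set_traffic_vehicle(veh.AttValue("No"),
                                   veh.AttValue("CoordFrontX"),
                                   veh.AttValue("CoordFrontY"))
    ego_x, ego_y = carsim.step()  # advance the ego vehicle one step
    # (feeding the ego pose back into Vissim would close the loop)
```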



From Vissim’s Perspective


Simulate the flow of traffic around special vehicles, such as autonomous vehicles.
As well as calculating the changes in traffic flow, you can also verify whether certain scenarios result in accidents or other negative outcomes.



From CarSim’s Perspective


ADAS models can be verified in an environment containing many other vehicles: confirm the validity of maneuvers such as braking and lane changes based on the behavior of surrounding vehicles.
Since the other vehicles (the traffic flow) generated by PTV Vissim are based on real-world traffic-flow calculations, simulations close to actual environments are easier to create.



【 DevCase 】Predicting future actions

A demonstration of movement prediction using an event-based camera


What is Prophesee?


Compared with the frame-by-frame processing used by conventional cameras, the event-based method is faster and has less data overhead, because it captures a scene by detecting the changes in light intensity caused by movement.
“Change Detection” in the image below shows only the changes in light intensity.
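
A minimal sketch of the idea behind event generation (a simplified software model, not Prophesee's hardware or SDK; the contrast threshold is an assumption):

```python
import numpy as np

# Simplified event-camera model: a pixel fires an event when the change
# in log intensity since its last event exceeds a threshold.
THRESHOLD = 0.2  # assumed contrast threshold

def events_from_frame(frame, reference):
    """Return a (+1/0/-1) event map and the updated per-pixel reference."""
    log_i = np.log1p(frame.astype(np.float32))
    diff = log_i - reference
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > THRESHOLD] = 1    # brightness increased -> ON event
    events[diff < -THRESHOLD] = -1  # brightness decreased -> OFF event
    reference = np.where(events != 0, log_i, reference)  # reset fired pixels
    return events, reference

# Usage sketch: initialize the reference from the first frame, then feed
# frames in sequence; static pixels produce no events at all.
first = np.zeros((120, 160), dtype=np.uint8)
ref = np.log1p(first.astype(np.float32))
events, ref = events_from_frame(first, ref)  # all zeros: nothing moved
```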


[Image: aw2019_pro.png]



Demonstration overview


There is a need to predict where falling obstacles and other moving objects will be after several seconds. We are demonstrating the concept of future-behavior prediction using Prophesee's event-based camera, which captures the changes in light intensity needed to detect the motion of surrounding objects.
With conventional cameras, the large amount of data takes time to process, which makes them less suitable for predicting future behavior; the reduced overhead of the event-based approach, however, allows the prediction algorithms to stay light and fast.
In this demonstration, we use an LSTM (Long Short-Term Memory) network to predict the position of a moving object after n seconds.
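
A minimal sketch of this kind of sequence-to-position prediction (the input features, horizon, and network size are assumptions; the demonstration's actual model details are not specified):

```python
import torch
import torch.nn as nn

# Sketch: an LSTM reads a sequence of observed (x, y) positions and
# predicts the object's position n steps into the future.
class PositionPredictor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # future (x, y)

    def forward(self, track):             # track: (batch, seq_len, 2)
        out, _ = self.lstm(track)
        return self.head(out[:, -1])      # predict from the last time step

model = PositionPredictor()
track = torch.rand(1, 30, 2)              # 30 observed positions (dummy)
future_xy = model(track)                  # predicted position after n steps
```

Training would minimize the error between each prediction and the position actually observed n steps later in the recorded tracks.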



[Image: aw2019_pro-2.png]