How direct control enables safer teleoperation of autonomous vehicles
We’re only human. We get tired and distracted, and that’s how most driving accidents happen. That is why so many companies are furiously working on autonomous vehicles (AVs). However, there is still a need for humans to be able to take control.
Teleoperation – the remote monitoring and control of an autonomous vehicle – is not immune to human failings. This is why teleoperation platforms must have the built-in ability to prevent injury and death that is independent of the remote operator’s skills. This ability is called ATAS: Advanced Teleoperator Assistance Systems.
ATAS minimizes the chance of human error on the part of a remote operator in any driving situation. However, it is only meant to be used as a last resort. The preferred method of human intervention is indirect control, also known as “remote assistance.” Regardless, this article focuses specifically on direct control, and how it can be used safely.
ATAS is a group of sub-systems and features that enable adaptability to a specific form factor, use case, and operational design domain (ODD).
ATAS integrates the following:
- Vehicle-side algorithms to create a safety zone for the vehicle, its occupants, and its surroundings
- Prioritization of a safety zone to supersede direct commands from the remote station
- Maximized use of information gathered by vehicle sensors, including LiDAR and radar, combined through sensor fusion
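The second point, prioritization of the safety zone over direct commands, can be sketched as a simple arbitration layer. This is a minimal illustration, not Ottopia's actual implementation; the `Command` type and `arbitrate` function are hypothetical names invented for this example.

```python
# Hypothetical sketch of vehicle-side command arbitration: the safety-zone
# check supersedes direct commands received from the remote station.
from dataclasses import dataclass

@dataclass
class Command:
    steering: float  # steering angle, radians
    throttle: float  # 0.0 .. 1.0
    brake: float     # 0.0 .. 1.0

def arbitrate(remote_cmd: Command, obstacle_in_safety_zone: bool) -> Command:
    """Pass the remote command through unless the safety zone is violated."""
    if obstacle_in_safety_zone:
        # The safety layer overrides the operator: cut throttle, brake fully.
        return Command(steering=remote_cmd.steering, throttle=0.0, brake=1.0)
    return remote_cmd
```

The key design point is the hierarchy: the operator's command is an input to the safety layer, never the other way around.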
ATAS and dynamic trajectory: augmented, assisted, safer teleoperation
A remote operator may encounter two significant teleoperation challenges: situational awareness and (their own) human error. Vehicle-side safety algorithms and driving assistance are key to mitigating these challenges.
Standard advanced driver-assistance system (ADAS) frameworks can act only as a starting point. Safe teleoperation demands more, especially an ability to supersede commands received from the remote operator. ATAS provides clarity about the vehicle’s speed, direction, and surroundings plus a model of how the remote operator receives and perceives information.
Likewise, features designed to augment or assist the remote driving session require additional information and a model for how to present this information to the remote driver. Live video, audio, and other vehicle and ambient data can be supplemented with other information that compensates for the difference in experience vis-a-vis in-vehicle driving. This supplemental information should include:
- Current and expected vehicle speed and direction
- Current and expected orientation with regard to the vehicle’s environment
- Safe braking distance, with a maximum reasonable braking force
- Network latency, especially relating to the reasonable reaction time for safe braking
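The last two items, safe braking distance and latency-aware reaction time, combine naturally in one stopping-distance formula. The sketch below uses standard kinematics (distance traveled during the delay window plus the braking distance v²/2a); the function name and parameters are illustrative assumptions, since the article does not publish the exact formula ATAS uses.

```python
def safe_braking_distance(speed_mps: float, max_decel_mps2: float,
                          reaction_s: float, latency_s: float) -> float:
    """Stopping distance = distance covered while the command is delayed
    (operator reaction time + network latency) plus the braking distance
    at a maximum reasonable braking force."""
    effective_delay = reaction_s + latency_s
    return speed_mps * effective_delay + speed_mps ** 2 / (2.0 * max_decel_mps2)
```

For example, at 10 m/s with 5 m/s² of braking, a 0.5 s reaction time, and 0.2 s of network latency, the vehicle needs 10 × 0.7 + 100 / 10 = 17 m to stop. The latency term is what distinguishes teleoperation from in-vehicle driving: the same speed requires more margin.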
During teleoperation, ATAS instantly calculates and visualizes the trajectory of the vehicle’s path, according to the operator’s actions. Wheel position and vehicle velocity are used for calculating this trajectory. This, in turn, yields an image of the stopping distance needed to safely brake the vehicle. This method is inspired by Mobileye’s Responsibility-Sensitive Safety and Nvidia’s Safety Force Field model, in which an AV uses a similar calculation for its motion planning logic. However, these two methods, along with common ADAS frameworks, are insufficient for the special conditions faced during teleoperation.
The ATAS approach takes into account latency parameters that are then included in a bespoke automated braking algorithm. A new trajectory, which we call dynamic trajectory (DT), is computed and displayed. The computation of an ATAS DT acts as a layer that monitors and prevents unacceptable actions during a remote operation session.
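A trajectory predicted from wheel position and velocity, extended by the latency window, can be sketched with a kinematic bicycle model. This is a simplified illustration under assumed parameters (constant speed and steering angle, forward-Euler integration); it is not the ATAS algorithm itself, whose details are proprietary.

```python
import math

def dynamic_trajectory(speed_mps: float, steering_rad: float,
                       wheelbase_m: float, latency_s: float,
                       horizon_s: float = 2.0, dt: float = 0.1):
    """Predict the (x, y) path implied by the current commands, using a
    kinematic bicycle model. The prediction horizon is extended by the
    network latency, since commands act on the vehicle's future state."""
    n = round((latency_s + horizon_s) / dt)
    x = y = heading = 0.0
    points = []
    for _ in range(n):
        x += speed_mps * math.cos(heading) * dt
        y += speed_mps * math.sin(heading) * dt
        heading += (speed_mps / wheelbase_m) * math.tan(steering_rad) * dt
        points.append((x, y))
    return points
```

The resulting point list is what gets projected onto the operator's display and swept against obstacle locations: the latency extension means the displayed trajectory reflects where the vehicle will be when a command actually takes effect.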
The ATAS DT acts as a safety buffer. Aware of the location of surrounding risks and the area of the DT, it can determine whether the vehicle is at risk of collision. This triggers a system response that informs and assists the driver while intervening to prevent the collision. This juxtaposition of the DT with approaching risks is a key building block for virtually any variant of ATAS.
Dynamic trajectory application: collision avoidance
Let us take a simple example to describe how forward collision avoidance works under ATAS.
As a vehicle is moving forward, ATAS draws information from the vehicle sensor stack, constantly monitoring for obstacles, or risks, around the vehicle, quite like ADAS risk detection.
The DT is calculated on the vehicle-side platform according to the commands received from the operator (e.g., steering and acceleration) and the vehicle’s current state. It is then translated into a dynamic real-time map format and is overlaid with the location of the obstacle or risk.
Collision avoidance is activated when the system detects an obstacle within the area of the DT. The system triggers the vehicle’s brakes, regardless of the operator’s actions. Even after the vehicle reaches a full stop, the vehicle retains control over the brake pedal as long as the risk remains in close proximity.
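The intervention logic described above reduces to two conditions: brake when an obstacle enters the DT, and keep holding the brakes after a full stop while any risk remains nearby. A minimal sketch, with the function name and the clearance threshold as invented assumptions:

```python
def should_hold_brakes(obstacle_in_dt: bool, speed_mps: float,
                       nearest_risk_m: float, clearance_m: float = 2.0) -> bool:
    """Collision-avoidance braking decision, evaluated every control cycle.

    Brakes engage when an obstacle is inside the dynamic trajectory (DT),
    regardless of operator input, and remain engaged after a full stop
    while the nearest risk is still within the clearance distance."""
    if obstacle_in_dt:
        return True
    if speed_mps == 0.0 and nearest_risk_m < clearance_m:
        return True
    return False
```

Because this runs on the vehicle side, it works even if the network link to the remote station degrades mid-intervention.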
Network-aware teleoperation assistance
While the collision avoidance system supersedes unsafe remote human commands, an optimal system should also minimize the frequency of these interventions. This is achieved by assisting safe navigation by the remote operator, despite network latency and differences in situational awareness.
An overlay of the DT over the front screen of the operator’s station is a useful tool for allowing them to communicate and coordinate with the ATAS system. For coordination to be accurate and safe, the system must adjust precisely for network latency.
The operator’s interface features two warning indicators:
A. Collision Avoidance DT: This shows the distance from which the vehicle will trigger collision avoidance, to align the operator’s expectations with what the vehicle will actually do if it intervenes. To ensure accuracy, the station-side DT is adjusted for the glass-to-glass latency of the video feed.
B. Collision Warning DT: This shows the distance from which the operator should start braking in order to prevent a collision. This includes not only the prior adjustment for video latency but also the operator’s reaction time.
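The relationship between the two indicators can be made concrete with a short sketch. Both start from the physical braking distance; indicator A adds the distance covered during the glass-to-glass video latency, and indicator B additionally adds the distance covered during the operator's reaction time. The function name and parameters are illustrative, not the published ATAS formulas.

```python
def indicator_distances(speed_mps: float, max_decel_mps2: float,
                        glass_to_glass_s: float, reaction_s: float):
    """Return (collision_avoidance_d, collision_warning_d) in meters.

    A: distance at which the vehicle itself will intervene, shifted by
       the video feed's glass-to-glass latency.
    B: distance at which the operator should start braking, which also
       accounts for the operator's reaction time (so B >= A)."""
    braking_d = speed_mps ** 2 / (2.0 * max_decel_mps2)
    avoidance_d = braking_d + speed_mps * glass_to_glass_s
    warning_d = avoidance_d + speed_mps * reaction_s
    return avoidance_d, warning_d
```

At 10 m/s with 5 m/s² of braking, 0.2 s of video latency, and a 0.5 s reaction time, this gives 12 m for the avoidance indicator and 17 m for the warning indicator: the operator is warned well before the vehicle would intervene on its own.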
Surrounding object locations are overlaid with the vehicle DT to identify potential risks, and the vehicle-side ATAS stack relays this overlay to the station-side stack within milliseconds.
This is one example of how the ATAS framework addresses challenges inherent to teleoperation. Frontal collision avoidance is perhaps the simplest ATAS feature to illustrate, but it is hardly the only one designed and developed to meet the needs of different operating environments.
What may seem far and insignificant can already be a danger. | Credit: Ottopia
The same ATAS framework addresses more challenging scenarios and allows for context-sensitive applications.
Vehicle-side safety calculations, algorithm hierarchy, full use of vehicle sensing capabilities and context-specific adjustments all play a role in designing and implementing ATAS.
Driving a car is hard enough already. Adding latency and camera-based situational awareness makes it harder. Normally, a teleoperator would choose an indirect method of control. However, sometimes that is not an option, and at that point, unlike ADAS, ATAS becomes mission critical for remote driving.
Amit loves marrying technology with customer needs and has been doing so over the last 14 years. Before founding Ottopia, Amit was Head of Product for Microsoft’s leading cyber-security offering, VP Product at a company building low-latency wireless video solutions, and Head of a Cyber-Security R&D department in the IDF’s 8200 Unit. Amit is also a graduate of the prestigious Talpiot program.