ATHENA is a garage project of AETHON aimed at reinventing the way we interact and communicate with our vehicles. In today's non-automated vehicles the interaction is simple: press the pedal and it accelerates; turn the wheel and it turns. But what happens in the vehicle of tomorrow, when automation gives your hands the opportunity to wander off and do something other than grab the wheel?
It sounds simple, one might say: if something happens, or if I want the car to stop, I regain control simply by grabbing the wheel. That is indeed possible, but several unforeseen problems can arise:
- Drivers of automated vehicles regain control in a consistent and stabilized manner only after around 40 seconds (Merat et al., 2014).
- Another study showed that drivers can take from 2 to more than 25 seconds to regain control, a significant time frame, especially at high speed. The researchers also noted that "Significantly longer control transition times were found between driving with and without secondary tasks" (Eriksson & Stanton, 2017).
This is only the tip of the iceberg. Automated cars are only starting to appear. Under full automation we would not need to take control at all; the vehicle has it covered for us. But then, what is the point of having a wheel? How will we communicate with the vehicle?
The Society of Automotive Engineers (SAE) has developed a standard defining six levels of driving automation, from level 0 (no automation) to level 5 (full automation). This standard, also adopted by NHTSA (National Highway Traffic Safety Administration, USA – https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety), states that at levels 4 and 5, driver intervention becomes optional. This means that drivers in the traditional sense will cease to exist: no wheel or formal driving training will be required. However, interaction between the "passenger" and the vehicle cannot cease to exist. We need to tell our car where to go and how to get there, and to make stops when necessary or when we wish to. We should be talking not only about smart and autonomous vehicles but also about informed and empowered operators who will replace the driver concept. Athena aims to empower those operators with voice control.
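The level taxonomy above can be sketched as a small data structure. This is an illustrative snippet only; the level names follow common SAE J3016 terminology, and the helper function is a hypothetical example of how a system might check whether human intervention is still required.

```python
from enum import IntEnum


class SAELevel(IntEnum):
    """SAE J3016 driving automation levels, 0 (no automation) to 5 (full automation)."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5


def intervention_optional(level: SAELevel) -> bool:
    """At levels 4 and 5 the human no longer needs to intervene."""
    return level >= SAELevel.HIGH_AUTOMATION
```

For example, `intervention_optional(SAELevel.CONDITIONAL_AUTOMATION)` is false (a level 3 vehicle may still request a takeover), while `intervention_optional(SAELevel.HIGH_AUTOMATION)` is true.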
Voice control will help us communicate "natural commands", such as "make a stop after the blue car", and will help us maneuver under various conditions, translating our voice commands into vehicle movement and control. Athena is an AI that makes the translation: it receives the command from the driver and transmits it to the automated vehicle's system. It does not blindly make a left when the driver requests it, but lets the vehicle know that the driver wants to make a left, leaving the car to decide when it's safe. This is a new level of vehicle-machine interaction, a new Human-Machine Interface. It also learns about us as we speak, using Machine Learning and Natural Language Recognition to understand and improve the commands. Most importantly, Athena translates those commands into valid vehicle movement, understands complicated maneuvers, and requests that the vehicle perform them, ensuring that the driver does not become a passenger but an operator, empowering and reinventing the interface of future vehicles.
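The mediation idea described above, where Athena relays an intent and the vehicle decides when to act, can be sketched roughly as follows. Everything here is hypothetical: the class names, the keyword matching (real transcription is handled by an external service such as IBM Watson), and the safety check are stand-ins for illustration, not Athena's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Intent:
    """A structured driver request, e.g. 'make a left' or 'stop after the blue car'."""
    action: str          # e.g. "turn_left", "stop"
    condition: str = ""  # e.g. "after_blue_car"


def parse_command(transcript: str) -> Intent:
    """Hypothetical parser: maps a transcript to an Intent via keyword matching."""
    text = transcript.lower()
    if "left" in text:
        return Intent(action="turn_left")
    if "stop" in text:
        condition = "after_blue_car" if "blue car" in text else ""
        return Intent(action="stop", condition=condition)
    return Intent(action="unknown")


class Vehicle:
    """Stand-in for the automated vehicle's planner: it receives the intent
    but only executes it once its own safety checks pass."""

    def __init__(self) -> None:
        self.pending: Optional[Intent] = None

    def receive(self, intent: Intent) -> None:
        self.pending = intent  # queued, not executed immediately

    def tick(self, maneuver_is_safe: bool) -> Optional[Intent]:
        """One planning step: execute the pending intent only when safe."""
        if self.pending and maneuver_is_safe:
            executed, self.pending = self.pending, None
            return executed  # the car decides *when*, not the human
        return None
```

The key design point is that `receive` never triggers motion directly: the request stays pending until a `tick` in which the vehicle itself judges the maneuver safe.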
The automated vehicle is depicted as a red dot. At the end (right side) of the middle lane there is a segment with a very low speed limit (~5 km/h). "Human" vehicles (in green) change lane to avoid the segment, but the automated vehicle will not change lane without a human giving the command. The demo aims to show how Athena will operate in a simple yet relevant example.
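The demo's lane-choice behavior can be summarized in a short sketch. The function and parameter names here are hypothetical, chosen only to mirror the scenario described above; the actual simulation uses the longitudinal driving model cited below.

```python
def choose_lane(current_lane: int, slow_segment_ahead: bool,
                is_automated: bool, human_command: str = "") -> int:
    """Sketch of the demo logic: 'human' vehicles change lanes on their own
    to avoid the ~5 km/h segment, while the automated vehicle (red dot)
    stays in its lane until the operator issues a lane-change command."""
    if not slow_segment_ahead:
        return current_lane
    if not is_automated:
        return current_lane + 1  # human driver swerves around the slow zone
    if human_command == "change_lane":
        return current_lane + 1  # Athena relayed the operator's command
    return current_lane          # no command: stay and slow to ~5 km/h
```

Without a voice command the automated vehicle crawls through the slow segment; with one, it behaves like the surrounding human-driven traffic.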
* Simulation software provided by Technical University of Delft (http://homepage.tudelft.nl/05a3n/)
** Voice transcription (Speech-to-Text) is powered by Watson, IBM (https://www.ibm.com/watson/)
*** Automated vehicles’ movement: Longitudinal driving model by Papacharalampous et al. (2015)