Final Validation Trials
The final validation trials were performed from February to March 2015 in DLR's Air Traffic Validation Center in Braunschweig, Germany, with seven male and four female air traffic controllers from Dusseldorf, Frankfurt, Munich, Vienna and Prague. Pre-validation trials were performed in October 2014 with three controllers from Dusseldorf and Prague.
Two main research objectives were addressed by the validation trials:
- the functional benefit of an arrival manager (AMAN) with and without additional input, compared to the conventional working method using only the radar screen, radio/telephone communication and paper flight strips;
- the reduction of controller workload for electronic flight strip documentation when using speech recognition.
Results
- Depending on the accepted rejection rate of the speech recognizer, we obtained command error rates between 2% and 5%, resulting in command recognition rates between 90% and 95%. These recognition rates were, however, only achieved with assistant-based speech recognition, i.e. the AMAN dynamically generates context information to increase the recognition rate (a minimal sketch of such context-based rescoring follows after this list). Without context generation the recognition rate was only between 50% and 80%.
- On average, the controllers used callsigns not available in the scenario in 4% of their commands (e.g. DLH123 instead of AFR123, or DLH123 instead of DLH132). In most cases this did not confuse the assistant-based recognizer (see the callsign-matching sketch below).
- The sequence prediction stability of the AMAN improved on average by roughly two minutes when speech recognition was available as an input modality.
- Conformance between the planned trajectory and the actual radar data improved (t-test, p < 0.001). Without speech recognition we observed non-conformance between 10% and 18% over the flight time, whereas in runs with speech recognition it varied only between 3% and 10% (see the t-test sketch below).
- When controllers were asked to also enter each given command by mouse and/or keyboard into the system (without support of speech recognition), only 77.6% of the given commands were actually entered into the aircraft label via mouse. This is even more remarkable because we did not require all given commands to be entered: if the controller gave an ILS clearance together with a heading or a descend command, only the ILS clearance was required; if the controller repeated a clearance, we of course expected only one mouse input; and so on. Controllers not only forgot to enter commands; 10.7% of the manually entered commands were never given to the pilot by voice. Thus, speech recognition was the more reliable input sensor, at least in our simulation setup (see the compliance sketch below).
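To make the role of context generation concrete, the following minimal Python sketch shows one way AMAN-derived context could rescore recognizer hypotheses. All data structures, the boost factor and the rejection threshold are illustrative assumptions, not the implementation used in the trials.

```python
# Minimal sketch (illustrative only): the AMAN supplies, per aircraft,
# the commands it currently considers plausible; recognizer hypotheses
# matching that context are boosted, and weak hypotheses are rejected.
# The boost factor and rejection threshold below are made-up values.

CONTEXT = {
    "DLH132": {"DESCEND 5000", "REDUCE 180", "ILS 23"},
    "AFR123": {"HEADING 210", "DESCEND 4000"},
}

def rescore(nbest, context, boost=1.5, reject_below=0.6):
    """Pick the best (callsign, command) hypothesis from an n-best list
    of ((callsign, command), acoustic_score) pairs, or None to reject."""
    best, best_score = None, 0.0
    for (callsign, command), score in nbest:
        if command in context.get(callsign, set()):
            score *= boost                    # context agreement bonus
        if score > best_score:
            best, best_score = (callsign, command), score
    # Raising reject_below lowers the command error rate at the cost
    # of a higher rejection rate -- the trade-off reported above.
    return best if best_score >= reject_below else None

nbest = [(("DLH132", "DESCEND 5000"), 0.55),   # plausible per context
         (("DLH132", "DESCEND 9000"), 0.50)]   # not in AMAN context
print(rescore(nbest, CONTEXT))  # ('DLH132', 'DESCEND 5000')
```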
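The robustness against wrong callsigns can be illustrated with a closest-match lookup against the callsigns active in the scenario. The sketch uses Python's standard difflib; the cutoff value and the mapping strategy are assumptions for illustration, not the recognizer's actual logic.

```python
import difflib

# Illustrative sketch: a spoken callsign that does not exist in the
# scenario is mapped to the closest active callsign, so a digit
# transposition like DLH123 -> DLH132 need not derail the command.
ACTIVE_CALLSIGNS = ["AFR123", "DLH132", "BAW777"]

def resolve_callsign(spoken, active=ACTIVE_CALLSIGNS, cutoff=0.8):
    if spoken in active:
        return spoken
    match = difflib.get_close_matches(spoken, active, n=1, cutoff=cutoff)
    return match[0] if match else None  # None -> reject the command

print(resolve_callsign("DLH123"))  # 'DLH132'
print(resolve_callsign("KLM999"))  # None (no plausible match)
```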
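As a reading aid for the conformance result, the sketch below runs Welch's t-test on per-run non-conformance fractions using SciPy. The per-run numbers are invented to lie inside the reported ranges; the actual trial data differ.

```python
from scipy import stats

# Hypothetical per-run non-conformance (fraction of flight time in
# which the radar track deviated from the planned trajectory). The
# values are invented to fall inside the ranges reported above.
without_sr = [0.10, 0.12, 0.15, 0.18, 0.14, 0.11]  # baseline runs
with_sr    = [0.03, 0.05, 0.08, 0.10, 0.04, 0.06]  # with speech recognition

t, p = stats.ttest_ind(without_sr, with_sr, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # the trials reported p < 0.001
```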
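Finally, the manual-input figures reduce to set comparisons between the commands that had to be entered (after applying the exceptions above) and the commands actually entered via mouse. The sketch below shows this bookkeeping on invented data; the 77.6% and 10.7% figures come from the real logs, not from this toy example.

```python
# Illustrative bookkeeping only: tiny invented command sets.
required = {("DLH132", "ILS 23"), ("AFR123", "DESCEND 4000"),
            ("BAW777", "REDUCE 180"), ("DLH132", "HEADING 210")}
entered  = {("DLH132", "ILS 23"), ("AFR123", "DESCEND 4000"),
            ("BAW777", "REDUCE 170")}  # REDUCE 170 was never spoken

typed_and_spoken = required & entered
forgotten        = required - entered   # spoken but never typed
never_spoken     = entered - required   # typed but never spoken

print(f"entered via mouse:   {len(typed_and_spoken) / len(required):.1%}")
print(f"typed, never spoken: {len(never_spoken) / len(entered):.1%}")
```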
For more details, see also the papers listed in the references.