Ioannis Politis - Portfolio


PhD student at the University of Glasgow

Car warnings

Between October 2012 and May 2016 I was a PhD student at the University of Glasgow, funded by Freescale Semiconductor (now NXP), exploring the use of multimodal displays for car drivers. My PhD supervisors were Stephen Brewster and Frank Pollick. Working in the Glasgow Multimodal Interaction group, I looked for new ways to create audio, visual and tactile alerts about all kinds of events that can occur while driving. You can find a summary of each of my experiments below. You can also have a look at my publications for a more detailed description.


Using multimodal warnings in driving

Experimental setup of first set of experiments

In my first two experiments I investigated how combining audio, visual and tactile warnings of varying urgency can alert drivers. I used all unimodal, bimodal and trimodal combinations of simple cues with repeated pulses, and designed different urgency levels to reflect more or less important events on the road. I looked both at what drivers thought of the warnings and at how well they could identify them. The results showed that people clearly identified the urgency of the cues and were quicker and more accurate when responding to cues with more modalities. They also rated the more urgent cues, and the cues using more modalities, as more annoying. For more details you can look at this paper, published at Automotive UI 2013.
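The experimental design above crosses every non-empty subset of the three modalities with a set of urgency levels. As a minimal sketch (the `cue_conditions` function and the assumption of three urgency levels are illustrative, not taken from the paper), the full set of cue conditions can be enumerated like this:

```python
from itertools import combinations

MODALITIES = ["audio", "visual", "tactile"]
URGENCY_LEVELS = ["low", "medium", "high"]  # illustrative; see the paper for the actual levels

def cue_conditions(modalities=MODALITIES, urgencies=URGENCY_LEVELS):
    """Enumerate every unimodal, bimodal and trimodal cue at each urgency level."""
    conditions = []
    for r in range(1, len(modalities) + 1):          # 1, 2 or 3 modalities
        for combo in combinations(modalities, r):     # each subset of that size
            for urgency in urgencies:
                conditions.append({"modalities": combo, "urgency": urgency})
    return conditions

conditions = cue_conditions()
print(len(conditions))  # 7 modality combinations x 3 urgency levels = 21
```

This makes the size of the design space explicit: three modalities already give seven combinations per urgency level, which is part of why identification accuracy across conditions is interesting.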


The effect of situational urgency

Experimental setup of experiment on situational urgency

In this study I used the cues from the previous set of experiments and looked into how they would perform in the presence or absence of a critical event on the road. The warnings were delivered on a driving simulator, either along with a car braking event or without this event happening. I found that drivers responded more quickly in the presence of the critical event, showing how situational urgency, what is actually happening on the road, can influence responses. I also observed visual overload during the critical event: participants were not as good at responding to visual warnings in that situation. This experiment was published as a paper at CHI 2014.


Designing the Speech Tactons

The wave forms of an audio message and the Speech Tactons

What if we could feel speech while driving? This question inspired me to design a set of vibrational messages, the Speech Tactons, based on rhythmic features of speech. I evaluated a set of these warnings relating to different road situations and found that when the Speech Tactons were combined with speech, they improved drivers' ratings of alerting effectiveness. This work received the Best Paper Award at Automotive UI 2014.


Comparing abstract and language-based warnings

Experimental setup of experiment comparing abstract and language-based warnings

The next logical step was to compare the Speech Tactons with simpler abstract warnings while driving. I looked at how they would perform both in an identification (low criticality) task and in a reaction (high criticality) task. The two types of cues performed similarly in the critical task, while the abstract cues performed better in the low-criticality task. To be fair, though, the abstract cues were shorter, so people were expected to identify them more quickly. Finding that both types of cues can work equally well in critical situations was great, since it means they can be used interchangeably. This paper was presented at CHI 2015, the first CHI conference in Asia!


Designing warnings for autonomous car handovers of control

Experimental setup of experiment investigating warnings for autonomous car handovers of control

My latest experiment looked into how we can effectively inform drivers of an autonomous vehicle that they can stop driving, or that they need to take back control. This had not been explored before, and I find it a very interesting topic. First of all, which situations can lead to the driver handing over or taking back control? I identified a set of such situations and designed messages to address them. I then looked into what people thought of these messages, and also how the messages performed when drivers were playing a game and needed to return to the wheel of a driverless car. I found that people identified the urgency of all the messages and rated them as effective. Visual messages on a Head-Up Display performed poorly, since people were looking at the tablet game and not at the road when not driving! This paper was presented at Automotive UI 2015.


Along with conference presentations, I gave several invited talks during my PhD:

○    AutomotiveUI Doctoral Colloquium, ACM (2014)

○    SICSA HCI All Hands Meeting, University of Dundee (2014)

○    Pint of Science, Landsdowne Glasgow (2014)

○    CHI 2014 Doctoral Consortium, ACM (2014)

○    Crossmodal Multisensory Day, University of Glasgow (2014)

○    Freescale Future Fridays, Freescale Semiconductor Inc (2014)

○    Automotive Electronic Systems Innovation Network (AESIN), University of Warwick (2013)

