In the latest blog in the 'Why investigate' blog series, Professor Graham Edgar discusses situational awareness.
Stuck in a lift
This series of blogs has been characterised by writers of great wit and wisdom, with references to Socrates, Oscar Wilde and more. As the latest incumbent, I feel a great trust has been placed upon me to maintain the standard – a trust I plan to abuse completely by telling you about the time I was stuck in a lift in my underpants.
For the sake of probity, I should point out that they were highly respectable underpants. The sort of multi-purpose item that is sold in high-end camping shops as suitable for underwear/swimming/signal flags. That information is entirely irrelevant at the moment, but we will come back to it – sorry. What is relevant here is ‘how’ (as Dr Martin Langham would put it) I came to be stuck in the lift.
It was all to do with situation awareness (or SA, as those in the business would have it), the topic of this blog. Ironically, I was at the time attending (when not in the lift) a conference in the US on human factors. After a hard day’s conferencing I decided to make use of the hotel pool. The pool was in the basement (which makes sense) while the changing room was on the 14th floor (which doesn’t). I was travelling alone, in the lift, from the 14th floor to the basement. During the journey I was looking around and realised that my SA was incomplete. There was an innocuous-looking switch on the lift control panel and I had no idea what it did. I had to know, so I switched it.
Switching the switch
The lift stopped immediately. Given that I am slightly claustrophobic, I felt a slight panic at this point, but was reassured that the switch could obviously go both ways – so I switched it back again. And nothing happened. In rising panic, and looking now (somewhat belatedly) more closely at the switch, I could see it was discreetly marked with ‘Run’ and ‘Stop’. Reassured that it didn’t say ‘Stop forever’, I switched it a few more times and presumably some sort of reset kicked in – and the lift started moving. I did not touch the switch again.
Now, there are so many human factors lessons here. Why was such a switch left accessible for meddling human factors specialists to fiddle with? Why was it not clearly marked in big, luminous, letters? Clearly this incident was due to poor design and in no way my fault, but the key lesson in the context of this blog is the importance of SA. I didn’t fully understand the situation or what the consequences of my actions would be. It was lucky that it was a lift and not a nuclear power station (particularly given my lack of protective equipment).
So, what is situational awareness?
Well, whatever I say now, somebody is likely to disagree but, for the moment, I will go with a commonly used definition: “Knowing what is going on around you and what to do about it”. That seems a simple enough definition, but I can sense the torches being lit and the pitchforks sharpened at this very moment. I am, however, strangely reassured that no matter what definition I go for, somebody, somewhere, will be reaching for a pitchfork. The study of SA is like that. In many ways, the lively debate in the area of SA research is one of its greatest strengths; no theory goes unchallenged.
Anyway, where was I (slip of SA there)? Ah yes, knowing what is going on around you. This immediately suggests an individual level of analysis for SA. You. When SA research really took off in the 90s, the level of analysis was strongly at the level of the individual, for a number of reasons. One reason is that you have to start somewhere, and another is that the world was simpler then (seriously). SA tended to be assessed by whether or not an individual was aware of the ‘ground truth’. Do you know how fast your vehicle is travelling, or whether there is a pedestrian in your path? But even then, it’s not simple. How much of the ground truth do you need to know to ‘do the right thing’? What information is relevant to your task and what isn’t? Irrelevant information is all around us (I pointed some out right at the start of this piece) but, coming in as the human factors specialist, it’s sometimes not obvious what is, and isn’t, relevant.
As a driver, for example, is it important for you to know that the garage you pass is offering a free biscuit with every cup of takeaway coffee? It is to me, and this is an important point. What is important to one individual may not be so for another, and an individual may consider some information personally important (and attend to it) when it has nothing to do with the task in hand. Also, some of the information that a person uses to guide their actions may not be represented in the ground truth at all. They may have brought it with them. This is one reason why people make what appear to be stupid mistakes (such as driving into something clearly visible): the world wasn’t like that the many times they’ve come this way before. They are driving in the ‘usual’ situation, not the special one today. Crucially, individuals in exactly the same situation could have widely different situation awareness.
SA is most important in safety-critical situations; for example, driving a car, flying a plane or remembering your partner’s birthday. In these situations, an individual’s awareness needs to be of those aspects of the situation that are necessary for them to do their job – such as where pedestrians (for example) are when driving. My obsession with biscuits is not a problem as long as it doesn’t interfere with my awareness of those aspects of the situation essential for safe driving. Bringing prior knowledge to a situation can help as well as hinder – it can assist you in anticipating problems. So, is that problem solved in the land of SA? It doesn’t matter what else people are aware of (in the situation or in their head), as long as they are aware of the important stuff needed to successfully complete their task.
Well, that tells us what we would like to happen but, coming in as a human factors investigator, how do you assess whether it has? That is, how do you tell if a person has good SA? It’s actually much easier to see when they have poor SA because when something goes wrong it tends to show. The driver wasn’t aware of the pedestrian in their path, the pilot didn’t realise the undercarriage had jammed, the medic didn’t realise the medicine dosage was wrong. This relates directly to the ‘How’ of accidents that Martin has discussed previously. How the accident happened can often be traced back to a particular failure of SA. This gives us a good place to start in any investigation but, as Martin points out, it tells us how the accident happened, but it may not tell us why.
The situation that any individual needs to be aware of at any particular time can be very complex and can include other people, information processing systems and even hidden corporate-culture influences (there is a blind spot in the wing-mirror of your car because the manufacturer was too cheap to fit a bigger mirror). The individual is therefore a part of a complex system. Worse than that, the individual is a complex system. Prof Alex Stedmon talked cogently in a previous blog about the importance of ‘systems thinking’. This is important in the context of SA as the world around us (as I mentioned previously) has got a lot more complicated over the past few years. We now have ‘intelligent systems’ that, arguably, have their own SA. Self-evidently, technology is smarter than it used to be. I now find I am taking orders from my toothbrush (“change brush head”). Aspects of awareness that used to be the preserve of the human individual are now the preserve of non-human (I think referring to them as ‘artificial’ is unnecessarily rude) devices. In the bad old days I had to be aware of the state of my toothbrush; now it holds that awareness for me.
Systems thinking has now been applied to the study of SA, embodied particularly in the notion of ‘distributed situation awareness’ (not my term although I, and my toothbrush, are very happy with the concept), which maintains that SA is maintained by, and distributed between, multiple elements within a system and that the appropriate level of analysis is the system. From a theoretical point of view, this makes perfect sense. If studying the individual gives an insight into how an accident happened, studying the system of which they are a part may tell us why. In many ways, the individual has almost always been part of a system that incorporates other intelligent agents – other people.
The study of SA being what it is, the idea of SA as a property of the system has led to some rather evangelical views. I have come across the obviously firmly held belief that the study of individual SA is now "no longer relevant". If you were to take your car to the garage with a flat tyre, would you be happy if the garage then declared they were wedded to a pure systems-approach and would be stripping the car back to the chassis before deciding that, yes, the problem is just a flat tyre?
Such a view of individual SA also suggests that the individual and the ‘system’ are distinct. I would suggest it’s a false dichotomy. As I’ve already said, the individual is a system, and an extremely complex one at that. If we are to take a systems approach, where does the system of interest begin and end? A similar debate has played out in the domains of cognitive psychology and neuroscience, where the original conceptualisation was that different cognitive functions were carried out by distinct brain regions, and that those functions could be represented in models of cognitive processing by discrete ‘boxes’. The prevailing view now is that although you can certainly localise function to regions of the brain, those regions are massively interconnected with, and enmeshed in, other regions – and functions. So it is with individuals and systems.
A system within a system
So, where does that leave us in terms of being human factors specialists? Alex has already explored the notion that there are systems of systems. How do we know which part of the system(s) we need to look at? Particularly if we are coming in as a ‘stranger’. There may be interactions, and even whole systems at work, of which we are completely unaware. We can ‘ask the experts’, but can we be sure that even they know all the complex interactions that may be present in the system? This is where the study of the individual as a system (so, you see, still a systems-based approach) is invaluable. In many cases, the individual makes the active error leading to the accident – they provide the how, and they also provide a good place to start in establishing the why, because that individual will be linked to, and a part of, the wider systems. They are not an individual (well, they are, but we can reclassify them); they are a system within a system. If it turns out they are not linked to the wider system, well, that is probably your problem right there. From now on, whenever I refer to ‘individual’, just insert ‘system within a system’ if you wish.
Back to the lift...
Remember the lift (I certainly do)? The action of the individual reveals a clear issue with the design of the system (leaving switches accessible) but, as Martin has pointed out when talking about error traps, the error could simply be an issue with the individual – me as a part of the system. It’s possible that nobody else has ever felt the need to try the switch, although I find that hard to believe. Sometimes (but only sometimes) you can predict behaviour. The northern philosopher (and Scottish comedian) Billy Connolly puts it very well: “Never trust a man who, when left alone in a room with a tea cosy, doesn’t try it on.” Had there been a tea cosy in the lift…
The finding that humans are sometimes predictable and sometimes not is another reason for studying the individual. Humans tend to be more creative and flexible in their actions than non-human systems (although the non-human systems are catching up). Another day in the lift, I may have been thinking of something else and not even noticed the switch. The rest of the system would have been the same, but my reaction different. I often wonder how many lifts I could have stopped if I’d only been paying attention.
Analysis of the wider system
Once you consider the individual to be an integral part of the wider system, then what was previously considered an individual level of analysis becomes accessing the wider system at a particular point (e.g. ‘Graham’). For example, I conduct investigations of road traffic collisions (RTCs), often when a driver has hit a pedestrian. Which reminds me, in a previous blog, Martin mentioned how badgers like to sleep on warm tarmac. I’ve been surprised to discover how many people do the same. The focus in investigating RTCs is, for me, very firmly on the individual (the driver usually), as they are the one that may go to jail. The question is how that individual driver in that situation (system) has behaved as compared to a ‘reasonable’ driver in the same situation. Although I may start the investigation by considering the actions of the driver, the overall approach is much broader, considering the lighting, the road layout, other road users, and so on. The broader effects overlap and interact, as systems do. The appraisal of the actions and awareness of the driver are a lead-in to an analysis of the wider system.
I know that others will argue for beginning an analysis at the broad systems level. I have no problem with that. In my view, they’re just starting at the other end of the continuum (or a different entry point into the system) and we’ll probably meet somewhere in the middle. Rarely, however, do you have the time, access or funds to deconstruct the entire system. Also, it does depend on what you’re doing the investigation for (as Martin has discussed). I tend to investigate accidents and there is usually somebody that did something (or didn’t do something) that precipitated it – so I start with that, or them. I will use what I find there as a guide to where to look in the wider system. If you are in the happy situation of trying to make sure the accident doesn’t happen in the first place, then beginning by looking at how awareness is spread through a system may be a better way to go, as the entry point is perhaps less obvious. But, above all, don’t rule out any approach to studying SA as a matter of dogma.
I will finish with a word of caution for anybody trying to assess SA. Beware of over-confidence. It’s a complex concept, and whatever level you are studying (individual, system, individual system within a system…) there may be things going on of which you are completely unaware. You, as a stranger to the situation, may not have situation awareness. And so, a final quote from a fairly well-known author:
“And therefore as a stranger give it welcome. There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.” (Shakespeare, Hamlet).
Read the other blogs in this series
- Why investigate? Part 1
- Why investigate? Part 2: Where do facts come from (mummy)?
- Who should investigate? Part 3
- Human factors – the scientific study of man in her built environment. Part 4
- When to investigate? Part 5.
- How or Why. Part 6
- Why investigate? Part 7 – The questions and answers
- Why investigate? Part 8 – Why an ‘It’s an error trap conclusion’ is an error trap
- Why investigate? Part 9 – Making wrong decisions when we think they are the right decisions
- Why investigate? Part 10. Fatigue – Enter the Sandman
About the Author
Graham Edgar is Professor of Psychology and Applied Neuroscience at the University of Gloucestershire, UK. He has over 35 years’ research and consultancy experience in both academia and industry (BAe Systems). Graham specialises in modelling and measuring situational awareness across a number of domains (military, health, firefighting, driving) and also conducts forensic investigations into the perceptual and cognitive contributors to human errors and the (sometimes) ensuing accidents.