Guided Weapons: Too Much Automation or Not Enough?
Of all the recent write-ups on the Ukraine International Airlines Flight 752 shoot-down disaster in Iran, the best analysis I read of how this could happen was in Maclean's, a Canadian publication. Fifty-seven Canadians were killed in the disaster. From the text:
The problem with air defence systems, [former Canadian air defence officer Chris Kilford said], is that they are “one of the most technologically dependent” components in any armed forces. Not only can they fail but “you need highly qualified people in the fire units and command centre. If anything, this incident could reveal serious command and control and training issues in the Iranian air defence forces,” he said.
Before I knew anything else about the disaster, I wanted to know how much automation went into the decision to fire a missile at a commercial jet. But when I read that anti-aircraft defense is “technologically dependent,” I knew that there must be some automation involved. Any activity that substitutes (a) a process that unfolds without human intervention for (b) some combination of human judgment, skill, experience and action exercised in real time is automated. Having spent too much time thinking about what is worth automating–and what changes as a result of automation–I had to speculate.
Relevant pieces from the article:
1. Civilian planes like this 737 have a much different appearance on radar than a military jet. They are big and gaudy on radar; military jets are generally sleeker.
2. It is possible the missile batteries that, we now know, fired twice at the plane had an automated targeting system that incorrectly identified the plane as an enemy.
3. Anyone who fired the shot that stopped an American attack would have had “instant hero status,” according to another commentator.
4. Normal verification procedures that prevent this type of incident may have been bypassed because of the overall hostile climate.
Items 1 and 2 are related to automation. For all we know, the Iranians are using weapons that routinely misidentify their targets. Maybe a human can usually be counted on to correct the computer’s error. Items 3 and 4 are human problems, related to the Iranian military’s state of high alert.
Iran has not shot down a civilian aircraft like this before. It is possible that the mistake was a coincidence: hours after a missile strike by Iran on U.S. bases in Iraq, its forces fired two more missiles at an airplane inside their own borders. Possible, but doubtful. So what was it about this night that led to the mistake? Was an anxious human involved? Almost certainly. But was that anxious human nudged by a dangerous form of automation?
Sometimes automation consists of information-gathering. Scan the whole sky in a few seconds with radar, instead of squinting at the darkness with your eyes. But there is also automation that shortcuts human abilities, foreclosing human judgment that would otherwise be possible, because there is (or seems to be) no time. Make a “friend-or-foe” judgment using criteria gathered from sensors and specified in software, instead of analyzed in the field. Let the computer, which can gather information and offer a judgment faster, weigh more decisively in the ultimate decision. Decide that there is more to be lost by waiting than gained by making a more thorough appraisal. The human might still be in control, but he could have become a rubber stamp.
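To make that concrete, here is a minimal, entirely hypothetical sketch (in Python, with invented field names and thresholds; it is not drawn from any real air defense system) of what “friend-or-foe” criteria specified in software might look like. The point is only the shape of the logic: the verdict is computed before an operator ever sees the raw data, and the human’s role shrinks to confirming it.

```python
from dataclasses import dataclass


@dataclass
class RadarContact:
    """Illustrative fields only; real systems use far richer data."""
    radar_cross_section_m2: float  # apparent size of the radar return
    speed_m_s: float               # estimated speed of the contact
    transponder_reply: bool        # did the contact answer an identification ping?


def classify(contact: RadarContact) -> str:
    """Return 'FRIEND', 'UNKNOWN', or 'FOE' from fixed, pre-specified thresholds.

    The thresholds are invented for illustration. What matters is that the
    judgment is encoded ahead of time, before any human looks at the data.
    """
    if contact.transponder_reply:
        return "FRIEND"
    # No reply: fall back to crude heuristics baked into the software.
    if contact.radar_cross_section_m2 < 5.0 and contact.speed_m_s > 300.0:
        return "FOE"  # small and fast "looks like" a military jet
    return "UNKNOWN"


if __name__ == "__main__":
    # A degraded or misread return: the software renders a verdict, and the
    # operator is left to confirm it, often under time pressure.
    contact = RadarContact(
        radar_cross_section_m2=2.0,
        speed_m_s=320.0,
        transponder_reply=False,  # e.g., the interrogation failed or was never made
    )
    print(f"System verdict: {classify(contact)}. Operator, confirm engagement? [y/N]")
```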
I see two forms of automation here, one more dangerous than the other:
- Some automation does things that human beings can’t do (e.g., see what’s in the darkness using radar). Everyone knows that radar is a tool that allows us to see better, further, and more accurately than our own vision. It requires interpretation, because it produces information, not decisions. Radar is an enhancement that changes our abilities, but it does not force us to take action; it requires that some person choose how to respond. For example, a human in the field can disregard radar, because he knows that conditions can lead radar to make an error.
- Automation becomes a danger when it moves into decision-making territory. Then it competes with human action. There is radar that tells you that you are looking at a large object, and software that tells you that something large is an enemy (or a possible enemy). This type of automation risks concealing its status as a tool, because it seems to know more than you do. You have to defend your judgment against this kind of automation–and the automation is probably faster. It can overwhelm you. Automated decision-making may seem like it helps the operator, but it actually competes with him. It tells the operator what he ultimately wants to know (“friend-or-foe”), but reduces his power to make that judgment for himself. It may be on your side, but it raises the pressure on you. It breathes down your neck. I suspect the Iranian technology may have strayed into decision-making territory.
To some extent, automation always relieves us of the responsibility to exercise judgment, but we might be better off if we kept that responsibility for ourselves.