Can poorly designed or overly rigid protocols act as “accountability sinks”, shielding decision-makers from responsibility? If so, how can we structure protocols to promote transparency and accountability instead?
Dan Davies's The Unaccountability Machine has not been published yet, but I'm intrigued by its central notion of 'accountability sinks':
… what Dan Davies calls an “accountability sink”: a situation in which a human system delegates decision-making to a rule book rather than an identifiable individual. If something goes wrong, no one is held to account.
The origin of the problem, Davies argues, is the managerial revolution that began after the second world war, abetted by the advent of cheap computing power and the diffusion of algorithmic decision-making into every sphere of life. These systems have ended up “acting like a car’s crumple-zone to shield any individual manager from a disastrous decision”, he writes. While attractive from the individual’s perspective, they scramble the feedback on which society as a whole depends.
Seen from another perspective, accountability sinks are entirely reasonable responses to the ever-increasing complexity of modern economies. Standardization and explicit policies and procedures offer the only feasible route to meritocratic recruitment, consistent service, and efficient work. Relying on the personal discretion of middle managers would simply result in a different kind of mess.
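The question at the top can also be read as a design problem: if decisions must be delegated to a rule book, the rule book can at least be required to name an accountable human. Below is a minimal, hypothetical Python sketch of that idea (every class, field, and name here is my own illustrative assumption, not anything from Davies's book): each rule carries a named owner, each decision is appended to a reviewable log, and a rule with no owner is refused outright.

```python
# Hypothetical sketch: a decision "protocol" that refuses to act as an
# accountability sink. All names are illustrative, not from the book.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any

@dataclass(frozen=True)
class Rule:
    rule_id: str
    description: str
    owner: str          # the named individual accountable for this rule
    approved_on: str    # provenance: when the owner signed off

@dataclass(frozen=True)
class DecisionRecord:
    rule: Rule
    inputs: dict[str, Any]   # what the decision was based on
    outcome: str
    decided_at: datetime
    appeal_contact: str      # a human route around the rule book

class DecisionLog:
    """Append-only log: decisions can be reviewed, never silently erased."""

    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def decide(self, rule: Rule, inputs: dict[str, Any], outcome: str) -> DecisionRecord:
        if not rule.owner:
            # No identifiable owner means no decision gets made.
            raise ValueError(f"rule {rule.rule_id} has no accountable owner")
        record = DecisionRecord(
            rule=rule,
            inputs=inputs,
            outcome=outcome,
            decided_at=datetime.now(timezone.utc),
            appeal_contact=rule.owner,
        )
        self._records.append(record)
        return record

    def decisions_by_owner(self, owner: str) -> list[DecisionRecord]:
        # Restores the feedback loop: bad outcomes trace back to a person.
        return [r for r in self._records if r.rule.owner == owner]

# Example: a loan-denial rule with a named owner.
log = DecisionLog()
rule = Rule("credit-042", "Deny if score < 600", owner="j.smith", approved_on="2024-01-15")
record = log.decide(rule, inputs={"score": 580}, outcome="denied")
print(record.appeal_contact)  # "j.smith" -- an identifiable individual
```

The point of the sketch is that standardization and accountability need not be mutually exclusive: the rule book stays, so consistency and efficiency are preserved, but a bad outcome traces back to an identifiable person instead of disappearing into the sink.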
The Unaccountability Machine - why do big systems make bad decisions?