Collingridge dilemma

The Collingridge dilemma is a methodological quandary in which efforts to influence or control the further development of technology face a double-bind problem:

  • An information problem: impacts cannot be easily predicted until the technology is extensively developed and widely used.
  • A power problem: control or change is difficult when the technology has become entrenched.

The idea was coined by David Collingridge of the Technology Policy Unit at the University of Aston in his 1980 book The Social Control of Technology.[1] The dilemma is a basic point of reference in technology assessment debates.[2]

In "This Explains Everything," edited by John Brockman, technology critic Evgeny Morozov explains Collingridge's idea by quoting Collingridge himself: "When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult, and time-consuming." [3]

In "The Pacing Problem, the Collingridge Dilemma & Technological Determinism" by Adam Thierer, a senior research fellow at the Mercatus Center at George Mason University, the Collingridge dilemma is related to the "pacing problem" in technology regulation. The "pacing problem" refers to the notion that technological innovation is increasingly outpacing the ability of laws and regulations to keep up, first explained in Larry Downes' 2009 book The Laws of Disruption, in which he famously states that "technology changes exponentially, but social, economic, and legal systems change incrementally". In Thierer's essay, he tries to correlate these two concepts by saying that "the 'Collingridge dilemma' is simply a restatement of the pacing problem but with greater stress on the social drivers behind the pacing problem and an implicit solution to 'the problem' in the form of preemptive control of new technologies while they are still young and more manageable."[4]

A widely adopted solution to the Collingridge dilemma is the "Precautionary Principle": the belief that new innovations should not be embraced "until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions".[4] If developers fail to do so, the innovation should be "prohibited, curtailed, modified, junked, or ignored".[5] This approach has been criticized by technology critics such as Kevin Kelly, who argue that the principle is ill-defined[4] and biased against anything new, because it drastically raises the threshold that any innovation must clear. According to the American philosopher Max More, the Precautionary Principle "is very good for one thing — stopping technological progress...not because it leads in bad directions, but because it leads in no direction at all."[5]

References

  1. Collingridge, David (1980). The Social Control of Technology. New York: St. Martin's Press; London: Pinter. ISBN 0-312-73168-X.
  2. Böhle, K. (September 2009). Article in TATuP, pp. 121–125 (in German).
  3. Brockman, John, ed. (2013). This Explains Everything. Harper Perennial. p. 255. ISBN 0062230174.
  4. Thierer, Adam (2018-08-16). "The Pacing Problem, the Collingridge Dilemma & Technological Determinism". Technology Liberation Front. Retrieved 2018-09-23.
  5. Kelly, Kevin (2010). What Technology Wants. Viking Press.