Thursday, July 16, 2009

Existential Risk Mitigation

This post is partially a response to lesswrong.com/lw/12h/our_society_lacks_good_selfpreservation_mechanisms


From the Less Wrong post (I replaced the bullets with numbers for ease of reference):

"Well, Bostrom has a paper on existential risks, and he lists the following risks as being "most likely":

1 Deliberate misuse of nanotechnology,
2 Nuclear holocaust,
3 Badly programmed superintelligence,
4 Genetically engineered biological agent,
5 Accidental misuse of nanotechnology (“gray goo”),
6 Physics disasters,
7 Naturally occurring disease,
8 Asteroid or comet impact,
9 Runaway global warming,
10 Resource depletion or ecological destruction,
11 Misguided world government or another static social equilibrium stops technological progress,
12 “Dysgenic” pressures (We might evolve into a less brainy but more fertile species, homo philoprogenitus “lover of many offspring”)
13 Our potential or even our core values are eroded by evolutionary development,
14 Technological arrest,
15 Take-over by a transcending upload,
16 Flawed superintelligence,
17 [Stable] Repressive totalitarian global regime"

First, there are very few real existential risks.

Of these, 3 and 16 are the same problem, and 15 is close enough to group with them. Likewise, 11 and 14 are the same.
9, 10, 12, and 13 are not real problems.
2 is not an existential risk.
11, 14, and 17 are not existential problems in themselves, although they could limit our ability to deal with a real existential problem if one arose.

So that leaves:

1 Deliberate misuse of nanotechnology
3/15/16 Flawed superintelligence
4 Genetically engineered biological agent
5 Accidental misuse of nanotechnology
6 Physics disasters
7 Naturally occurring disease
8 Asteroid or comet impact

6 is not likely, and the only way to prevent it would be to deliberately impose 11/14, which, while not an existential risk itself, would increase the difficulty of handling an existential (or other) danger that may eventually occur.

7 and 8 are so unlikely within any given time span that they are not worth worrying about until the other dangers can be handled.
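To put the "any given time span" point in rough numerical terms, here is a minimal Python sketch of the arithmetic. The annual probability used below (and the assumption of independent, identically distributed years) is purely illustrative and mine, not a figure from the Less Wrong post or Bostrom's paper.

    # Probability of at least one occurrence within a horizon of `years`,
    # assuming independent, identically distributed annual trials.
    def prob_within(annual_prob: float, years: int) -> float:
        return 1.0 - (1.0 - annual_prob) ** years

    # Illustrative, assumed annual probability of a civilization-ending
    # impact or pandemic; not a measured or sourced value.
    assumed_annual_prob = 1e-6

    for horizon in (50, 100, 1000):
        p = prob_within(assumed_annual_prob, horizon)
        print(f"Over {horizon:>4} years: ~{p:.4%} chance of at least one event")

Even over a thousand years, the assumed rate yields only about a 0.1% chance, which is the sense in which 7 and 8 can wait until the nearer dangers are handled.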

I used to think 1 was the most likely and 5 next, but Eliezer Yudkowsky's writings have convinced me that unfriendly AI (3/15/16) is a nearer-term risk, even if not necessarily a worse one.

Libertarianism is the best available self-preservation mechanism. I am using libertarianism in the general sense of freedom from government interference. It is the social and memetic equivalent of genetic behavioral dispersion: members of many species behave slightly differently from one another, which reduces the likelihood of a large percentage falling to the same cause. The only possible defense against the real risks is to have many people researching them from many different directions; the biggest danger with any of these arises only if someone gains a substantial lead in the development or implementation of the technologies involved.
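A rough sketch of the dispersion argument, under loudly assumed numbers: if the failure of each research effort is independent of the others (independence is itself the crux of the analogy, and an assumption here, as is the 0.5 figure), many separate efforts are far less likely to all fall to the same cause than a single centralized effort is to fail.

    # Dispersion sketch: probability that every independent effort fails,
    # assuming independence and an illustrative per-effort failure chance.
    def prob_all_fail(per_effort_failure: float, efforts: int) -> float:
        return per_effort_failure ** efforts

    assumed_failure_prob = 0.5  # illustrative; not an estimate of anything real

    for n in (1, 5, 20):
        print(f"{n:>2} independent effort(s): "
              f"{prob_all_fail(assumed_failure_prob, n):.6f} chance that all fail")

The single-effort row is the monoculture case the paragraph above warns against.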
