Thursday, August 6, 2009
When I worked in remodeling and landscaping, I used to try to visualize what could go wrong and head potential problems off while we were still planning the work. I finally got tired of the man I was working for calling me a pessimist and complaining about my negativity, and quit saying anything.
Stay well away from anyone who uses the word "negativity". Every time I have heard it used, it was an attack on someone attempting to show some foresight.
Thursday, July 16, 2009
Existential Risk Mitigation
This post is partially a response to lesswrong.com/lw/12h/our_society_lacks_good_selfpreservation_mechanisms
From the Less Wrong post (I replaced the bullets with numbers for ease of reference):
"Well, Bostrom has a paper on existential risks, and he lists the following risks as being "most likely":
1 Deliberate misuse of nanotechnology,
2 Nuclear holocaust,
3 Badly programmed superintelligence,
4 Genetically engineered biological agent,
5 Accidental misuse of nanotechnology (“gray goo”),
6 Physics disasters,
7 Naturally occurring disease,
8 Asteroid or comet impact,
9 Runaway global warming,
10 Resource depletion or ecological destruction,
11 Misguided world government or another static social equilibrium stops technological progress,
12 “Dysgenic” pressures (We might evolve into a less brainy but more fertile species, homo philoprogenitus “lover of many offspring”)
13 Our potential or even our core values are eroded by evolutionary development,
14 Technological arrest,
15 Take-over by a transcending upload,
16 Flawed superintelligence,
17 [Stable] Repressive totalitarian global regime, "
First, there are very few real existential risks.
Of these, 3 and 16 are the same problem, and 15 is close enough.
And so are 11 and 14.
9, 10, 12, and 13 are not real problems.
2 is not an existential risk.
11, 14, and 17 are not existential problems in themselves, although they could limit our ability to deal with a real existential problem if one arose.
So that leaves:
1 Deliberate misuse of nanotechnology
3/15/16 Flawed superintelligence
4 Genetically engineered biological agent
5 Accidental misuse of nanotechnology
6 Physics disasters
7 Naturally occurring disease
8 Asteroid or comet impact
6 is not likely, and the only way to prevent it is to deliberately impose 11/14, which, while not an existential risk itself, would increase the difficulty of handling an existential (or other) danger that may eventually occur.
7 and 8 are so unlikely within any given time span that they are not worth worrying about until the other dangers can be handled.
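To put rough numbers on that (the yearly rate below is an illustrative assumption, on the order of the commonly cited one-per-hundred-million-years figure for civilization-ending impacts, not a measured value), here is a quick back-of-envelope sketch:

```python
# Back-of-envelope: chance of at least one civilization-ending impact
# within a planning horizon. The yearly rate is an illustrative
# assumption (roughly one such impact per 100 million years).
rate_per_year = 1 / 100_000_000

for horizon_years in (100, 1_000, 10_000):
    # P(at least one event) = 1 - P(no events over the horizon)
    p = 1 - (1 - rate_per_year) ** horizon_years
    print(f"{horizon_years:>6} years: {p:.1e}")
```

Even over ten thousand years the probability stays around one in ten thousand, far below any sensible near-term estimate for the technological risks above.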
I used to think 1 was most likely and 5 next, but Eliezer Yudkowsky's writings have convinced me that unfriendly AI (3/15/16) is a nearer term risk, even if not necessarily a worse one.
Libertarianism is the best available self-preservation mechanism. I am using libertarianism in a general sense of freedom from government interference. It is the social and memetic equivalent of genetic behavioral dispersion: members of many species behave slightly differently, which reduces the likelihood of a large percentage falling to the same cause. The only possible defense against the real risks is to have many people researching them from many different directions; the biggest dangers with any of these arise only if someone has a substantial lead in the development or implementation of the technologies involved.
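A toy simulation makes the dispersion point concrete (every number here, population size, strategy count, number of shocks, is an illustrative assumption): a shock that is fatal to one behavior wipes out a monoculture entirely but removes only a fraction of a diverse population.

```python
import random

# Toy model of behavioral dispersion. All parameters are illustrative
# assumptions. Each "shock" is fatal only to agents using one
# particular strategy.
random.seed(0)
POP, STRATEGIES, SHOCKS = 10_000, 10, 5

def survivors(strategy_count: int) -> int:
    # Assign each agent one strategy uniformly at random.
    agents = [random.randrange(strategy_count) for _ in range(POP)]
    for _ in range(SHOCKS):
        doomed = random.randrange(strategy_count)  # strategy this shock kills
        agents = [s for s in agents if s != doomed]
    return len(agents)

print("monoculture survivors:", survivors(1))          # 0 after the first shock
print("diverse survivors:   ", survivors(STRATEGIES))  # most of the population
```

The same logic applies to research: with many independent approaches, no single failure, and no single leader, dominates the outcome.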
Labels: AI, futurism, libertarian, responsibility, science, technology, transhumanism
Monday, May 25, 2009
Some Notes on Responsibility
Slightly revised version of some comments I left on lesswrong.com - Does Blind Review Slow Down Science?
First I should state that I disagree with anonymous review for the same reasons that I disagree with an unaccountable judiciary - the negative effects on responsibility.
However, there are several problems with the theory in this essay, the most important being that the editors know who the writer or researcher is and can decide to publish on that basis no matter what the reviewers say. The editors have a strong incentive to advance novel but true theories, since doing so advances the reputation of the journal.
About the unaccountable judiciary, you might check out Max Boot's book Out Of Order: Arrogance, Corruption, And Incompetence On The Bench; a large proportion of the problems he wrote about arose from judges not being personally responsible for their actions on the bench.
Also, more generally, I am a libertarian largely because I believe that everyone is totally and completely responsible for their own actions. Even if someone is holding a gun to your head, you decide what you do in response (and are responsible for letting yourself get into that position). Or if you are drunk or drugged, you are responsible for putting yourself in that position and therefore for what you do while in that state.
By "responsible", I mean that people should bear some part of the forseeable costs of their actions. I say "some part" because the actions of others also influence costs, and stress "foreseeable" because in any complex system things interact to such an extent that only very direct results can actually be attributed reliably to any one party. Most attributions of "fault" in complex systems is scapegoating or motivated by interpersonal status games.