Saturday, July 10, 2010

Alternative Explanation of The Matrix

A massive meteor bombardment destroyed the Earth's ecosystem. So humans and a massively intelligent AI did what they could to save as many people as possible.

The AI needs to keep some humans outside the Matrix as a control group and as insurance against problems inside the Matrix. To keep the outsiders outside, the AI spreads the idea that the Matrix "victims" are slaves who provide energy to the AI (even though the energy-source claim is obviously ridiculous - the people in Zion are profoundly ignorant, bordering on outright stupid).

The AI kills the thousands of people in Zion every hundred years or so, whenever they get aggressive enough to start destabilizing the Matrix and thereby threatening the billions inside it.

This makes more sense than the silliness of the movies anyway (which admittedly isn't saying much).

Tuesday, June 22, 2010

Design for Troubleshooting

Designers, both hardware and software, need to make it easier for normal users to figure out what is causing the problem when there is a failure in their systems.

I just got back from the Post Office, where I was apparently the first customer of the day, and the postage machine at the window didn't work. The postal worker took the cover off, checked that no labels were jamming it, wound it forward to make sure it wasn't stuck, put the cover back on, and tried to print again. It still didn't work. The worker took the cover off again, fiddled with it, put the cover back on, and tried again. This went on through several repetitions, until another worker opened their window and I was able to mail my item.

Any time you design something that needs a protective cover, include a "troubleshooting switch": a manual switch that allows the machine to operate only while it is held down. It is very useful to be able to see precisely where a problem is, which you can't do with the cover in place, and it speeds things up not to have to repeatedly remove and replace the cover. Such a switch should require the operator to hold it down continuously, so the machine will only run while the operator's attention is actually on it, and it should be designed so that it can't be used that way in normal operation. Unless, of course, there really isn't any need for the protective cover in the first place.
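As a minimal sketch of the interlock logic (the names and simulated inputs here are hypothetical; real firmware would read debounced hardware signals):

```python
# A minimal, hypothetical sketch of cover-interlock logic with a
# hold-to-run troubleshooting switch. Inputs are simulated booleans;
# real firmware would read debounced hardware signals instead.

def machine_may_run(cover_closed: bool, override_held: bool) -> bool:
    """Normal operation requires the cover in place; troubleshooting
    requires the override switch to be physically held the whole time."""
    if cover_closed:
        return True            # normal operation
    return override_held       # cover off: run only while switch is held

# Simulated states: (cover_closed, override_held)
for cover, held in [(True, False), (False, True), (False, False)]:
    state = "RUNS" if machine_may_run(cover, held) else "STOPS"
    print(f"cover_closed={cover}, override_held={held}: machine {state}")
```

Re-checking the switch continuously, rather than latching it, is the point: letting go stops the machine immediately.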

In complex computer applications, the equivalent would be letting the user step through an operation so they can see where the failure occurs, or at least providing more informative error messages about where the failure happened. I just signed up with a new ISP, and at the last step received a whole page of "failure notices" that did a poor job of identifying the problem. I have been using computers for 15 years, and even though I am not a programmer I managed to figure out that it was some sort of input-formatting error, so I backed out, resubmitted my information, and finally got it to work.
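To sketch what "more informative" could look like in that signup case (the field names and rules here are invented for illustration): validate each field separately and report exactly which input failed and why, instead of returning one generic failure page.

```python
# Hedged sketch: per-field validation with specific, user-readable
# messages. Field names and rules are made up for illustration.

import re

def validate_signup(form: dict) -> list[str]:
    """Return a list of specific problems, one per failing field."""
    errors = []
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", form.get("email", "")):
        errors.append("Email must look like name@example.com.")
    if not re.fullmatch(r"\d{5}", form.get("zip", "")):
        errors.append("ZIP code must be exactly 5 digits.")
    return errors

problems = validate_signup({"email": "me@example", "zip": "123"})
for p in problems:
    print(p)   # tells the user which field to fix, and how
```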

Saturday, January 16, 2010

Drugs and Their (Non) Effects

I have been doing some self-experimentation. One thing I did was try to find the best dose of caffeine for me. I tried regular coffee, decaf, and caffeine pills, several times and in different orders. With as close to controlled conditions as one person alone could arrange, I discovered that caffeine has absolutely no effect on me. I could discern absolutely no difference in alertness, learning (reading a textbook chapter and taking the test at the end), or reaction time (a simple video game). I already knew I didn't react strongly to caffeine, hence the experimenting, but the result was a surprise. Apparently all of the effects I had previously attributed to caffeine had been placebo.
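For what it's worth, a reaction-time check of that sort needs nothing fancy; here is a made-up sketch (not the actual test program used):

```python
# Minimal reaction-time trial: wait a random interval, then time how
# quickly the user hits Enter. A made-up sketch for illustration only.

import random
import time

def reaction_trial() -> float:
    time.sleep(random.uniform(1.0, 4.0))  # unpredictable delay
    start = time.perf_counter()
    input("Press Enter NOW: ")
    return time.perf_counter() - start

trials = [reaction_trial() for _ in range(5)]
print(f"mean reaction time: {sum(trials) / len(trials):.3f} s")
```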

I had already known that I get little or no benefit from acetaminophen (Tylenol). The little relief I did get was easy to attribute to the placebo effect.

I have begun wondering how much of the lower-than-unity effectiveness of most drugs and medications is the result of some people being totally unaffected by a particular drug, rather than everyone being affected to a lesser degree.
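A toy calculation shows why the two possibilities are hard to tell apart (all numbers made up): if a drug works fully on a fraction of people and not at all on the rest, a trial measuring the average effect sees exactly what it would see if the drug worked weakly on everyone.

```python
# Hypothetical responder/non-responder mixture vs. uniform weak effect.
# All numbers are invented for illustration.

full_effect = 1.0          # effect in people the drug works on
responder_fraction = 0.6   # hypothetical share of responders

# Mixture: 60% get the full effect, 40% get nothing.
mixture_average = responder_fraction * full_effect + (1 - responder_fraction) * 0.0

# Uniform: everyone gets 60% of the full effect.
uniform_average = 0.6 * full_effect

# The trial-level average is identical either way, which is why
# distinguishing the explanations requires individual-level data.
print(mixture_average, uniform_average)  # 0.6 0.6
```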

Efficient Markets Hypothesis

There have been a couple of blog posts about the Efficient Markets Hypothesis in the past week:

"How To Dis EMH" by Robin Hanson, January 9, 2010, on Overcoming Bias.

Then "The Efficient Markets Hypothesis" by Steve Sailer, January 15, 2010, on his blog.

First, according to the Efficient Markets Hypothesis, markets reflect ALL relevant information, not just public information, since any purchase or sale in a market, even one based entirely on private information, is reflected in the price. And if someone has private information but does not act on it by buying or selling, then it is not relevant, since it does not affect the market price.

Second, collapses and delayed effects are PART of the EMH. No one in touch with reality has ever claimed that markets instantly reflect all possible information (though I am sure someone, probably an academic somewhere, has made that claim).

Third, the biggest claim is obviously true: there is NOTHING that produces more accurate prices, and hence more efficient exchanges, than the market. It's not some magic wand; there is simply no humanly better alternative.

Markets are, in a way, a set of "distributed algorithms" for establishing and managing exchanges. But instead of running across computers, which are much too slow and weak, they run across human minds. Each mind deals with only a small fraction of the available information and outputs its decisions in the form of buy, sell, or hold decisions; that is, a judgment as to whether the "market price" is lower than, close to, or higher than that mind's own evaluation of value.
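A toy sketch of that picture, with everything invented for illustration: give each "mind" only a private, noisy estimate of the true value, let it compare that estimate to the posted price, and move the price with the resulting buy/sell imbalance. No agent knows the true value, yet the price finds it.

```python
# Toy model of a market as a distributed algorithm. All numbers are
# made up; the point is the mechanism, not the parameters.

import random

TRUE_VALUE = 100.0
# Each agent holds only a private, noisy estimate of the value.
agents = [random.gauss(TRUE_VALUE, 10.0) for _ in range(1000)]

price = 50.0  # start deliberately far from the true value
for _ in range(200):
    buyers = sum(1 for v in agents if v > price)   # "it's underpriced"
    sellers = sum(1 for v in agents if v < price)  # "it's overpriced"
    price += 0.001 * (buyers - sellers)            # price follows excess demand

print(f"price after trading: {price:.2f}")  # ends near 100
```

Each agent computes almost nothing - a single comparison - yet the aggregate price ends up encoding information that none of them individually has.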

When we have "minds" more powerful than our own, markets may become obsolete, as in the "Economics 2.0" of Charles Stross's Singularity novel Accelerando.

But until then humans have no viable alternatives beyond markets and crippled command-type economies. The so-called "mixed economies" are not stable; special interests, especially the interest of government bureaucrats and politicians in increasing their own power, inevitably lead to growth of the command side of the economy at the expense of the market side.

Substantially edited to correct poor wording, 1:15 PM.

Thursday, December 31, 2009

Make Mistakes on Purpose

My second blog post was on The Value of Mistakes: Mistakes and Learning. Today there is a post linked from HN, Make a Mess, Clean It Up!, that showed me a point I missed.

Namely, making mistakes on purpose so you can learn from them. One advantage of doing it on purpose is that you can choose your time, so you are fresh and ready to learn; even more importantly, you can do your learning under controlled circumstances, where you are not going to irritate and inconvenience, or worse, anyone else.

The general idea of making mistakes on purpose I vaguely remember from old Whole Earth Catalogs (I think it was in them, or maybe it was a theme I took away from them). I should have remembered it when I wrote my earlier post.

Sunday, December 6, 2009

First Draft - Science - Idealistic Versus Signaling

This is a rough draft - I just had the idea this morning and have spent only a little time working on it. Please leave any comments - I am ordering several books which should provide more information - this essay will be further refined, but probably not for at least a month, maybe more, depending on my reading and your feedback.


The responses to the recent leaking of the CRU's information and emails have led me to a changed understanding of science and how it is viewed by various people, especially people who claim to be scientists.

Among people who actually do or consume science, there seem to be two broad views - defined by what they "believe" about science, rather than what they normally "say" about science when asked.

The classical view, what I have begun thinking of as the idealistic view, is science as the search for reliable knowledge. This is the version most scientists (and many non-scientists) espouse when asked, but judging by their actions, increasingly many scientists actually hold another view.

This other view is the signaling and control view of science - the "social network" view that has been developed by many sociologists of science.

For an extended example of the two views in conflict, see this recent 369-comment thread, Facts to fit the theory? Actually, no facts at all!. PhysicistDave is the best exemplar of the idealistic view, while pete and several others take extreme signaling and control viewpoints.

I wonder how much of the fact that there haven't been any fundamental breakthroughs in the last fifty years has to do with the effective takeover of science by academics and government - that is, by the signaling and control view. Maybe we have too many "accredited" scientists, too beholden to government and, to a lesser extent, to other grant-making organizations - and they have crowded out or controlled real, idealistic science.

This can also explain the behavior of those who extol peer review, despite its many flaws, while downplaying open source science. They are control-view scientists protecting their turf, power, and prerogatives. Anyone thinking about the ideals of science, the classical view of science, immediately realizes that open sourcing the arguments and data will serve the end of extending knowledge much better than peer review, now that doing so is possible. Peer review was a stopgap means of getting a quick review of a paper, necessary when the costs of distributing information were high, but it is now obsolescent at best. Instead, senior scientists and journal editors are protecting their power by protecting peer review.


Bureaucrats, and especially teachers, will tend strongly toward the signaling and control view.

Economics and the other social "sciences" will tend toward the signaling and control view - for examples, see Robin Hanson's and Tyler Cowen's takes on the CRU leak, with their claims that this is just how academia really works, and pete, who claims a Masters in economics, in the comment thread linked above.

Robin Hanson's It's News on Academia, Not Climate
Yup, this behavior has long been typical when academics form competing groups, whether the public hears about such groups or not. If you knew how academia worked, this news would not surprise you nor change your opinions on global warming. I’ve never done this stuff, and I’d like to think I wouldn’t, but that is cheap talk since I haven’t had the opportunity. This works as a “scandal” only because of academia’s overly idealistic public image.

And Tyler Cowen in The lessons of "Climategate",
In other words, I don't think there's much here, although the episode should remind us of some common yet easily forgotten lessons.
Of course, both Hanson and Cowen believe in AGW, so these might just be attempts to avoid facing anything they don't want to look at.


As I discussed earlier, those who continue to advocate the general use of peer review will tend strongly toward the signaling and control view.

Newer scientists will tend more toward the classical, idealistic view, while more mature scientists, as they gain stature and power (especially as they move into administration and editing), will become increasingly signaling and control oriented.