I read Present one idea at a time and let others build upon it after finding it linked on Hacker News. My first reaction to the title, even before I clicked the link, was that it was probably going to restate the amateur SF writer's error of doling out ideas in their stories. Ideas are plentiful; rationing them, apparently in the belief that a story should contain only one or a few ideas, is one reason most amateurs have a hard time writing good science fiction.
On reading the essay, I realized Sivers had an excellent point, but it was a point about feedback. Presenting one idea at a time makes it easier for readers to give good feedback, and they are therefore more likely to provide it.
I wonder if there is any way to combine the two views: to provide more background and context, with the necessarily larger number of ideas that entails, while still getting useful feedback from readers.
Added: I linked to this on LW and added this in the comments there:
One idea at a time is great for getting feedback. It is not so good for a reader trying to develop understanding. And the "sequences" don't really help much; reading and rereading several of them to get context for understanding a single post is too choppy. I don't know what the best trade-off may be, but I can hope things will improve.
Friday, November 27, 2009
Thursday, November 5, 2009
Rules Destroy Intelligence
Size alone does not a bureaucracy make, though it always helps (or hurts, looking at it from a rational perspective). Rules exist in the first place to benefit the group and its production. A bureaucrat is someone who has forgotten that simple fact and worships the rules as ends in themselves, rather than as means to getting the job done. This is one reason large organizations are more bureaucratic than smaller ones: most workers are further removed from the actual job.
The ultimate in rule-bound work is automated work.
A Web example:
On September 30 I was reading a well-established post on a web site I generally like, one that already had lots of comments. Since it had a [reply] button, I naturally replied to the comments that warranted it. I didn't even realize how many I had posted until I went back to the homepage and found I had 9 of the top 10 comments. I knew from a discussion a year before that the site owners "would prefer" people not post more than 3 of the latest 10 comments, but that was before one of them left and before the reply button existed, so I didn't know if it would still be a problem, and it didn't even occur to me as I was replying to those comments.
Apparently it was a problem. On October 11, I tried to comment on a new post, my first comment since the 30th, and got an error page saying "You are posting comments too quickly. Slow down." Outstanding stupidity on the part of the web site, and an outstandingly stupid contradiction between the site's name and its actions.
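The error message shows the shape of the rule-worship: the check evidently counted something other than actual posting speed. A minimal sketch of that kind of rule-bound check is below; the class name, thresholds, and window are entirely my own invention for illustration, not the real site's code.

```python
class NaiveRateLimiter:
    """Blocks a user whose comment count in the latest-N window exceeds a
    fixed cap, while ignoring how much time has actually elapsed. The rule
    ("not too many of the latest comments") has replaced the goal it was
    meant to serve ("don't flood the discussion right now")."""

    def __init__(self, max_recent: int = 3, window_size: int = 10):
        self.max_recent = max_recent          # cap on comments per window
        self.window_size = window_size        # how many latest comments to scan
        self.latest_authors: list[str] = []   # authors of the latest comments

    def record(self, author: str) -> None:
        # Keep only the most recent window_size authors.
        self.latest_authors.append(author)
        self.latest_authors = self.latest_authors[-self.window_size :]

    def may_post(self, author: str) -> bool:
        # The "rule as end in itself": counts comments, never consults a clock,
        # so a burst of replies days ago still reads as "too quickly" today.
        return self.latest_authors.count(author) < self.max_recent
```

Under these assumptions, a user who posted 9 of the latest 10 comments stays blocked indefinitely, no matter how much time passes, which matches the behavior described above.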
Friday, October 30, 2009
Innovation and Blogging Software
I don't have anything against innovation - provided it's more useful than the inconsistency it introduces. Tools, including software, are used for other ends; they are not ends in themselves, except for the few people who specialize in them or are otherwise particularly interested in them.
Part of the problem is that different people value different things and, consequently, want different things in their tools. This inevitably introduces complexity, both in the variety of tools available and in the tools themselves.
When browsing the internet and blogs, I am interested in finding interesting or useful content, not in learning to manage a dozen different software systems. There are too many different blogging/commenting systems. For someone interested in finding useful or interesting content rather than in "communing", it is seriously annoying to keep track of how they work.
Standardize somewhat on the blogging/commenting systems. Reducing the number of different systems will lessen the complexity a lot more than adding features to one or another would increase it. Reduce the number of systems by making it easier for current sites to transfer to another system. Reduce forking of projects by making it easy to patch systems to a consistent standard.
Labels:
blogging,
creativeness,
design,
programming,
tools,
usability
What Is a Model?
A model is a simplified, abstracted representation of an object or system that presents only the information needed by its user. For example, the plastic models of aircraft I built as a kid abstract away everything except the external appearance; a mathematical model of a system shows only those dimensions and relationships useful to the model's users; a control system is a model of the relationships between the stimuli and the response desired by the designer and user of the larger system being controlled (evolution as designer and organism as user, in the biological analogy). A control system doesn't just contain a model of a system; to a large degree it is the designers' model of the system it controls.
At the simplest end are one-dimensional models, which we call measurements.
The most complex models are not explicit; they are too complex to be explicitly known, much less communicated: the model of the world that each person carries within his own mind.
Labels:
decision making,
design,
futurism,
planning,
problem solving
Thursday, October 29, 2009
The Relationship of Software Engineering, Computer Science, and Programming
Computer science underlies programming rather like physics underlies engineering. You can do some programming or practical engineering with rules of thumb and copying from references, but they will only take you so far.
What is needed for software engineering to become a reality, rather than a glorified name for programming, is a set of reliable principles for designing and building effective software, that is, software that works as expected. Prototyping is currently the most effective way of building software, but it is not software engineering; it is an admission that there is not yet a discipline of software engineering.
From what I have read, even the large scale, high reliability programs are built more by careful programming, testing, and debugging than by detailed up-front design, the way large scale engineering projects are.
The main reason is the incredible complexity of software projects. The only physical products that approach software in complexity are large scale integrated circuits.
Software engineering will be an engineering discipline when the development of a new operating system, the associated utilities, and APIs is as predictable and stable as the design and construction of a new skyscraper.
This is all from general reading and memory; if you agree or disagree with me, please leave links to any sources you may have in the comments.
Benefits of Having a Purpose
To get the benefits of having a "purpose", it doesn't need to be spiritual or altruistic or even helpful to others; all that is necessary is that it keeps you from dwelling on yourself and your own problems. Serious study, if it is interesting enough to you and difficult enough to really engage your attention, is more than enough to gain you the benefits of a "purpose".
Partially a response to a post on Less Wrong back in February.
Labels:
commitment,
creativeness,
emotion,
enthusiasm,
motivation,
rationality
Saturday, October 17, 2009
Risks, Actions, and Benefits
Partially a response and clarification (at least I think it's clearer) to the strategies presented in Alicorn's post on Less Wrong, The Shadow Question. One of the big problems in her discussion of strategies is the conflation of up-front costs with risks.
"First, Do No Harm": When it is as easy to make things worse as better, be damn sure you know what you're doing before you start fixing things.
"Cherry on Top": An invitation to fiddle; small changes are very unlikely to make things worse, and may help.
"Lottery Ticket": She talks about a risk of making things worse, but it looks more like (from her examples and general discussion) what she means is an up-front cost with a chance of significant benefit later.
Insurance: The other headings were hers, but the one she uses here is misleading, as is her discussion. This is related to "Lottery Ticket" in having up-front costs, but here the cost is paid to prevent an unacceptable risk of harm. It can be something as simple as insuring your house against fire, so that you have a temporary place to live and your house gets repaired (or you get a new house if that's easier or cheaper), or something as involved as actively working to make a risky future less likely (for example, working on Friendly AI).
Another strategy, mentioned by Morendil in a comment, is "Go for broke" (a less functional version of this would be Russian roulette): a big risk with the chance of a big reward, like First, Do No Harm but with higher potential risks and payoffs.
First, Do No Harm - Use knowledge to avoid as much risk as possible while still seeking the reward
Go for broke - Straightforward acceptance of large risk with large reward
Cherry on Top - Seek benefits at minimal risk
Lottery Ticket - Pay an up-front cost for a small chance at a large benefit
Insurance - Accept an up-front cost to hedge against a risk
Adventure sports aren't a risk management strategy; I mention them here because it feels like there should be a benefit - Seek the thrill of risk, while reducing the actual risks, and getting no benefit except the thrill
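The distinction the list draws between up-front costs and risks can be made explicit by writing each strategy as a payoff distribution and comparing expected values. A hedged sketch follows; every probability and payoff is invented purely for illustration.

```python
def expected_value(outcomes: list[tuple[float, float]]) -> float:
    """Sum of probability-weighted payoffs over (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)


strategies = {
    # Lottery ticket: a certain small up-front cost, small chance of a big win.
    "lottery_ticket": [(0.99, -1.0), (0.01, 50.0)],
    # Insurance: a certain small up-front cost that removes a rare large loss...
    "insurance": [(1.0, -1.0)],
    # ...compared with carrying the rare large loss yourself.
    "uninsured": [(0.99, 0.0), (0.01, -200.0)],
    # Go for broke: straightforward large risk, large reward.
    "go_for_broke": [(0.5, -100.0), (0.5, 150.0)],
}

for name, outcomes in strategies.items():
    print(f"{name}: EV = {expected_value(outcomes):+.2f}")
```

Under these made-up numbers the insurance premium (a sure -1.0) beats staying uninsured (an expected -2.0), which illustrates the point above: insurance is an up-front cost accepted against a risk, not a gamble on a gain. It also shows what expected value alone hides, since the uninsured catastrophic tail matters even when the averages are close.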
If you think of any other generic strategies, please leave a mention in the comments.
As an aside:
As for the title of the original post, I had to Google "Shadow Question". I don't watch television and have never seen an episode of Babylon 5; given the page I found describing the show, that was no loss. Of the "two questions", "Who are you?" is the Vorlon question and "What do you want?" is the Shadow question. I guess you could call the first one silliness, and the Shadow question practical.
Labels:
AI,
Babylon 5,
business,
decision making,
futurism,
insurance,
risks,
television