July 14th, 2006


Why you aren't a coder

There are people who can code, and people who can't[1]. If you're one of the ones that can't, wondering where you're going wrong, or one of the ones that can, wondering why people find it so hard, you'll be glad to know that the answer is in this PDF. It's an academic paper in which some CompSci academics spend a fair amount of energy trying to work out what the difference between the two types is.

After a fair amount of (fairly amusing and informative) waffling, wherein they reveal that
"Programming ability is not known to be correlated with age, with sex, or with educational attainment; nor has it been found to be correlated with any of the aptitudes measured in conventional ‘intelligence’ or ‘problem-solving-ability’ tests."
they get down to the real finding, which is that there are three types of responses and
"...what distinguishes the three groups in the first test is their different attitudes to meaninglessness."
One group refuses to engage with things that they can't see meaning in, one tries to impart meaning to the information, and one deals with the information without imposing meaning on it.

The third group is the one that makes good coders.

Bad coders expect computers to understand things. They expect computers to do the right thing - to see the meaning behind what the coder is asking and work it out for themselves.

Good coders know that all that's happening is that data is moved from place to place, transformed in various ways, and (possibly) displayed in some way. The code itself is a series of instructions that in and of themselves have no meaning - they only take on meaning when it's agreed that the changes they produce can be mapped onto the outside world. You can't rely on any internal meaning to produce the "right" output, because as far as the computer is concerned there is no right output. There's just the output you told it to give, and if that's not what you meant, it doesn't care: it doesn't understand meaning, it just understands instructions.

And to show that it's all about meaning, they tested and discovered that the success/failure of the students didn't break down in the same way when dealing with more meaningful tasks. In that case they generated the standard "normal" curve you'd expect from a range of ability, rather than clustering at two points of pass/fail.

If you expect computers to understand things then you're in for a nasty shock. I, personally, have no doubt that eventually we'll have computers with human-like intelligence, but if you expect that to make life much easier you're fooling yourself - I have just as much trouble explaining what I want to other people as I do to computers.

[1] Joel Spolsky divides it further into those coders who can deal with pointers and those who can't (see here and here), which seems to be another thing that's an ability rather than a skill. Pointers require a higher level of abstract thinking and are one of those things that coders either take to (after a few weeks of headaches) or bounce off of entirely.