Why we need to humanise computers
I'm currently working on some Semantic Web projects, and one of the things that strikes me repeatedly in the surrounding literature is the constant reference to the computer in the abstract - here's a typical quote, talking about the proposed benefits of the Semantic Web:
"better enabling computers and people to work in co-operation"
Now I have no problem with that in principle, but let's just analyse what it says, in particular the word "computers". Initially, of course, computers were actual people: often talented and clever individuals (more often than not women) whose job it was to perform complex mathematical calculations. You can see how the term then moved on to describe the machines that did the same thing. But what has that made of the people who work on the machines?
I strongly believe that we need to revisit how we conceive of the computer: not as a black box, a mysterious entity into which we push one thing and out of which we get another, but as an encapsulation of human thought, human cognition encoded. At present this distinction is given to the content stored on the machine, but very rarely is the machine itself given any credit, in terms of the hardware and software of which it is composed. To talk of it in the abstract is wrong; it has no meaning. Computers in their myriad forms are all creations of individuals, and as such are representations of how those individuals think. All their many quirks and irregularities, as well as their strengths, are direct products of the individuals who made both their hardware and software components.
So what does this mean for the Semantic Web? If there are no 'machines', why are we making machine-readable code?
My answer is that these are not so much machines as little chunks of ourselves. They offer the possibility of encoding chunks of cognition into semi-autonomous, semi-intelligent copies of tiny parts of our own abilities. This is a logical extension of the concept of distributed cognition: the idea that our thoughts do not simply reside in one place, but are distributed among the many things around us - books, notes, the way we arrange our belongings, and so on. Computer programmers can take this a step further, and use information technology to store not just a memory but also a small chunk of their own thought processes. I call this human simulation, as opposed to artificial intelligence, because there is nothing artificial about how programmers use their skills to make the complex simple.
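To make that a little more concrete, here is a toy sketch (entirely hypothetical, not from any real project) of what encoding a "small chunk of thought" might look like: a programmer writes down, as code, the personal rule of thumb they use to decide whether a paper is worth reading, so the machine carries a piece of their own judgement rather than some abstract logic.

```python
# Hypothetical illustration: a programmer's own triage heuristic, written as code.

def worth_reading(paper: dict) -> bool:
    """My personal rule of thumb for deciding whether to read a paper."""
    # I always read anything a trusted colleague recommends.
    if paper.get("recommended_by") in {"alice", "bob"}:
        return True
    # Otherwise it has to be reasonably recent and on-topic for me.
    recent = paper.get("year", 0) >= 2000
    on_topic = "semantic web" in paper.get("title", "").lower()
    return recent and on_topic

papers = [
    {"title": "The Semantic Web", "year": 2001, "recommended_by": "alice"},
    {"title": "Yet another ontology paper", "year": 1995},
]

reading_list = [p["title"] for p in papers if worth_reading(p)]
print(reading_list)  # ['The Semantic Web']
```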
One day I hope to be able to develop software that will take cognition from an individual and think for them 'off-line'. I'm not proposing AI in its classical sense; rather, a small replication of a specific problem that a human is wrestling with, where the software they have themselves created uses aspects of their own cognition to 'think' about that problem on their behalf. We are pretty certain that when we sleep our brains continue to work on problems that have presented themselves to us during the day, so I like to think of this concept as a web-based dream machine. One day perhaps ...
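As a rough, hypothetical sketch of what 'off-line' thinking could mean in practice: a small program keeps re-shuffling tomorrow's tasks overnight and scores each ordering with a preference function the user wrote themselves, so the machine is applying a piece of their cognition while they sleep. The names and the scoring rule here are my own invention, purely for illustration.

```python
# A toy 'dream machine': overnight, try orderings of tomorrow's tasks and keep
# the one the user's own encoded judgement prefers.

import itertools

def my_preference(order: tuple) -> int:
    """The user's own encoded judgement about a good working day."""
    score = 0
    # I think best in the morning, so harder tasks should come earlier.
    for position, (task, difficulty) in enumerate(order):
        score -= position * difficulty
    return score

tasks = [("write paper section", 5), ("answer email", 1), ("review ontology", 3)]

# Exhaustively (or, for real problems, heuristically) consider orderings
# and keep the one the user would prefer.
best = max(itertools.permutations(tasks), key=my_preference)
print([task for task, _ in best])
# ['write paper section', 'review ontology', 'answer email']
```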
Anyhow, for now, it's back to the Semantic Web.
That is deep; I probably need to lie down for a bit to fully grasp the future implications here.
From a human user interface point of view I believe that no human should have to 'learn' how to use a computer. Indeed, if this is required, I would class it as a design fault! Human-computer interfaces should 'learn' how to interact with the human.