Flawed Logic
You say “A runaway intelligence improvement loop is false… based on a flawed reasoning that stems from a misunderstanding of intelligence”.
That’s interesting, as your article appears to contain several misunderstandings of intelligence itself. IQ is not intelligence; it is a measure of an individual’s potential intelligence, not of their intellectual attainments. Other factors, such as imagination, courage and drive/determination, have a much greater effect on a person’s achievements in life than their potential intelligence. All things being equal, the more intelligent person should achieve more, but all things are never equal.
“You cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt.” Bad analogy: I’m guessing you never managed a factory. The fact is that you can increase the throughput of a factory line by speeding up the conveyor belt, as long as the conveyor belt speed is the bottleneck.
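A quick sketch makes the point (the stage names and rates below are hypothetical, chosen only for illustration): the throughput of a serial line is set by its slowest stage, so speeding up the conveyor raises output exactly when the conveyor is that slowest stage.

```python
# Toy model of a serial factory line: overall throughput is capped by the
# slowest stage (the bottleneck). Stage rates are made-up units per hour.

def line_throughput(stage_rates):
    """Throughput of a serial line equals the minimum stage rate."""
    return min(stage_rates)

stages = {"press": 120, "conveyor": 80, "assembly": 100, "packing": 110}
print(line_throughput(stages.values()))  # 80: the conveyor is the bottleneck

# Speeding up the conveyor does raise throughput, until another stage
# becomes the new bottleneck:
stages["conveyor"] = 150
print(line_throughput(stages.values()))  # 100: now assembly limits the line
```

Once the conveyor is no longer the slowest stage, further speed-ups buy nothing, which is your point; until then, they buy everything, which is mine.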
Your arguments are based on human intelligence, and you assume the perceived behaviours of human intelligence will apply equally to machine intelligence, but that is not necessarily so. You summarise your arguments at the end:
Remember:
- “Intelligence is situational — there is no such thing as general intelligence.” AI will prove this assumption wrong.
- “No system exists in a vacuum; any individual intelligence will always be both defined and limited by the context of its existence, by its environment. Currently, our environment, not our brain, is acting as the bottleneck to our intelligence.” An AI’s environment will be all of the internet: arguably not much of a limit.
- “Human intelligence is largely externalized, contained not in our brain but in our civilization.” This doesn’t apply to AI: why would it?
- “Recursively self-improving systems, because of contingent bottlenecks, diminishing returns, and counter-reactions arising from the broader context in which they exist, cannot achieve exponential progress in practice. Empirically, they tend to display linear or sigmoidal improvement. In particular, this is the case for scientific progress — science being possibly the closest system to a recursively self-improving AI that we can observe.” Sigmoidal growth is correct: exponential at first, leaping through “low-hanging fruit” development much as Moore’s law predicted for exponentially faster computers, then linear progress, then incrementally smaller increases as dictated by diminishing returns and theoretical boundaries to growth. This still has the potential for massive growth (see the sketch after this list). The Earth’s human population growth follows a sigmoidal curve, and look at that growth over the last 500 years.
- “Recursive intelligence expansion is already happening — at the level of our civilization. It will keep happening in the age of AI, and it progresses at a roughly linear pace.” Again you assume human rates of intelligence expansion must apply to AI as well, and there is no reason why that has to be the case.
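To make the sigmoidal point concrete, here is a minimal sketch of a logistic curve; the ceiling K, rate r and midpoint t0 are illustrative values, not fitted to any real data.

```python
import math

# Logistic (sigmoidal) growth: near-exponential early on, roughly linear
# through the middle, then flattening toward the ceiling K.
K, r, t0 = 1000.0, 0.7, 10.0  # carrying capacity, growth rate, midpoint

def logistic(t):
    return K / (1.0 + math.exp(-r * (t - t0)))

for t in range(0, 25, 2):
    print(f"t={t:2d}  value={logistic(t):7.1f}")
# Early values multiply by roughly e**r for each unit of t (the exponential
# phase); near t0 the curve climbs almost linearly; past t0 the gains shrink.
# Yet the total rise from start to plateau is still enormous.
```

Diminishing returns at the end of the curve do not make the middle of it any less dramatic; that is exactly the shape population growth has traced.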
“The impossibility of intelligence explosion” is unproven, but I doubt the hype about AI will come to be realised in my lifetime: I have yet to see a true AI. All the examples given (Go players, the Deep Blue chess player, SatNav systems, etc.) are (in my humble opinion) very Artificial but not very Intelligent.