Originally shared by Gideon Rosenblatt
Artificial Super Intelligence? Not So Fast. What Do You Mean?
If I had to distill the point that Kevin Kelly is making in this piece into one sentence, it would be: The problem with “super intelligent AI” is that there isn’t actually just one form of intelligence.
His argument against the myth of a “Superhuman AI” is more complex than this, of course. He frames it in terms of five heresies:
1) Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
2) Humans do not have general purpose minds, and neither will AIs.
3) Emulation of human thinking in other media will be constrained by cost.
4) Dimensions of intelligence are not infinite.
5) Intelligences are only one factor in progress.
Some highlights (as I am so apt to provide in my posts):
A more accurate chart of the natural evolution of species is a disk radiating outward, like this one (above) first devised by David Hillis at the University of Texas and based on DNA. This deep genealogy mandala begins in the middle with the most primeval life forms, and then branches outward in time. Time moves outward so that the most recent species of life living on the planet today form the perimeter of the circumference of this circle. This picture emphasizes a fundamental fact of evolution that is hard to appreciate: Every species alive today is equally evolved. Humans exist on this outer ring alongside cockroaches, clams, ferns, foxes, and bacteria. Every one of these species has undergone an unbroken chain of three billion years of successful reproduction, which means that bacteria and cockroaches today are as highly evolved as humans. There is no ladder.
…
I will extend that further to claim that the only way to get a very human-like thought process is to run the computation on very human-like wet tissue. That also means that very big, complex artificial intelligences run on dry silicon will produce big, complex, unhuman-like minds. If it would be possible to build artificial wet brains using human-like grown neurons, my prediction is that their thought will be more similar to ours. The benefits of such a wet brain are proportional to how similar we make the substrate. The costs of creating wetware is huge and the closer that tissue is to human brain tissue, the more cost-efficient it is to just make a human. After all, making a human is something we can do in nine months.
And my favorite quote in the piece:
Therefore when we imagine an “intelligence explosion,” we should imagine it not as a cascading boom but rather as a scattering exfoliation of new varieties. A Cambrian explosion rather than a nuclear explosion. The results of accelerating technology will most likely not be super-human, but extra-human. Outside of our experience, but not necessarily “above” it.
And finally, a little myth to close things out:
I understand the beautiful attraction of a superhuman AI god. It’s like a new Superman. But like Superman, it is a mythical figure. Somewhere in the universe a Superman might exist, but he is very unlikely. However myths can be useful, and once invented they won’t go away. The idea of a Superman will never die. The idea of a superhuman AI Singularity, now that it has been birthed, will never go away either. But we should recognize that it is a religious idea at this moment and not a scientific one. If we inspect the evidence we have so far about intelligence, artificial and natural, we can only conclude that our speculations about a mythical superhuman AI god are just that: myths.
https://backchannel.com/the-myth-of-a-superhuman-ai-59282b686c62