Just read what John Newman wrote. I’m not smart enough to summarize it for you. 😁 It’s a very interesting take on, and disagreement with, the idea of the “Singularity” we keep hearing about — the moment when AI surpasses human intelligence and allegedly takes over the world.
Originally shared by John Newman
The Multiplicity is Near
So I was at my buddy’s house and his uncle was there visiting. He was a hippie Vietnam vet with crazy conspiracy theories. After some crazy stories about the magical powers of gold, he got me ready for his really big theory.
He says, “Hey, man, you ever heard of the singularity?”
My ears perked up. “Haha, yeah, of course,” I said. “Now do you mean the technological kind or the gravity kind?”
“Nah, man. The singularity in the center of a black hole, man,” he said.
I was a little disheartened, preferring to have discussed the technological singularity with this Vietnam vet. I said, “Ah, yeah. I’m familiar. Watchu’ got?”
He said, “Well, let me tell you. At the center of a black hole, it’s not a singularity. You want to know what it is?” He paused, trying to drum up some suspense. I signalled my interest by widening my eyes and he said, “It’s a multiplicity, man!”
I smiled. While he wasn’t talking about the technological singularity, his insight neatly answered some of my own questions regarding the technological singularity. His comments rang true to what I’ve been thinking about and trying to put to words for a while. Alas, I never did quite understand what he was talking about with regard to black holes, but I was tickled by the conversation nonetheless.
I’ve argued (https://plus.google.com/+JohnNewmanIII/posts/UFqMSwa6qSL) that there is no such thing as “artificial general intelligence” and that therefore there will be no “singularity,” per se. This essentially boils down to the “no free lunch” theorem (https://en.wikipedia.org/wiki/There_ain%27t_no_such_thing_as_a_free_lunch). All progress is brute force, and any apparent shortcuts were discovered by brute force. For lack of a better term, I’ve been calling this process accidentation. In order to avoid the semantic pitfalls of “generality” when discussing AI, I also recommended a different AI taxonomy (https://plus.google.com/+JohnNewmanIII/posts/3DzQzLCRMbp).
However, that is not to say that the rate of technological progress will not continue to accelerate and that, in the near future, things will not appear to change faster than we can fathom. Rather, it’s that once we are able to replicate human consciousness digitally, we will not have a road map in front of us that tells us which direction we should evolve consciousness towards. Sure, we can add memory – we can add speed – but functionally speaking, we don’t know what forms of consciousness might be “better.” In fact, as it stands, we have no basis right now to argue that any one form of consciousness is better than any other, from a subjective perspective. Nor is there any reason to believe that that will change.
Undoubtedly, though, the technology of digital consciousness will be cracked, and it will become possible to evolve consciousnesses into other forms. There is no reason to think that that evolution from traditional human consciousness won’t occur. A particular consciousness may become preoccupied with particular futures for particular markets or social phenomena — sports, for instance. That brain may then optimize on areas that allow for better prediction capabilities related to those phenomena. But will we then say that brain is “more conscious” or “a better form of consciousness”? It doesn’t seem like that would necessarily be the case. Perhaps we would call it “fitter for a purpose,” in particular, but we wouldn’t necessarily call it better “in general.”
So, I’d argue that it is not a great singularity that we are approaching, but rather a “great multiplicity.” We are already seeing massive differentiation in machine learning and machines in general. Once we have digital consciousness, we may indeed see an “explosion” of sorts – but not the transcendent, godlike intelligence we hear about. Rather, I think we are going to see an explosion of differentiation of purposes (objective functions, if you prefer) and optimizations over those purposes. The explosion will occur horizontally, not vertically.
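The horizontal-explosion idea can be sketched in code. This is my own toy illustration, not anything from the post: the same generic optimizer, pointed at two different objective functions (two hypothetical “purposes”), settles into two different specializations. Neither result is better “in general” — each is only fitter for its own purpose.

```python
import random

def hill_climb(objective, start=0.0, step=0.1, iters=2000, seed=0):
    """Greedy local search: accept a random nudge only if it improves
    the objective. A stand-in for any brute-force optimization process."""
    rng = random.Random(seed)
    x = start
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)
        if objective(candidate) > objective(x):
            x = candidate
    return x

# Two arbitrary "purposes" (objective functions), peaking at different points.
# The names are illustrative only.
sports_predictor = lambda x: -(x - 3.0) ** 2   # maximized at x = 3
market_predictor = lambda x: -(x + 2.0) ** 2   # maximized at x = -2

a = hill_climb(sports_predictor)
b = hill_climb(market_predictor)
print(a, b)  # each run drifts toward its own objective's peak
```

The point of the sketch: the optimizer itself never got “smarter” — the divergence comes entirely from the choice of objective, which is the horizontal differentiation the post describes.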