Cold water in the lap of anyone hoping for the Singularity sometime this century:

The simulations of Modha and Markram are about as brain-like as one of those plastic brains that neuroscientists like to keep on their desks. The plastic brain has all the parts that a real brain does, it’s roughly the same color and it has about as many molecules in it. OK, say optimists, the plastic brain doesn’t actually perceive, emote, plan or decide, but don’t be so critical! Give the researchers time! Another analogy: Current brain simulations resemble the “planes” and “radios” that Melanesian cargo-cult tribes built out of palm fronds, coral and coconut shells after being occupied by Japanese and American troops during World War II. “Brains” that can’t think are like “planes” that can’t fly.

In spite of all our sophisticated instruments and theories, our own brains are still as magical and mysterious to us as a cargo plane was to those Melanesians. Neuroscientists can’t mimic brains because they lack basic understanding of how brains work; they don’t know what to include in a simulation, and what to leave out. Most simulations assume that the basic physical unit of the brain is the neuron, and the basic unit of information is the electrochemical action potential, or spike, emitted by the neuron. A typical brain contains 100 billion cells, and each cell is linked via dendrites and synapses to as many as 100,000 others. Assuming that each synapse processes one action potential per second and that these transactions represent the brain’s computational output, then the brain performs at least one quadrillion operations per second.

Computers are fast approaching this information-processing capacity, leading to claims by artificial-intelligence enthusiast Ray Kurzweil and others that computers will soon not just equal but surpass our brains in cognitive power. But the brain may be processing information at many levels below and above that of individual neurons and synapses. Moreover, scientists have no idea how the brain encodes information. Unlike computers, which employ a single, static machine code that translates electrical pulses into information, brains may employ many different “neural codes,” which may be constantly changing in response to new experiences.
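The quoted estimate is just arithmetic on round numbers, and it's worth seeing how rough it is. A minimal sketch, using the passage's own assumptions (100 billion neurons, up to 100,000 synapses each, one spike per synapse per second) with the low end of the synapse count:

```python
# Back-of-envelope estimate of the brain's synaptic "operations per second",
# using the round numbers from the quoted passage. These are assumptions for
# illustration, not measured values.

neurons = 100e9            # ~100 billion neurons
synapses_per_neuron = 1e4  # "as many as 100,000" per neuron; use the low end
spikes_per_second = 1      # assume one action potential per synapse per second

ops_per_second = neurons * synapses_per_neuron * spikes_per_second
print(f"{ops_per_second:.0e} operations/second")  # prints 1e+15
```

With the low-end synapse count this gives 10^15, the "at least one quadrillion" figure; taking 100,000 synapses per neuron instead yields 10^16. A factor-of-ten swing from one assumption is exactly why Horgan's caveat matters: if the brain also computes below the level of the spike, the estimate could be off by far more.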

Thank you, John Horgan, for saying something that makes sense. This doesn’t mean that we shouldn’t consider A.I.s as thought experiments, or work out how intelligent robots will alter our labor systems, or consider the fact that our minds will likely be enhanced/augmented by minor bits of tech within the next fifty years – those ideas are still critical to developing a better system of ethics and law than we have now. What Horgan’s point does mean, however, is that the whole discussion of the Singularity and CEV and all that other nonsense is about as ethically significant as worrying about time travel or terraforming. The technology is too many steps removed for us to make reasonable guesses or assumptions, so it’s best to focus on the imminent issues instead.