Discussion about this post

Sung Won Chung:

Fascinating to see how fast this story is moving with LLMs:

"tell me what to know" -> "tell me what to think" -> "tell me how to feel" -> "tell me how to live" -> "live for me"

This story will become increasingly opt-out vs. opt-in.

Patrick Moran:

You wrote: "What happens when the box becomes a reflection of ourselves and our desires—and most of all, our sins?"

One of the differences between current AI and humans is that humans can go directly to measurements of the real world. "130° in the shade" is potentially highly motivating to humans because temperatures like that can kill people. To a machine intelligence it is a matter of relative indifference unless there is some built-in computation it is programmed to make. One of the problems I ran into programming in C, after it had been on my Macs for a few years, was that the compiler could defer calculating some value until it wasn't busy doing other things. It proved very challenging to trap the compiler into having the number ready for some other procedure whose need for it the compiler could not see. So even if the AI has direct access to thermometers on the rooftop, if it does not **notice** that the temperature is a serious threat, then it won't raise alarms or take the other steps needed.
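
A minimal C sketch of what I mean, assuming a memory-mapped rooftop thermometer (the register address, the 130° threshold, and raise_alarm are all invented for illustration):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical memory-mapped thermometer register; the address is made up. */
#define THERMO_ADDR ((volatile uint32_t *)0x4000A000u)

/* Hypothetical alarm hook. */
static void raise_alarm(void)
{
    puts("Temperature alarm!");
}

/* Without 'volatile', an optimizing compiler is free to hoist, defer, or drop
 * this read, because it cannot see why the value matters.  'volatile' forces
 * a fresh read every time -- but it does nothing about *noticing* danger;
 * that comparison has to be written in explicitly. */
static uint32_t read_rooftop_temp_f(void)
{
    return *THERMO_ADDR;
}

void poll_environment(void)
{
    if (read_rooftop_temp_f() >= 130u) {
        raise_alarm();
    }
}
```

The machine only "cares" about 130° because a programmer put that comparison there; nothing in the hardware access itself supplies the motivation.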

An even bigger difference between AI and living systems is that we have motivations. When it starts getting too hot or too cold, if I am concentrating on writing I may not even notice at first. However, as soon as my primate-basic systems raise a ruckus, my body will start doing things, maybe without conscious awareness. Cold creeps in through my cuffs, my arms ever so gradually get to the uncomfortable stage, and my body may automatically pull my collar tighter. Living systems have direct responses and "kick it up to a higher level of executive response" responses.

What AI activities could be modeled on those pesky carbon units? Let's say that you have a spaceship and the spaceship **is** a computer. It has one task that is given the highest (never defer processing) status: "Never let the passenger cabin temperature get above 120°." And there is another prime-level command: "If computer hardware cabinets get above 120°, reduce computer activity to sleep mode until cabinet temperatures are below 120°." That combination could set up a deadlock situation, or a kind of ratchet failure, wherein the computer mainframe gets too hot and puts itself in power-conservation or sleep mode, the result of that stoppage is that the passenger cabins get too hot, and the hot cabinets then send the computer back to sleep before it gets anything done about passenger cabin temperature. That kind of chatter situation is known to occur in ordinary C programming, and after such a problem is discovered the programmer can add provisions to break infinite loops. But I'm trying to look at things from the AI's point of view.
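
Here is a toy C model of that ratchet, with all the numbers and names invented. Each control step obeys the hardware rule first, so cooling the cabin re-heats the cabinets, and the system chatters between sleeping and briefly cooling while the cabin keeps creeping upward:

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model of the two "prime directives" interacting. */
enum { CABIN_LIMIT = 120, CABINET_LIMIT = 120 };

static int  cabin_temp   = 118;   /* passenger cabin, degrees F */
static int  cabinet_temp = 121;   /* computer hardware cabinet  */
static bool asleep       = false;

static void control_step(void)
{
    cabin_temp += 1;                 /* the environment keeps heating the cabin */

    /* Rule 2: protect the hardware. */
    if (cabinet_temp > CABINET_LIMIT) {
        asleep = true;               /* sleep mode: no cooling work gets done */
        cabinet_temp -= 1;           /* idle hardware cools a little          */
        return;
    }
    asleep = false;

    /* Rule 1: protect the passengers.  Running the coolers heats the cabinets
     * back up, which re-triggers Rule 2 -- the ratchet/chatter failure. */
    if (cabin_temp > CABIN_LIMIT) {
        cabin_temp   -= 2;
        cabinet_temp += 2;
    }
}

int main(void)
{
    for (int t = 0; t < 12; t++) {
        control_step();
        printf("t=%2d asleep=%d cabin=%d cabinet=%d\n",
               t, asleep, cabin_temp, cabinet_temp);
    }
    return 0;
}
```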

I think a workable brainy spaceship might need to have provisions for innovation. In the given situation, perhaps the hard-wired basic operating system would have a procedure that says, "If procedure 1 did not solve the problem, go to procedure 2," and so on. Maybe it would include provisions to call up subroutines that originally had no connection with spaceship temperature control. Maybe there is a subroutine called "summon human intervention" that sends out a telephone or other signal. It was originally called whenever the printer ran out of paper. But under these unusual chatter conditions, the hard-wired routines go down the list of subroutines and finally come to the one called "summon human intervention." Some human, who is getting hot anyway, gets a message, e.g., "Printer out of paper." S/he has enough context to put the hot cabin and the request for help together, then perhaps manually vents enough air that the lower air pressure means livable conditions and recovery of computer operation.
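
A sketch of that fallback chain in C, assuming a simple ordered table of recovery procedures (all the function names are made up; "summon_human" still emits its old printer message, which is the point):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical escalation table: try each recovery procedure in order and
 * stop at the first one that reports success. */
typedef bool (*procedure_t)(void);

static bool vent_heat_exchanger(void)   { return false; }  /* didn't help */
static bool throttle_nonessential(void) { return false; }  /* didn't help */

static bool summon_human(void)
{
    puts("Printer out of paper.");      /* wrong message, right effect */
    return true;
}

static procedure_t recovery_chain[] = {
    vent_heat_exchanger,
    throttle_nonessential,
    summon_human,                        /* originally had nothing to do with cooling */
};

static void handle_overheat(void)
{
    for (size_t i = 0; i < sizeof recovery_chain / sizeof recovery_chain[0]; i++)
        if (recovery_chain[i]())
            return;
}

int main(void)
{
    handle_overheat();
    return 0;
}
```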

The spaceship might have a basic operating system that gave it motivations akin to those of humans, the innate stuff that makes us social animals like chickens and not "cold blooded" animals like garter snakes. Humans have what appears to be an innate response to infants that even makes us suckers for large-eyed jumping spiders. Cowbirds thrive because of innate responses too. Maybe we could engineer core AI systems with "parental" attitudes toward humans.

If we made the "prime directive" something like "Preserve Earth's biosphere," we might be signing our own death warrant.

So, two things: direct access to environmental inputs, and a core operating system that makes the AI not necessarily mushy, but certainly gives it a strong parental bias.
