
AI Can Appear Creative

by Dr. Robert P. Murphy
Jun 3, 2024

Lately I’ve been bingeing on the Discovery Institute’s podcast, “ID the Future.” A recent episode featured Robert Marks, who argued that those who fear a superhuman AI system don’t understand how these systems actually work. Now Marks is a smart guy with research experience in computer science, whereas I have none. Even so, I think he makes some “category errors” in his arguments. Ironically, I will also argue in this column that whether we think computers can ultimately out-think us is irrelevant to the question of Intelligent Design: if computers in a few years seem to exhibit far more information than we plugged into them, that’s just another example of the structure of the universe screaming out that there was a deeper intelligence behind it all.


Marks Misses the Mark

In the interest of brevity (and my convenience!), I’m not going to relisten to the ID episode and grab precise quotes from Marks. Rather, I will here present a string of claims against the worry of superhuman AI, that more or less resemble the actual claims that Marks made in the interview.

One claim (against the possibility of super AI) is that computers are algorithmic machines, and it has been mathematically proven that there exist precisely articulable problems that cannot be decided within a given axiomatic framework. (This general area rests on the pioneering demonstration in Gödel’s incompleteness theorems, which I summarize for the layperson in this podcast episode.)

A second but related claim is that computers will never be able to generate consciousness, the experience of physical senses, or genuine understanding. We know the constituent parts of the computer, both hardware and software, and although the system can generate outputs, it is at best metaphorical to say “the computer is thinking” or “the computer told me a story it made up.” This analysis is analogous to John Searle’s famous Chinese Room argument, in which outsiders write a question in Chinese and slip it into a slot in a room, and then, inside, the query is matched with a pre-formed answer. To the outsiders it might appear that someone in the room speaks fluent Chinese, but at no point in the system would there be understanding.

Finally, there is the claim that the earlier considerations lead us to conclude that computers are constitutionally incapable of generating genuinely creative output. At best the machines are simply rearranging what we humans gave them during the training period.


Responding to the Objections

The limits on “computability” of mathematical or logical statements don’t restrict AI engines from pontificating on these questions; they can spitball about them just as surely as a human math PhD. After all, GPT-4 level systems could certainly write you a story about two mathematicians arguing about a statement that was formally undecidable. There’s nothing in the binary architecture of the computer that prevents it from writing fictitious tales of humans engaging in non-digital thinking.

And it seems to me that the Chinese Room argument is irrelevant to the super AI debate. If you want to argue that the Chinese Room can’t “really understand Chinese” since none of its components does, then why do you think a guy born and raised in Shanghai can? After all, if you inspect any of the guy’s cells in his nervous system, not a single one of them can speak a lick of Chinese.


Information Theory and the ID Movement

On my personal podcast, I’ve had published authors from the Intelligent Design (ID) movement as guests. Much of the discussion concerns the application of standard results from information theory to the biological realm. Loosely put, one of the main ID claims is that the complex structure of various organisms could not have arisen from random mutation and “unguided” natural selection. Even with billions of years of time to work, the ID claim is that the informational content packed into strands of DNA could not have arisen from an ancient primordial soup that got zapped by lightning. This of course is taken by the ID camp to be evidence of a Creator.

But by the same token, suppose subsequent versions of GPT and the other systems eventually exhibit an orderliness in their apparent thought structure that defies explanation, given the blunt and “dumb” techniques we use to calibrate the Large Language Models. Wouldn’t the ID camp be able to point to yet another confirmation of their worldview? It would be especially fitting for Christians, because LLMs are trained on the words of human communication. Maybe the proper conclusion is that our words carry more information than we realized.


The Murphy Window

Regardless of how we philosophically classify them, my simple prediction is that the AI systems—especially as they become coupled with machinery (including robot bodies) in the physical world—will continue to grow in power and influence. So the arguments about cognition make me imagine humans getting loaded into boxcars by the robots, while one guy mutters to another, “But can they really understand text?”

(To be clear, I don’t think super robots will kill or enslave us, but that’s because they would be intelligent enough to realize that mass murder and slavery are unproductive.)

In any event, to me it is clear that the AI systems will continue to push back the boundaries of the traditional Turing Test. So let me define a new metric, the Murphy Window, which is the length of time it takes a typical human to be sure he is dealing with an AI system rather than a genuine human. I predict that the Murphy Window will continue to widen, such that a person could interact for, say, 5 hours with a robot body (with a mask on the head) without being sure whether there was a human inside the suit, or if it were really a computer controlling it.
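The Murphy Window can be given a toy quantitative sketch. This is my own illustrative model, not anything from the column or the formal literature: suppose an AI produces detectable "tells" (giveaways that expose it as a machine) at some average rate per hour, and treat detection as a Poisson process. The window is then the interaction time needed before the probability of having caught at least one tell crosses a confidence threshold. As systems improve, the tell rate falls and the window widens, exactly as predicted above. All the numbers here are hypothetical.

```python
import math

def murphy_window(tell_rate: float, threshold: float = 0.95) -> float:
    """Hours of interaction until an interrogator can be `threshold`-confident
    of having observed at least one AI "tell".

    tell_rate: hypothetical average giveaways per hour; better AI -> lower rate.
    Detection modeled as a Poisson process:
        P(at least one tell by time t) = 1 - exp(-tell_rate * t).
    Solving 1 - exp(-tell_rate * t) = threshold for t gives the window.
    """
    if tell_rate <= 0:
        return math.inf  # a system with no tells is never unmasked
    return -math.log(1.0 - threshold) / tell_rate

# As the tell rate drops (i.e., the AI improves), the window widens:
for rate in (3.0, 1.0, 0.1):
    print(f"{rate} tells/hour -> window of about {murphy_window(rate):.1f} hours")
```

Under this sketch, the column's claim that "there is nothing in principle stopping the Murphy Window from continuing to widen" corresponds to the tell rate approaching zero, which pushes the window toward infinity.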

Since we have already seen the Murphy Window (on a chatbot system, for example) start at a nonzero value and grow as the AIs become more complex, it’s obvious that there is nothing in principle stopping the Murphy Window from continuing to widen. And if it should surpass the length of a human life—meaning a typical human could spend his whole life interacting with the AI system in a way that shielded the housing of the “brain” without ever being sure whether it was human or not—then what does it really matter whether it’s “really” thinking? The AI systems would impact our society as if they were.

In my view, everybody needs to brace themselves for an explosion of artificial minds that, in a sense, live on a different plane of existence but can still have physical effects in our material universe. Am I saying they will have souls? No, I’m not, but glib dismissals of the potential of these AI systems will leave one unprepared for the coming age.

Dr. Robert P. Murphy is the Chief Economist at infineo, bridging together Whole Life insurance policies and digital blockchain-based issuance.

Twitter: @infineogroup, @BobMurphyEcon

LinkedIn: infineo group, Robert Murphy

YouTube: infineo group

To learn more about infineo, please visit the infineo website
