Stephen Wolfram is, technically, a high school and college dropout: he left both Eton and Oxford early, citing boredom. At age 20 he earned a PhD in theoretical physics from Caltech and joined its faculty in 1979. He went on to create Mathematica, WolframAlpha, and the Wolfram Language, and he self-published the 1,200-page book A New Kind of Science, which argues that nature runs on ultra-simple computational rules. The book won surprisingly wide acclaim.
Wolfram’s work on computational thinking underpins intelligent assistants such as Siri. In an April conversation with Reason’s Katherine Mangu-Ward, he gave a candid assessment of his hopes and fears about artificial intelligence and the complicated relationship between humans and their technology.
Reason: Are we too panicked about the rise of artificial intelligence, or not panicked enough?
Wolfram: Depends on who “we” are. I interact with a lot of people, including people who believe AI will eat us all, and people who think AI is stupid and incapable of doing anything interesting. That’s a pretty broad range.
Throughout human history, one thing that has progressively changed is technology. Technology often automates what we used to have to do ourselves. The great thing technology does is give us a higher and higher platform from which we can do more things. I think the AI moment we’re in now is one where that platform has just ratcheted up again.
You recently wrote an essay asking “Can AI Solve Science?” What would it even mean to solve science?
One of the things we expect from science is that it will predict what’s going to happen. So can AI jump ahead and figure out what’s going to happen, or are we stuck with an irreducible computation, where we can’t expect to jump ahead and predict the outcome?
As currently conceived, artificial intelligence usually means neural networks trained on data generated by humans. The idea is to take those training examples and extrapolate from them in a way that resembles how humans would extrapolate.
Now can you turn that loose on science and say, “Predict what will happen next, just as you can predict what the next word in a passage should be”? The answer is, well, no, not really.
One of the things we learned from large language models [LLMs] is that language is more predictable than we thought. Science, by contrast, runs into a phenomenon I call computational irreducibility: to know what is going to happen, you have to run the rules explicitly, step by step.
Language is something we humans created and use. The physical world is just handed to us; it is not something we invented. It turns out that neural networks work well on things we humans invented. They don’t work nearly as well on things that come from the outside world.
The reason they do so well on things we humans invented may be that their structure and operation are similar to our brains’. It takes a brain-like thing to do brain-like things. So yes, that works, but there is no guarantee that a brain-like thing can figure out what the natural world will do.
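A minimal illustration of what “running the rules explicitly” means, using Wolfram’s Rule 30 cellular automaton, a standard example of computational irreducibility. The sketch below is purely illustrative and not from the interview: the update rule fits in a line of code, yet as far as is known there is no shortcut to the pattern’s later rows other than computing every intermediate step.

```python
# Rule 30 elementary cellular automaton: a trivially simple rule whose
# long-run behavior, as far as anyone knows, can only be found by running it.
RULE = 30  # the rule number encodes the update table for 3-cell neighborhoods

def step(cells):
    """Apply one Rule 30 update to a tuple of 0/1 cells (edges held at 0)."""
    padded = (0,) + cells + (0,)
    return tuple(
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    )

width = 61
row = tuple(1 if i == width // 2 else 0 for i in range(width))  # one black cell
for _ in range(30):  # "run the rules explicitly," one row at a time
    print("".join("#" if c else "." for c in row))
    row = step(row)
```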
That sounds simple and straightforward, but it won’t keep entire disciplines from floundering for a while. It feels like this will make the crisis in scientific research worse before it gets better. Is that too pessimistic?
It used to be that if you saw a big, long document, you knew someone had put in the effort to produce it. Suddenly that looks different: someone just pressed a button and a machine generated the words.
So what does it mean to do a solid piece of academic work? My own view is that what builds the best foundation is the formal stuff.
Mathematics, for example, provides a formal domain in which you can state something with precise definitions. It becomes a brick you can expect to build on.
If you just write an academic paper, it’s a bunch of words. Does anybody know whether there’s a brick in there that can be built on?
In the past, we couldn’t watch a student work on a problem and say, “Hey, this is where you went wrong,” unless a person did it. An LLM appears to be able to do some of that. It’s an interesting inversion of the problem: yes, you can produce these things with an LLM, but you can also use an LLM to tell what’s going on.
We’re actually trying to build an AI tutor, a system that does personalized tutoring using an LLM. It’s a hard problem. The first thing you try works for a two-minute demo, and then it falls flat on its face. It’s actually quite difficult.
What does seem possible is to have the LLM frame each math problem around something you happen to care about, whether that’s cooking, gardening, or baseball. That’s a new level of human-machine interface.
So I think that’s a positive part of what’s possible. But the key thing to understand is that an essay signifying that a person put dedicated effort into writing it is no longer a thing.
We must give it up.
Correct. The thing to realize about language-based AI is that what it provides is a linguistic user interface. A typical use case: you’re trying to write a report for some regulatory filing. You have five points you want to make, but you have to submit a whole document.
So you come up with your five points. You hand them to the LLM, and the LLM puffs them up into a full document. You send it in, and on the receiving end it gets condensed back down to those points.
So essentially what’s happening is you’re using natural language as a kind of transport layer that allows you to connect one system to another.
Speaking from my strong libertarian impulses: Can we skip the complicated regulatory filing and just tell the regulators those five things?
Well, it’s also convenient, because these are two very different systems trying to talk to each other. It’s hard to get them to match up directly, but if there’s a fluffy layer in the middle, which is our natural language, it’s actually easier to get the systems to talk to each other.
I keep pointing out that maybe 400 years ago was the heyday of political philosophy, when people were inventing ideas about democracy and all that kind of thing. I think there is both a need and an opportunity to do that kind of thinking again now, because the world has changed.
As we contemplate AI eventually taking on responsibilities in the world, how should we set things up? I think this is an interesting moment that deserves a lot of thought, and it has gotten a lot less thinking than I would have expected.
An interesting thought experiment is what you might call government by prompt. One approach is for every person to write a little essay about how they want the world to be, and then feed all of those essays to an AI. Then every time you want to make a decision, you ask the AI, on the basis of all those essays from all those people, “What should we do?”
One thing to realize is that the operation of government is, in a sense, an attempt to build something like a machine. If you put an AI in place of that human-operated machine, I’m not sure how different it actually is, but it does open up other possibilities.
Robot tutors and government machines sound like something out of the Isaac Asimov stories of my youth. That sounds both alluring and dangerous, when you think about how people bring their baggage into their technology. Is there any way around that problem?
The thing to recognize is that the technology by itself is nothing in particular. What we get out of artificial intelligence is an amplified version of what humans do.
Raw computational systems can do many, many things, most of which we humans don’t care about. So when we try to get them to do the things we do care about, we inevitably pull them in a human direction.
What do you think is the role of competition in solving these problems? Will competition among AIs curb some of the ethical problems, perhaps in the same way that market competition constrains behavior in other domains?
Interesting question. I do think a society of AIs is more stable than a single AI that rules everybody. On the surface, it would prevent some of the completely crazy outcomes. And the reason there are so many LLMs is that once you know ChatGPT is possible, building one becomes, in a way, much less difficult. You see lots of companies and countries step up and say, “We’ll spend the money; we’ll build something like that.” It will be interesting to see what the improvement curve looks like from here. My own guess is that it will be gradual.
How are we going to mess this up? And by “we,” I mean maybe people with power, maybe just general human tendencies, and by “this,” I mean effectively harnessing artificial intelligence.
The first thing to realize is that AI will suggest all kinds of things people might do, just as GPS suggests directions people might follow. Many people will simply follow those recommendations. But one feature of AI is that you can’t fully predict what it’s going to do, and sometimes it will do things we decide we don’t want.
The alternative is to constrain it so that it only does what we want it to do and only does what we can predict it will do. But that means it can’t do much.
Arguably we do the same thing to humans, right? We have lots of rules prohibiting things people might do, and sometimes those rules suppress innovations those people might have produced.
Yes, that’s true. It happens in science. It’s a case of “be careful what you wish for”: you say, “I hope a lot of people do this science, because it’s really cool and it can discover things.” But once a lot of people are doing it, an institutional structure eventually forms that makes it hard for new things to happen.
Is there a way to short-circuit that? Or should we even want to?
I don’t know. I have been thinking about this question of basic science for a long time. Individuals can come up with original ideas; that becomes more difficult once things are institutionalized. Having said that: as the world’s infrastructure gets built up by large numbers of people, you suddenly reach a point where you can see new, creative things to do that you couldn’t do if it were just one person toiling away for decades. You need a collective effort to raise the whole platform.
This interview has been condensed and edited for style and clarity.
This article originally appeared in the print edition under the headline “The Powerful Unpredictability of Artificial Intelligence.”