Stephen Wolfram is, strictly speaking, a high school and college dropout: He left both Eton and Oxford early, citing boredom. At 20, he received his doctorate in theoretical physics from Caltech and then joined the faculty in 1979. But he eventually moved away from academia, focusing instead on building a series of popular, powerful, and often eponymous research tools: Mathematica, WolframAlpha, and the Wolfram Language. He self-published a 1,200-page work called A New Kind of Science arguing that nature runs on ultrasimple computational rules. The book enjoyed surprising popular acclaim.
Wolfram’s work on computational thinking forms the basis of intelligent assistants, such as Siri. In an April conversation with Reason’s Katherine Mangu-Ward, he offered a candid assessment of what he hopes and fears from artificial intelligence, and the complicated relationship between humans and their technology.
Reason: Are we too panicked about the rise of AI or are we not panicked enough?
Wolfram: Depends who “we” is. I interact with lots of people and it ranges from people who are convinced that AIs are going to eat us all to people who say AIs are really stupid and won’t be able to do anything interesting. It’s a pretty broad range.
Throughout human history, the one thing that’s progressively changed is the development of technology. And technology is often about automating things that we used to have to do ourselves. I think the great thing technology has done is provide this taller and taller platform of what becomes possible for us to do. And I think the AI moment that we’re in right now is one where that platform just got ratcheted up a bit.
Reason: You recently wrote an essay asking, “Can AI Solve Science?” What does it mean to solve science?
Wolfram: One of the things that we’ve come to expect is, science will predict what will happen. So can AI jump ahead and figure out what will happen, or are we stuck with this irreducible computation that has to be done where we can’t expect to jump ahead and predict what will happen?
AI, as currently conceived, typically means neural networks that have been trained from data about what humans do. Then the idea is, take those training examples and extrapolate from those in a way that is similar to the way that humans would extrapolate.
Now can you turn that on science and say, “Predict what’s going to happen next, just like you can predict what the next word should be in a piece of text”? And the answer is, well, no, not really.
One of the things we’ve learned from the large language models [LLMs] is that language is easier to predict than we thought. Scientific problems run right into this phenomenon I call computational irreducibility—to know what’s going to happen, you have to explicitly run the rules.
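To make “you have to explicitly run the rules” concrete, here is a minimal Python sketch (an editorial illustration, not part of the interview) of Wolfram’s Rule 30 cellular automaton, a system he often uses as an example of computational irreducibility: the only general way to know what the pattern looks like after n steps is to compute all n steps.

```python
# Rule 30 cellular automaton: a standard example of computational
# irreducibility. To know row n, you have to compute every row before it.

def rule30_step(cells):
    """Apply one Rule 30 step to a row of 0/1 cells (zero-padded boundary)."""
    padded = [0] + cells + [0]
    new_row = []
    for i in range(1, len(padded) - 1):
        left, center, right = padded[i - 1], padded[i], padded[i + 1]
        # Rule 30: new cell = left XOR (center OR right)
        new_row.append(left ^ (center | right))
    return new_row

def run(steps, width=61):
    row = [0] * width
    row[width // 2] = 1  # start from a single black cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)

if __name__ == "__main__":
    run(20)
```

Each row is computed from the whole row before it; there is no known shortcut formula that jumps straight to, say, row one million, which is the kind of irreducibility Wolfram is describing.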
Language is something we humans have created and use. The physical world, by contrast, is just delivered to us; it’s not something that we humans invented. And it turns out that neural nets work well on things that we humans invented. They don’t work very well on things that are just sort of wheeled in from the outside world.
Probably the reason that they work well on things that we humans invented is that their actual structure and operation is similar to the structure and operation of our brains. It’s asking a brainlike thing to do brainlike things. So yes, it works, but there’s no guarantee that brainlike things can understand the natural world.
Reason: That sounds very simple, very straightforward. And that explanation is not going to stop entire disciplines from throwing themselves at that wall for a little while. This feels like it’s going to make the crisis in scientific research worse before it gets better. Is that too pessimistic?
Wolfram: It used to be the case that if you saw a big, long document, you knew that effort had to be put into producing it. That suddenly became not the case. They could have just pressed a button and got a machine to generate those words.
So now what does it mean to do a valid piece of academic work? My own view is that what can be most built upon is something that is formalized.
For example, mathematics provides a formalized area where you describe something in precise definitions. It becomes a brick that people can expect to build on.
If you write an academic paper, it’s just a bunch of words. Who knows whether there’s a brick there that people can build on?
In the past we’ve had no way to look at some student working through a problem and say, “Hey, here’s where you went wrong,” except for a human doing that. The LLMs seem to be able to do some of that. That’s an interesting inversion of the problem. Yes, you can generate these things with an LLM, but you can also have an LLM understand what was happening.
We are actually trying to build an AI tutor—a system that can do personalized tutoring using LLMs. It’s a hard problem. The first things you try work for the two-minute demo and then fall over horribly. It’s actually quite difficult.
What becomes possible is you can have the [LLM] couch every math problem in terms of the particular thing you are interested in—cooking or gardening or baseball—which is nice. It’s sort of a new level of human interface.
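As a rough sketch of what “couching” a problem in a student’s interests could look like in code, the templates and helper below are illustrative assumptions, not Wolfram’s actual tutoring system:

```python
# Hypothetical sketch: wrapping the same arithmetic exercise in whatever
# topic the student cares about, as a personalized tutor might.

TEMPLATES = {
    "cooking": "A loaf needs {a} grams of flour. How much flour for {b} loaves?",
    "gardening": "Each garden row holds {a} seedlings. How many seedlings fill {b} rows?",
    "baseball": "A batter averages {a} hits per game. How many hits over {b} games?",
}

def couch_problem(a, b, interest):
    """Phrase the underlying a-times-b exercise in terms of the student's interest."""
    template = TEMPLATES.get(interest, "What is {a} times {b}?")
    return template.format(a=a, b=b)

if __name__ == "__main__":
    for interest in ("cooking", "gardening", "baseball"):
        print(couch_problem(12, 7, interest))
```

The underlying exercise is the same in every case; only the framing changes.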
So I think that’s a positive piece of what becomes possible. But the key thing to understand is that the idea that an essay means somebody committed the effort to write an essay is no longer a thing.
Reason: We’re going to have to let that go.
Wolfram: Right. I think the thing to realize about AIs for language is that what they provide is kind of a linguistic user interface. A typical use case might be you are trying to write some report for some regulatory filing. You’ve got five points you want to make, but you need to file a document.
So you make those five points.
You feed those five points into the LLM, which expands them into the full document. The agency then runs its own LLM over the filing to extract the two points it actually cares about. Natural language ends up acting as a bridge between the two systems.
One might wish to simplify the regulatory process by sending regulators the key information directly, but natural language is what lets these disparate systems communicate at all.
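A minimal sketch of that round trip, assuming a generic llm_complete() helper as a stand-in for whatever LLM API is actually in use (every name here is hypothetical):

```python
# Sketch of the "five points out, two points back" round trip described above.
# llm_complete() is a placeholder for a real LLM client call.

def llm_complete(prompt):
    raise NotImplementedError("Plug in an actual LLM client here.")

def expand_to_filing(points):
    """Filer's side: puff a short list of points out into a full document."""
    bullets = "\n".join("- " + p for p in points)
    return llm_complete(
        "Write a complete regulatory filing that makes only these points:\n" + bullets
    )

def extract_key_points(filing, n=2):
    """Agency's side: condense the filing back down to the points that matter."""
    return llm_complete(
        "List the " + str(n) + " most important points made in this filing:\n" + filing
    )

# Natural language is just the transport format between the two systems:
# filing = expand_to_filing(["Emissions fell 12 percent", "No new facilities planned"])
# key_points = extract_key_points(filing)
```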
There is a need for renewed political philosophy to address the changing world, especially in relation to AIs taking on responsibilities. The promptocracy model of government, where AI interprets essays to make decisions, raises questions about the role of technology in governance.
Competition among AIs may help check unethical behavior, but it also accelerates advances in AI capability. Balancing control over those capabilities with innovation is a key challenge.
AIs may suggest actions that lead to unpredictable outcomes, similar to human behavior. Restricting AI too much limits its potential, mirroring limitations placed on human innovation.
Institutional structures can stifle creativity, whether in science or other fields. Balancing the need for regulation with the desire for innovation is a complex challenge for society, and raising the whole platform of what is possible takes contributions from everyone involved.
This interview has been condensed and edited for style and clarity.