
Will AI replace humans in the future?

In the coming years, artificial intelligence is probably going to change your life, and possibly the entire world. But people have a hard time agreeing on how.

Below are excerpts from an interview at the World Economic Forum where renowned computer science professor and AI expert Stuart Russell helps separate the sense from the nonsense.

There is a big difference between asking a human to do something and giving that as a goal to an AI system. When you ask a person to get you a cup of coffee, you don't mean that this should be their life's mission and that nothing else in the universe matters.

Even if they have to kill everyone else in Starbucks to get you the coffee before it closes, they should. No, you don't mean that. All the other things we mutually care about should factor into their behavior as well.

And the problem with the way we build AI systems now is that we give them a fixed objective. The algorithms require us to specify everything in that objective.

And if you say, can we fix ocean acidification?

Yes, you can have a catalytic reaction that does this very efficiently, but it uses a quarter of the oxygen in the atmosphere, which would cause us all to die quite slowly and unpleasantly over several hours.

So, how can we avoid this problem?

You might say, OK, be more careful about defining the objective: don't forget atmospheric oxygen. And then, of course, some side effect of the reaction in the ocean poisons all the fish. Well, I meant don't kill the fish either.

And then, well, what about seaweed?

Don't do anything that causes all the seaweed to die. And on and on. And the reason we don't have to do that with humans is that humans often know they don't know all the things we care about.

If you ask a person to bring you a cup of coffee and you're at the Hotel George Sand in Paris, where coffee is 13 euros a cup, it's perfectly reasonable for them to come back and say, well, it's 13 euros.

Are you sure you want this, or could I go next door and get one? And that's a perfectly normal thing for a person to do. To ask, I'm going to repaint your house; is it OK if I take down the gutters and put them back?

We don't think of it as a very sophisticated ability, but AI systems don't have it, because the way we build them now, they have to know the full objective.

If we build systems that know they don't know what the goal is, they start to exhibit these behaviors, like asking permission before exhausting all the oxygen in the atmosphere.

In a sense, control over an AI system comes from the machine's uncertainty about what the true objective is.

And it's when you build machines that believe with certainty that they have the objective that you get this kind of psychopathic behavior. And I think we see the same thing in humans.
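To make the contrast concrete, here is a minimal toy sketch in Python (an illustration of the idea only, not Russell's actual formulation; all names, utilities, and probabilities are invented for the example). A fixed-objective agent acts regardless of side effects, while an agent that is uncertain about the human's true objective works out that asking permission first has higher expected value.

```python
# Toy contrast between a fixed-objective agent and one that is uncertain
# about the human's true objective. Hypothetical numbers for illustration.

# Possible "true" human utilities for the plan "run the catalytic reaction":
# the stated benefit (fixing acidification) net of side effects the human
# may or may not care about (atmospheric oxygen, fish, seaweed, ...).
candidate_utilities = {
    "only cares about acidification": +10.0,
    "also cares about atmospheric oxygen": -1000.0,
    "also cares about fish and seaweed": -50.0,
}

# The agent's belief about which of these is the human's real objective.
beliefs = {
    "only cares about acidification": 0.6,
    "also cares about atmospheric oxygen": 0.3,
    "also cares about fish and seaweed": 0.1,
}

ASK_COST = 1.0  # small cost of interrupting the human with a question


def fixed_objective_agent() -> str:
    """Believes with certainty that the stated objective is the whole story."""
    return "act"  # scores +10 on its own objective, whatever the side effects


def uncertain_agent() -> str:
    """Knows it doesn't know the full objective; weighs acting vs. asking."""
    # Expected utility of acting now, averaged over what the human might want.
    eu_act = sum(beliefs[h] * candidate_utilities[h] for h in beliefs)
    # If it asks, the human reveals the true utility, and the agent then acts
    # only when that utility is positive, so every catastrophic branch is vetoed.
    eu_ask = sum(beliefs[h] * max(candidate_utilities[h], 0.0) for h in beliefs) - ASK_COST
    return "ask" if eu_ask > eu_act else "act"


print(fixed_objective_agent())  # -> act (and possibly use up the oxygen)
print(uncertain_agent())        # -> ask (eu_ask = 5.0 beats eu_act = -299.0)
```

The point of the sketch is that the uncertain agent's preference for asking falls out of the arithmetic: because one of the hypotheses it can't rule out is catastrophic, deferring to the human dominates acting unilaterally.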

What happens when general-purpose AI hits the real economy?

How do things change?

Can we adapt?

This is a very old point. Amazingly, Aristotle actually has a passage where he says: Look, if we had fully automated weaving machines and plectrums that could pluck lyres and produce music without a human hand, then we would not need workers.

This idea, which I think Keynes called technological unemployment in the 1930s, is very clear to people. They think, yes, of course, if the machine does the work, I will be unemployed.

Think of the warehouses that companies currently operate for e-commerce; they are semi-automated.

The way it works is that in an old-style warehouse, where you have tons of stuff piled up everywhere and humans run around to fetch it and ship it out, there is now a robot that goes and retrieves the shelving unit that has what you need,

but the human still has to pick the item out of the bin or off the shelf, because that's still too hard. But, at the same time, would you build a robot that is accurate enough to pick any of the huge variety of items you can buy?

That would, in one fell swoop, kill 3 or 4 million jobs?

There is an interesting story, "The Machine Stops," written by E.M. Forster, where everyone is completely dependent on the machine.

The story is really about the fact that if you hand over the management of your civilization to machines, you lose the incentive to understand it yourself or to teach future generations how to understand it.

You can actually see "WALL-E" as a modern version, where everyone is enfeebled and infantilized by the machine, and that hasn't been possible until now. We put a lot of our culture into books, but books can't run it for us.

And so we always have to teach the next generation. If you work it out, that's about a trillion person-years of teaching and learning and an unbroken chain spanning tens of thousands of generations.

What will happen if this chain is broken?

I think that's something we're going to have to understand as AI moves forward. You won't be able to pinpoint the actual date of arrival of general-purpose AI; it isn't a single day. It's also not all or nothing.

The impact is growing: every advance in AI significantly expands the range of tasks machines can do. So in that sense, I think most experts say that by the end of the century we're very likely to have general-purpose AI.

The median estimate is somewhere around 2045. I'm a little more on the conservative side; I think the problem is harder than we think.

I like what John McCarthy, one of the founders of AI, said when he was asked this question: somewhere between five and five hundred years. And I think we're going to need several Einsteins to make it happen.
