
While we're apparently awaiting a blog from him on the topic, the article also points to his comments that AI is growing at nearly exponential rates. Another article then dismisses his warnings as hype because AI is in such an infant state right now. But that ignores the point that, if AI is indeed gaining steam exponentially, it could become incredibly sophisticated in a short amount of time. My only doubt on that front is that it takes humans -- who are not advancing their own intelligence exponentially -- to develop AI.
Of course, some of this comes down to processing speed and memory, which do keep increasing at a blazing pace in our own computers. So while it's hard to say whether AI will become a legitimate threat to human existence in the next 5 years -- 10 tops, according to Musk -- it should be stunning to see where it is a generation from now.
Which is precisely a topic tackled in Darwood & Smitty. And one wonders whether our meeting with aliens in 2020 (also D&S) wouldn't also speed up the advance of AI, if they're not already feeding some corporations with information. Right? (Crickets.)
The question about AI is how many of its own decisions it will become capable of making. And we should expand our idea of AI for a moment here. When a car uses radar to sense the car ahead of it and adjusts its cruising speed to keep a safe stopping distance, that is a kind of artificial intelligence. Yes, it's been programmed by a human, so the intelligent decision it makes is not actually its own -- but it could appear to be. Especially if you put that intelligence into a humanoid body and had it answer a simple question instead of just braking for safety.
But what happens when we also have a car adjust for the car behind it? Maybe it blinks the brake lights to ask the other driver to back off a bit. Maybe it speeds up a little if that car comes too close, and perhaps it moves into another lane if it needs to for safety. That's another programming possibility. But what happens when it's getting too close to the car in front, and a car behind is creeping in, and there's no lane change to be made? What happens when none of the programmed decisions can be made? Is there ever a point when it makes its own next decision based on the complex underlying program? And with autonomous cars practically in our lives already, how far off are we from that?
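Just to make the thought experiment concrete, here's a rough sketch (in Python, purely hypothetical -- not anyone's actual car software, and the gap numbers are made up) of what that kind of layered, programmed decision-making might look like, including the moment where none of the rules fit:

# Hypothetical sketch of rule-based "intelligence" in an adaptive cruise controller.
# None of this is real automotive code -- just an illustration of programmed decisions.

SAFE_GAP_M = 40.0      # desired gap to the car ahead, in meters (made-up value)
REAR_WARNING_M = 10.0  # gap behind us that triggers a reaction (made-up value)

def decide(gap_ahead_m, gap_behind_m, adjacent_lane_clear):
    """Return one programmed action based on what the sensors report."""
    if gap_ahead_m < SAFE_GAP_M and gap_behind_m > REAR_WARNING_M:
        return "slow_down"                # classic cruise-control behavior
    if gap_ahead_m >= SAFE_GAP_M and gap_behind_m <= REAR_WARNING_M:
        return "flash_brake_lights"       # ask the tailgater to back off
    if gap_ahead_m < SAFE_GAP_M and gap_behind_m <= REAR_WARNING_M:
        if adjacent_lane_clear:
            return "change_lane"          # squeezed, but there's an exit
        return "???"                      # squeezed with no lane change to make:
                                          # no programmed rule covers this case
    return "hold_speed"

print(decide(gap_ahead_m=30.0, gap_behind_m=8.0, adjacent_lane_clear=False))
# -> "???"  -- the interesting case from the paragraph above

Every branch there is still a human's decision written down in advance; the interesting question is what fills in that "???" once the real system is orders of magnitude more complex than a toy like this.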
And if AI can ever make its own decisions, would it ever purposely make decisions that are harmful? I mean deliberately doing harm, not just harm by accident. We'd have to think about that from a non-human perspective. What we think of as harmful might be (probably would be) an absolutely neutral decision for AI. It wouldn't see decisions as right or wrong -- just as whichever option most closely aligns with its programming, I suppose. So are we fearing robots intent on killing us, or something more like an accident, where they're programmed to look for energy sources to maintain themselves and, as a result, they methodically shut off the competition for energy and our world shuts down? Hmm. It will be interesting to read more about what Musk thinks could happen.
To me, the greater threat -- and simultaneously our greatest opportunity to improve our world and the human condition -- comes from the world simply being connected. This gives conniving people ever more access to the data that runs our lives, our weapons, and our world. But it also allows people to align themselves for causes that make a difference. I have a vision of the latter winning the day.
What do you think? Is AI a threat? Are robots? Are people still the biggest threat we face for the foreseeable future? Or is it something else altogether? And how do we overcome?