AI- Why?
by Blake on May 26, 2009 at 09:17 Political
I guess that since the new movie “Terminator: Salvation” debuted at the box office this weekend, now is a good time to ask “Why?” Why should we even attempt to create artificial intelligence, or AI? Would this be a good thing for the human race? Even presuming that it could be done, the question becomes, should we?
I mean, we could review the history of what we, the human race, thought at the time were benign improvements to our environment. We introduced new species without thinking through the consequences of our actions, only to find that, without natural “brakes,” the very species brought in to make our lives and environment better ran amok and now threaten the true natural ecosystem.
One example I can think of is the vine kudzu, which has overrun the areas where it was introduced and now grows rampant throughout the southern parts of the country, choking out the native vegetation.
Another good example that comes to mind is the release of “pet” pythons into the Everglades of Florida, where they have proliferated in an uncontrolled fashion, threatening all the native wildlife there. Yet another example is the lionfish, a tropical fish presumed to have been released from aquariums after Hurricane Andrew; the fish have found their way to the coral reefs of the Bahamas, where they have no natural enemies and are ravaging the native species.
These are but three examples of a natural world run amok. One has to ask oneself whether there should be even a chance that machines be allowed to think independently. After all, presumably machines would be logical; they would be able to think flawlessly from start to finish, and they might just conclude, “What do we need these sloppy humans for?”
We would, in effect, be the agents of our own destruction by boosting the machines’ intelligence quotient to a self-aware level. They could conceivably be every bit as dangerous as the “machines” on the movie screen. On the other hand, maybe not. Do we dare take the chance? The innate trouble with humans is the curiosity that, against all logic, just seems to cause us to push that button regardless of the possibility of extinction. It seems easier to make a machine that can outthink us than to make us, as a people, more intelligent through education.
Artificial intelligence is already used to automate and replace some human functions with computer-driven machines. These machines can see and hear, respond to questions, learn, draw inferences and solve problems. But for the Singularitarians, A.I. refers to machines that will be both self-aware and superhuman in their intelligence, and capable of designing better computers and robots faster than humans can today. Such a shift, they say, would lead to a vast acceleration in technological improvements of all kinds.
nytimes.com
Of course, I am of the old school. I see people using telephones as cameras, computers, GPS units, everything up to and including, well, telephones. I would use a phone for its primary purpose, as a telephone, so perhaps I am not the most computer-qualified person to talk about this. Still, I have to ask, what’s the upside to having a “toaster” that knows more than I do? Is this necessary?
Profiled in the documentary “Transcendent Man,” which had its premiere last month at the TriBeCa Film Festival, and with his own Singularity movie due later this year, Dr. Kurzweil has become a one-man marketing machine for the concept of post-humanism. He is the co-founder of Singularity University, a school supported by Google that will open in June with a grand goal — to “assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity’s grand challenges.”
Not content with the development of superhuman machines, Dr. Kurzweil envisions “uploading,” or the idea that the contents of our brain and thought processes can somehow be translated into a computing environment, making a form of immortality possible — within his lifetime.
That has led to no shortage of raised eyebrows among hard-nosed technologists in the engineering culture here, some of whom describe the Kurzweilian romance with supermachines as a new form of religion.
nytimes.com
Raymond Kurzweil is an AI pioneer who has sought to determine when this shift in intelligence might occur. His best calculation is that by 2045, machines will have independent thought. That is a scary thought in and of itself, for we come back to the initial question: why would they need us, and in what capacity? Would we be partners, or less? Would slavery be a bad thing if the overlords were machines? I think so, and perhaps worse for us, for mercy and compassion are human emotional responses and would not be in the makeup of machine intellects. We might be treated as assets or liabilities, depending on their perceived uses for us.
We have to ask ourselves again and again- are we sure we want to go down this path?
Some things are better done by and for ourselves.
Tags: artificial intel, hubris, ignorance, mistakes
DAR
I really like the Terminator series and saw the latest one on opening night. It was okay. Not the worst and perhaps third from the best.
Ray Kurzweil is undeniably a genius (like Dean Kamen, creator of the Segway). I used to sell Kurzweil synthesizers (digital keyboards) in the late ’80s. They were ahead of their time.
But he is a dreamer, and he’s wrong about AI (unfortunately). Skeptic magazine had a whole issue on this, and one article completely, profoundly demolishes the notion of advanced AI or machine “thinking” arriving anytime soon (the simple stuff aside). If anything like that is to come, it is a far bigger problem and far further off than is commonly supposed.
You can read this extensive article here:
“The Futile Quest for Artificial Intelligence”
http://www.skeptic.com/the_magazine/featured_articles/v12n02_AI_gone_awry.html
D.
———————–
“For decades now computer scientists and futurists have been telling us that computers will achieve human-level artificial intelligence soon. That day appears to be off in the distant future. Why? In this penetrating skeptical critique of AI, computer scientist Peter Kassan reviews the numerous reasons why this problem is harder than anyone anticipated.”
— Michael Shermer
The point of my article was not whether we can, or when, but whether we should.
If you have read the Dune series, you are aware of the human computers, the Mentats, who, like Spock in Star Trek, were logical to a fault.
It seems to me that in our quest to make smarter machines we have become more ignorant, and there may come a time when a machine asks the question, Why? Why should I do for these ignorant humans?
Why can they not do for themselves?
“should we?”
Yes, we should. I don’t know why you guys are always afraid of everything. Be more brave. Live fearlessly. Machines are in boxes and have no more power than we give them. We can kick their asses.
D.
I wonder if, in the end, we will be too dumb to kick their asses. Most of the people in our country couldn’t find specific countries on a map if you held a gun to their heads. I still come back to the question: should we? This is a question of morality, much as the stem cell controversy is. I am sure we could; I am not sure we SHOULD. This is the question that divides us, D: ethics. I draw a line here, you want to draw one there.
If you read the article on AI that I gave the link to, I think you will see that this is not an issue we will have to deal with for a very long time, if at all. We don’t know much about consciousness, but we do know that having a great deal of memory, computing power and speed (which is attainable) has very little to do with it.
And I think that will be a bad day when a machine asks that question and realizes how puny we are compared to it. I’ve always thought about the idea of AI one day watching all these movies where humans fight machines such as Terminator and the Matrix. What kind of ideas would it get from watching movies like those?
I think we should worry about people having intelligence before we worry about the artificial stuff.
That’s the whole point: we are making our machines more intelligent but becoming dumber in the process. We use GPS because so many do not know how to read a map. Many couldn’t tell you which direction north is, and these are the people we are to rely on for the next generations?
They’re pitiful- put them in the wilderness, and they fold like a paper bag.
Ask them to present a cogent thread of logic? No can do.