Artificial Intelligence

JustAFan

The Adebayo Akinfenwa of football photoshoppers
Joined
Sep 15, 2010
Messages
32,377
Location
An evil little city in the NE United States
So basically what it's saying is that we (humans) are not as smart as we think we are, but we're getting smart enough to make something that will either make us immortal (thus making us look like geniuses) or lead to our extinction (making us look foolish).

some joke about Skynet.....
 

Pogue Mahone

The caf's Camus.
Joined
Feb 22, 2006
Messages
134,169
Location
"like a man in silk pyjamas shooting pigeons
So basically what it's saying is that we (humans) are not as smart as we think we are, but we're getting smart enough to make something that will either make us immortal (thus making us look like geniuses) or lead to our extinction (making us look foolish).

some joke about Skynet.....
Pretty much. With the consensus of people who know about this stuff apparently inclined towards a negative outcome.

Fecking soon too. 40 years or less. Then it's goodnight Vienna.
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,798
Location
London
I think that AI will be a real challenge for humanity in the future. The problem is that there is no way to predict what the hell will happen with them when they eventually become smarter than us (which will most likely happen in the next 50 years). AIs generalize things in a very different way to how humans do. Predictions of it happening by 2029 are bullshit IMO; AI researchers are known for giving super-optimistic predictions which never materialize (from Minsky's time in the fifties until now, the same thing has happened).

And the leading researchers don't seem to give a shit about the possible problems it will cause. Forget the technological singularity, there are many problems that AI may cause; problems which might not be solvable by the time we realize them.

In other news, there is quite a high probability that I'll start a PhD in AI next year.
 

Snow

Somewhere down the lane, a licky boom boom down
Joined
Jul 10, 2007
Messages
33,482
Location
Lousy Smarch weather
I know that both Bill Gates and Elon Musk are wary of AI progress.
 

adexkola

Doesn't understand sportswashing.
Joined
Mar 17, 2008
Messages
48,547
Location
The CL is a glorified FA Cup set to music
Supports
orderly disembarking on planes
I think that AI will be a real challenge for humanity in the future. The problem is that there is no way to predict what the hell will happen with them when they eventually become smarter than us (which will most likely happen in the next 50 years). AIs generalize things in a very different way to how humans do. Predictions of it happening by 2029 are bullshit IMO; AI researchers are known for giving super-optimistic predictions which never materialize (from Minsky's time in the fifties until now, the same thing has happened).

And the leading researchers don't seem to give a shit about the possible problems it will cause. Forget the technological singularity, there are many problems that AI may cause; problems which might not be solvable by the time we realize them.

In other news, there is quite a high probability that I'll start a PhD in AI next year.
Isn't AI just another name for machine learning?
 

x42bn6

Full Member
Joined
Jun 23, 2008
Messages
18,887
Location
西田麻衣's cleavage. Being a nerd, geek and virgin
Isn't AI just another name for machine learning?
Machine learning is a "branch" of AI. It's AI that can learn. Like a child - a child learns and becomes more intelligent through its experiences. A child plays with fire and is burnt, and learns not to play with fire as a result. AIs don't have to learn - you can hard-code instructions and it would still be AI. An AI that drives a car but is only told to drive straight is still an AI - a stupid AI, but AI nevertheless.
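To make the distinction concrete, here's a toy sketch in Python (all names hypothetical, nothing from a real library): a hard-coded driving "AI" next to one that learns from experience.

```python
# Toy contrast between hard-coded AI and machine learning.

def hardcoded_driver(sensor_distance):
    """Hard-coded AI: always drives straight at a fixed speed,
    no matter what its sensors say. Stupid, but still AI."""
    return {"steer": 0.0, "speed": 30.0}

class LearningDriver:
    """Learning AI: adjusts its behaviour based on past outcomes,
    like the child who learns not to play with fire."""
    def __init__(self):
        self.speed = 30.0

    def act(self, sensor_distance):
        return {"steer": 0.0, "speed": self.speed}

    def learn(self, crashed):
        # A bad experience makes it more cautious next time;
        # a good one makes it slightly bolder.
        if crashed:
            self.speed *= 0.8
        else:
            self.speed *= 1.05

driver = LearningDriver()
driver.learn(crashed=True)
print(driver.speed)  # 24.0 - slower after the crash
```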

Regarding the OP - we're miles away from that sort of AI. We have domain-specific AIs that are really, really good (e.g. the reason you see few chess grandmasters play computers nowadays is that your smartphone has enough processing power to beat Magnus Carlsen - it's no contest), but general AIs are far away - probably not in this generation, although the next generation seems fairly likely at the very least. Still, who knows? For example, if we crack quantum computing in the next few decades, all bets are off, because the potential computational power is staggering. Although our first concern would be encryption, mind you...
 

adexkola

Doesn't understand sportswashing.
Joined
Mar 17, 2008
Messages
48,547
Location
The CL is a glorified FA Cup set to music
Supports
orderly disembarking on planes
Machine learning is a "branch" of AI. It's AI that can learn. Like a child - a child learns and becomes more intelligent through its experiences. A child plays with fire and is burnt, and learns not to play with fire as a result. AIs don't have to learn - you can hard-code instructions and it would still be AI. An AI that drives a car but is only told to drive straight is still an AI - a stupid AI, but AI nevertheless.

Regarding the OP - we're miles away from that sort of AI. We have domain-specific AIs that are really, really good (e.g. the reason you see few chess grandmasters play computers nowadays is that your smartphone has enough processing power to beat Magnus Carlsen - it's no contest), but general AIs are far away - probably not in this generation, although the next generation seems fairly likely at the very least. Still, who knows? For example, if we crack quantum computing in the next few decades, all bets are off, because the potential computational power is staggering. Although our first concern would be encryption, mind you...
Cheers.

What drew me to @Revan's post was his bold prediction that AI will become smarter than humans in the next 50 years. From my experience to date, computing power isn't the problem for the vast majority of problems out there; it's finding suitable algorithms that separate the wheat from the chaff. Neural networks are a first step, but I wouldn't say much progress has been made on that front in terms of unsupervised learning.
 

Ubik

Nothing happens until something moves!
Joined
Jul 8, 2010
Messages
18,959
Definitely gonna be the end of us. Also probably the reason we've had no signals from other civilizations; the robots are all too busy harvesting our precious body heat.
 

VeevaVee

The worst "V"
Scout
Joined
Jan 3, 2009
Messages
46,263
Location
Manchester
I don't have time to read it all, just had a super quick skim. Could someone help clear something up/give me the lowdown?

My first thought was that if we can control it, surely it could benefit us greatly?

But is the issue that they'll become so intelligent that they'll no longer accept being controlled by humans?

If we could control it, what would the limit be? Answers to everything we've ever wanted to know?
 

Raptori

Special needs
Joined
Aug 12, 2014
Messages
2,962
I don't have time to read it all, just had a super quick skim. Could someone help clear something up/give me the lowdown?

My first thought was that if we can control it, surely it could benefit us greatly?

But is the issue that they'll become so intelligent that they'll no longer accept being controlled by humans?

If we could control it, what would the limit be? Answers to everything we've ever wanted to know?
Basically, once an AI exists, it'll be able to make improvements to itself, which creates a runaway feedback loop with the result that it'll just keep getting more and more intelligent (until it reaches hardware limits).
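If you want to picture that loop, here's a toy model in Python (purely illustrative numbers, not a claim about real AI):

```python
# Toy model of recursive self-improvement: each generation redesigns
# itself to be 50% smarter, so intelligence grows exponentially
# until an assumed hardware ceiling stops the runaway growth.
intelligence = 1.0        # arbitrary units; 1.0 = the first AI
hardware_limit = 1e6      # assumed ceiling set by the hardware
generation = 0

while intelligence < hardware_limit:
    intelligence *= 1.5   # constant proportional improvement per step
    generation += 1

print(f"Hit the hardware limit after {generation} generations")
# -> Hit the hardware limit after 35 generations
```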

The problem is that we don't know, and cannot know, anything about what will happen afterwards. We can only have vague guesses about what is possible, and they include all sorts of crazy shit that seems completely unbelievable.


Off-topic: did you know that according to most of our (admittedly flawed) methods of estimating intelligence, cetaceans are more intelligent than humans? :D
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,798
Location
London
Cheers.

What drew me to @Revan's post was his bold prediction that AI will become smarter than humans in the next 50 years. From my experience to date, computing power isn't the problem for the vast majority of problems out there; it's finding suitable algorithms that separate the wheat from the chaff. Neural networks are a first step, but I wouldn't say much progress has been made on that front in terms of unsupervised learning.
Frankly speaking, my bold prediction would look pessimistic compared with what researchers in the field are saying. They say it's pretty certain that within that time, AI will surpass us. Having said that, they were saying the same thing 50 years ago. What x42 said is spot on. Nowadays, we have specialized AIs that can do many jobs better than humans, but each can do only that one job. But things may change fast. Both computing power and algorithms are improving (see Google Brain from Andrew Ng, for example).

Neural networks have come back on top again (they were promising with the perceptron, then it was found that they were useless, then backprop happened but wasn't used that well, and now they are getting used more and are able to outperform many other techniques). Their entire history since their inception (in the forties and fifties) has been this way (best thing ever, useless, best thing ever, useless, etc.).
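For anyone wondering what the perceptron actually does, here's a minimal sketch in Python (a toy, not how modern nets are trained): it learns AND, which is linearly separable; XOR is not, which is exactly the limitation that got the field written off in the sixties.

```python
# Minimal perceptron: nudge the weights whenever a sample is misclassified.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - pred          # -1, 0 or +1
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# AND is linearly separable, so the perceptron converges on it.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
print(w, b)  # roughly [0.2, 0.1] and -0.2: a working linear rule for AND
```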
 

Pogue Mahone

The caf's Camus.
Joined
Feb 22, 2006
Messages
134,169
Location
"like a man in silk pyjamas shooting pigeons
Machine learning is a "branch" of AI. It's AI that can learn. Like a child - a child learns and becomes more intelligent through its experiences. A child plays with fire and is burnt, and learns not to play with fire as a result. AIs don't have to learn - you can hard-code instructions and it would still be AI. An AI that drives a car but is only told to drive straight is still an AI - a stupid AI, but AI nevertheless.

Regarding the OP - we're miles away from that sort of AI. We have domain-specific AIs that are really, really good (e.g. the reason you see few chess grandmasters play computers nowadays is that your smartphone has enough processing power to beat Magnus Carlsen - it's no contest), but general AIs are far away - probably not in this generation, although the next generation seems fairly likely at the very least. Still, who knows? For example, if we crack quantum computing in the next few decades, all bets are off, because the potential computational power is staggering. Although our first concern would be encryption, mind you...
It's a fairly well-referenced piece and there's a section on exactly how far away we are from AGI/ASI, which is basically a poll of most of the foremost experts in the field. They seem to think it's not that far off at all. In most of our lifetimes, anyway.

 

Member 39557

Guest
As long as we can pull the plug before it gets aware enough to kill us.
 

RedSky

Shepherd’s Delight
Scout
Joined
Jul 27, 2006
Messages
74,306
Location
Hereford FC (Soccermanager)
Viewers of Person of Interest will be all too familiar with the dangers of AI! Forget Skynet, the AI in Person of Interest is more realistic and, if anything, more frightening IMO.

It's an interesting debate, though. The ultimate problem being that humans are simply not ready for AI, and there will always be those who will seek to profit from it or simply want to watch the world burn.
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,798
Location
London
As long as we can pull the plug before it gets aware enough to kill us.
Of course you won't be able to do that. Even primitive, barely intelligent computer viruses can replicate themselves across machines; just imagine what an AI more intelligent than any person could do. There would be no point where you could pull the plug, because the AI wouldn't have a single center and could easily transfer itself by copying itself onto other computers (exactly like Agent Smith does in The Matrix). The only way to pull the plug would be to remove electricity from the whole Earth (which IMO would be quite an impossible task).
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,798
Location
London
Viewers of Person of Interest will be all too familiar with the dangers of AI! Forget Skynet, the AI in Person of Interest is more realistic and, if anything, more frightening IMO.

It's an interesting debate, though. The ultimate problem being that humans are simply not ready for AI, and there will always be those who will seek to profit from it or simply want to watch the world burn.
Even more importantly, the leading researchers in the field don't seem to give a shit about how we will cope when we eventually have machines more intelligent than us. From the interviews of theirs I've listened to, they currently seem to care only about how to make such a machine (the AI dream), not about what to do afterwards. They also seem eager to point out that technology has always helped humanity, not destroyed it.

Maybe they're right though. It would be quite impossible to make plans for stopping a bad AI when we currently have no idea how such an AI would work and behave.

And yep, PoI is a really nice example of humans misusing AI. Which will definitely happen. I mean, they are already using current AI for profit, let alone what they'll do with something that intelligent.

What I am pretty sure of, though, is that an AI 'rebellion' won't happen like in The Matrix or Terminator. I doubt they'll have survival instincts, or the need to rule. If it happens, it will happen through some kind of misunderstanding, either the maths going slightly wrong (HAL 9000) or them finding some generalization of the problems we gave them whose solution is in itself very damaging (Mass Effect 3??).
 

swooshboy

Band of Brothers
Joined
Aug 3, 2004
Messages
10,744
Location
London
Even more importantly, the leading researchers in the field don't seem to give a shit about how we will cope when we eventually have machines more intelligent than us. From the interviews of theirs I've listened to, they currently seem to care only about how to make such a machine (the AI dream), not about what to do afterwards. They also seem eager to point out that technology has always helped humanity, not destroyed it.

Maybe they're right though. It would be quite impossible to make plans for stopping a bad AI when we currently have no idea how such an AI would work and behave.

And yep, PoI is a really nice example of humans misusing AI. Which will definitely happen. I mean, they are already using current AI for profit, let alone what they'll do with something that intelligent.

What I am pretty sure of, though, is that an AI 'rebellion' won't happen like in The Matrix or Terminator. I doubt they'll have survival instincts, or the need to rule. If it happens, it will happen through some kind of misunderstanding, either the maths going slightly wrong (HAL 9000) or them finding some generalization of the problems we gave them whose solution is in itself very damaging (Mass Effect 3??).

I haven't read the entire article in the OP, but there is a section that considers nanobots:

Anyway, I brought you here because there’s this really unfunny part of nanotechnology lore I need to tell you about. In older versions of nanotech theory, a proposed method of nanoassembly involved the creation of trillions of tiny nanobots that would work in conjunction to build something. One way to create trillions of nanobots would be to make one that could self-replicate and then let the reproduction process turn that one into two, those two then turn into four, four into eight, and in about a day, there’d be a few trillion of them ready to go. That’s the power of exponential growth. Clever, right?

It’s clever until it causes the grand and complete Earthwide apocalypse by accident. The issue is that the same power of exponential growth that makes it super convenient to quickly create a trillion nanobots makes self-replication a terrifying prospect. Because what if the system glitches, and instead of stopping replication once the total hits a few trillion as expected, they just keep replicating? The nanobots would be designed to consume any carbon-based material in order to feed the replication process, and unpleasantly, all life is carbon-based. The Earth’s biomass contains about 1045 carbon atoms. A nanobot would consist of about 106 carbon atoms, so 1039 nanobots would consume all life on Earth, which would happen in 130 replications (2130 is about 1039), as oceans of nanobots (that’s the gray goo) rolled around the planet. Scientists think a nanobot could replicate in about 100 seconds, meaning this simple mistake would inconveniently end all life on Earth in 3.5 hours.
 

Raptori

Special needs
Joined
Aug 12, 2014
Messages
2,962
I haven't read the entire article in the OP, but there is a section that considers nanobots:
Nanobots aren't really the same thing as AI, though, of course. Nanos have very high potential to be ridiculously destructive; AI isn't really on the same level :)
 

bishblaize

Full Member
Joined
Jan 23, 2014
Messages
4,280
Fun stuff, but nothing there that hasn't been well explored in sci-fi many times over. Let me put down two counter-arguments (with a snippet from each).

Three arguments against the singularity
By Charlie Stross
http://www.antipope.org/charlie/blog-static/2011/06/reality-check-1.html
...super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it's unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we're likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own....
The Betterness Explosion
By Robin Hanson
http://www.overcomingbias.com/2011/06/the-betterness-explosion.html
...to say that fearing that a new grand unified theory of intelligence will let one machine suddenly take over the world isn't that different from fearing that a grand unified theory of betterness will let one better person suddenly take over the world. This isn't to say that such a thing is impossible, but rather that we'd sure want some clearer indications that such a theory even exists before taking such a fear especially seriously.
 

RedSky

Shepherd’s Delight
Scout
Joined
Jul 27, 2006
Messages
74,306
Location
Hereford FC (Soccermanager)
Even more importantly, the leading researchers in the field don't seem to give a shit about how we will cope when we eventually have machines more intelligent than us. From the interviews of theirs I've listened to, they currently seem to care only about how to make such a machine (the AI dream), not about what to do afterwards. They also seem eager to point out that technology has always helped humanity, not destroyed it.
This will be where we as humans mess up. We make the monster but don't think about how to control it. If you build an AI that is capable of free thought, do you honestly think it will look at the human race in a warm light, considering it has billions of MBs of evidence to suggest otherwise? It's a very interesting/dangerous subject.

Robotic development isn't the scary subject; it's AI development that should frighten people.
 

Evil Fellaini

New Member
Joined
Dec 21, 2014
Messages
139
I like his monkey/skyscraper example. It's frightening to imagine what a "being" so far beyond our intelligence would be able to do.

This comment gave me the chills:
The first question asked of the AI: "Is there a god?" The AI's first answer: "There is now."
 

VeevaVee

The worst "V"
Scout
Joined
Jan 3, 2009
Messages
46,263
Location
Manchester
I can imagine war being a primary reason for wanting to create such powerful, intelligent machines. They could hold back for safety purposes in other areas of use, but they won't with war, because they'll want the most powerful machines, won't they? Scary thought.

Also, what would happen if immortality was achieved? Only the richest, most powerful humans get it? Everyone gets it and birth is controlled?

Also also, is it possible machines could do almost all jobs and create everything everybody wants, so humans can live in utopia, doing as they please, living millionaire-type lifestyles?