Artificial intelligence | 2022 edition

jojojo

JoJoJoJoJoJoJo
Staff
Joined
Aug 18, 2007
Messages
38,364
Location
Welcome to Manchester reception committee
Starting a new thread because
https://www.redcafe.net/threads/artificial-intelligence.401225/
really is too old to bump again...

A report today on AI (or perhaps more fairly, machine learning) in a medical setting. A nice reminder that, before we throw the expertise baby out with the "routine task that should be automated" bathwater, we need to make sure our AI is as smart as it looks on the first run:

 

Simbo

Full Member
Joined
Oct 25, 2010
Messages
5,235
What is "AI" nowadays to a layman?

Code that writes its own code? Based on rules set on how many if statements developers bother to put in?
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,696
Location
London
What is "AI" nowadays to a layman?

Code that writes its own code? Based on rules set on how many if statements developers bother to put in?
Definitely not. No one has been trying to do that for the last 30 years.

Pretty much, AI nowadays means deep learning. You train large neural networks on some training dataset, with the idea that they will be able to generalize to the real world, measured by their performance on a test set. The problem is that while this works when the training and test sets come from the same distribution, the networks typically fail to do well when the test set has a different distribution from the training set. And the real world almost always has a different distribution from whatever training set you come up with.

Still, there is some very good progress in the field. In no particular order:

a) image classification, retrieval, segmentation, and recognition work relatively well at this stage. You can take an off-the-shelf object recognizer and use it in the real world with decent results. The same goes for the other tasks I mentioned.
b) text understanding is getting better and better. Look at this model interpreting 'jokes' given as input.
c) there is progress on self-driving cars, although much work remains to be done. Still, companies like Google, Argo AI, Aurora, Cruise, Tesla, and Nvidia are doing good work there.
d) some progress in medical imaging, though I am not very familiar with that area.
e) protein folding from DeepMind, magnificent progress.
f) of course, AIs destroying human players in games, be it chess, Go, or StarCraft, though this is not massively useful.
g) meta-learning: AIs that teach other AIs things which right now are decided by programmers/research scientists.

While you can imagine AI as code which writes code, right now that written code is not loops and conditionals but the weights of a neural network. And a neural network can be considered a function approximator, or even a program approximator.
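To make the distribution-shift point concrete, here is a toy sketch (my own illustration, not anything from the post): a small network trained on one distribution scores well on test data drawn from that same distribution and collapses once the test distribution shifts. The dataset, network size, and amount of shift are all arbitrary placeholders.

```python
# A toy sketch (not from the post) of the train/test distribution problem
# described above: a small network trained on one distribution does fine on
# matching test data and falls apart when the test distribution shifts.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two Gaussian classes; `shift` moves both clusters to mimic deployment
    # data that no longer looks like the training set.
    x0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(1000)
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)

X_same, y_same = make_data(500, shift=0.0)    # same distribution as training
X_shift, y_shift = make_data(500, shift=3.0)  # shifted distribution

print("in-distribution accuracy:", accuracy_score(y_same, net.predict(X_same)))
print("shifted accuracy:        ", accuracy_score(y_shift, net.predict(X_shift)))
```

The shifted accuracy drops towards chance even though nothing about the model changed, which is exactly the training/test mismatch described above.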
 

Code-CX

Full Member
Joined
Jan 18, 2011
Messages
1,169
Definitely not. No one has been trying to do that for the last 30 years.

Pretty much, AI nowadays means deep learning. You train large neural networks on some training dataset, with the idea that they will be able to generalize to the real world, measured by their performance on a test set. The problem is that while this works when the training and test sets come from the same distribution, the networks typically fail to do well when the test set has a different distribution from the training set. And the real world almost always has a different distribution from whatever training set you come up with.

Still, there is some very good progress in the field. In no particular order:

a) image classification, retrieval, segmentation, and recognition work relatively well at this stage. You can take an off-the-shelf object recognizer and use it in the real world with decent results. The same goes for the other tasks I mentioned.
b) text understanding is getting better and better. Look at this model interpreting 'jokes' given as input.
c) there is progress on self-driving cars, although much work remains to be done. Still, companies like Google, Argo AI, Aurora, Cruise, Tesla, and Nvidia are doing good work there.
d) some progress in medical imaging, though I am not very familiar with that area.
e) protein folding from DeepMind, magnificent progress.
f) of course, AIs destroying human players in games, be it chess, Go, or StarCraft, though this is not massively useful.
g) meta-learning: AIs that teach other AIs things which right now are decided by programmers/research scientists.

While you can imagine AI as code which writes code, right now that written code is not loops and conditionals but the weights of a neural network. And a neural network can be considered a function approximator, or even a program approximator.
I'm currently doing an MSc in Data Science where we have to run our choice of classifier to correctly identify images from a given dataset. I've tried different ML models with extensive hyperparameter trials and none of them even came close to the results obtained from convolutional neural networks (and I haven't even tuned those extensively yet). Maybe it's just that neural networks work better for image data, but it's quite remarkable how accurate they can be.

The only problem I'm facing is that my GPU sucks hard, so I have to run my code through Google Colab, which loves disconnecting my runtime at random times.
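For anyone wondering what that kind of convolutional classifier looks like in code, here is a minimal Keras sketch; MNIST and the layer sizes are stand-ins rather than the actual coursework dataset or architecture, and it will happily run on a free Colab GPU.

```python
# A minimal convolutional image classifier of the kind described above.
# MNIST is used purely as a stand-in dataset.
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add channel dimension, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```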
 

Revan

Assumptionman
Joined
Dec 19, 2011
Messages
49,696
Location
London
I'm currently doing an MSc in Data Science where we have to run our choice of classifier to correctly identify images from a given dataset. I've tried different ML models with extensive hyperparameter trials and none of them even came close to the results obtained from convolutional neural networks (and I haven't even tuned those extensively yet). Maybe it's just that neural networks work better for image data, but it's quite remarkable how accurate they can be.

The only problem I'm facing is that my GPU sucks hard, so I have to run my code through Google Colab, which loves disconnecting my runtime at random times.
If you're working with images/text/speech, there is no point in even trying non-deep-learning solutions.

On the other hand, if you're working with tabular data, it is hard to beat gradient boosting methods, though some recent types of neural networks get close to them.
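As a rough illustration of the tabular case (a synthetic dataset of my own, not anything Revan is referring to), scikit-learn's histogram-based gradient boosting takes about as little code as the neural-network sketches above:

```python
# Gradient boosting on synthetic "tabular" data: rows of numeric features
# with a binary label, the setting where boosting is hard to beat.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

model = HistGradientBoostingClassifier(max_iter=200, learning_rate=0.1,
                                       random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```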
 

11101

Full Member
Joined
Aug 26, 2014
Messages
21,324
Definitely not. No one has been trying to do that for the last 30 years.

Pretty much, AI nowadays means deep learning. You train large neural networks on some training dataset, with the idea that they will be able to generalize to the real world, measured by their performance on a test set. The problem is that while this works when the training and test sets come from the same distribution, the networks typically fail to do well when the test set has a different distribution from the training set. And the real world almost always has a different distribution from whatever training set you come up with.

Still, there is some very good progress in the field. In no particular order:

a) image classification, retrieval, segmentation, and recognition work relatively well at this stage. You can take an off-the-shelf object recognizer and use it in the real world with decent results. The same goes for the other tasks I mentioned.
b) text understanding is getting better and better. Look at this model interpreting 'jokes' given as input.
c) there is progress on self-driving cars, although much work remains to be done. Still, companies like Google, Argo AI, Aurora, Cruise, Tesla, and Nvidia are doing good work there.
d) some progress in medical imaging, though I am not very familiar with that area.
e) protein folding from DeepMind, magnificent progress.
f) of course, AIs destroying human players in games, be it chess, Go, or StarCraft, though this is not massively useful.
g) meta-learning: AIs that teach other AIs things which right now are decided by programmers/research scientists.

While you can imagine AI as code which writes code, right now that written code is not loops and conditionals but the weights of a neural network. And a neural network can be considered a function approximator, or even a program approximator.
b) still has a long way to go. My company uses it extensively to gauge sentiment, and it's not all that reliable. It struggles with context. You'll get it telling you that a piece of text about missing a climate change target is negative about climate change as a whole. Or maybe our engineers are just shit.
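For context, this is roughly how off-the-shelf sentiment scoring tends to be run (a generic Hugging Face pipeline sketch, not the company system described above). Whether a given model mislabels the climate-target example depends entirely on the model; the point is just how easily a classifier trained on surface sentiment gets applied to text where the target of the sentiment matters.

```python
# A hedged sketch of off-the-shelf sentiment scoring. The first sentence is
# negative about the *company*, not about climate action itself; a model
# reading surface sentiment can conflate the two.
from transformers import pipeline  # assumes the Hugging Face transformers package

sentiment = pipeline("sentiment-analysis")  # downloads a default English model

texts = [
    "The company missed its climate change target by a wide margin.",
    "We are delighted to announce record profits this quarter.",
]
for text, result in zip(texts, sentiment(texts)):
    print(result["label"], round(result["score"], 3), "-", text)
```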
 

nimic

something nice
Scout
Joined
Aug 2, 2006
Messages
31,535
Location
And I'm all out of bubblegum.
That's very interesting, but his chatbot was definitely not sentient. That guy does seem slightly unhinged.


The Post said the decision to place Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, on paid leave was made following a number of “aggressive” moves the engineer reportedly made.

They include seeking to hire an attorney to represent LaMDA, the newspaper says.
 

Hoof the ball

Full Member
Joined
Jan 16, 2008
Messages
12,326
Location
San Antonio, Texas.
Definitely not. No one has been trying to do that for the last 30 years.

Pretty much, AI nowadays means deep learning. You train large neural networks on some training dataset, with the idea that they will be able to generalize to the real world, measured by their performance on a test set. The problem is that while this works when the training and test sets come from the same distribution, the networks typically fail to do well when the test set has a different distribution from the training set. And the real world almost always has a different distribution from whatever training set you come up with.

Still, there is some very good progress in the field. In no particular order:

a) image classification, retrieval, segmentation, and recognition work relatively well at this stage. You can take an off-the-shelf object recognizer and use it in the real world with decent results. The same goes for the other tasks I mentioned.
b) text understanding is getting better and better. Look at this model interpreting 'jokes' given as input.
c) there is progress on self-driving cars, although much work remains to be done. Still, companies like Google, Argo AI, Aurora, Cruise, Tesla, and Nvidia are doing good work there.
d) some progress in medical imaging, though I am not very familiar with that area.
e) protein folding from DeepMind, magnificent progress.
f) of course, AIs destroying human players in games, be it chess, Go, or StarCraft, though this is not massively useful.
g) meta-learning: AIs that teach other AIs things which right now are decided by programmers/research scientists.

While you can imagine AI as code which writes code, right now that written code is not loops and conditionals but the weights of a neural network. And a neural network can be considered a function approximator, or even a program approximator.
I'm currently doing an MSc in Data Science where we have to run our choice of classifier to correctly identify images from a given dataset. I've tried different ML models with extensive hyperparameter trials and none of them even came close to the results obtained from convolutional neural networks (and I haven't even tuned those extensively yet). Maybe it's just that neural networks work better for image data, but it's quite remarkable how accurate they can be.

The only problem I'm facing is that my GPU sucks hard, so I have to run my code through Google Colab, which loves disconnecting my runtime at random times.
Absolutely.

 

dumbo

Don't Just Fly…Soar!
Scout
Joined
Jan 6, 2008
Messages
9,380
Location
Thucydides nuts
Truly scary, the power that these giddy tech weirdos have been afforded, and the deference with which they're treated. They are the Jesus-toast evangelists of the digital age.
 

nimic

something nice
Scout
Joined
Aug 2, 2006
Messages
31,535
Location
And I'm all out of bubblegum.
An AI just made this. I'm glad the full version isn't available to anyone who isn't basically an AI researcher, or I'd truly get nothing done.



This is what the full current-generation version can do. It's actually almost unbelievable. It is going to have a massive impact on the creative industry in a few years, I bet. Why hire an artist for your website or game if an AI can do this?

 

Cascarino

Magnum Poopus
Joined
Jul 17, 2014
Messages
7,616
Location
Wales
Supports
Swansea
It is going to have a massive impact on the creative industry in a few years, I bet. Why hire an artist for your website or game if an AI can do this?
They’re incredible. Disconcerting too; creativity is the last human domain. Even outside of visual art, we potentially have AI coming which could write beautiful novels or poetry, maybe a cinematic masterpiece conceptualised and executed from the mind of a machine, yet one that could speak to the human experience, maybe even something more.

Outsourcing art would be a high price to pay to get Chuck Lorre off our screens
 

nimic

something nice
Scout
Joined
Aug 2, 2006
Messages
31,535
Location
And I'm all out of bubblegum.
They’re incredible. Disconcerting too; creativity is the last human domain. Even outside of visual art, we potentially have AI coming which could write beautiful novels or poetry, maybe a cinematic masterpiece conceptualised and executed from the mind of a machine, yet one that could speak to the human experience, maybe even something more.

Outsourcing art would be a high price to pay to get Chuck Lorre off our screens
Yep. Whenever the next wave of automation has been discussed, art has always been put forward as the thing robots simply can't convincingly do. I doubt high art is going away anytime soon, but the sort of mass-market, for-profit art that many artists do to earn a living? That might be going sooner rather than later.
 
Last edited:

Superden

Full Member
Joined
Jul 13, 2013
Messages
2,112
AI art just exposes how much of the conversation around high art is bullshit.
 

oneniltothearsenal

Caf's Milton Friedman and Arse Aficionado
Scout
Joined
Dec 17, 2013
Messages
11,186
Supports
Brazil, Arsenal,LA Aztecs
They’re incredible. Disconcerting too; creativity is the last human domain. Even outside of visual art, we potentially have AI coming which could write beautiful novels or poetry, maybe a cinematic masterpiece conceptualised and executed from the mind of a machine, yet one that could speak to the human experience, maybe even something more.

Outsourcing art would be a high price to pay to get Chuck Lorre off our screens
I'm still doubtful about some of this. AI-generated music (at least what I've heard) is utter crap and misses what I think makes the best music. And without semantics, which AI still doesn't have in any way, I also doubt an AI can write a novel anywhere near the level of the best novelists.

I think, until we have a true, self-aware, fully conscious AI, things like great music or a beautiful novel will be beyond what any AI can produce.

Until a conscious AI can understand the difference in meaning between "time flies like an arrow" and "fruit flies like an apple" without following specific programmed rules to simulate semantics, I think there will be a hard limit on AI in creative disciplines.
 

Cascarino

Magnum Poopus
Joined
Jul 17, 2014
Messages
7,616
Location
Wales
Supports
Swansea
I'm still doubtful about some of this. AI-generated music (at least what I've heard) is utter crap and misses what I think makes the best music. And without semantics, which AI still doesn't have in any way, I also doubt an AI can write a novel anywhere near the level of the best novelists.

I think, until we have a true, self-aware, fully conscious AI, things like great music or a beautiful novel will be beyond what any AI can produce.

Until a conscious AI can understand the difference in meaning between "time flies like an arrow" and "fruit flies like an apple" without following specific programmed rules to simulate semantics, I think there will be a hard limit on AI in creative disciplines.
Aye, I agree. I didn’t mean anytime soon, but rather the possibility that one day we could have AI creations that transcend what we consider feasible in today’s time. Thirty years ago, what is possible these days with AI would have seemed impossible (or maybe not? I’m not sure on this; I’m just assuming it to be the case, but I could be way off).
 

oneniltothearsenal

Caf's Milton Friedman and Arse Aficionado
Scout
Joined
Dec 17, 2013
Messages
11,186
Supports
Brazil, Arsenal,LA Aztecs
Aye, I agree. I didn’t mean anytime soon, but rather the possibility that one day we could have AI creations that transcend what we consider feasible in today’s time. Thirty years ago, what is possible these days with AI would have seemed impossible (or maybe not? I’m not sure on this; I’m just assuming it to be the case, but I could be way off).
For me, I think we are still very much within the limits that people like Hubert Dreyfus identified in the 1970s: the limits of following algorithmic programming. Definitely, some of the visual art now is impressive, but I don't think AI in general can break through that hard limit until it's conscious and can understand meaning. Certainly, AI can do incredible things now, and with more advances it will replace certain things, but it still relies on inputs in a way human creativity is not limited by. So I think the hard limits are still potentially well outside our lifetimes. Although, heck, I could be way off and we might see replicants in our lifetimes!
 

Cascarino

Magnum Poopus
Joined
Jul 17, 2014
Messages
7,616
Location
Wales
Supports
Swansea
For me, I think we are still very much within the limits that people like Hubert Dreyfus identified in the 1970s: the limits of following algorithmic programming. Definitely, some of the visual art now is impressive, but I don't think AI in general can break through that hard limit until it's conscious and can understand meaning. Certainly, AI can do incredible things now, and with more advances it will replace certain things, but it still relies on inputs in a way human creativity is not limited by. So I think the hard limits are still potentially well outside our lifetimes. Although, heck, I could be way off and we might see replicants in our lifetimes!
That’s interesting, I guess as you and nimic have pointed out there’s a difference between the type of visual art shown in this thread (and possible advancements in this area) to creating wholly unique pieces of work like I envisioned (Shakespearean style writings etc).
 

Pexbo

Winner of the 'I'm not reading that' medal.
Joined
Jun 2, 2009
Messages
68,761
Location
Brizzle
Supports
Big Days
For me, I think we are still very much within the limits that people like Hubert Dreyfus identified in the 1970s: the limits of following algorithmic programming. Definitely, some of the visual art now is impressive, but I don't think AI in general can break through that hard limit until it's conscious and can understand meaning. Certainly, AI can do incredible things now, and with more advances it will replace certain things, but it still relies on inputs in a way human creativity is not limited by. So I think the hard limits are still potentially well outside our lifetimes. Although, heck, I could be way off and we might see replicants in our lifetimes!
What do you mean by this? Humans and AI are not dissimilar at all in the way they consume data: it’s sensors which convert data into binary signals which are then interpreted.

I’m not sure what you mean by algorithmic programming either; programs will always be based on algorithms in one sense or another. These AIs are based on deep neural networks, which bear little relation to programming as most people think of it. When a network is trained, you can’t analyse the algorithm itself and learn anything from it; it’s only the outputs that are interesting. It’s not like someone could read the code and understand or predict its outputs.

The same neural network can be trained to do a number of different things, given a different set of training data. The clue is in the name, really: they’re based on the brain.
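A small sketch of that last point, under my own toy setup: the same architecture and the same training code, pointed at two unrelated datasets, end up doing two different jobs, and nothing in the code itself tells you which. MNIST digits and Fashion-MNIST clothing are just stand-ins for "different training data".

```python
# One architecture, two unrelated tasks, distinguished only by the data it
# is trained on: the learned weights, not the code, encode what it "does".
import tensorflow as tf
from tensorflow.keras import layers, models

def build_net():
    # Identical architecture every time.
    return models.Sequential([
        layers.Flatten(input_shape=(28, 28)),
        layers.Dense(128, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])

for name, dataset in [("digits", tf.keras.datasets.mnist),
                      ("clothing", tf.keras.datasets.fashion_mnist)]:
    (x_train, y_train), _ = dataset.load_data()
    net = build_net()
    net.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
    net.fit(x_train / 255.0, y_train, epochs=1, verbose=0)
    print(name, "classifier trained with the same code and architecture")
```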
 

PedroMendez

Acolyte
Joined
Aug 9, 2013
Messages
9,466
Location
the other Santa Teresa
What do you mean by this? Humans and AI are not dissimilar at all in the way they consume data: it’s sensors which convert data into binary signals which are then interpreted.

I’m not sure what you mean by algorithmic programming either; programs will always be based on algorithms in one sense or another. These AIs are based on deep neural networks, which bear little relation to programming as most people think of it. When a network is trained, you can’t analyse the algorithm itself and learn anything from it; it’s only the outputs that are interesting. It’s not like someone could read the code and understand or predict its outputs.

The same neural network can be trained to do a number of different things, given a different set of training data. The clue is in the name, really: they’re based on the brain.
Considering that there is no comprehensive theory of how human minds work, it's hard to answer this question, but very little suggests that humans work just like a scaled-up neural network. Neural networks are essentially big statistical engines that find correlations. That's why they feature so prominently nowadays: modern hardware has sufficient computing power to process big datasets. Humans seem to do something very, very different.
 

oneniltothearsenal

Caf's Milton Friedman and Arse Aficionado
Scout
Joined
Dec 17, 2013
Messages
11,186
Supports
Brazil, Arsenal,LA Aztecs
What do you mean by this? Humans and AI are not dissimilar at all in the way they consume data: it’s sensors which convert data into binary signals which are then interpreted.

I’m not sure what you mean by algorithmic programming either; programs will always be based on algorithms in one sense or another. These AIs are based on deep neural networks, which bear little relation to programming as most people think of it. When a network is trained, you can’t analyse the algorithm itself and learn anything from it; it’s only the outputs that are interesting. It’s not like someone could read the code and understand or predict its outputs.

The same neural network can be trained to do a number of different things, given a different set of training data. The clue is in the name, really: they’re based on the brain.
It's true that modern AI has adapted and is taking an approach influenced by the neural network crowd, moving towards trying to emulate what the human mind actually does. But as @PedroMendez mentions, we still don't really understand how the mind works or how to replicate it. The approach is certainly better now than back in Minsky's heyday, but it's not really the same as what our human minds are doing yet. There is still a huge gap between producing a result and understanding meaning. Even neural-net AI can't actually understand the difference between "time flies like an arrow" and "fruit flies like an apple".

A few examples that highlight these differences: what Kahneman and the behavioral economists call System 1 and System 2 problem solving (research documented in Thinking Fast and Slow), which doesn't appear to work the way our current AI works, and the way human expertise works as described by psychologist Csikszentmihalyi's concept of flow, which involves processes different from how AI expert systems function.

One major difference that gets overlooked is the fact that we have embodied mind and that embodiment fundamentally affects our form of consciousness and may, in fact, be necessary for consciousness as we know it to exist. Basically why Descartes was wrong and the entire idea of separation of mind and body is incorrect. There is a long tradition of embodied consciousness from Merleau-Ponty to George Lakoff (and including William James, the phenomenologists, and more).

Cognitive science calls this entire philosophical worldview into serious question on empirical grounds... [the mind] arises from the nature of our brains, bodies, and bodily experiences. This is not just the innocuous and obvious claim that we need a body to reason; rather, it is the striking claim that the very structure of reason itself comes from the details of our embodiment... Thus, to understand reason we must understand the details of our visual system, our motor system, and the general mechanism of neural binding.
source

Personally, I think all this research, taken together, means that even software that emulates neural networks to a degree, as we have now, is inherently going to hit a hard limit because it can't hit that threshold of actually understanding. More computational power isn't sufficient to get it there. I believe someone like Lakoff would argue that what's missing in disembodied neural networks is the fundamental mechanism that leads to being able to understand meaning, which is the threshold that AI hasn't crossed and is a precondition for producing original masterpieces like Shakespeare, Mozart, or perhaps more obviously, dancers like Les Twins.

I do think artificial intelligence that is conscious, self-aware, and understands the difference in the two sentences above is possible and will happen at some point in the future (but further out than some believe), but it's going to have to be embodied and much more like a Blade Runner replicant/Westworld host than a disembodied computer program. And the type of embodiment would inevitably affect how that artificial consciousness works.
 

BD

technologically challenged barbie doll
Joined
Sep 1, 2011
Messages
23,199
Most of the times you'll read 'AI' in the media, it's a journalistic buzzword. That Google sentient chatbot was overblown - it was presumably just trained on text about 'self'. And the questions were asked in a backwards way - as if he asked questions so as to get a particular answer. The questions didn't follow very naturally.

I work quite close to machine learning research, and sometimes I despair that such an interesting research area is dominated by the people it is. It could be in better hands, let's just say.
 

Superden

Full Member
Joined
Jul 13, 2013
Messages
2,112
I was listening to something on radio 4 and it was all about AI and art and how it will not be able to capture the emotion of a brush stroke. What crap. I painted my landing when I was in a right grump, and it looks beautiful.
In what way?
 

oneniltothearsenal

Caf's Milton Friedman and Arse Aficionado
Scout
Joined
Dec 17, 2013
Messages
11,186
Supports
Brazil, Arsenal,LA Aztecs
Just to expand a little more, I think one important key is experience. Embodied theorists like Lakoff believe that conceptual metaphor is essential to our consciousness and that conceptual metaphor is grounded in our embodied experience. So while a toddler inherently understands the concept "over your head" when it's applied more abstractly, how would a disembodied AI system that learns through pattern recognition understand that experience? Or, on a more visceral level, how could a disembodied AI like what we have now begin to understand what sex feels like, or how sex at different times with different people can feel very different despite being the same physical act? This is why they believe embodiment is so foundational, and why without it a disembodied AI, even one that can learn and evolve in some ways, simply can't understand the meaning of sentences the way we can.

This relates to other concepts like emotions. Our consciousness isn't just based on our neuronal networks but on our entire bodies. We instinctively understand "fight or flight" because we experience the rush of adrenal chemicals being dumped through our system. Just emulating a neuronal network isn't enough to replicate the complete experience of living in a human body, by a long shot. We take that embodied experience for granted because it's been part of our native understanding of the world since birth, but a disembodied AI system completely lacks it, and it's doubtful whether that can be replicated any other way than through embodiment like ours. Can it understand pain? Or desire? Or nostalgia? Without understanding those concepts in an experiential way, I would say producing great art is impossible.

Ultimately, when it comes to art, the question might be: where does imagination come from? Can a great work of art be produced without imagination? I'd say it's extremely doubtful.
 

nimic

something nice
Scout
Joined
Aug 2, 2006
Messages
31,535
Location
And I'm all out of bubblegum.
Do you need imagination when you can perfectly replicate the results? That's where we're headed with these AIs. They'll need some prompting, but pretty soon you probably won't be able to tell if a piece of art was made by a person or an AI.
 

oneniltothearsenal

Caf's Milton Friedman and Arse Aficionado
Scout
Joined
Dec 17, 2013
Messages
11,186
Supports
Brazil, Arsenal,LA Aztecs
Do you need imagination when you can perfectly replicate the results? That's where we're headed with these AIs. They'll need some prompting, but pretty soon you probably won't be able to tell if a piece of art was made by a person or an AI.
I would argue yes, you do, because you can't replicate the results for something like a great novel. Without an embodied, sentient intelligence with experience of 3D space, emotions, and more, I would argue even the most advanced AI based on our current systems will never come close to producing a great novel, when all the great novels rely on a depth of understanding of embodied experience. It will take something like a replicant/host level of AI to achieve that.

And for the visual arts, where it seems closest, I think your earlier post was spot on: commercial, logo-type art, sure, AI can replicate that, but something like Bosch's The Garden of Earthly Delights will be beyond an AI. (Curious what @harms thinks about this, as he is IIRC a fine art professional.)
 
Last edited: