What is "AI" nowadays to a layman?
Code that writes its own code? Rule-based systems built out of however many if statements developers bother to put in?
Definitely not. No one has been trying to do that for the last 30 years.
Pretty much, AI nowadays means deep learning. You train large neural networks on some training dataset, with the idea that they will be able to generalize to the real world, measured by their performance on a test set. The problem is that while this works when the training and test sets come from the same distribution, the networks typically fail to do well when the test set has a different distribution from the training set. And, well, the real world has a different distribution from whatever training set you come up with.
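To make the distribution-shift point concrete, here is a minimal sketch (my own illustration, not from the original post) using scikit-learn. A simple classifier stands in for a big network: it is trained on synthetic two-class data, then evaluated both on held-out data from the same distribution and on a shifted copy of it. The synthetic data and the shift amount are assumptions chosen purely for demonstration.

```python
# Minimal sketch of distribution shift (illustrative assumptions throughout).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two Gaussian classes; `shift` moves the whole population,
    # mimicking a real-world distribution that differs from training.
    X0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_data(1000)                        # training distribution
X_test_iid, y_test_iid = make_data(1000)                  # same distribution
X_test_shift, y_test_shift = make_data(1000, shift=1.5)   # shifted distribution

clf = LogisticRegression().fit(X_train, y_train)
print("i.i.d. test accuracy:  ", clf.score(X_test_iid, y_test_iid))      # high
print("shifted test accuracy: ", clf.score(X_test_shift, y_test_shift))  # much lower
```

The exact numbers do not matter; the point is that accuracy on data from the training distribution stays high while accuracy on the shifted data drops sharply, even though nothing about the classifier changed.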
Still, there is some very good progress in the field. In no particular order:
a) image classification, retrieval, segmentation, and recognition work relatively well at this stage. You can take an off-the-shelf object recognizer and use it in the real world with decent results (see the sketch after this list). The same goes for the other tasks I mentioned.
b) text understanding is getting better and better. Look at recent large language models interpreting 'jokes' given as input.
c) there is progress on self-driving cars, although much work remains to be done. Companies like Google, Argo AI, Aurora, Cruise, Tesla, and Nvidia are doing good work there.
d) some progress in medical imaging, though I am not very familiar with that area.
e) protein folding from DeepMind (AlphaFold): magnificent progress.
f) of course, AIs destroying human players in games, be it chess, Go, or StarCraft, but this is not massively useful.
g) meta-learning: AIs that teach other AIs, learning many of the things that right now are decided by programmers/research scientists.
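Point (a) above mentioned taking an off-the-shelf recognizer and just using it. Here is a minimal sketch of what that looks like in practice (my own illustration, not from the post), using a torchvision ResNet-50 pretrained on ImageNet; the file name "photo.jpg" is a hypothetical placeholder.

```python
# Minimal sketch: classify one image with a pretrained model (illustrative only).
import torch
from torchvision import models
from PIL import Image

# Load a ResNet-50 pretrained on ImageNet and switch to inference mode.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

# The weights object ships the matching preprocessing (resize, crop, normalize).
preprocess = weights.transforms()
image = Image.open("photo.jpg")          # hypothetical input image
batch = preprocess(image).unsqueeze(0)   # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
    top_prob, top_idx = probs[0].max(dim=0)

print(weights.meta["categories"][top_idx.item()], float(top_prob))
```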
While you can imagine AI as code which writes code, right now that written 'code' is not loops and conditionals, but the weights of a neural network. And a neural network can be considered a function approximator, or even a program approximator.
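To illustrate the function-approximator view, here is a small sketch (my own, assuming PyTorch is available): a tiny MLP is trained to approximate y = sin(x), and the learned weights play the role of the 'written code'.

```python
# Minimal sketch: a neural network as a function approximator (illustrative only).
import torch
from torch import nn

torch.manual_seed(0)

# Training data: samples of the function we want the network to approximate.
x = torch.linspace(-3.14, 3.14, 256).unsqueeze(1)
y = torch.sin(x)

# The "program" the training process writes is just these weights.
net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()

print("final MSE:", float(loss))                        # close to zero
print("net(1.0):", float(net(torch.tensor([[1.0]]))),   # roughly sin(1.0) = 0.84
      "sin(1.0):", float(torch.sin(torch.tensor(1.0))))
```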