Since the advent of computer systems, researchers have striven to build machines that can rival the human brain. Much of that effort has gone into developing systems that could beat humans at games and other tasks.
A philosopher from the University of Houston has taken a completely different approach. Instead of studying how artificial intelligence competes with the human brain, he is examining how humans acquire abstract knowledge by deconstructing the complex neural networks used in machine learning.
By studying how these machine learning systems work, Cameron Buckner aims to gain a better understanding of how the human brain learns. His goal is to help settle a long-running debate over whether human learning is innate or stems from experience. In his paper published in Synthese, Buckner concludes that Deep Convolutional Neural Networks (DCNNs) show that human learning is based on experience, supporting the philosophical school of thought known as empiricism.
A DCNN is a multilayered artificial neural network whose nodes pass information along from layer to layer, loosely analogous to neurons in the brain. In doing so, it offers a demonstration of how abstract knowledge can be acquired.
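To make that layered structure concrete, here is a minimal, illustrative sketch of a small convolutional network written in PyTorch. It is not Buckner's model or any specific system from his paper, only a generic example of how stacked convolutional layers pass information forward and extract increasingly abstract features from raw pixel input.

# Illustrative only: a tiny deep convolutional neural network (DCNN) sketch,
# not the specific networks discussed in Buckner's paper.
import torch
import torch.nn as nn

class TinyDCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Each convolutional layer passes transformed information to the next,
        # building progressively more abstract features from raw pixels.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level edges and colors
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # higher-level parts
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)      # abstract category decision

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: a batch of four 32x32 RGB images yields four class-score vectors.
scores = TinyDCNN()(torch.randn(4, 3, 32, 32))
print(scores.shape)  # torch.Size([4, 10])

The point of the sketch is simply that each layer receives only what the previous layer passes along, so the "abstraction" emerges step by step from experience with raw input rather than being built in from the start.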
Most scientists have concentrated on the results neural networks produce rather than on how the networks actually function. Although many have echoed the views of empiricists such as Aristotle and John Locke, they have largely avoided asking the "why and how" of artificial intelligence.
Buckner, however, has set out to decipher how abstract thinking arises. As artificial intelligence systems advance and machines begin to do what once came naturally only to humans, understanding how both machines and humans arrive at abstract thought becomes all the more important.
References
ScienceDaily (9 October 2018): https://www.sciencedaily.com/releases/2018/10/181009115022.htm
