Cyriel M. A. Pennartz
- Published in print: 2015
- Published Online: May 2016
- ISBN: 9780262029315
- eISBN: 9780262330121
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262029315.003.0004
- Subject: Neuroscience, Behavioral Neuroscience
What are neural network models, what kind of cognitive processes can they perform, and what do they teach us about representations and consciousness? First, this chapter explains the functioning of reduced neuron models. We construct neural networks using these building blocks and explore how they accomplish memory, categorization, and other tasks. Computational advantages of parallel-distributed networks are considered, and we explore their emergent properties, such as pattern completion. Artificial neural networks appear instructive for understanding consciousness, as they illustrate how stable representations can be achieved in dynamic systems. More importantly, they show how low-level processes result in high-level phenomena such as memory retrieval. However, an essential remaining problem is that neural networks do not possess a mechanism specifying what kind of information (e.g., sensory modality) they process. Going back to the classic labeled-lines hypothesis, it is argued that this hypothesis does not solve the question of how the brain differentiates the various sensory inputs it receives into distinct modalities. The brain is observed to live in a "Cuneiform room": it only receives and emits spike messages, and these are the only source materials from which it can construct modally differentiated experiences.
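To make the pattern-completion idea concrete, here is a minimal sketch (an illustration, not code from the chapter) of a Hopfield-style attractor network: bipolar patterns are stored with a Hebbian outer-product rule, and a stored pattern is recovered from a corrupted cue. The network size, corruption level, and update schedule are illustrative assumptions.

```python
# Hypothetical sketch of pattern completion in a Hopfield-style network:
# a low-level update rule yields the high-level phenomenon of retrieval.
import numpy as np

rng = np.random.default_rng(0)

# Store two random bipolar (+1/-1) patterns via the Hebbian outer-product rule.
n = 64
patterns = rng.choice([-1, 1], size=(2, n))
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)  # no self-connections

# Corrupt a stored pattern: flip 20% of its units to make a partial cue.
cue = patterns[0].copy()
flip = rng.choice(n, size=n // 5, replace=False)
cue[flip] *= -1

# Asynchronous updates: each unit takes the sign of its weighted input.
state = cue.copy()
for _ in range(5):  # a few sweeps suffice at this scale
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("overlap with stored pattern:", (state == patterns[0]).mean())  # typically 1.0
```

The zeroed diagonal and asynchronous updates ensure the network's energy never increases, so the dynamics settle into a stable attractor; this is one way a representation can remain stable within a dynamic system, as the abstract describes.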
James A. Anderson
- Published in print: 2017
- Published Online: February 2018
- ISBN: 9780199357789
- eISBN: 9780190675264
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199357789.003.0007
- Subject: Psychology, Cognitive Psychology
Brains and computers were twins separated at birth. In 1943, it was known that action potentials were all or none, approximating TRUE or FALSE. In that year, Walter Pitts and Warren McCulloch wrote a paper suggesting that neurons compute logic functions and that networks of such neurons could compute any finite logic function. This was a bold and exciting large-scale theory of brain function. Around the same time, the first digital computer, the ENIAC, was being built, and the McCulloch-Pitts work was well known to the scientists building it. The connection between the two appeared explicitly in a report by John von Neumann on the ENIAC's successor, the EDVAC. Although it soon became clear that biological brain computation was not based on logic functions, the idea was believed by many scientists for decades. A brilliant wrong theory can sometimes cause trouble.
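To make the McCulloch-Pitts proposal concrete, here is a minimal sketch (an illustration, not code from the chapter or the 1943 paper) of their threshold unit: a neuron fires, all or none, exactly when its weighted input sum reaches a threshold. Single units realize AND, OR, and NOT, and composing them into a small network computes XOR, a simple instance of the claim that such networks can compute any finite logic function. The weights and thresholds below are conventional textbook choices.

```python
# Hypothetical sketch of a McCulloch-Pitts threshold unit and a small
# network of such units computing a logic function.
def mp_unit(inputs, weights, threshold):
    """All-or-none output: fire (1) iff the weighted input sum >= threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def AND(a, b): return mp_unit([a, b], [1, 1], 2)
def OR(a, b):  return mp_unit([a, b], [1, 1], 1)
def NOT(a):    return mp_unit([a], [-1], 0)

def XOR(a, b):
    # A two-layer network: XOR(a, b) = (a OR b) AND NOT(a AND b).
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))  # prints the XOR truth table
```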