Cognitive Science 8: Neural Networks and Distributed Information Processing


Neurally inspired models of information processing

The tools we use to study brain activity are either too fine-grained (recordings from individual neurons) or too coarse-grained (overall blood flow). A good picture of how the brain works requires information at the level in between, populations of neurons, which we are not yet good at studying directly.

Because of this, researchers have created neural network models that approximate how individual neurons work and scale up to populations. This is the field of computational neuroscience.

Single layer networks and Boolean functions

Amazingly, neurons can be modeled with basic Boolean functions. For example, a unit models the Boolean AND function if it fires only when both of its inputs fire (true, true = true; true, false = false; and so on). OR can be modeled similarly (true, true = true; true, false = true; false, false = false). Since Boolean functions are the building blocks of digital computation, this suggests that networks of such units can, in principle, compute anything other binary computers can.
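The idea can be sketched as a simple threshold unit in the style of McCulloch and Pitts: the unit "fires" (outputs 1) when the weighted sum of its inputs reaches a threshold. The particular weights and thresholds below are illustrative choices, not the only ones that work.

```python
def threshold_unit(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    # Both inputs must fire: each weight is 1, and the threshold is 2.
    return threshold_unit([a, b], [1, 1], threshold=2)

def OR(a, b):
    # Either input suffices: each weight is 1, and the threshold is 1.
    return threshold_unit([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(1, 0), OR(0, 0))    # 1 0
```

Lowering the threshold turns an AND unit into an OR unit; the same mechanism computes different Boolean functions depending only on its weights and threshold.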

Single-layer networks are limited, though. Certain functions, most famously XOR (exclusive or), cannot be computed by any single-layer network, because no single threshold can separate the cases where exactly one input fires from the cases where both or neither fire. Multilayer networks solve this problem.
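A two-layer arrangement handles XOR by combining units that a single layer can already compute: a hidden OR unit, a hidden AND unit, and an output unit that fires when OR is active but AND is not. This wiring is one illustrative solution among several.

```python
def threshold_unit(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def XOR(a, b):
    # Hidden layer: one unit computes OR, the other computes AND.
    h_or = threshold_unit([a, b], [1, 1], threshold=1)
    h_and = threshold_unit([a, b], [1, 1], threshold=2)
    # Output layer: fire when OR is on (weight +1) and AND is off (weight -1).
    return threshold_unit([h_or, h_and], [1, -1], threshold=1)

print(XOR(0, 1), XOR(1, 1))  # 1 0
```

The negative weight from the AND unit is what a single layer cannot express: the output needs to respond to "at least one input" while being vetoed by "both inputs".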

Multilayer networks

Creating a way to train multilayer networks was the difficulty that kept earlier researchers focused on single-layer networks, but Paul Werbos and others solved this problem in the 1970s and 1980s. Werbos's backpropagation algorithm essentially works out how much each hidden-layer unit contributes to the output error and modifies the weights accordingly.
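A minimal sketch of this idea, assuming a tiny 2-input, 2-hidden-unit, 1-output network with sigmoid units trained on XOR (the architecture, learning rate, and epoch count here are illustrative choices, not part of the original algorithm's specification):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Randomly initialized weights: input -> hidden, and hidden -> output.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [random.uniform(-1, 1) for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = random.uniform(-1, 1)

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR
lr = 0.5
errors = []  # total squared error per epoch

for epoch in range(5000):
    total_error = 0.0
    for x, target in data:
        # Forward pass: compute hidden and output activations.
        h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j])
             for j in range(2)]
        o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
        total_error += (target - o) ** 2
        # Backward pass: propagate the output error back to measure how
        # much each hidden unit contributed to it.
        delta_o = (o - target) * o * (1 - o)
        delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Adjust each weight in proportion to its share of the error.
        for j in range(2):
            w_o[j] -= lr * delta_o * h[j]
            b_h[j] -= lr * delta_h[j]
            for i in range(2):
                w_h[j][i] -= lr * delta_h[j] * x[i]
        b_o -= lr * delta_o
    errors.append(total_error)

print(f"total squared error: {errors[0]:.3f} -> {errors[-1]:.3f}")
```

The key step is the `delta_h` line: the output error is distributed backward to the hidden units through the very weights that carried their activity forward, which is what makes training hidden layers possible at all.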

A few differences between neural network models and real neurons are worth noting. First, neural networks typically have only one type of unit, whereas the brain has many kinds of neurons. The pattern of connectivity also differs: each neuron connects mostly to its near neighbors, not to every unit in an adjacent layer as in many network models. There are also orders of magnitude more neurons in the brain than units in typical neural networks. Unfortunately, there is no evidence of backpropagation in the brain. Thankfully, there are other possible ways of learning, called local algorithms, in which each connection changes based only on the activity of the neurons it links.
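Hebbian learning ("cells that fire together wire together") is the classic example of such a local rule. A minimal sketch, with an illustrative learning rate and input pattern:

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen each weight in proportion to the product of the
    activity of the two units it connects (pre * post). No global
    error signal needs to be propagated backward."""
    return [w + lr * x * post for w, x in zip(weights, pre)]

w = [0.0, 0.0]
# Repeatedly present a pattern where both inputs and the output are active:
for _ in range(10):
    w = hebbian_update(w, pre=[1, 1], post=1)
print(w)  # both weights have grown toward 1.0
```

Unlike backpropagation, each weight update here uses only information available at that connection, which is why rules of this kind are considered more biologically plausible.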

Information processing in neural networks: Key features

In neural networks, in contrast to physical symbol systems, representations are distributed across many units rather than stored in a single location. There is also no sharp distinction between information storage and information processing: what a network "stores" is simply the pattern of weights by which each unit propagates and processes activation. Neural networks are also able to learn from experience, changing their outputs as they are trained on different inputs.
