table we see that when Oi and Oj are both positive, the weight adjustment, ΔW, is positive.
This has the effect of strengthening the connection between i and j when i has contributed
to j’s “firing.”
Oi      Oj      Oi*Oj
+       +       +
+       −       −
−       +       −
−       −       +

Table 11.4  The signs and product of signs of node output values.
In the second and third rows of Table 11.4, i and j have opposite signs. Since their
signs differ, we want to inhibit i’s contribution to j’s output value. Therefore we adjust the
weight of the connection by a negative increment. Finally, in the fourth row, i and j again
have the same sign. This means that we increase the strength of their connection. This
weight adjustment mechanism has the effect of reinforcing the path between two neurons
when they produce similar signals and of inhibiting that path otherwise.
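The sign behavior in Table 11.4 can be sketched directly in code: the increment c * Oi * Oj is positive exactly when the two output values agree in sign. The learning constant c below is an arbitrary illustrative value, not one taken from the text.

```python
# Sketch of Table 11.4: sign of the weight increment c * Oi * Oj
# for each combination of bipolar output values.
c = 0.1  # small positive learning constant (illustrative)

for o_i in (+1, -1):
    for o_j in (+1, -1):
        delta_w = c * o_i * o_j
        sign = "+" if delta_w > 0 else "-"
        print(f"Oi={o_i:+d}  Oj={o_j:+d}  sign(dW)={sign}")
```

Because c is positive, the sign of ΔW is simply the product of the signs of the two outputs, matching the third column of the table.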
In the next sections we consider two types of Hebbian learning, unsupervised and
supervised. We begin by examining an unsupervised form.
11.5.2 An Example of Unsupervised Hebbian Learning
Recall that in unsupervised learning a critic is not available to provide the “correct” output
value; thus the weights are modified solely as a function of the input and output values of
the neuron. The training of this network has the effect of strengthening the network’s
responses to patterns that it has already seen. In the next example, we show how Hebbian
techniques can be used to model conditioned response learning, where an arbitrarily
selected stimulus can be used as a condition for a desired response.
The weight adjustment, ΔW, for a node i in unsupervised Hebbian learning is given by:
ΔW = c * f(X, W) * X
where c is the learning constant, a small positive number, f(X, W) is i’s output, and X is
the input vector to i.
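A minimal sketch of this update rule, assuming the node's output f(X, W) is the sign of the inner product of W and X, as in the bipolar networks of this section. The function names and the value of the learning constant are illustrative assumptions, not from the text.

```python
def sign(x):
    """Bipolar threshold: +1 if x >= 0, else -1."""
    return 1 if x >= 0 else -1

def hebbian_update(weights, x, c=0.2):
    """One unsupervised Hebbian step: delta_W = c * f(X, W) * X,
    applied componentwise to the weight vector."""
    output = sign(sum(w * xi for w, xi in zip(weights, x)))
    return [w + c * output * xi for w, xi in zip(weights, x)]

w = [1.0, -1.0, 0.5]
x = [1, -1, 1]
print(hebbian_update(w, x))  # each weight moves in the direction of output * input
```

Note that the update uses only the node's own input and output; no external "correct" answer is consulted, which is what makes the rule unsupervised.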
We now show how a network can use Hebbian learning to transfer its response from a
primary or unconditioned stimulus to a conditioned stimulus. This allows us to model the
type of learning studied in Pavlov’s experiments, where by simultaneously ringing a bell
every time food was presented, a dog’s salivation response to food was transferred to the
bell. The network of Figure 11.19 has two layers, an input layer with six nodes and an
output layer with one node. The output layer returns either +1, signifying that the output
neuron has fired, or −1, signifying that it is quiescent.
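The two-layer network just described can be sketched as a single output node taking the sign of a weighted sum of six bipolar inputs. The particular weight vector and stimulus pattern below are illustrative assumptions, not the values used in the text's example.

```python
def sign(x):
    """Bipolar threshold: +1 if x >= 0, else -1."""
    return 1 if x >= 0 else -1

def output(weights, x):
    """Single output node: sign of the weighted sum of six inputs."""
    return sign(sum(w * xi for w, xi in zip(weights, x)))

# Hypothetical setup: the first three inputs carry the unconditioned
# stimulus (food), the last three the conditioned stimulus (bell).
# Initially only the food inputs have nonzero weights, so only the
# food pattern can make the node fire.
weights = [1, 1, 1, 0, 0, 0]
food_and_bell = [1, 1, 1, 1, -1, 1]
print(output(weights, food_and_bell))  # the node fires (+1)
```

Training with the Hebbian rule would then grow the weights on the bell inputs whenever bell and food are presented together, eventually letting the bell pattern alone fire the node.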