BIOLOGICALLY INSPIRED ARTIFICIAL NEURAL NETWORK ALGORITHM WHICH
IMPLEMENTS LOCAL LEARNING RULES
Ausra Saudargiene1,2, Bernd Porr1 and Florentin Wörgötter1
1Department of Psychology, University of Stirling, Stirling FK9 4LA, Scotland
2Department of Informatics, Vytautas Magnus University, Kaunas, Lithuania
ABSTRACT

Artificial neural networks (ANNs) are usually homogeneous with respect to the learning algorithms used. Recent physiological observations, on the other hand, suggest that in biological neurons synapses undergo changes according to local learning rules. In this study we present a biophysically motivated learning rule which is influenced by the shape of the correlated signals and results in a learning characteristic that depends on the dendritic site. We investigate this rule in a biophysical model as well as in the equivalent artificial neural network model. As a consequence of our local rule we observe that transitions from differential Hebbian to plain Hebbian learning can coexist at the same neuron. Thus, such a rule could be used in an ANN to create synapses with entirely different learning properties at the same network unit in a controlled way.

1. INTRODUCTION

Learning rules used to update the weights in artificial neural network algorithms are the same for all inputs and units. However, recent physiological experiments suggest that in biological neurons synaptic modifications depend on the location of the synapse (1), i.e. synaptic strength is regulated by local learning rules.

The same synapse may be strengthened or weakened depending on the temporal order of the pre- and postsynaptic activity. The weight grows if the presynaptic signal precedes the postsynaptic signal, and shrinks if the temporal order is reversed. This form of synaptic modification is called spike-timing-dependent plasticity (STDP) (2). However, not only the timing of the pre- and postsynaptic activity but also the shapes of the signals may define the properties of synaptic plasticity. This claim is supported by the fact that the strong depolarization necessary to induce synaptic changes has a different origin and a varying shape along the dendritic tree. Close to the soma, learning is driven by steep and short back-propagating spikes, which become more shallow and longer in duration while back-propagating into the dendritic tree (3). In distal parts, where back-propagating spikes fail to invade, slow and wide local Na+- and Ca2+ channel-dependent dendritic spikes provide the necessary depolarization (1). These observations suggest that synaptic modifications are location-dependent.

In this paper we present a biophysical model of STDP which captures the dependence of synaptic changes on the shape of the membrane potential. The model uses a differential Hebbian rule to correlate the NMDA synaptic conductance and the derivative of the membrane potential at a synapse. We will show that the model reproduces the STDP weight change curve in a generic way and is sensitive to the different shapes of the membrane potential. The model predicts that learning depends on the location of the synapse on the dendritic tree. We then describe the equivalent circuit diagram and discuss the model from a system-theoretic perspective, presenting it in the context of filter transfer functions at the end of this article.

2. BIOPHYSICAL MODEL

The model represents a dendritic compartment with a single NMDA synapse (Fig. 1 A). The NMDA channels are essential in inducing synaptic plasticity, as their blockade to a large degree prevents STDP (1). It is believed that NMDA channel-mediated Ca2+ influx triggers the chain reactions involving CaMKII, calmodulin and calcineurin, and in this way affects the synaptic strength (4). The NMDA synaptic conductance, regarded as the presynaptic signal, is given by:

g(t) = ḡN [e^(−t/τ1) − e^(−t/τ2)] / [1 + η [Mg2+] e^(−γV(t))]    (1)

where V is the membrane potential, ḡN = 4 nS the peak conductance, τ1 = 40 ms and τ2 = 0.33 ms the time constants, η = 0.33/mM, [Mg2+] = 1 mM and γ = 0.06/mV (5). The membrane potential is expressed as:

C dv(t)/dt = ρ g(t)[E − v(t)] + idep(t) + [Vrest − v(t)]/R    (2)

where ρ is the synaptic weight of the NMDA channel, g its conductance and E = 0 mV its equilibrium potential. The current idep accounts for the depolarization caused by sources other than synaptic inputs, such as back-propagating spikes or local dendritic regenerative potentials. The last term represents the leakage current, with resting potential Vrest = −70 mV, membrane capacitance C = 50 pF and membrane resistance R = 100 MΩ.
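To make the dynamics concrete, Eqs. 1 and 2 can be integrated numerically. The following is a minimal forward-Euler sketch using the parameter values quoted above; the time step, simulation length and the timing of the presynaptic spike are our own assumptions, and the depolarizing current idep is simply left at zero:

```python
import math

# Parameters quoted in the text (Eqs. 1 and 2)
g_peak = 4e-9                  # peak NMDA conductance, 4 nS
tau1, tau2 = 40e-3, 0.33e-3    # conductance time constants, s
eta = 0.33                     # 1/mM
mg = 1.0                       # [Mg2+], mM
gamma = 60.0                   # 1/V (= 0.06/mV)
E = 0.0                        # NMDA equilibrium potential, V
V_rest = -70e-3                # resting potential, V
C = 50e-12                     # membrane capacitance, 50 pF
R = 100e6                      # membrane resistance, 100 MOhm

def g_nmda(t, V):
    """NMDA conductance (Eq. 1); t is the time since the presynaptic spike."""
    if t < 0:
        return 0.0
    return g_peak * (math.exp(-t / tau1) - math.exp(-t / tau2)) / \
           (1.0 + eta * mg * math.exp(-gamma * V))

def simulate(rho=1.0, t_pre=5e-3, dt=1e-5, t_end=100e-3, i_dep=lambda t: 0.0):
    """Forward-Euler integration of the membrane potential (Eq. 2)."""
    V = V_rest
    trace = []
    for k in range(int(t_end / dt)):
        t = k * dt
        g = g_nmda(t - t_pre, V)
        dV = (rho * g * (E - V) + i_dep(t) + (V_rest - V) / R) / C
        V += dV * dt
        trace.append(V)
    return trace

trace = simulate()
print(max(trace) > V_rest)  # the synapse depolarizes the compartment above rest
```

Driving idep with recorded spike shapes (as in Fig. 1 B) and feeding the derivative of V into the learning rule would then reproduce the weight change experiments described in the text.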
The differential Hebbian learning rule for the synaptic change is defined as:

dρ/dt = ĝ(t) V′(t)    (3)

where ĝ is the normalized conductance function of the NMDA channel, i.e. the presynaptic influence quantity, and V′ is the derivative of the postsynaptic membrane potential.

Fig. 1: Schematic diagram of the model. A) Components of the membrane model. The inset shows the NMDA synaptic conductance function. B) Depolarizing membrane potentials: steep back-propagating spike and shallow dendritic spike, 210 µm and 860 µm from the soma, respectively. C) The resulting weight change curves. The shallow depolarizing potential leads to potentiation even for negative values −20 ms < T < 0.

The depolarizing membrane potentials which trigger synaptic plasticity vary along the dendritic tree. We use a short and steep back-propagating action potential to model the synaptic changes close to the soma, and a long and shallow dendritic spike to account for synaptic modifications in the distal parts. The back-propagating spike and the dendritic spike, measured 210 µm and 860 µm from the soma, respectively, are presented in Fig. 1 B and have been taken from (6; 7). The depolarization coming from these spikes is very strong, therefore we may neglect the contribution of the NMDA synaptic input. Instead of using Eq. 2, we calculate the change of the membrane potential from the given shape of the spike and then substitute its derivative into the learning rule (Eq. 3).

We obtain an asymmetrical weight change curve if the depolarization is provided by a steep back-propagating spike (Fig. 1 C). The synapse is weakened if T < 0 and strengthened if T > 0, where T is the temporal difference between the presynaptic and the postsynaptic activity; T > 0 means that the postsynaptic signal follows the presynaptic signal at the NMDA channel, and vice versa. However, we observe a shifted curve if the depolarization comes from the shallow dendritic spike. The synaptic weight grows even for negative values T > −20 ms. Thus, we get plain Hebbian learning between −20 ms and ∞.

The model reproduces the STDP curve in a generic way. The shape of the weight change curve is strongly influenced by the shape of the depolarizing membrane potential which induces plasticity. The slowly rising flank of this signal is the essential factor in the transition from an asymmetrical to a symmetrical weight change characteristic. As the depolarizing potentials vary in different parts of the dendritic tree, these results suggest that in biological neurons learning rules are local and depend on the location of the synapse.

The electrical circuit equivalent to the model described above is presented in Fig. 2. Elements R1 and C1 define the shape of the presynaptic signal g. R3 corresponds to the intracellular resistance; R2 and C2 describe the passive membrane properties, and altogether they determine the shape of the postsynaptic signal v. The derivative of v, obtained after filtering in the last R2-C2 circuit, is multiplied by g. The resulting weight change is fed to a gain-controlled amplifier and influences the postsynaptic signal v. Various shapes of the postsynaptic signal v may be obtained by adjusting the values of R2, R3 and C2, which would lead to different learning characteristics.

Fig. 2: Equivalent electrical circuit of the learning algorithm. The postsynaptic signal v is differentiated by the R2-C2 circuit and multiplied by the presynaptic signal g to obtain the weight change, which influences the postsynaptic signal v via a gain-controlled amplifier.

3. BIOLOGICALLY INSPIRED SITE-SPECIFIC LEARNING ALGORITHM

We present a further step of abstraction in Fig. 3. This block diagram is not directly equivalent to the circuit in Fig. 2, but it captures the main observation emerging from the biophysical model: learning depends on the location of the synapse, i.e. it is driven by the derivative of a postsynaptic signal specific to a given site. In an artificial neural network system this would mean that the output signal undergoes a transformation specific to each input, and only then is its derivative applied to update the weight of the given input. The diagram of such an algorithm is presented in Fig. 3. We can still roughly associate the NMDA characteristic with the pathways x1,...,xn representing many (possibly different) inputs, and the source of depolarization (e.g., the back-propagating spike) with the pathway x0. Hence this pathway enters the summation node with an unchangeable weight ρ0. This circuit is a modified version of the ISO learning circuit (8). ISO learning is a drive-reinforcement
algorithm for temporal sequence learning where the weights change according to the relative timing of the input signals. The inputs x0, x1,...,xn are filtered with bandpass filters h0, h1,...,hn, weighted by ρ0, ρ1,...,ρn and summed to produce the output v: v = ρ0 u0 + Σi ρi ui, where u = x ∗ h. Different from ISO learning, here the output is also filtered with the filters h11,...,hnn, and only then are the derivatives of the obtained signals v′1,...,v′n used to change the weights of the corresponding inputs:

dρi/dt = µ ui v′i    (4)

where vi = v ∗ hii and µ ≪ 1. We assume that the input x0 dominates the output and that its weight ρ0 is fixed. We apply the analytical solution derived for ISO learning (8) to calculate the weight change curve for different shapes of the filtered output signal (for details see the Appendix). For a steep output signal entering the learning rule we obtain differential Hebbian learning, and for a shallow one we get a curve similar to plain Hebbian learning (Fig. 3 B, C). The parameters of the filters h11,...,hnn which transform the output signal determine this transition.

Fig. 3: A) Algorithm for site-specific learning. Transfer functions are denoted as h; the changing weights ρ act as an amplifier. All inputs are filtered. Weight ρ0 is fixed. Weights ρ1,...,ρn are updated using the derivatives v′1,...,v′n of the filtered output v. Filter functions h11,...,hnn differ for each input. B) Analytically calculated weight change curve if the filtered output has a steep rising flank. C) Analytically calculated weight change curve if the filtered output has a shallow rising flank.

4. DISCUSSION

The biophysical model of STDP inspired an artificial neural network algorithm with site-specific learning rules. The biophysical model is based on a differential Hebbian learning rule which correlates the NMDA synaptic conductance with the derivative of the membrane potential. The results show that the weight change curve strongly depends on the shape of the depolarizing membrane potential at the location of the synapse. This signal changes its shape along the dendrite and may be provided by different mechanisms, such as back-propagating spikes close to the soma and dendritic spikes in the distal parts. Therefore we predict that learning rules are location-dependent. Close to the soma, where learning is driven by short back-propagating spikes, the synaptic modifications are bidirectional, described by an asymmetrical STDP curve. In the distal parts, where synaptic changes are induced mainly by long-lasting dendritic spikes, synapses undergo potentiation even for negative values of T. The same learning rule leads to different synaptic modifications, self-adjusting to the shape of the depolarization source at different locations of the dendritic tree.

The typical approach to modeling STDP is to assume a certain weight change curve which does not depend on the local properties of the cell, e.g. (9). A few more detailed models take into consideration the postsynaptic signal associated with the membrane potential, e.g. (10; 11; 12), and observe that its shape influences the shape of the weight change curve. These models differ from ours, as the rule of (10) is based on TD learning, while (11; 12) rely on the absolute Ca2+ concentration in the weight updating algorithm.
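The site-specific update rule of Section 3 can be sketched in discrete time. The following is an illustrative implementation under our own assumptions (unit time step, arbitrary bandpass parameters, single-pulse inputs and learning rate µ = 1e-3); it is not the authors' code:

```python
import numpy as np

def bandpass(f, q, n, dt=1.0):
    """Impulse response h(t) = (1/b) e^(a t) sin(b t), with a = -pi f / q and
    b = sqrt((2 pi f)^2 - a^2) -- the filter family used in the paper's appendix."""
    a = -np.pi * f / q
    b = np.sqrt((2 * np.pi * f) ** 2 - a ** 2)
    t = np.arange(n) * dt
    return (1.0 / b) * np.exp(a * t) * np.sin(b * t)

def site_specific_learning(x, h_in, h_site, rho0=1.0, mu=1e-3):
    """One trial of the site-specific rule.
    x: input spike trains; x[0] is the dominating (depolarizing) pathway.
    h_in: bandpass impulse responses h_0..h_n applied to the inputs.
    h_site: per-pathway output filters h_11..h_nn (index 0 unused)."""
    n_in = len(x)
    u = [np.convolve(x[i], h_in[i])[: len(x[i])] for i in range(n_in)]
    rho = np.zeros(n_in)
    rho[0] = rho0                                 # fixed weight of pathway x0
    v = sum(rho[i] * u[i] for i in range(n_in))   # summed output
    for i in range(1, n_in):
        vi = np.convolve(v, h_site[i])[: len(v)]  # site-specific output filtering
        rho[i] += mu * np.sum(u[i] * np.gradient(vi))  # dρ_i = µ u_i v'_i
    return rho

# Pathway 1 firing 5 steps BEFORE pathway 0 vs. 5 steps AFTER it
T, N = 100, 400
x0 = np.zeros(N); x0[T] = 1.0
x_pre = np.zeros(N); x_pre[T - 5] = 1.0
x_post = np.zeros(N); x_post[T + 5] = 1.0
h0 = bandpass(0.01, 0.6, N)
hs = [None, bandpass(0.01, 0.6, N)]
rho_pre = site_specific_learning([x0, x_pre], [h0, h0], hs)
rho_post = site_specific_learning([x0, x_post], [h0, h0], hs)
print(rho_pre[1], rho_post[1])  # pre-before-post potentiates more strongly
```

Swapping the site filter h_site for a slower bandpass (lower center frequency) broadens the potentiation window, which is the transition toward plain Hebbian learning described above.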
Our algorithm offers the possibility of easily defining a parameter-controlled learning rule in an artificial neural network. We have now started to address an instrumental conditioning problem, where the actions of the learner influence its inputs and hence the learning with such an architecture. A small network of sub-compartmentalized neurons is linked to a simple agent that reacts to stimulus presentation with an orienting behaviour following the stimulation of the right neuronal subset. The goal is to train it with one unconditioned stimulus (US) and several conditioning stimuli (CS), only one of which is correlated with the unconditioned stimulus. The US will always trigger the correct output neurons to elicit the orienting response. Conversely, each CS elicits a response in many input neurons, some of which are better correlated with each other than others. Hebbian learning between these CS inputs will "extract" and strengthen the better-correlated neurons. This leads, after learning, to a drive from all CS regardless of their correlation with the US. Now we get reliable (but mostly wrong) behavioural reactions. Since all but one CS are temporally uncorrelated with the US, differential Hebbian learning will lead to a weakening of all "wrong" CS. By the end, the system has learned to drive a small subset of only a few neurons in a feed-forward way, eliciting a response that will lead to the desired behaviour. This is work in progress and no results exist so far. Nevertheless, it clearly shows how such sub-compartmentalized learning rules could be used for behavioural control.

5. APPENDIX

The weight change curves are calculated using the analytical solution obtained for ISO learning (8). We assume that the output is dominated by x0 and that the contribution of the other inputs is negligible (ρi = 0, i > 0). Then the pairs of filter functions h0 and h11, h0 and h22, etc., h0 and hnn can be considered as single filter functions h01,...,h0n. These filters are specific to each input pathway x1,...,xn and shape the output signal vi whose derivative enters the learning rule. The filters h are described by h(t) = (1/b) e^(at) sin(bt) with a := −πf/Q and b := √((2πf)² − a²), where f is the center frequency and Q is the damping. Then the cumulative weight change at the i-th pathway is given by:

Δρi(T) = µ [bi Mi cos(bi T) + (ai Pi + 2a′i pi²) sin(bi T)] e^(−T ai),
Δρi(T) = µ [b′i Mi cos(b′i T) + (a′i Pi + 2ai p′i²) sin(b′i T)] e^(−T ai),

where pi = ai² − bi², p′i = a′i² − b′i², i > 0. The parameters for the weight change curves presented in Fig. 3 are: f01 = 0.01, Q01 = 0.6, f0n = 0.002, Q0n = 0.6, f1 = fn = 0.01, Q1 = Qn = 0.6.

6. REFERENCES

[1] N. L. Golding, N. P. Staff, and N. Spruston, "Dendritic spikes as a mechanism for cooperative long-term potentiation," Nature, vol. 418, pp. 326–331, 2002.

[2] G.-Q. Bi and M. Poo, "Synaptic modification by correlated activity: Hebb's postulate revisited," Annu. Rev. Neurosci., vol. 24, pp. 139–166, 2001.

[3] J. C. Magee and D. Johnston, "A synaptically controlled, associative signal for Hebbian plasticity in hippocampal neurons," Science, vol. 275, pp. 209–213, 1997.

[4] G.-Q. Bi, "Spatiotemporal specificity of synaptic plasticity: cellular rules and mechanisms," Biol. Cybern., vol. 87, pp. 319–332, 2002.

[5] C. Koch, Biophysics of Computation, Oxford University Press, 1999.

[6] M. E. Larkum, J. J. Zhu, and B. Sakmann, "Dendritic mechanisms underlying the coupling of the dendritic with the axonal action potential initiation zone of adult rat layer 5 pyramidal neurons," J. Physiol. (Lond.), vol. 533, pp. 447–466, 2001.

[7] G. Stuart, N. Spruston, B. Sakmann, and M. Häusser, "Action potential initiation and backpropagation in neurons of the mammalian central nervous system," Trends Neurosci., vol. 20, pp. 125–131, 1997.

[8] B. Porr and F. Wörgötter, "Isotropic sequence order learning," Neural Comp., vol. 15, pp. 831–864, 2003.

[9] S. Song, K. D. Miller, and L. F. Abbott, "Competitive Hebbian learning through spike-timing-dependent synaptic plasticity," Nature Neurosci., vol. 3, pp. 919–926, 2000.

[10] R. P. N. Rao and T. J. Sejnowski, "Spike-timing-dependent Hebbian plasticity as temporal difference learning," Neural Comp., vol. 13, pp. 2221–2237, 2001.

[11] G. C. Castellani, E. M. Quinlan, L. N. Cooper, and H. Z. Shouval, "A biophysical model of bidirectional synaptic plasticity: Dependence on AMPA and NMDA receptors," Proc. Natl. Acad. Sci. (USA), vol. 98, no. 22, pp. 12772–12777, 2001.

[12] H. Z. Shouval, M. F. Bear, and L. N. Cooper, "A unified model of NMDA receptor-dependent bidirectional synaptic plasticity," Proc. Natl. Acad. Sci. (USA), vol. 99, no. 16, pp. 10831–10836, 2002.
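The analytical curves of the Appendix can also be cross-checked numerically: the sketch below constructs h(t) = (1/b) e^(at) sin(bt) with the quoted parameters and estimates the cumulative weight change by directly correlating the filtered presynaptic pulse with the derivative of the filtered output. The discretization (unit time step, single-pulse inputs) is our own assumption:

```python
import numpy as np

def h(f, q, n):
    """Bandpass impulse response from the Appendix:
    h(t) = (1/b) e^(a t) sin(b t), a = -pi f / q, b = sqrt((2 pi f)^2 - a^2)."""
    a = -np.pi * f / q
    b = np.sqrt((2 * np.pi * f) ** 2 - a ** 2)
    t = np.arange(n)
    return (1.0 / b) * np.exp(a * t) * np.sin(b * t)

def weight_change(T, h_out, h_in, n=4000, t0=1000):
    """Cumulative weight change when the presynaptic pulse leads the
    depolarizing pulse by T steps (T > 0: pre before post)."""
    x0 = np.zeros(n); x0[t0] = 1.0        # depolarizing pathway x0
    x1 = np.zeros(n); x1[t0 - T] = 1.0    # presynaptic pathway x1
    v = np.convolve(x0, h_out)[:n]        # pathway-specific filtered output
    u1 = np.convolve(x1, h_in)[:n]        # filtered presynaptic signal
    return float(np.sum(u1 * np.gradient(v)))  # correlate u1 with v'

steep = h(0.01, 0.6, 4000)     # f01 = 0.01, Q01 = 0.6 (Fig. 3 B)
shallow = h(0.002, 0.6, 4000)  # f0n = 0.002, Q0n = 0.6 (Fig. 3 C)
pre = h(0.01, 0.6, 4000)       # f1 = fn = 0.01, Q1 = Qn = 0.6

print(weight_change(20, steep, pre))    # pre before post, steep output
print(weight_change(-20, steep, pre))   # post before pre, steep output
print(weight_change(-20, shallow, pre)) # post before pre, shallow output
```

With the steep output filter the curve is asymmetric around T = 0 (differential Hebbian), while the shallow filter yields potentiation even for negative T, matching Fig. 3 B and C.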
Source: http://www.berndporr.me.uk/iscas2004/iscas04.pdf