Neuron
A neuron with label $j$ receiving an input $p_j(t)$ from predecessor neurons consists of the following components:
- an activation $a_j(t)$, the neuron's state, depending on a discrete time parameter,
- an optional threshold $\theta_j$, which stays fixed unless changed by learning,
- an activation function $f$ that computes the new activation at a given time $t+1$ from $a_j(t)$, $\theta_j$ and the net input $p_j(t)$, giving rise to the relation $a_j(t+1)=f(a_j(t), p_j(t),\theta_j)$ (see the sketch below),
- and an output function $f_{out}$ computing the output from the activation $o_j(t)=f_{out}(a_j(t))$.
Often the output function is simply the identity function.
An input neuron has no predecessors but serves as the input interface for the whole network. Similarly, an output neuron has no successors and thus serves as the output interface of the whole network.
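The following is a minimal sketch of one neuron update under these definitions, assuming a simple threshold (Heaviside-style) activation and an identity output function; the names `Neuron`, `heaviside_activation` and `identity_output` are illustrative, not taken from any library.

```python
def heaviside_activation(a_prev, net_input, threshold):
    # Example activation function f(a_j(t), p_j(t), theta_j): fire (1.0)
    # when the net input reaches the threshold, otherwise 0.0.
    # The previous activation a_prev is unused here but kept in the
    # signature to match the general form f(a_j(t), p_j(t), theta_j).
    return 1.0 if net_input >= threshold else 0.0

def identity_output(activation):
    # Output function f_out; often simply the identity, as noted above.
    return activation

class Neuron:
    def __init__(self, threshold=0.0):
        self.threshold = threshold   # theta_j, fixed unless changed by learning
        self.activation = 0.0        # a_j(t), the neuron's state

    def step(self, net_input):
        # a_j(t+1) = f(a_j(t), p_j(t), theta_j)
        self.activation = heaviside_activation(self.activation, net_input, self.threshold)
        # o_j(t+1) = f_out(a_j(t+1))
        return identity_output(self.activation)

# Example: a neuron with threshold 0.5 fires for net input 0.7.
neuron = Neuron(threshold=0.5)
print(neuron.step(0.7))  # 1.0
```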
Propagation function
The propagation function computes the input $p_j(t)$ to the neuron $j$ from the outputs $o_i(t)$ of the predecessor neurons and typically has the form
$p_{j}(t)=\sum _{i}o_{i}(t)w_{ij}$
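A minimal sketch of this weighted sum, assuming the predecessor outputs $o_i(t)$ and the weights $w_{ij}$ are given as plain Python lists (the function name is illustrative):

```python
def propagation(outputs, weights):
    """Compute p_j(t) = sum_i o_i(t) * w_ij for one neuron j."""
    return sum(o_i * w_ij for o_i, w_ij in zip(outputs, weights))

# Example: two predecessor outputs with their weights into neuron j.
p_j = propagation([1.0, 0.5], [0.2, -0.4])  # 1.0*0.2 + 0.5*(-0.4) = 0.0
```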
Bias
A bias term can be added, changing the form to the following:
$p_{j}(t)=\sum _{i}o_{i}(t)w_{ij}+w_{0j}$
where $w_{0j}$ is a bias.
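Extending the sketch above, the bias is simply added to the weighted sum (again with illustrative names):

```python
def propagation_with_bias(outputs, weights, bias):
    """Compute p_j(t) = sum_i o_i(t) * w_ij + w_0j."""
    return sum(o_i * w_ij for o_i, w_ij in zip(outputs, weights)) + bias
```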