Gradient expressions

For this specific model, with just one output node and two hidden nodes, the gradient descent update equations take the following form. For the output layer,

w_{i}^{(2)}\leftarrow w_{i}^{(2)}- \eta \delta^{(2)} a_{i}^{(1)},

and

b^{(2)} \leftarrow b^{(2)}-\eta \delta^{(2)},

and for the hidden layer,

w_{ij}^{(1)}\leftarrow w_{ij}^{(1)}- \eta \delta_{i}^{(1)} a_{j}^{(0)},

and

b_{i}^{(1)} \leftarrow b_{i}^{(1)}-\eta \delta_{i}^{(1)},

where \eta is the learning rate.
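The four update rules above can be sketched as a single gradient-descent step in NumPy. This is a minimal illustration, not the article's own code: it assumes sigmoid activations and a squared-error loss, from which the error terms \delta^{(2)} and \delta_i^{(1)} are computed by backpropagation; the function and variable names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gd_step(x, y, W1, b1, w2, b2, eta=0.1):
    """One gradient-descent step for a network with two hidden
    nodes and one output node (sigmoid activations assumed)."""
    # Forward pass: a0 is the input, a1 the hidden activations,
    # a2 the scalar output.
    a0 = x
    a1 = sigmoid(W1 @ a0 + b1)        # shape (2,)
    a2 = sigmoid(w2 @ a1 + b2)        # scalar

    # Backward pass: delta terms for squared-error loss
    # L = (a2 - y)^2 / 2 with sigmoid activations.
    delta2 = (a2 - y) * a2 * (1 - a2)        # output-layer delta
    delta1 = delta2 * w2 * a1 * (1 - a1)     # hidden-layer deltas, shape (2,)

    # The four updates, term by term as in the equations above.
    w2 = w2 - eta * delta2 * a1              # w_i^(2) <- w_i^(2) - eta delta^(2) a_i^(1)
    b2 = b2 - eta * delta2                   # b^(2)   <- b^(2)   - eta delta^(2)
    W1 = W1 - eta * np.outer(delta1, a0)     # w_ij^(1) <- w_ij^(1) - eta delta_i^(1) a_j^(0)
    b1 = b1 - eta * delta1                   # b_i^(1) <- b_i^(1) - eta delta_i^(1)
    return W1, b1, w2, b2
```

Repeating `gd_step` on a fixed training pair should drive the loss down, which is a quick sanity check that the update signs and index placements match the equations.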