Final technicalities IV

Expressing \( z_{1,j}^{\text{output}} \) for all input points \( j = 1, \dots, N \) as a vector gives the following way of weighting the inputs from the hidden layer:

$$ \boldsymbol{z}_{1}^{\text{output}} = \begin{pmatrix} b_1^{\text{output}} & \boldsymbol{w}_1^{\text{output}} \end{pmatrix} \begin{pmatrix} 1 & 1 & \dots & 1 \\ \boldsymbol{x}_1^{\text{hidden}} & \boldsymbol{x}_2^{\text{hidden}} & \dots & \boldsymbol{x}_N^{\text{hidden}} \end{pmatrix} $$

In this case we seek a continuous range of values since we are approximating a function, so no activation function is applied at the output layer. This means that once \( \boldsymbol{z}_{1}^{\text{output}} \) has been computed, the neural network has finished its feed-forward step, and \( \boldsymbol{z}_{1}^{\text{output}} \) is the final output of the network.
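As a concrete illustration, here is a minimal NumPy sketch of this final step, assuming the hidden-layer activations for all \( N \) input points have already been collected column-wise into a matrix. The names `output_layer`, `x_hidden`, `w_output`, and `b_output` are illustrative choices, not taken from the text.

```python
import numpy as np

def output_layer(x_hidden, w_output, b_output):
    """Compute z_1^output for all N input points at once.

    x_hidden : (N_hidden, N) matrix whose columns are the hidden-layer
               activations x_i^hidden for each input point.
    w_output : (N_hidden,) weight vector w_1^output of the output neuron.
    b_output : scalar bias b_1^output of the output neuron.
    """
    N = x_hidden.shape[1]
    # Stack a row of ones on top of the hidden activations, matching
    # the (1 + N_hidden) x N matrix in the equation above.
    X = np.concatenate([np.ones((1, N)), x_hidden], axis=0)
    # Prepend the bias to the weights to form the row vector
    # (b_1^output, w_1^output).
    wb = np.concatenate([[b_output], w_output])
    # No activation is applied: the linear combination is the final output.
    return wb @ X

# Example: 3 hidden neurons evaluated at 5 input points.
rng = np.random.default_rng(0)
x_hidden = rng.standard_normal((3, 5))
z_output = output_layer(x_hidden,
                        w_output=rng.standard_normal(3),
                        b_output=0.1)
print(z_output.shape)  # (5,) -- one output value per input point
```

Note that the row of ones absorbs the bias into the matrix product, so a single matrix multiplication evaluates the output neuron at every input point simultaneously.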