The figure here displays a simple example of an RNN, with inputs \( x_t \) at a given time \( t \) and outputs \( y_t \). Introducing time as a variable offers an intuitive way of understanding these networks. In addition to the inputs \( x_t \), the layer at time \( t \) also receives as input the output from the previous time step \( t-1 \), that is \( y_{t-1} \).
This also means that we need weights that link the inputs \( x_t \) to the outputs \( y_t \), as well as weights that link the output from the previous time step \( y_{t-1} \) to \( y_t \).
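A minimal sketch of this recurrence in NumPy may make it concrete. Here \( W_x \) links the inputs to the outputs, \( W_y \) links the previous output to the current one, and the step applies a \(\tanh\) nonlinearity; the dimensions, weight initialization, and helper name `rnn_step` are illustrative assumptions, not part of the original text.

```python
import numpy as np

def rnn_step(x_t, y_prev, W_x, W_y, b):
    # One recurrent step: combine the current input x_t with the
    # previous output y_{t-1}, then apply a tanh nonlinearity.
    return np.tanh(W_x @ x_t + W_y @ y_prev + b)

# Toy dimensions (hypothetical): 3 input features, 2 outputs.
rng = np.random.default_rng(0)
n_in, n_out = 3, 2
W_x = rng.normal(size=(n_out, n_in))   # weights linking x_t to y_t
W_y = rng.normal(size=(n_out, n_out))  # weights linking y_{t-1} to y_t
b = np.zeros(n_out)

# Unroll the network over a short input sequence, starting from y = 0.
xs = rng.normal(size=(5, n_in))
y = np.zeros(n_out)
outputs = []
for x_t in xs:
    y = rnn_step(x_t, y, W_x, W_y, b)
    outputs.append(y)
```

Because each \( y_t \) depends on \( y_{t-1} \), the loop above cannot be parallelized across time steps; this sequential dependence is what lets the network carry information forward through the sequence.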