Rewriting as dot products

If we now flip the filter/weight vector, taking the following term as a typical example,

$$ y(0)=x(2)w(0)+x(1)w(1)+x(0)w(2)=x(2)\tilde{w}(2)+x(1)\tilde{w}(1)+x(0)\tilde{w}(0), $$

with \( \tilde{w}(0)=w(2) \), \( \tilde{w}(1)=w(1) \), and \( \tilde{w}(2)=w(0) \), we can rewrite the above sum as the dot product \( x(i:i+(m-1))\cdot\tilde{w} \) for element \( y(i) \), where \( x(i:i+(m-1)) \) is simply a patch of \( \boldsymbol{x} \) of size \( m \) (the \( m \) elements from index \( i \) through \( i+(m-1) \)).
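This rewriting can be sketched numerically. Below is a minimal NumPy illustration (the signal and filter values are made up for the example): each output element is computed as the dot product of a length-\( m \) patch of the input with the flipped filter, and the result is cross-checked against NumPy's own convolution.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # input signal (example values)
w = np.array([0.5, 1.0, 0.25])           # filter of size m = 3
w_tilde = w[::-1]                        # flipped filter: w~(k) = w(m-1-k)

m = len(w)
# y(i) as a dot product of the patch x(i : i+m-1) with the flipped filter
y = np.array([x[i:i + m] @ w_tilde for i in range(len(x) - m + 1)])

# Same result as NumPy's convolution restricted to fully overlapping patches
y_ref = np.convolve(x, w, mode='valid')
```

Here `np.convolve(..., mode='valid')` keeps only the outputs where the filter fully overlaps the signal, which is exactly the set of patches the dot-product form enumerates.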

The padding \( P \) introduced for the convolution stage is just another hyperparameter of the architecture. Similarly, below we will introduce another hyperparameter called the stride \( S \).
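As a small sketch of the effect of padding (the sizes below are illustrative assumptions): zero-padding the input with \( P \) elements on each side, with a filter of size \( m \) and stride 1, gives an output of length \( n-m+2P+1 \).

```python
import numpy as np

n, m, P = 8, 3, 1                       # example sizes (assumed for illustration)
x = np.arange(1.0, n + 1)               # input signal of length n
w = np.array([0.5, 1.0, 0.25])          # filter of size m

x_padded = np.pad(x, P)                 # zero-pad P elements on each side
y = np.convolve(x_padded, w, mode='valid')

# With stride 1, the output length is n - m + 2P + 1
print(len(y))
```

With \( P=0 \) this reduces to the \( n-m+1 \) fully overlapping patches of the unpadded case.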