Revisiting our Logistic Regression case

In our discussion of Logistic Regression we studied the case of two classes, with \( y_i \) being either \( 0 \) or \( 1 \). Furthermore, we assumed that we had only two parameters \( \boldsymbol{\theta} \) in our fit, that is, we defined the probabilities

$$ \begin{align*} p(y_i=1|x_i,\boldsymbol{\theta}) &= \frac{\exp{(\theta_0+\theta_1x_i)}}{1+\exp{(\theta_0+\theta_1x_i)}},\nonumber\\ p(y_i=0|x_i,\boldsymbol{\theta}) &= 1 - p(y_i=1|x_i,\boldsymbol{\theta}), \end{align*} $$

where \( \boldsymbol{\theta} \) are the weights we wish to extract from the data, in our case \( \theta_0 \) and \( \theta_1 \).
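
As a minimal sketch of these two probabilities (assuming NumPy and the illustrative, hypothetical values of \( \theta_0 \), \( \theta_1 \) and \( x \) chosen below), we can evaluate them directly:

```python
import numpy as np

def p_y1(x, theta0, theta1):
    """p(y=1 | x, theta) for the two-parameter logistic model."""
    z = theta0 + theta1 * x
    # exp(z)/(1+exp(z)) is the sigmoid of z
    return np.exp(z) / (1.0 + np.exp(z))

# Hypothetical parameters and inputs, for illustration only
theta0, theta1 = -1.0, 2.0
x = np.array([-2.0, 0.0, 0.5, 3.0])

p1 = p_y1(x, theta0, theta1)   # p(y_i = 1 | x_i, theta)
p0 = 1.0 - p1                  # p(y_i = 0 | x_i, theta)
print(p1, p0)
```

Note that \( p(y_i=1|x_i,\boldsymbol{\theta}) \) and \( p(y_i=0|x_i,\boldsymbol{\theta}) \) sum to one by construction, so only the first needs to be modeled explicitly.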