The last steps

Solving the above problem yields the values of $\lambda_i$. To find the coefficients of the hyperplane we simply compute $\boldsymbol{w}=\sum_{i} \lambda_iy_i\boldsymbol{x}_i$. With the vector $\boldsymbol{w}$ we can in turn find the value of the intercept $b$ from the condition that any support vector satisfies $y_i(\boldsymbol{w}^T\boldsymbol{x}_i+b)=1$, resulting in $b = \frac{1}{y_i}-\boldsymbol{w}^T\boldsymbol{x}_i$ (and since $y_i=\pm 1$, we have $1/y_i=y_i$). In practice one averages over the support vectors only; with $N_s$ being their number, we have

$$b = \frac{1}{N_s}\sum_{j\in N_s}\left(y_j-\sum_{i=1}^n\lambda_iy_i\boldsymbol{x}_i^T\boldsymbol{x}_j\right).$$

With the hyperplane coefficients in hand, we can classify any observation $\boldsymbol{x}$ simply by computing $y = \mathrm{sign}(\boldsymbol{w}^T\boldsymbol{x}+b)$. Below we discuss how to find the optimal values of $\lambda_i$. Before we proceed, however, we discuss the so-called soft classifier.
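As a minimal sketch of these last steps, the example below solves the dual problem for a small hand-made data set using scipy.optimize.minimize (a general-purpose constrained solver standing in for the dedicated quadratic-programming methods discussed below) and then recovers $\boldsymbol{w}$, $b$, and the sign-based classifier from the resulting multipliers $\lambda_i$. The toy data and the tolerance used to identify the support vectors are illustrative assumptions, not part of the derivation above.

```python
import numpy as np
from scipy.optimize import minimize

# Toy, linearly separable data set (illustrative choice); labels must be +/- 1.
X = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0],
              [2.0, 0.5], [3.0, 1.0], [4.0, 1.5]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
n = len(y)

# Matrix Q_ij = y_i y_j x_i^T x_j appearing in the dual objective.
Q = (y[:, None] * X) @ (y[:, None] * X).T

# Negative of the dual objective: we minimize instead of maximize.
def neg_dual(lam):
    return 0.5 * lam @ Q @ lam - lam.sum()

cons = {"type": "eq", "fun": lambda lam: lam @ y}   # constraint: sum_i lambda_i y_i = 0
bounds = [(0.0, None)] * n                          # constraint: lambda_i >= 0
res = minimize(neg_dual, np.zeros(n), bounds=bounds, constraints=cons)
lam = res.x

# w = sum_i lambda_i y_i x_i
w = ((lam * y)[:, None] * X).sum(axis=0)

# Average b over the support vectors (those with lambda_i > 0, up to a tolerance).
sv = lam > 1e-6
b = np.mean(y[sv] - X[sv] @ w)

# Classify with y = sign(w^T x + b).
print("w =", w, " b =", b)
print("predictions:", np.sign(X @ w + b))
```

On this separable data set all training points should be classified correctly, and only the few points with nonzero $\lambda_i$ (the support vectors) contribute to $\boldsymbol{w}$ and $b$, mirroring the formulas above.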