Loss of Precision

In a typical computer, floating-point numbers are represented in the way described above, but with certain restrictions on \( q \) and \( m \) imposed by the available word length. In the machine, our number \( x \) is represented as $$ \begin{equation} x=(-1)^s\times \mathrm{mantissa}\times 2^{\mathrm{exponent}}, \tag{4} \end{equation} $$

where \( s \) is the sign bit, and the exponent gives the available range. In a single-precision word of 32 bits, 8 bits are typically reserved for the exponent, 1 bit for the sign, and 23 bits for the mantissa.
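
This bit layout can be inspected directly. The sketch below is a minimal Python example that unpacks the three bit fields of a 32-bit float; the helper name `float_bits` is ours, while the bias of 127 and the implicit leading 1 for normalized numbers are the standard IEEE 754 single-precision conventions.

```python
import struct

def float_bits(x):
    """Decompose a 32-bit float into its sign, exponent, and mantissa fields."""
    # Reinterpret the IEEE 754 single-precision bit pattern as an unsigned int
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31                  # 1 sign bit
    exponent = (bits >> 23) & 0xFF     # 8 exponent bits, stored with a bias of 127
    mantissa = bits & 0x7FFFFF         # 23 mantissa bits (fractional part)
    return sign, exponent, mantissa

s, e, m = float_bits(0.1)
# For normalized numbers: x = (-1)^s * (1 + m/2^23) * 2^(e - 127)
print(s, e, m)  # 0 123 5033165
```

Note that the value 0.1 already illustrates the restrictions on the mantissa: its binary expansion is infinite, so the 23 stored bits hold only a rounded approximation.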