In Part 3, we (sort of) confirmed that MS-Basic most probably used an 11th order minimax polynomial from Hart’s book Computer Approximations for computing the sine function. While I failed to dig up any information on the math library implementations of early mainframes such as IBM’s and CDC’s (purportedly, they used CORDIC), the ROM of Sinclair Basic has been published.
Sinclair approximates the sine function as a linear combination \(S(t)=t \sum_{i=0}^n c_i T_i(2t^2-1),\; t:=\frac{2}{\pi}x\) of Chebyshev polynomials.

Chebyshev polynomials are orthogonal with respect to the weight function \(\frac{1}{\sqrt{1-x^2}}\), and they individually share a desirable property with the error of a Minimax approximation: they equioscillate between \(-1\) and \(1\). If a Chebyshev series expansion of a function converges quickly, the error of an \(n\)th order truncation is dominated by the first omitted term of order \(n+1\), and the truncation is therefore very close to the Minimax polynomial of \(n\)th order.
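This dominance of the first omitted term is easy to check numerically. The sketch below (my addition, using \(e^x\) on \([-1,1]\) purely as an illustrative example) compares the truncation error of a degree-5 Chebyshev expansion with the magnitude of the first omitted coefficient:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Near-exact Chebyshev coefficients of exp on [-1, 1], obtained from a
# high-degree interpolant at Chebyshev points
cs = C.Chebyshev.interpolate(np.exp, 20).coef

n = 5
x = np.linspace(-1, 1, 1001)
trunc = C.chebval(x, cs[:n+1])          # truncate after degree n
err = np.max(np.abs(np.exp(x) - trunc))
# err is dominated by |cs[n+1]|, the first omitted coefficient
```

Since the coefficients of \(e^x\) decay rapidly, the ratio of the maximum truncation error to \(|c_{n+1}|\) comes out close to 1.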
By a rationale similar to that described in Part 2, factoring out \(x\) and expanding in terms of \(T_i(x^2)\) ensures the symmetry of the approximation and optimizes for relative error. Consistent with the accuracy required for 40-bit floats, Sinclair used a \(5\)th order Chebyshev polynomial, effectively leading to an \(11\)th order approximation of \(\sin(x)\). To obtain the numerical values for the coefficients, we note that, aside from some limiting cases, Sinclair’s floating point format is the same 40-bit (8-bit biased exponent, 32-bit mantissa with implicit leading 1) representation as MS-Basic, so we can extract the coefficients from the source code.
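To make “extract the coefficients” concrete, here is a minimal sketch (my addition; the helper name decode40 is made up) of decoding that 40-bit layout. The limiting cases mentioned above (zero, and Sinclair’s separate small-integer encoding) are ignored, and the bytes are taken exponent-first here, whereas the in-memory byte order of the actual interpreters may differ:

```python
def decode40(b):
    """Sketch: decode a 5-byte 40-bit float (8-bit biased exponent,
    32-bit mantissa with an implicit leading 1 stored in place of the
    sign bit)."""
    e = b[0]
    if e == 0:          # exponent byte 0 encodes zero; Sinclair's
        return 0.0      # small-integer encoding is not handled here
    sign = -1.0 if b[1] & 0x80 else 1.0
    # restore the implicit leading 1 in place of the sign bit
    m = ((b[1] | 0x80) << 24) | (b[2] << 16) | (b[3] << 8) | b[4]
    # mantissa m / 2**32 lies in [0.5, 1); exponent bias is 128
    return sign * m * 2.0**(e - 128 - 32)

decode40([0x81, 0x00, 0x00, 0x00, 0x00])   # -> 1.0
```

For instance, the bytes 0x82 0x49 0x0F 0xDA 0xA2 decode to \(\pi\) to about 32-bit precision (whether these exact bytes appear in either ROM is not checked here).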

Here we compare the relative errors of Sinclair Basic, Hart’s 9th and 11th order polynomials SIN 3341 and SIN 3342 (which are identical to Minimax polynomials obtained by Remez optimization), and truncated Chebyshev expansions. As expected, the Hart and Chebyshev approximations are very close to each other, but Sinclair’s result is worse than Hart’s 11th order approximation by more than an order of magnitude. Adding Hart’s 3342 and Sinclair’s sine approximation, both evaluated in 40-bit floats, to the comparison shows that the Sinclair approximation suffers from inaccurate coefficients more than roundoff alone would dictate.
Finally, it is worth noting that on page 115 of Hart’s book we find the identity \( \sin x = \sin \frac{x}{3} \left( 3 - 4 \sin^2 \frac{x}{3}\right) \), originally intended for range reduction, which can also be used to iteratively compute \(\sin x\) from arguments small enough that \(\sin x \approx x\) to machine precision.
def sini(n, x):
    # reduce the argument until sin(s) ≈ s, then apply the
    # triple-angle identity n times
    s = x / 3**n
    for _ in range(n):
        s *= 3 - 4*s**2
    return s

In the range \(0\le x \le \frac{\pi}{2}\), 18 iterations are sufficient to produce results accurate to double precision.
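A quick numerical sanity check of that claim (the routine is repeated here so the snippet is self-contained):

```python
import numpy as np

def sini(n, x):
    # triple-angle iteration from above
    s = x / 3**n
    for _ in range(n):
        s *= 3 - 4*s**2
    return s

x = np.linspace(0, np.pi/2, 1001)
err = np.max(np.abs(sini(18, x) - np.sin(x)))
# err comes out close to double-precision roundoff
```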
Implementation notes (see also Part 2): Chebyshev polynomials were evaluated using the well-known three-term recurrence relation (cf. Clenshaw’s algorithm). The Chebyshev coefficients for \(\sin(\pi x / 2) \approx x \sum c_n T_n(x^2)\) were approximated using the discrete transformation described in Example 1 of the Wikipedia article.
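A sketch of that discrete transform (my reconstruction, not the code used for the post): sample \(h(v) = \sin(\pi t/2)/t\) with \(v = 2t^2-1\) at Chebyshev nodes and project onto each \(T_j\). The resulting \(a_j\) satisfy \(h(v)\approx a_0/2 + \sum_{j\ge1} a_j T_j(v)\), so halving each \(a_j\) should reproduce the tabulated Sinclair/Clenshaw values:

```python
import numpy as np

def dct_cheb(f, n, samples=64):
    """Coefficients a_0..a_{n-1} with f(v) ≈ a_0/2 + sum_{j>=1} a_j T_j(v),
    computed by discrete projection at the Chebyshev nodes
    v_k = cos(pi*(k+1/2)/samples), where T_j(v_k) = cos(j*theta_k)."""
    theta = np.pi * (np.arange(samples) + 0.5) / samples
    fv = f(np.cos(theta))
    return np.array([2/samples * np.sum(fv * np.cos(j*theta))
                     for j in range(n)])

def h(v):
    # h(v) = sin(pi*t/2)/t with v = 2*t**2 - 1; the nodes never hit
    # v = -1, so t > 0 throughout
    t = np.sqrt((1 + v) / 2)
    return np.sin(np.pi*t/2) / t

a = dct_cheb(h, 6)
# a/2 should match 1.276278962, -0.142630785, 0.004559008, ...
```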
The Remez coefficients were computed using Sam Hocevar’s LolRemez program (lolremez -d -r '-1:1' "sin(x*pi/2)" "sin(x*pi/2)") and verified to agree with Hart’s approximations.
The Sinclair coefficients in text representation were originally found on a web page that seems to be no longer available; the only place they can still be found is the Agena programming language website. They were cross-checked by inserting them into the above-described Chebyshev routine and by reverse-engineering the computation from the source code. Their origin is most likely Clenshaw’s article Polynomial Approximations to Elementary Functions (note the different normalization of \(T_n\) for \(n\ge 1\)). What follows is the Python code used for the present post.
import numpy as np

def chebp(c, x, a, b):
    """chebyshev polynomial(x) from given coefficients c in [a,b]"""
    # map x from [a, b] onto [-1, 1]
    u = (2*x - (a+b)) / (b-a)
    t0 = np.ones_like(u)
    t1 = u
    f = c[0]*t0 + c[1]*t1
    for ci in c[2:]:
        t0, t1 = t1, 2*u*t1 - t0
        f += ci*t1
    return f

def sin2cheb(x):
    # Sinclair's coefficients; all but the first are doubled to match
    # the normalization used by chebp
    c = [ 1.276278962, ] + [
        ci * 2 for ci in ( -.142630785, 0.004559008,
                           -.000068294, 0.000000592, -.000000003, ) ]
    y = 2/np.pi*x * chebp(c, (2/np.pi*x)**2, 0, 1)
    return y
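As a cross-check (my addition, using NumPy’s own Chebyshev evaluator instead of chebp so the snippet stands alone), the relative error of the Sinclair polynomial can be measured directly:

```python
import numpy as np

# Sinclair's coefficients, doubled for j >= 1 as in sin2cheb above
c = [1.276278962] + [2*ci for ci in (-.142630785, 0.004559008,
                                     -.000068294, 0.000000592, -.000000003)]

x = np.linspace(1e-6, np.pi/2, 1001)
t = 2/np.pi * x
# chebval expects its argument in [-1, 1]; map t**2 from [0, 1] accordingly
approx = t * np.polynomial.chebyshev.chebval(2*t**2 - 1, c)
relerr = np.max(np.abs(approx/np.sin(x) - 1))
```

The maximum relative error lands in the nanoscale range consistent with coefficients rounded to nine decimal places, well short of what the 11th order Minimax polynomial achieves with exact coefficients.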
Further reading: There is a nice discussion of the history of early microcomputer floating point implementations on the Retrocomputing Stack Exchange site.