How Does a Computer Calculate sin(x)?

Author

Yaroslav Vakula

1 Introduction

Your calculator seems to know the value of \(\sin(x)\) for any input. But have you ever wondered how exactly it calculates it? A computer, in the traditional sense, can't think geometrically: it doesn't understand angles, shapes, or trigonometry. All it can do is add, subtract, and multiply numbers, and it does this very well and very quickly.

This is where polynomials come in. Since a polynomial uses only these basic operations, any processor can evaluate it easily. Thus, if \(\sin(x)\) can be approximated by a polynomial, its calculation becomes simple.

Taylor series will help us with this. A Taylor series represents a sufficiently well-behaved function as an infinite sum of polynomial terms. A particularly convenient special case is the Maclaurin series, which is constructed around \(x = 0\) and works especially well when \(x\) stays close to this point.

This paper examines how the Maclaurin series for \(\sin(x)\) is constructed, how accurate the approximations are, and how many terms are actually needed to obtain a good answer.

2 Theoretical Foundations

2.1 Taylor Series

A Taylor series takes a smooth function and expresses it as an infinite sum, where each term is built from a derivative of the function at some chosen point \(a\).

The general formula is shown in (1):

\[ f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n \tag{1}\]

Here \(f^{(n)}(a)\) is the \(n\)-th derivative of \(f\), evaluated at point \(a\) [1].

Each new term in such a sum refines the approximation. If we take enough terms, we can get as close to the true value as we like. And since factorials grow very quickly, the terms eventually shrink rapidly as \(n\) increases, which is why in practice only a handful of terms are needed.
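To see (1) in action, consider expanding \(f(x) = e^x\) around \(a = 0\): every derivative of \(e^x\) equals \(e^x\), so \(f^{(n)}(0) = 1\) for all \(n\). A minimal Python sketch of the resulting partial sums (the function name is my own, for illustration):

```python
import math

def taylor_exp(x, n_terms):
    """Partial sum of the Taylor series of e^x around a = 0.

    Every derivative of e^x at 0 equals 1, so each term is x^n / n!.
    """
    return sum(x**n / math.factorial(n) for n in range(n_terms))

# Each extra term refines the approximation of e^1 = 2.71828...
for n in range(1, 8):
    print(n, taylor_exp(1.0, n))
```

Running this shows the partial sums 1, 2, 2.5, 2.667, ... creeping up toward \(e\), exactly as the paragraph above describes.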

2.2 Maclaurin Series

If we center the expansion at \(a = 0\), we obtain a simpler and cleaner version of the series, called the Maclaurin series (2); it is this form that computing devices (microcontrollers, processors, etc.) typically work with:

\[ f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} \, x^n \tag{2}\]

Since each derivative is evaluated at \(0\), the calculation is significantly simplified. The main drawback of this approach is that the accuracy of the approximation decreases as \(|x|\) increases — that is, the further the desired value is from zero, the more terms you need to maintain the desired accuracy.

2.3 Expansion of \(\sin(x)\)

\(\sin(x)\) has a pleasant structural property that makes its series particularly clean. Its derivatives cycle through only four forms (\(\sin\), \(\cos\), \(-\sin\), \(-\cos\)), and at \(x = 0\) each sine-valued derivative vanishes (\(\sin(0) = 0\)), while the cosine-valued derivatives equal \(\pm 1\). This leaves only odd powers of \(x\).
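Written out explicitly, the cycle of derivatives at \(x = 0\) is:

\[
\begin{aligned}
f(x) &= \sin x, & f(0) &= 0,\\
f'(x) &= \cos x, & f'(0) &= 1,\\
f''(x) &= -\sin x, & f''(0) &= 0,\\
f'''(x) &= -\cos x, & f'''(0) &= -1,\\
f^{(4)}(x) &= \sin x, & f^{(4)}(0) &= 0,
\end{aligned}
\]

after which the pattern \(0, 1, 0, -1\) repeats indefinitely.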

The result is the Maclaurin expansion [1]:

\[ \sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \tag{3}\]

The signs alternate and the factorials in the denominators increase rapidly — meaning that each subsequent term is much smaller than the previous one, especially near zero.
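A direct translation of (3) into code might look like the following sketch. The update inside the loop turns each term into the next one, which avoids recomputing powers and factorials from scratch (names are illustrative):

```python
import math

def sin_taylor(x, n_terms=10):
    """Approximate sin(x) by summing the first n_terms of series (3)."""
    term = x            # first term: x^1 / 1!
    total = term
    for k in range(1, n_terms):
        # turn +-x^(2k-1)/(2k-1)! into the next term -+x^(2k+1)/(2k+1)!
        term *= -x * x / ((2 * k) * (2 * k + 1))
        total += term
    return total

print(sin_taylor(0.5), math.sin(0.5))   # both ~ 0.4794255
```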

Of course, no computer can actually sum infinitely many terms. At some point the calculation stops, and whatever is left over is the remainder. The remainder tells us how large the error of the approximation is, that is, how far the truncated sum deviates from the true value of the function.

3 Approximation Error

If \(f(x)\) is the real function and \(T_n(x)\) is a Taylor polynomial of degree \(n\), then the error is simply the difference between them:

\[ R_n(x) = f(x) - T_n(x) \tag{4}\]

To bound this error without adding an infinite number of terms, we use the Lagrange form of the remainder [2]. For the Maclaurin series (where \(a = 0\)) it looks like this:

\[ R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} x^{n+1} \tag{5}\]

where \(\xi\) is some real number between \(0\) and \(x\).

This is particularly useful because all derivatives of \(\sin(x)\) are either \(\pm\sin(x)\) or \(\pm\cos(x)\), both of which are bounded in absolute value by \(1\). So if \(M\) denotes a bound on \(|f^{(n+1)}|\), for the sine we may simply take \(M = 1\) [2]:

\[ |f^{(n+1)}(\xi)| \leq M = 1 \tag{6}\]

This means that the absolute error satisfies:

\[ |R_n(x)| \le \frac{M \, |x|^{n+1}}{(n+1)!} = \frac{|x|^{n+1}}{(n+1)!} \tag{7}\]

Two things immediately follow:

  • When \(x\) is close to zero, \(x^{n+1}\) is small — hence the error is small.

  • As \(n\) increases, the factorial \((n+1)!\) increases sharply, which reduces the error very quickly.
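Both observations are easy to check numerically. The sketch below compares the actual truncation error of a degree-\(n\) partial sum of (3) against the Lagrange bound from (7); the helper function is my own, for illustration:

```python
import math

def sin_partial(x, degree):
    """Partial sum of series (3) up to the term of the given odd degree."""
    total, term = x, x
    for k in range(1, (degree - 1) // 2 + 1):
        term *= -x * x / ((2 * k) * (2 * k + 1))
        total += term
    return total

x = 2.0
for n in (3, 5, 7, 9):
    actual = abs(math.sin(x) - sin_partial(x, n))
    bound = abs(x) ** (n + 1) / math.factorial(n + 1)
    print(f"n={n}: actual error {actual:.2e}, Lagrange bound {bound:.2e}")
```

The actual error always sits below the bound, and both shrink fast as \(n\) grows.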

4 Visualization

4.1 \(\sin(x)\) and Taylor Polynomials

Figure 1 shows \(\sin(x)\) as a dashed black line alongside five Taylor polynomial approximations of degrees \(1\) through \(13\). Degree 1 is simply the tangent line \(y = x\). Each higher degree adds another term from (3), and the approximation improves significantly over a larger range.

Figure 1: \(\sin(x)\) and Taylor polynomial approximations of degrees 1 to 13

Close to \(x = 0\), the degree-\(13\) polynomial is almost indistinguishable from the true sine curve; it only starts to deviate at the edges of the plot. Lower-degree polynomials break away from the curve much earlier.
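A figure like this is straightforward to reproduce. One possible sketch using NumPy and Matplotlib (the specific degrees and styling are my own choices to mirror the description above):

```python
import numpy as np
import matplotlib.pyplot as plt
from math import factorial

def sin_taylor(x, degree):
    """Partial sum of series (3) up to the given odd degree."""
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
               for k in range((degree - 1) // 2 + 1))

x = np.linspace(-2 * np.pi, 2 * np.pi, 500)
plt.plot(x, np.sin(x), "k--", label="sin(x)")
for degree in (1, 3, 5, 9, 13):
    plt.plot(x, sin_taylor(x, degree), label=f"degree {degree}")
plt.ylim(-2, 2)
plt.legend()
plt.show()
```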

5 Convergence Analysis

The Maclaurin series for \(\sin(x)\) converges everywhere — for any value you assign to \(x\). But how quickly it converges is another matter, and depends strongly on how far \(x\) is from zero.

Near zero, successive terms decrease rapidly, and only a few are needed. Further away from zero, the initial terms are large and tend to partially cancel out, so the partial sums take much longer to settle close to the true value.

Table 1 shows the minimum degree of the polynomial required to achieve three different accuracy goals at six control points.

Table 1: Minimum polynomial degree \(n\) required to approximate \(\sin(x)\) within the given accuracy

| \(x\)    | Accuracy \(10^{-3}\) | Accuracy \(10^{-6}\) | Accuracy \(10^{-12}\) |
|----------|----------------------|----------------------|-----------------------|
| \(0.5\)  | \(n = 3\)            | \(n = 7\)            | \(n = 11\)            |
| \(1.0\)  | \(n = 5\)            | \(n = 9\)            | \(n = 13\)            |
| \(2.0\)  | \(n = 9\)            | \(n = 13\)           | \(n = 19\)            |
| \(\pi\)  | \(n = 11\)           | \(n = 15\)           | \(n = 23\)            |
| \(2\pi\) | \(n = 21\)           | \(n = 25\)           | \(n = 33\)            |
| \(3\pi\) | \(n = 29\)           | \(n = 35\)           | \(n = 43\)            |
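Numbers like those in Table 1 can be generated by comparing partial sums of (3) against math.sin; a sketch of that approach follows (the helper is my own, and the exact thresholds may differ slightly from the table depending on how the error is measured):

```python
import math

def min_degree(x, tol, max_degree=99):
    """Smallest odd degree n whose partial sum of (3) is within tol of sin(x)."""
    total, term, n = x, x, 1
    while abs(math.sin(x) - total) > tol and n < max_degree:
        k = (n + 1) // 2
        term *= -x * x / ((2 * k) * (2 * k + 1))
        total += term
        n += 2
    return n

for x in (0.5, 1.0, 2.0, math.pi, 2 * math.pi, 3 * math.pi):
    print(x, [min_degree(x, tol) for tol in (1e-3, 1e-6, 1e-12)])
```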

To understand why convergence slows down with distance from zero, consider how successive terms relate to one another. Ignoring the alternating sign, the general term of the series has magnitude:

\[ a_k = \frac{x^{2k+1}}{(2k+1)!} \tag{8}\]

Taking the ratio of consecutive terms:

\[ \frac{a_{k+1}}{a_k} = \frac{x^{2k+3}}{(2k+3)!} \cdot \frac{(2k+1)!}{x^{2k+1}} = \frac{x^2}{(2k+3)(2k+2)} \tag{9}\]

Substituting \(n = 2k+1\) for the degree of the current term, the denominator becomes \((n+2)(n+1)\), which for large \(n\) is well approximated by \(n^2\). This gives:

\[ \frac{a_{k+1}}{a_k} \approx \frac{x^2}{n^2} \tag{10}\]

Each new term is therefore smaller than the previous one by a factor of roughly \(\frac{x^2}{n^2}\). When \(x\) is small, this ratio is tiny and the series collapses quickly. When \(x\) is large, the ratio stays close to \(1\) for many terms before shrinking — which is precisely why more terms are needed far from zero.
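For a concrete feel for how the exact ratio (9) compares with the estimate (10), here is a small numerical check (the sample values of \(x\) and \(k\) are chosen arbitrarily for illustration):

```python
import math

# Compare the exact term ratio from (9) with the x^2/n^2 estimate from (10).
for x in (0.5, math.pi, 2 * math.pi):
    for k in (1, 5, 10):
        n = 2 * k + 1                      # degree of the current term
        exact = x * x / ((2 * k + 3) * (2 * k + 2))
        estimate = x * x / (n * n)
        print(f"x={x:.2f}, n={n:2d}: exact {exact:.4f}, estimate {estimate:.4f}")
```

For small \(n\) the estimate is rough, but it closes in on the exact ratio as \(n\) grows, which is all that (10) claims.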

In real processors, the Maclaurin series is never applied directly to large inputs. Instead, a range reduction is first applied: by using the identity \(\sin(x + 2\pi) = \sin(x)\), any input is mapped into a small interval near zero. From there, just a few terms are enough to reach full floating-point precision — keeping the computation both fast and accurate.
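A simplified sketch of this idea follows. Note that this is only a demonstration under naive assumptions: production math libraries use far more careful reduction schemes that keep the reduced argument accurate even for enormous inputs, whereas here math.remainder is enough to map \(x\) into \([-\pi, \pi]\):

```python
import math

def sin_reduced(x, n_terms=12):
    """Range-reduce x into [-pi, pi] via periodicity, then sum series (3)."""
    # Naive reduction for demo purposes; real implementations work much
    # harder to avoid losing precision when x is huge.
    r = math.remainder(x, 2 * math.pi)   # r now lies in [-pi, pi]
    term, total = r, r
    for k in range(1, n_terms):
        term *= -r * r / ((2 * k) * (2 * k + 1))
        total += term
    return total

print(sin_reduced(100.0), math.sin(100.0))   # both ~ -0.506366
```

With the argument pulled into \([-\pi, \pi]\), a dozen terms already land very close to full double precision, in line with Table 1.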

6 Conclusion

So, how does a computer calculate \(\sin(x)\)? The answer is the Maclaurin series (3) — an infinite sum where each term is simply an odd power of \(x\) divided by a factorial. Nothing more exotic than that.

Near \(x = 0\), the series converges quickly and a small number of terms is all you need. Further out, more terms are required — which is exactly why real implementations lean on range reduction, pulling every input back toward zero before the series is ever evaluated.

The accuracy gain isn't linear, either. Each additional term shrinks the error by roughly the ratio in (10), \(x^2/n^2\). Near zero that ratio is tiny, so convergence is rapid; farther out it stays close to \(1\) for many terms, so things slow down. Table 1 captures this behavior concretely.

Taylor and Maclaurin series show up across physics, engineering, signal processing, machine learning — really, anywhere that complex functions need to be computed efficiently. What they reveal is one of the more quietly powerful ideas in numerical mathematics: that even the most intricate functions can be pinned down using nothing but addition and multiplication. That’s a genuinely elegant result.

7 References

[1] J. Stewart, Calculus: Early Transcendentals, 6th ed. Belmont, CA: Thomson Brooks/Cole, 2008.

[2] Wikipedia contributors, "Taylor's theorem," Wikipedia, The Free Encyclopedia, 2026. Accessed: Apr. 18, 2026. [Online]. Available: https://en.wikipedia.org/wiki/Taylor%27s_theorem