How Does a Computer Calculate sin(x)?

Yaroslav Vakula

The Problem

Your calculator always returns a value for \(\sin(x)\).

But how does it actually compute it?

  • A computer cannot reason geometrically
  • It only knows how to add, subtract, and multiply
  • It needs an arithmetic recipe — not a geometric one

The Key Idea: Polynomials

A polynomial uses only addition, subtraction, and multiplication:

\[p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n\]

Any processor can evaluate a polynomial efficiently.

So if \(\sin(x)\) can be written as a polynomial — the problem is solved.
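The "evaluate a polynomial efficiently" step usually means Horner's rule, which needs just one multiplication and one addition per coefficient. A minimal sketch (the function name `horner` is ours, for illustration):

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n with Horner's rule.

    coeffs is [a_0, a_1, ..., a_n]. Only addition and
    multiplication are used: one of each per coefficient.
    """
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# p(x) = 1 + 2x + 3x^2 at x = 2  ->  1 + 4 + 12 = 17
print(horner([1.0, 2.0, 3.0], 2.0))  # 17.0
```

Rewriting \(p(x) = a_0 + x(a_1 + x(a_2 + \cdots))\) is exactly the shape a processor likes: a tight multiply-add loop.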

Taylor Series

Every sufficiently well-behaved (analytic) function — and \(\sin(x)\) is one — can be written as an infinite sum of polynomial terms [1]:

\[f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n\]

  • \(f^{(n)}(a)\) — the \(n\)-th derivative of \(f\) at point \(a\)
  • Each new term refines the approximation
  • Factorials grow fast \(\rightarrow\) terms shrink fast \(\rightarrow\) few terms needed in practice

Maclaurin Series

Setting \(a = 0\) gives the Maclaurin series — the form used in most computing environments:

\[f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} \, x^n\]

Advantages:

  • All derivatives evaluated at \(0\) \(\rightarrow\) simpler coefficients
  • Very accurate when \(|x|\) is small

Drawback: accuracy decreases as \(|x|\) grows

Expansion of \(\sin(x)\)

The derivatives of \(\sin(x)\) cycle through four forms: \[\sin \to \cos \to -\sin \to -\cos \to \cdots\]

At \(x = 0\), the sine-based terms vanish \((\sin(0) = 0)\) and the cosine-based terms equal \(\pm 1\).

Only odd powers survive:

\[\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots\]
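Summing this series is a few lines of code. A sketch (the name `taylor_sin` is ours): each term is built from the previous one via \(t_{k+1} = t_k \cdot \frac{-x^2}{(2k+2)(2k+3)}\), so no factorials or powers are computed explicitly.

```python
import math

def taylor_sin(x, terms=10):
    """Partial sum of the Maclaurin series for sin(x).

    term starts at x (the degree-1 term); each step multiplies by
    -x^2 / ((2k)(2k+1)) to produce the next odd-degree term.
    """
    term = x
    total = x
    for k in range(1, terms):
        term *= -x * x / ((2 * k) * (2 * k + 1))
        total += term
    return total

print(taylor_sin(1.0))   # close to...
print(math.sin(1.0))     # 0.8414709848078965
```

With ten terms this matches `math.sin` to well beyond single precision for \(|x| \le 1\).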

Approximation Error

In practice, we truncate the series after finitely many terms. The error of the degree-\(n\) partial sum \(T_n\) is the remainder:

\[R_n(x) = f(x) - T_n(x)\]

The Lagrange form of the remainder [2] is

\[R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} x^{n+1}, \quad \xi \in (0,\, x)\]

For \(\sin(x)\), every derivative is \(\pm\sin\) or \(\pm\cos\), so \(|f^{(n+1)}(\xi)| \le 1\) and

\[|R_n(x)| \le \frac{|x|^{n+1}}{(n+1)!}\]

What the Error Bound Tells Us

\[|R_n(x)| \le \frac{|x|^{n+1}}{(n+1)!}\]

Near zero (\(|x|\) small): \(x^{n+1}\) is tiny \(\rightarrow\) error is tiny

Adding more terms (larger \(n\)): \((n+1)!\) grows explosively \(\rightarrow\) error shrinks fast

Both effects work in our favour near \(x = 0\).
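Both effects are easy to check numerically. A quick sketch (the helper name `bound` is ours):

```python
import math

def bound(x, n):
    """Lagrange error bound |x|^(n+1) / (n+1)! for sin's Taylor polynomial."""
    return abs(x) ** (n + 1) / math.factorial(n + 1)

# Shrinking |x| shrinks the bound (n fixed at 5)...
print(bound(0.5, 5), bound(1.0, 5))
# ...and raising n shrinks it too (x fixed at 1.0).
print(bound(1.0, 5), bound(1.0, 9))
```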

Visualization

Figure 1: sin(x) and Taylor polynomial approximations for degrees 1 to 13

How Many Terms Are Needed?

Table 1: Minimum polynomial degree \(n\) to approximate \(\sin(x)\) within a given accuracy

  \(x\)        \(10^{-3}\)    \(10^{-6}\)    \(10^{-12}\)
  0.5          \(n = 3\)      \(n = 7\)      \(n = 11\)
  1.0          \(n = 5\)      \(n = 9\)      \(n = 13\)
  2.0          \(n = 9\)      \(n = 13\)     \(n = 19\)
  \(\pi\)      \(n = 11\)     \(n = 15\)     \(n = 23\)
  \(2\pi\)     \(n = 21\)     \(n = 25\)     \(n = 33\)
  \(3\pi\)     \(n = 29\)     \(n = 35\)     \(n = 43\)

The required degree grows with \(|x|\).
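Degrees like these can be derived from the Lagrange bound. Since only odd powers appear in the series, the first omitted nonzero term after degree \(n\) has degree \(n + 2\), so the effective bound is \(|x|^{n+2}/(n+2)!\). A sketch under that assumption (the helper name `min_degree` is ours; it is not necessarily how Table 1 was generated):

```python
import math

def min_degree(x, eps):
    """Smallest odd degree n with |x|^(n+2) / (n+2)! <= eps.

    For sin, the degree-n polynomial (n odd) omits nothing until
    degree n + 2, so that term drives the Lagrange bound.
    """
    n = 1
    while abs(x) ** (n + 2) / math.factorial(n + 2) > eps:
        n += 2
    return n

print(min_degree(1.0, 1e-3))        # 5
print(min_degree(1.0, 1e-6))        # 9
print(min_degree(math.pi, 1e-3))    # 11
```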

Why Convergence Slows Down

The magnitude of the ratio of consecutive nonzero terms explains the pattern:

\[\left|\frac{a_{k+1}}{a_k}\right| = \frac{x^2}{(2k+2)(2k+3)}\]

  • Small \(x\): the ratio is \(\ll 1\) from the start \(\rightarrow\) the series collapses in a few terms
  • Large \(x\): the ratio stays near or above \(1\) until the degree \(2k\) exceeds \(|x|\) \(\rightarrow\) slow convergence

This is why real implementations apply range reduction first, using periodicity to fold the argument into a small interval around zero:

\[\sin(x + 2\pi) = \sin(x)\]
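The two-step scheme — reduce, then sum the series — can be sketched as follows (the name `reduced_sin` is ours; production math libraries use more elaborate reduction for huge arguments):

```python
import math

def reduced_sin(x, terms=12):
    """Range-reduce x, then sum the Maclaurin series for sin.

    math.remainder(x, 2*pi) is the IEEE remainder, which lands
    in [-pi, pi] -- where few series terms suffice.
    """
    r = math.remainder(x, 2 * math.pi)
    term = r
    total = r
    for k in range(1, terms):
        term *= -r * r / ((2 * k) * (2 * k + 1))
        total += term
    return total

print(reduced_sin(100.0), math.sin(100.0))  # both about -0.5064
```

Without the reduction, summing the series directly at \(x = 100\) would need dozens of terms and suffer catastrophic cancellation; after reduction, a short fixed-degree polynomial covers every input.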

Conclusion

How does a computer calculate \(\sin(x)\)?

\[\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots\]

  • Only odd powers of \(x\), divided by factorials — nothing exotic
  • Highly accurate near \(x = 0\) with very few terms
  • Range reduction extends this to all inputs

The big idea: even the most complex functions can be pinned down using nothing but addition and multiplication.

References

[1] J. Stewart, Calculus: Early Transcendentals, 6th ed. Belmont, CA: Thomson Brooks/Cole, 2008.

[2] Wikipedia contributors, “Taylor’s theorem,” Wikipedia, The Free Encyclopedia. Accessed: Apr. 18, 2026. [Online]. Available: https://en.wikipedia.org/wiki/Taylor%27s_theorem