# A look at the finite difference method
The finite difference (FD) method is a way to approximate derivatives. In fact, in many introductory calculus classes it is introduced via the limit definition of the derivative,
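$$
f'(x_o) = \lim_{h \rightarrow 0} \frac{f(x_o+h) - f(x_o)}{h}
$$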
However, computers cannot evaluate the limit \(h \rightarrow 0\). In other words, a computer can never actually reach 0; it can only work with a step size that is very small yet still finite. So what do we do? Well, we can take a look at the Taylor series expansion of \(f(x_o+h)\),
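$$
f(x_o+h) = f(x_o) + f'(x_o)\,h + \frac{f''(x_o)}{2!}h^2 + \cdots + \frac{f^{(n)}(x_o)}{n!}h^n + R_n
$$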
Here, \(n\) is the total number of derivatives we include in the expansion, and the superscript in parentheses, \(f^{(n)}\), denotes the order of the derivative of the function with respect to \(x\). \(R_n\) is the remainder term: the difference between the true function and the truncated expansion. It plays the same role as Big-O notation in describing the local truncation error, and we will use Big-O notation from here on. If we then truncate the expansion to the first-derivative term we get,
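$$
f(x_o+h) = f(x_o) + f'(x_o)\,h + O(h^2)
$$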
Re-arranging the equation then gives,
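$$
f'(x_o) = \frac{f(x_o+h) - f(x_o)}{h} + O(h)
$$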
This is similar to the limit definition above, just without the limit. The equation is an approximation of the first derivative truncated to the first term, and the error scales linearly with the step size, as denoted by \(O(h)\).
## Difference between forward, backward, and central FD approaches
Here, I want to explore the slight differences between the forward, backward, and central finite difference methods.
### Forward difference
This is actually the same equation as what we were looking at previously,
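$$
f'(x_o) = \frac{f(x_o+h) - f(x_o)}{h} + O(h)
$$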
With linearly scaling error as we have already seen.
### Backward difference
The backward FD equation is similar to the forward difference,
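$$
f'(x_o) = \frac{f(x_o) - f(x_o-h)}{h} + O(h)
$$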
The difference between the two comes from understanding the Taylor series expansion of \(f(x_o-h)\) vs. \(f(x_o+h)\). If we look at the general form of a Taylor series about a point \(a\),
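$$
f(x) = \sum_{k=0}^{\infty}\frac{f^{(k)}(a)}{k!}(x-a)^k = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots
$$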
Setting \(a = x_o\) and \(x = x_o - h\) results in the following equation for the backward difference, up to second order,
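$$
f(x_o-h) = f(x_o) + f'(x_o)\big[(x_o-h) - x_o\big] + \frac{f''(x_o)}{2!}\big[(x_o-h) - x_o\big]^2 + O(h^3)
$$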
We see that the \(x_o\) terms on the right-hand side cancel each other out and after reducing terms we get,
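$$
f(x_o-h) = f(x_o) - f'(x_o)\,h + \frac{f''(x_o)}{2!}h^2 + O(h^3)
$$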
We can see that any linear term now carries a minus sign in the expression while all quadratic terms remain positive. So what's the point of all this? Well, we are about to see.
### Central difference
The central finite difference method is a combination of the forward and backward difference methods. Without the specifics on how we get the equations, the central finite difference equation is,
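$$
f'(x_o) = \frac{f(x_o+h) - f(x_o-h)}{2h} + O(h^2)
$$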
And that is the equation. But what is the big deal? Well, if you look at the last term in the equation, you will notice that the error now depends quadratically on the step size, \(O(h^2)\), thereby increasing the accuracy of the approximated derivative that we calculate. Now I will show you how this comes to be. The first things to consider are the Taylor series expansions for \(f(x_o+h)\) and \(f(x_o-h)\),
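$$
f(x_o+h) = f(x_o) + f'(x_o)\,h + \frac{f''(x_o)}{2!}h^2 + \frac{f'''(x_o)}{3!}h^3 + O(h^4)
$$

$$
f(x_o-h) = f(x_o) - f'(x_o)\,h + \frac{f''(x_o)}{2!}h^2 - \frac{f'''(x_o)}{3!}h^3 + O(h^4)
$$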
And taking the difference of the equations gives,
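$$
f(x_o+h) - f(x_o-h) = 2f'(x_o)\,h + \frac{2f'''(x_o)}{3!}h^3 + O(h^5)
$$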
What we now see is that the even terms cancel out and the odd terms remain. As such, in order to have the equation depend only on the first derivative, we must truncate at the second order, giving,
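$$
f(x_o+h) - f(x_o-h) = 2f'(x_o)\,h + O(h^3)
$$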
With some re-arranging we get the central difference equation above. So, by including extra points in the finite difference equation, we can decrease the error in our approximation: with 2 points we get an error of \(O(h^2)\), with 4 points \(O(h^4)\), and so on. Just as a proof, I will show what happens when we want to approximate the derivative with a five-point stencil, which has the equation
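$$
f'(x_o) = \frac{f(x_o-2h) - 8f(x_o-h) + 8f(x_o+h) - f(x_o+2h)}{12h} + O(h^4)
$$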
We would have the following equations for the Taylor series expansions,
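$$
f(x_o \pm h) = f(x_o) \pm f'(x_o)\,h + \frac{f''(x_o)}{2!}h^2 \pm \frac{f'''(x_o)}{3!}h^3 + \frac{f^{(4)}(x_o)}{4!}h^4 + O(h^5)
$$

$$
f(x_o \pm 2h) = f(x_o) \pm 2f'(x_o)\,h + \frac{4f''(x_o)}{2!}h^2 \pm \frac{8f'''(x_o)}{3!}h^3 + \frac{16f^{(4)}(x_o)}{4!}h^4 + O(h^5)
$$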
We must eliminate the third (\(h^3\)) and fourth (\(h^4\)) order terms from the expansion using,
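$$
f(x_o-2h) - 8f(x_o-h) + 8f(x_o+h) - f(x_o+2h) = 12f'(x_o)\,h + O(h^5)
$$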
I know this is the right combination based on the pre-factors in the known equation for the five-point stencil. If you work out the algebra using the equation above, you will find that all third- and fourth-order terms cancel out and you recover the five-point stencil equation. So you may be asking yourself why I even mentioned any of this, since we already have the equations for multi-point finite difference methods. Well, I will now explain.
## Computing the FD coefficients
So, if you are interested in finding the coefficients for the FD method using more than two points, such as the five-point stencil, you can look here for the coefficients up to eight points. But something you might notice, as I did, is that the step sizes MUST be consecutive, i.e. you can have \(f(x_o\pm h)\), \(f(x_o\pm 2h)\), \(f(x_o\pm 3h)\), etc. However, if you have \(f(x_o\pm h)\) and \(f(x_o\pm 3h)\), you can no longer use the five-point stencil, as the third- and fourth-order terms will not cancel out in the Taylor series expansion. I am a big believer in seeing is believing, so I will show you. Let's construct the Taylor series expansions for \(f(x_o\pm h)\) and \(f(x_o\pm 3h)\),
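$$
f(x_o \pm h) = f(x_o) \pm f'(x_o)\,h + \frac{f''(x_o)}{2!}h^2 \pm \frac{f'''(x_o)}{3!}h^3 + \frac{f^{(4)}(x_o)}{4!}h^4 + O(h^5)
$$

$$
f(x_o \pm 3h) = f(x_o) \pm 3f'(x_o)\,h + \frac{9f''(x_o)}{2!}h^2 \pm \frac{27f'''(x_o)}{3!}h^3 + \frac{81f^{(4)}(x_o)}{4!}h^4 + O(h^5)
$$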
If we combine these using the five-point stencil pre-factors we get,
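$$
f(x_o-3h) - 8f(x_o-h) + 8f(x_o+h) - f(x_o+3h) = 10f'(x_o)\,h - \frac{19}{3}f'''(x_o)\,h^3 + O(h^5)
$$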
The even terms disappear due to symmetry. However, the odd terms are where things get tricky: they carry a sign change, and we see that the third-order terms remain in the equation. This means that we cannot approximate the derivative with those pre-factor coefficients anymore; we have to re-calculate them for this specific use case. This is a very simple case, and the pre-factors are 27 and -1 for \(f(x_o\pm h)\) and \(f(x_o\pm 3h)\), respectively. I calculated these by looking at the non-common pre-factors of the third-order terms (1 and 27, ignoring the common 1/6) and choosing weights that make them cancel: \(27 \cdot 1 - 1 \cdot 27 = 0\). The equation to approximate the first derivative is then,
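$$
f'(x_o) = \frac{27\big[f(x_o+h) - f(x_o-h)\big] - \big[f(x_o+3h) - f(x_o-3h)\big]}{48h} + O(h^4)
$$

And just to double-check numerically, here is a quick sketch (the function \(\sin\) and the point \(x_o = 0.7\) are arbitrary choices of mine) showing that the error of this stencil does shrink as \(h^4\):

```python
import numpy as np

# Apply the 27/-1 stencil to f(x) = sin(x) at x_o = 0.7,
# where the exact derivative is cos(0.7)
f, x0, h = np.sin, 0.7, 1e-2
approx = (27 * (f(x0 + h) - f(x0 - h))
          - (f(x0 + 3 * h) - f(x0 - 3 * h))) / (48 * h)
print(abs(approx - np.cos(x0)))  # tiny error; scales as h**4 (try halving h)
```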
Once you go to evaluating the coefficients for a six-point FD method and beyond, the situation becomes more complicated and requires the help of a computer program. So I made a small Python script, which I implemented in my library VibrAv, and will share it below.
```python
import math

import numpy as np
from scipy.optimize import root

def _get_prefac(p, d):
    # Coefficient of the first-derivative term, 2*sum(p_i*d_i), in the
    # antisymmetric combination sum_i p_i*[f(x + d_i*h) - f(x - d_i*h)]
    return np.sum([2 * x * y for x, y in zip(p, d)])

def _determine_prefactors(p, d):
    # One equation per odd Taylor order k = 1, 3, ..., 2n-1
    # (the even orders cancel by symmetry)
    n = len(p)
    eqs = [2 / math.factorial(k) * np.sum(p * d**k)
           for k in range(1, 2 * n, 2)]
    # The k = 1 entry is the prefactor itself, so subtract it; the solver
    # then only needs to zero out the higher odd orders
    eqs[0] -= _get_prefac(p, d)
    return eqs

def _get_arb_coeffs(steps):
    # Solve for the raw weights, then normalize by the prefactor so the
    # stencil reproduces f'(x) to leading order
    x0 = np.array([0.5] * steps.shape[0])
    res = root(_determine_prefactors, x0=x0, args=(steps,))
    prefac = _get_prefac(res.x, steps)
    return res.x / prefac
```
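As a quick sanity check, here is a usage sketch (assuming the solver converges for this small system): the steps \(\pm h\) and \(\pm 3h\) from before should reproduce the \(27/48\) and \(-1/48\) coefficients we derived by hand,

```python
import numpy as np

# Steps in units of h; from the worked example we expect 27/48 and -1/48
coeffs = _get_arb_coeffs(np.array([1.0, 3.0]))
print(coeffs)  # expected: approximately [0.5625, -0.0208]
```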
So let's break it down. The magic happens in the `_determine_prefactors` function above: for each odd order in the Taylor series expansions, we add up the contributions from every step. The overall equation is
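$$
\sum_i p_i\big[f(x_o + d_i h) - f(x_o - d_i h)\big] = \sum_{k\,\mathrm{odd}} \frac{2h^k}{k!}\left(\sum_i p_i\,d_i^k\right) f^{(k)}(x_o)
$$

where the \(d_i\) are the steps in units of \(h\) and the \(p_i\) are the raw weights; the even orders cancel by symmetry. Each coefficient \(\frac{2}{k!}\sum_i p_i d_i^k\) is one entry of `eqs`: the \(k=1\) entry defines the prefactor we normalize by, and the solver drives every higher odd order to zero, exactly as in the hand-worked \(\pm h\), \(\pm 3h\) case.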