
Secant Method

The Secant method is a finite-difference approximation of the Newton-Raphson method. It doesn't require derivatives and is therefore useful when the derivative is unavailable or expensive to compute.

This method requires two initial guesses, but these guesses need not bracket a solution, so it belongs to the open methods. Like the bisection and regula-falsi methods, it uses the two previous values to calculate the next value at each iteration.

The formula for the secant method can simply be obtained by replacing the derivative f'(x) with its finite-difference counterpart. We know, the formula for the Newton-Raphson method is:

\begin{equation} x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \end{equation}

Replacing f'(x_n) with the finite difference approximation \frac{f(x_n)-f(x_{n-1})}{x_n-x_{n-1}} and simplifying, we get the formula for the secant method as:

\fcolorbox{#888}{#efefef}{$x_{n+1} = \frac{x_{n-1}f(x_n)-x_nf(x_{n-1})}{f(x_n)-f(x_{n-1})};\ n\ge 1$}
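The simplification step can be written out in full by putting everything over the common denominator:

\begin{equation}
\begin{aligned}
x_{n+1} &= x_n - f(x_n)\,\frac{x_n-x_{n-1}}{f(x_n)-f(x_{n-1})} \\
        &= \frac{x_n\bigl(f(x_n)-f(x_{n-1})\bigr) - f(x_n)\bigl(x_n-x_{n-1}\bigr)}{f(x_n)-f(x_{n-1})} \\
        &= \frac{x_{n-1}f(x_n)-x_nf(x_{n-1})}{f(x_n)-f(x_{n-1})}
\end{aligned}
\end{equation}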

You might have noticed that the above formula looks quite similar to the Regula-Falsi method. In fact, the formulae are the same. The difference is in the bracketing interval. The Regula-Falsi method chooses the next value of x such that the new interval will also bracket the solution. But for the secant method, we needn't check the bracketing criteria.

Since we don't check the bracketing criterion, the secant method is not guaranteed to converge, but if it does converge, the order of convergence is superlinear.

Algorithm

  1. Start
  2. Declare variables x_0,\ x_1,\ x_2,\ f_{x_0},\ f_{x_1} and e.
  3. Read values for initial guesses x_0, x_1 and error threshold e.
  4. Calculate f_{x_0}=f(x_0).
  5. Calculate f_{x_1}=f(x_1).
  6. Calculate x_2=\frac{x_0f_{x_1}-x_1f_{x_0}}{f_{x_1}-f_{x_0}}.
  7. If |x_2-x_1|>e then assign x_0=x_1,\ x_1=x_2,\ f_{x_0}=f_{x_1} and go to step 5.
  8. Print the solution as x_2.
  9. Stop.

Programs

/*
 * Secant Method in C
 */

#include <stdio.h>
#include <math.h>

#define MAX_ITER 100

double f(double x)
{
    return x * x + x - 2;
}

int main(void)
{
    double x0, x1, x2, fx0, fx1, threshold, error;
    int iter = 0;

    printf("-----------Secant Method-----------\n");
    printf("Enter x0: ");
    scanf("%lf", &x0);

    printf("Enter x1: ");
    scanf("%lf", &x1);

    printf("Enter error threshold: ");
    scanf("%lf", &threshold);

    fx0 = f(x0);
    printf("%10s %10s %10s %10s %10s %10s\n", "N", "xn-1", "xn", "xn+1", "f(xn+1)", "error");
    while (iter < MAX_ITER)
    {
        iter++;
        fx1 = f(x1);
        if (fx1 == fx0) /* secant is horizontal; the next step is undefined */
        {
            printf("Division by zero: f(xn) == f(xn-1)\n");
            return -1;
        }
        x2 = (x0 * fx1 - x1 * fx0) / (fx1 - fx0);

        error = fabs(x2 - x1);
        printf("%10d %10.5f %10.5f %10.5f %10.5f %10.5f\n", iter, x0, x1, x2, f(x2), error);
        if (error > threshold)
        {
            x0 = x1;
            x1 = x2;
            fx0 = fx1; /* as x0 becomes x1, fx0 becomes fx1 */
        }
        else
        {
            printf("\nHence the solution is: %10.5f\n", x2);
            return 0;
        }
    }

    printf("Max iterations reached\n");
    return -1;
}

Output

-----------Secant Method-----------
Enter x0: 5
Enter x1: 6
Enter error threshold: 0.0001
         N       xn-1         xn       xn+1    f(xn+1)      error
         1    5.00000    6.00000    2.66667    7.77778    3.33333
         2    6.00000    2.66667    1.86207    3.32937    0.80460
         3    2.66667    1.86207    1.25988    0.84716    0.60219
         4    1.86207    1.25988    1.05435    0.16601    0.20552
         5    1.25988    1.05435    1.00426    0.01280    0.05009
         6    1.05435    1.00426    1.00008    0.00023    0.00419
         7    1.00426    1.00008    1.00000    0.00000    0.00008

Hence the solution is: 1.00000

Convergence

The secant method is not guaranteed to converge. But if it does converge, then the order of convergence is \frac{1+\sqrt{5}}{2}\approx 1.618. This is called superlinear convergence, as it is faster than linear (order 1) but slower than quadratic (order 2). Successive errors satisfy the relation:

\begin{equation} e_{n+1} = c_n\cdot e_n^{1.618} \end{equation}

where c_n is a variable rate constant that depends on the slope and curvature of the function near the root.

Advantages

  1. Superlinear order of convergence.
  2. Doesn't require derivatives.
  3. Doesn't require the initial guesses to bracket a solution.

Disadvantages

  1. Convergence is not guaranteed.
  2. The order of convergence is high, but not as high as that of the Newton-Raphson method.
  3. Two initial guesses are required.