python - Solve this equation with fixed point iteration
How can I solve the equation

x^3 + x - 1 = 0

using fixed point iteration?

Is there any fixed-point iteration code (especially in Python) that I can find online?
Using scipy.optimize.fixed_point:
import scipy.optimize as optimize

def func(x):
    return -x**3 + 1

# This finds the value of x such that func(x) = x, that is,
# where -x**3 + 1 = x
print(optimize.fixed_point(func, 0))
# 0.682327803828
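Note that scipy's fixed_point uses Steffensen's method with Aitken acceleration rather than plain iteration. If you want to see the basic fixed-point iteration the question asks about, here is a minimal sketch (the function and names are my own, not scipy's). The choice of rearrangement matters: iterating x = -x**3 + 1 diverges near the root because |g'(x)| > 1 there, but the rearrangement x = 1/(1 + x**2) converges.

def fixed_point_iteration(g, x0, tol=1e-8, max_iter=500):
    # Plain (unaccelerated) fixed-point iteration: repeat x <- g(x)
    # until successive iterates are close enough.
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("No convergence after %d iterations" % max_iter)

# x**3 + x - 1 = 0 rearranged as x = 1/(1 + x**2)
print(fixed_point_iteration(lambda x: 1.0/(1.0 + x**2), 1.0))
# approximately 0.6823278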
The Python code defining fixed_point is
in scipy/optimize/minpack.py. The exact location depends on where scipy is
installed. You can find out by typing
In [63]: import scipy.optimize

In [64]: scipy.optimize
Out[64]: <module 'scipy.optimize' from '/usr/lib/python2.6/dist-packages/scipy/optimize/__init__.pyc'>
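Alternatively, you can locate the source file programmatically with the standard inspect module (a general Python technique, not anything specific to scipy):

import inspect
import scipy.optimize

# Prints the path of the .py file where fixed_point is defined
print(inspect.getsourcefile(scipy.optimize.fixed_point))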
Here is the code in scipy 0.7.0:
# (isscalar, asarray, where and all come from numpy; they are imported
#  at the top of minpack.py)
def fixed_point(func, x0, args=(), xtol=1e-8, maxiter=500):
    """Find the point where func(x) == x

    Given a function of one or more variables and a starting point, find a
    fixed-point of the function: i.e. where func(x)=x.

    Uses Steffensen's Method using Aitken's Del^2 convergence acceleration.
    See Burden, Faires, "Numerical Analysis", 5th edition, pg. 80

    Example
    -------
    >>> from numpy import sqrt, array
    >>> from scipy.optimize import fixed_point
    >>> def func(x, c1, c2):
    ...     return sqrt(c1/(x+c2))
    >>> c1 = array([10,12.])
    >>> c2 = array([3, 5.])
    >>> fixed_point(func, [1.2, 1.3], args=(c1,c2))
    array([ 1.4920333 ,  1.37228132])

    See also:

      fmin, fmin_powell, fmin_cg, fmin_bfgs, fmin_ncg -- multivariate local optimizers
      leastsq -- nonlinear least squares minimizer
      fmin_l_bfgs_b, fmin_tnc, fmin_cobyla -- constrained multivariate optimizers
      anneal, brute -- global optimizers
      fminbound, brent, golden, bracket -- local scalar minimizers
      fsolve -- n-dimensional root-finding
      brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding

    """
    if not isscalar(x0):
        x0 = asarray(x0)
        p0 = x0
        for iter in range(maxiter):
            p1 = func(p0, *args)
            p2 = func(p1, *args)
            d = p2 - 2.0 * p1 + p0
            p = where(d == 0, p2, p0 - (p1 - p0)*(p1 - p0) / d)
            relerr = where(p0 == 0, p, (p - p0)/p0)
            if all(relerr < xtol):
                return p
            p0 = p
    else:
        p0 = x0
        for iter in range(maxiter):
            p1 = func(p0, *args)
            p2 = func(p1, *args)
            d = p2 - 2.0 * p1 + p0
            if d == 0.0:
                return p2
            else:
                p = p0 - (p1 - p0)*(p1 - p0) / d
            if p0 == 0:
                relerr = p
            else:
                relerr = (p - p0)/p0
            if relerr < xtol:
                return p
            p0 = p
    raise RuntimeError("Failed to converge after %d iterations, value is %s" % (maxiter, p))
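To see how the scalar branch behaves on the original problem, here is a stripped-down, hand-rolled version of the same Steffensen/Aitken update (the names and the simplified absolute-error stopping test are my own, not scipy's):

def steffensen(g, x0, xtol=1e-8, maxiter=500):
    # Same update as the scalar branch above: take two plain iterates,
    # then apply Aitken's del^2 extrapolation.
    p0 = x0
    for _ in range(maxiter):
        p1 = g(p0)
        p2 = g(p1)
        d = p2 - 2.0*p1 + p0
        if d == 0.0:
            return p2
        p = p0 - (p1 - p0)**2 / d
        if abs(p - p0) < xtol:   # simplified test; scipy uses a relative error
            return p
        p0 = p
    raise RuntimeError("Failed to converge after %d iterations" % maxiter)

# Converges even though plain iteration of x = -x**3 + 1 would not:
print(steffensen(lambda x: -x**3 + 1, 0.0))
# approximately 0.682327803828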