How to understand the order of convergence $|x_{k+1} - x| \le C |x_k - x|^p$ (Convergence of a power function...
By definition, a sequence $x_k \in \mathbb{R}$, $k \in \mathbb{N}$, converges with order $p \in [1,\infty)$ to $x := \lim_{k\to\infty} x_k$ if
\begin{align}
\exists\, C \in [0,\infty)\colon\ \forall k \in \mathbb{N}\colon\ |x_{k+1} - x| \le C\, |x_k - x|^p.
\end{align}
Assume $x_k$ is a sequence of approximations to $x$; then $|x_k - x|$ is the approximation error at the $k$-th iteration, and it goes to zero as $k \to \infty$. I can understand the inequality $|x_{k+1} - x| \le C\, |x_k - x|$, which says the error gets smaller and smaller over the iterations. But I don't understand the motivation for raising the previous error to the power $p$. Is there any practical meaning in taking the $p$-th power of the error, $|x_k - x|^p$?
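For concreteness, here is a small numerical sketch (Python; it uses Newton's iteration for $\sqrt{2}$ as an assumed example) of what the inequality says: the ratio $|x_{k+1} - x| / |x_k - x|^p$ should stay bounded by some $C$ for the right $p$.

    import math

    # A rough check of the definition (assumed example: Newton's iteration
    # x_{k+1} = (x_k + 2/x_k)/2, which converges to sqrt(2)).
    x_star = math.sqrt(2.0)          # the limit x
    x = 3.0                          # starting guess x_0
    errors = []
    for _ in range(6):
        errors.append(abs(x - x_star))
        x = 0.5 * (x + 2.0 / x)      # one Newton step

    # Ratios |x_{k+1} - x| / |x_k - x|^p for p = 1 and p = 2.
    for k in range(len(errors) - 1):
        r1 = errors[k + 1] / errors[k]
        r2 = errors[k + 1] / errors[k] ** 2
        print(f"k={k}: p=1 ratio {r1:.3e}, p=2 ratio {r2:.3e}")

In this example the $p=1$ ratios shrink toward zero while the $p=2$ ratios hover near a constant, so the bound with $p=2$ is the tighter description. But I still don't see the motivation behind defining the order this way.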
numerical-methods approximation numerical-linear-algebra numerical-calculus
asked 5 hours ago
Analysis Newbie
1 Answer
Consider a sequence of positive numbers $x_n$ such that $x_{n+1} = x_n^p$ with $p > 1$ and $x_0 < 1$. That is the special case $C = 1$.
Now take the logarithm. You should then be able to find an explicit formula for $x_n$. How does the rate at which $x_n$ converges to $0$ change as $p$ varies? For example, how many steps are needed to guarantee $x_n < \varepsilon$ for a given $\varepsilon$?
By the way, the estimate $|x_{k+1} - x| \le C\, |x_k - x|$ does not imply that the error goes to zero; that happens only if $C < 1$.
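As a rough numerical illustration of this hint (a sketch, assuming $x_0 = 0.5$ and a tolerance of $10^{-12}$), one can simply count the steps needed for a few values of $p$:

    # Rough illustration of the hint: count iterations of x_{n+1} = x_n**p
    # (the C = 1 case) needed to reach a tolerance, for a few orders p.
    # Assumes x_0 = 0.5 and eps = 1e-12.
    def steps_to_tolerance(p, x0=0.5, eps=1e-12, cap=1000):
        x, n = x0, 0
        while x >= eps and n < cap:
            x = x ** p               # one step of x_{n+1} = x_n^p
            n += 1
        return n

    for p in (1.5, 2.0, 3.0):
        print(f"p = {p}: {steps_to_tolerance(p)} steps to reach 1e-12")

The counts come out to roughly 10, 6, and 4 steps for $p = 1.5, 2, 3$; for comparison, a linearly convergent sequence ($p = 1$) with $C = \tfrac12$ would need about 40 halvings to reach the same tolerance.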
answered 4 hours ago
Hans Engler