Matrix function
If we have an $n \times n$ matrix $A$, I want to ask about the general method for computing a function of that matrix.
For example, how can I compute
$\cos(A)$,
$\sin(A)$,
$e^{A}$,
$\log(A)$,
or any other function?
matrix-calculus
asked Nov 15 at 2:59 by hmeteir · edited Nov 15 at 3:22 by Seth
3 Answers
A consequence of the Cayley-Hamilton theorem is that any analytic function $f$ of an $n\times n$ matrix $A$ can be expressed as a polynomial $p(A)$ of degree at most $n-1$. It's also the case that if $\lambda$ is an eigenvalue of $A$, then $f(\lambda)=p(\lambda)$. If you know $A$'s eigenvalues, you can therefore generate a system of linear equations in the unknown coefficients of $p$. If there are repeated eigenvalues, this system will be underdetermined, but you can generate additional independent equations by repeatedly differentiating $f$ and $p$.
answered Nov 15 at 7:38 by amd (accepted)
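A minimal sketch of this interpolation scheme, assuming the eigenvalues of $A$ are distinct (repeated eigenvalues would need the derivative conditions described above). The function name `matrix_function` is illustrative, not a library API:

```python
import numpy as np

def matrix_function(A, f):
    """Evaluate f(A) as the degree-(n-1) interpolating polynomial p(A)."""
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A)
    # Solve V c = f(lambda) for the coefficients of p, where V is the
    # Vandermonde matrix of the eigenvalues (columns: 1, x, x^2, ...).
    V = np.vander(eigvals, n, increasing=True)
    c = np.linalg.solve(V, f(eigvals))
    # Evaluate p(A) = c_0 I + c_1 A + ... + c_{n-1} A^{n-1} by Horner's rule.
    P = c[-1] * np.eye(n)
    for coeff in c[-2::-1]:
        P = P @ A + coeff * np.eye(n)
    return P

A = np.array([[2.0, 1.0], [0.0, 3.0]])   # eigenvalues 2 and 3, distinct
print(matrix_function(A, np.exp))        # should match scipy.linalg.expm(A)
```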
We can define functions $F\colon \mathcal{M}_{n\times n}(\mathbb{R})\rightarrow \mathcal{M}_{n\times n}(\mathbb{R})$ analogous to analytic functions $f\colon \mathbb{R}\rightarrow\mathbb{R}$ in the following way:
Let $A\in\mathcal{M}_{n\times n}(\mathbb{R})$. Define $A^0 = I$, and then define $A^k = A\cdot A^{k-1}$ recursively, via the usual matrix product. For any polynomial $p(x) = \sum\limits_{i=0}^k c_ix^i$, we can now define the matrix polynomial $P(A) = \sum\limits_{i=0}^k c_iA^i$.
For functions which are not polynomials but are analytic, we can use their power series expansions. Let $f(x)$ be an analytic function with power series $\sum\limits_{i=0}^\infty c_ix^i$. Then we can define the matrix function $F(A)$ by its power series $\sum\limits_{i=0}^\infty c_iA^i$, provided that the series converges to a unique matrix value for each $A$. For the matrix exponential, $\exp(A) := \sum\limits_{i=0}^\infty \frac{1}{i!}A^i$, convergence can be proved directly; a similar method can be used to show convergence of the matrix power series of other real analytic functions.
answered Nov 15 at 3:15 by AlexanderJ93
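A minimal sketch of this power-series definition, assuming plain truncation is acceptable: it sums terms of $\exp(A)$ until the next term is negligible. The name `expm_series` and the tolerance are illustrative choices; in practice, a library routine such as scipy.linalg.expm uses a more robust algorithm:

```python
import numpy as np

def expm_series(A, tol=1e-15, max_terms=200):
    """Approximate exp(A) = sum_{i>=0} A^i / i! by truncating the series."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)           # holds A^i / i!
    for i in range(1, max_terms):
        term = term @ A / i    # A^i/i! = (A^{i-1}/(i-1)!) * A / i
        result += term
        # Stop once the new term is negligible relative to the running sum.
        if np.linalg.norm(term) < tol * np.linalg.norm(result):
            break
    return result

A = np.array([[0.0, -1.0], [1.0, 0.0]])  # generator of a plane rotation
print(expm_series(A))                    # ~ [[cos 1, -sin 1], [sin 1, cos 1]]
```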
Calculate the eigenvalue decomposition of the matrix,
$$A=QDQ^{-1},$$
where $D$ is a diagonal matrix whose entries are the eigenvalues of $A$, and the columns of $Q$ are the corresponding eigenvectors.
With this decomposition in hand, any function can be evaluated as
$$f(A)=Q\,f(D)\,Q^{-1},$$
which is very convenient: just evaluate the function at each diagonal element of $D$.
If $A$ cannot be diagonalized or $Q$ is ill-conditioned, add a small random perturbation to the matrix and try again:
$$\eqalign{
A' &= A+E \cr
\|E\| &\approx \|A\|\cdot 10^{-14} \cr
}$$
answered Nov 15 at 3:50 by greg
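A short sketch of this recipe, assuming $A$ is diagonalizable with a reasonably well-conditioned $Q$; the name `funm_eig` is illustrative:

```python
import numpy as np

def funm_eig(A, f):
    """Evaluate f(A) = Q f(D) Q^{-1} via the eigendecomposition of A."""
    eigvals, Q = np.linalg.eig(A)
    # f is applied elementwise to the eigenvalues on the diagonal of D.
    return Q @ np.diag(f(eigvals)) @ np.linalg.inv(Q)

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric, so safely diagonalizable
print(funm_eig(A, np.cos))               # cos(A)
print(funm_eig(A, np.exp))               # exp(A)
```

For comparison, scipy.linalg.funm implements a more robust Schur-based variant of the same idea that avoids explicitly inverting an ill-conditioned eigenvector matrix.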