Optimize $M$ such that $MM^T$ is the “smallest” (in a positive semi-definite sense)
I want to solve the following problem:
\begin{alignat}{2}
\underset{M}{\text{minimize}} \quad & MM^T \\
\text{subject to} \quad & MF=I,
\end{alignat}
where by minimize $MM^T$ I mean to find $M$ such that, for any other matrix $\hat M$ that satisfies the constraint, I have
$$
MM^T \le \hat M\hat M^T,
$$
that is, $MM^T - \hat M\hat M^T$ is negative semi-definite. $F$ is (full rank) $n \times p$ with $n>p$ and $I$ is the identity matrix.
How can I translate this "smallest" concept into the objective function of the minimization problem? Should I perhaps minimize some kind of norm of $MM^T$? And after formalizing the problem, what is its solution?
Edit 1: For context, I am working on the minimum-variance unbiased linear estimator problem.
I am given the vector $y$ and a matrix $F$ such that
$$
y = Fx + \epsilon,
$$
where $F \in \mathbb{R}^{m \times n}$ ($m>n$) is full rank, $y \in \mathbb{R}^m$, $x \in \mathbb{R}^n$, and $\epsilon$ is a random vector with zero mean and covariance matrix $I \in \mathbb{R}^{m \times m}$ (the identity matrix), and I want to estimate $x$. (Note that the dimensions are relabelled here relative to the formulation above.)
I want to derive a linear estimator ($\hat x = My$) such that it is unbiased ($E[\hat x]=x$) and its covariance matrix satisfies
$$
E[(\hat x - x)(\hat x - x)^T] \le E[(\tilde{x} - x)(\tilde{x} - x)^T]
$$
for any unbiased linear estimate $\tilde{x}=\tilde{M}y$.
For this problem, I know that such a linear estimator is unbiased if $MF=I$ (the identity matrix), and its covariance matrix is
$$
E[(\hat x - x)(\hat x - x)^T] = MM^T,
$$
and this is the point where I am stuck.
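For completeness, the short calculation behind that covariance formula (assuming $MF=I$ and $\operatorname{Cov}(\epsilon)=I$):
$$
\hat x - x = M(Fx+\epsilon) - x = (MF-I)x + M\epsilon = M\epsilon,
$$
so $E[(\hat x - x)(\hat x - x)^T] = M\,E[\epsilon\epsilon^T]\,M^T = MM^T$.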
I know that the solution to this problem is $M=(F^TF)^{-1}F^T$, but all the proofs I have found start from this particular $M$ and then verify that it is indeed the minimum-variance unbiased estimator. I want instead to arrive at this result from first principles, as if I did not know the solution a priori.
Edit 2: For reference, this pdf also discusses the topic using this informal notion of "smallest" and "largest" covariance matrix.
Edit 3: After some numerical tests, I arrived at the conclusion that minimizing $\| M \|_2$ or $\| M \|_F$ will both lead to the desired answer, that is, $M = (F^TF)^{-1}F^T$. Could someone give a formal argument for why this is the case?
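For concreteness, here is a minimal sketch of the kind of numerical test I mean (assuming cvxpy and NumPy are available; the random matrix, dimensions, and solver tolerance are incidental):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n = 8, 3                       # tall, full-rank F (m > n)
F = rng.standard_normal((m, n))

# Minimize the Frobenius norm of M subject to M F = I.
M = cp.Variable((n, m))
problem = cp.Problem(cp.Minimize(cp.norm(M, "fro")), [M @ F == np.eye(n)])
problem.solve()

# Compare with the closed form (F^T F)^{-1} F^T, i.e. the pseudoinverse.
M_closed = np.linalg.solve(F.T @ F, F.T)
print(np.allclose(M.value, M_closed, atol=1e-5))   # True, up to solver tolerance
```

The spectral-norm variant can be tested the same way by using `cp.sigma_max(M)` (the largest-singular-value atom) as the objective.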
Edit 4: Some more thoughts on the minimization of $\| M \|_2$ and $\| M \|_F$.
For $\| M \|_2$, we have
$$
\| M \|_2 = \sqrt{\lambda_{\max}(MM^T)}.
$$
Since $\sqrt{x}$ is monotonically increasing for $x>0$, minimizing $\| M \|_2$ is equivalent to minimizing the maximum eigenvalue of $MM^T$, which, intuitively, makes sense if we want to make $MM^T$ "less" positive semi-definite.
For $\| M \|_F$, we have
$$
\| M \|_F = \sqrt{\operatorname{trace}(MM^T)} = \sqrt{\sum_i \lambda_i(MM^T)}.
$$
By the same monotonicity argument, minimizing $\| M \|_F$ is equivalent to minimizing the sum of the eigenvalues of $MM^T$, which again, intuitively, makes sense if we want to make $MM^T$ "less" positive semi-definite.
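A quick numerical sanity check of these two identities (a throwaway NumPy sketch; the random matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 8))
eigs = np.linalg.eigvalsh(M @ M.T)           # eigenvalues of M M^T (all >= 0)

spectral = np.linalg.norm(M, 2)              # largest singular value of M
frobenius = np.linalg.norm(M, "fro")

print(np.isclose(spectral, np.sqrt(eigs.max())))   # ||M||_2 = sqrt(lambda_max(M M^T))
print(np.isclose(frobenius, np.sqrt(eigs.sum())))  # ||M||_F = sqrt(trace(M M^T)) = sqrt(sum of eigenvalues)
```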
linear-algebra optimization estimation least-squares covariance
1
Ah... "skinny" and "fat" are rather uncommon denominations. Prefer for example "$F$ is $n \times p$ with $n>p$"
– Jean Marie
Nov 19 at 21:38
1
@JeanMarie thanks, I edited the question.
– durdi
Nov 19 at 21:55
3
not all pd matrices can be compared, e.g., when $A=\mathrm{diag}(2,1)$ and $B=\mathrm{diag}(1,2)$, then neither $A\leq B$ nor $A\geq B$.
– LinAlg
Nov 19 at 23:23
2
Your question is still unclear. You should clarify the meaning of $A\le B$ for two matrices $A$ and $B$. Your question title seems to suggest that you are talking about the positive semidefinite partial ordering, but in the question body, when you are comparing covariance matrices, my impression is that you are talking about entrywise comparison.
– user1551
Nov 20 at 10:49
1
I seem to recall that every left inverse of a matrix $F$ can be expressed in the form $(F^* F)^{-1} F^*+K$, where $K$ is a matrix satisfying $KF=0$. Once you have this, it is not too hard to show that there is a smallest solution (in the ordering where $A \leq B$ means $B-A$ is positive semidefinite) and that it is obtained by taking $K=0$. I am leaving this as a comment rather than an answer since I don't fully remember whether the claim that every left inverse has this form is correct.
– Eric
Nov 20 at 13:44
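(A short follow-up on that parametrization, since the claim is easy to verify: if $MF=I$, then $K := M-(F^TF)^{-1}F^T$ satisfies $KF = MF - I = 0$, so every left inverse does have this form. Moreover the cross term vanishes, because $(F^TF)^{-1}F^TK^T = (F^TF)^{-1}(KF)^T = 0$, which gives
$$
MM^T = (F^TF)^{-1} + KK^T \ge (F^TF)^{-1},
$$
with equality exactly when $K=0$.)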
edited Nov 30 at 10:53 · asked Nov 19 at 21:12 · durdi
1 Answer
In this setting the answer is trivial. Let $F=U\pmatrix{S\\ 0}V^T$ be a singular value decomposition. The constraint $MF=I$ then implies that $M$ must be of the form $V\pmatrix{S^{-1}&X}U^T$. Hence $MM^T=V(S^{-2}+XX^T)V^T$, and its unique global minimum (with respect to the positive semidefinite partial ordering) is attained at $X=0$ (because $XX^T$ is positive semidefinite and equals zero only when $X$ is zero), meaning that $M=F^+$, the Moore-Penrose inverse of $F$.
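A small numerical illustration of this parametrization (a sketch, assuming NumPy and a random tall full-rank $F$; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 8, 3
F = rng.standard_normal((m, n))

# Full SVD: U is m x m, s has length n, Vt is n x n.
U, s, Vt = np.linalg.svd(F)

# Any M of the form V [S^{-1}  X] U^T is a left inverse of F ...
X = rng.standard_normal((n, m - n))
M = Vt.T @ np.hstack([np.diag(1.0 / s), X]) @ U.T
print(np.allclose(M @ F, np.eye(n)))            # True: M F = I

# ... and the choice X = 0 is the pseudoinverse.
M0 = Vt.T @ np.hstack([np.diag(1.0 / s), np.zeros((n, m - n))]) @ U.T
print(np.allclose(M0, np.linalg.pinv(F)))       # True

# The gap M M^T - M0 M0^T = V X X^T V^T should be positive semidefinite.
gap = M @ M.T - M0 @ M0.T
print(np.linalg.eigvalsh(gap).min() >= -1e-10)  # True, up to round-off
```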
answered Nov 30 at 11:32 · user1551 (accepted)