Calculating the trace of the product of several matrices
I want to calculate the trace of something like
$\qquad\mathrm{Tr}(\Gamma_R G_d \Gamma_L G_d^\dagger)$
In order to optimise my code, I found Faster trace of product of two matrices, which greatly reduces the calculation time for the trace.
My question is: is it possible to generalize this somehow to the above case? I tried looking at Wikipedia with no luck:
https://en.wikipedia.org/wiki/Trace_%28linear_algebra%29#Trace_of_a_product
Or is the most time-efficient way just to set
$\qquad M_1 = \Gamma_R G_d, \qquad M_2 = \Gamma_L G_d^\dagger$
and then use the trick from the link and Wikipedia, as in the sketch below?
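For concreteness, a minimal sketch of the grouping I have in mind (sizes and names are just for illustration; the two-matrix trick is $\mathrm{Tr}(M_1 M_2) = \sum_{i,j} (M_1)_{ij} (M_2)_{ji}$):
n = 1000;
ΓR = RandomReal[{-1, 1}, {n, n}];
ΓL = RandomReal[{-1, 1}, {n, n}];
Gd = RandomComplex[{-1 - I, 1 + I}, {n, n}];
M1 = ΓR.Gd;
M2 = ΓL.ConjugateTranspose[Gd];
(* the elementwise trick avoids forming the final n×n product *)
Abs[Tr[M1.M2] - Total[M1 Transpose[M2], 2]] (* agrees to machine precision *)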
matrix performance-tuning
That depends on the matrices. If one or more of these matrices have low rank, then one could try to employ low-rank factorizations. – Henrik Schumacher, Jan 20 at 21:51
edited Jan 20 at 23:29 by m_goldberg · asked Jan 20 at 21:43 by Thomas
1 Answer
If one or more of these matrices have low rank, then one could try to employ low-rank factorizations. Of course, not all matrices have low rank; I just want to point out that low rank can be exploited nicely if present.
In order to demonstrate the effect, here is an implementation of Adaptive Cross Approximation (ACA) (see this paper by Mario Bebendorf for more details on the algorithm). Because this implementation uses LinearAlgebra`BLAS`IAMAX, which was introduced with version 11.2, it won't run on older versions of Mathematica.
Needs["LinearAlgebra`BLAS`"]
ClearAll[ACACompression];
Options[ACACompression] = {
"MaxRank" -> 50,
"Tolerance" -> 1. 10^-4,
"StartingIndex" -> 1
};
ACACompression::maxrank =
"Warning: Computed factorization has maximal rank.";
ACACompression[A_?MatrixQ, opts : OptionsPattern[]] :=
 ACACompression[i \[Function] A[[i]], j \[Function] A[[All, j]], opts]
ACACompression[row_, column_, OptionsPattern[]] :=
Module[{maxrank, ϵ, u, v, k, ik, jk, uk, vk, test, norm2, δ, w, iter, uQ, uR, vQ, vR,rank, σ, m, n, eps},
eps = $MachineEpsilon;
maxrank = OptionValue["MaxRank"];
ϵ = OptionValue["Tolerance"];
k = 1;
ik = OptionValue["StartingIndex"];
vk = row[ik];
m = Length[vk];
v = ConstantArray[0., {maxrank, m}];
jk = IAMAX[vk];
v[[k]] = vk = vk/vk[[jk]];
uk = column[jk];
n = Length[uk];
u = ConstantArray[0., {maxrank, n}];
u[[k]] = uk;
test = uk.uk vk.vk;
norm2 = test;
maxrank = Min[m, n, maxrank];
While[test > ϵ^2 norm2 && k < maxrank, (* add rank-one terms until the error estimate drops below the tolerance *)
k++;
ik = IAMAX[uk];
vk = Subtract[row[ik], u[[;; k - 1, ik]].v[[;; k - 1]]];
jk = IAMAX[vk];
δ = vk[[jk]];
If[Abs[δ] < eps,
w = uk;
δ = 0.;
iter = 0;
While[Abs[δ] < eps,
iter++;
w[[ik]] = 0.;
ik = IAMAX[w];
vk = Subtract[row[ik], u[[;; k - 1, ik]].v[[;; k - 1]]];
jk = IAMAX[vk];
δ = vk[[jk]];
];
];
v[[k]] = vk = vk/δ;
uk = Subtract[column[jk], v[[;; k - 1, jk]].u[[;; k - 1]]];
u[[k]] = uk;
test = uk.uk vk.vk;
norm2 += test + Dot[u[[1 ;; k - 1]].uk, v[[1 ;; k - 1]].vk]
];
If[k == OptionValue["MaxRank"],
Message[ACACompression::maxrank];
];
{u[[1 ;; k]], v[[1 ;; k]]}
]
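A minimal usage sketch (the variable names u0, v0 are mine): ACACompression accepts either an explicit matrix or row/column accessor functions; here it is applied to a matrix of known rank 2 to check the calling convention and the reconstruction error.
SeedRandom[1];
x = RandomReal[{-1, 1}, 500];
y = RandomReal[{-1, 1}, 500];
A = Outer[Times, x, x] + Outer[Times, y, y]; (* a symmetric rank-2 test matrix *)
{u0, v0} = ACACompression[A, "Tolerance" -> 1. 10^-8];
{Length[u0], Max[Abs[Transpose[v0].u0 - A]]}
(* expected: a rank of about 2 and a reconstruction error near machine precision *)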
Now let's consider the following setup. I assume that the matrix G has low rank; I ensure that by taking the squared distance matrix of a set of homogeneously distributed points on a line, for which $(x_i - x_j)^2 = x_i^2 - 2 x_i x_j + x_j^2$, so the rank is at most 3. (I am well aware that $G$ in the OP's example is probably meant as the Gram matrix of an inner product and, as such, positive definite, so that it may not be of low rank. Still, ΓR and ΓL may be of low rank.)
n = 2000;
ΓR = RandomReal[{-1, 1}, {n, n}];
ΓL = RandomReal[{-1, 1}, {n, n}];
G = DistanceMatrix[RandomReal[{-1, 1}, n]]^2;
Now let's compute the trace in the straightforward way and compare the result and its timing to an ACA followed by a careful evaluation of the trace. Notice that the order of the Dot operations is crucial here.
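To see why this ordering wins, here is a sketch, using that this particular $G$ is symmetric, so that $G \approx u^\mathsf{T} v \approx v^\mathsf{T} u$ for the factors $u, v$ of size $k \times n$, $k \ll n$, computed below. By cyclicity of the trace,
$\qquad\mathrm{Tr}(\Gamma_R G \Gamma_L G^\mathsf{T}) \approx \mathrm{Tr}(\Gamma_R u^\mathsf{T} v \, \Gamma_L v^\mathsf{T} u) = \mathrm{Tr}\big((u \, \Gamma_R u^\mathsf{T})(v \, \Gamma_L v^\mathsf{T})\big),$
so no $n \times n$ product is ever formed; each Dot costs at most $O(k n^2)$ instead of the $O(n^3)$ of the direct evaluation.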
a = Tr[ΓR.G.ΓL.Transpose[G]]; // RepeatedTiming // First
First@RepeatedTiming[
{u, v} = ACACompression[G, "Tolerance" -> 1. 10^-6];
b = Tr[(u.ΓR.Transpose[u]).(v.ΓL.Transpose[v])];
]
Abs[a - b]
0.311 (* direct evaluation *)
0.0031 (* ACA + careful trace *)
8.14907*10^-10 (* Abs[a - b] *)
By the way, a check shows that the accuracy of the ACA itself is actually far better:
Max[Abs[Transpose[v].u - G]]
1.33227*10^-15
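For context, the payoff hinges on the rank $k$ found by ACA being small; a quick, hedged check (the exact value depends on the tolerance, but for this $G$ it should be tiny):
Length[u] (* the number k of rank-one terms kept; k << n = 2000 here *)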
edited Jan 21 at 8:33 · answered Jan 20 at 23:52 by Henrik Schumacher