Value of the quadratic form as a function of the determinant
Suppose I have a symmetric, positive definite matrix $A$ with all diagonal elements equal to $\sigma^2 > 0$, and with off-diagonal elements $\rho \in [-1,1] \setminus \{0\}$.

Consider the quadratic form $y := x^T A^{-1} x$ for a fixed vector $x$.

Suppose $\sigma^2 \to \infty$. It is easy to see that $\det(A) \to \infty$. I saw a claim that this also implies that $y \to 0$.

Is this true? If yes, why?
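
For concreteness, here is a quick numerical sketch of what I mean (Python/NumPy; the particular values $\rho = 0.5$, $n = 4$ and $x = (1, \dots, 1)$ are only for illustration):

    import numpy as np

    # One concrete instance of the setup: all diagonal entries sigma^2,
    # every off-diagonal entry equal to the same rho.
    def quad_form(sigma2, rho=0.5, n=4):
        A = rho * np.ones((n, n)) + (sigma2 - rho) * np.eye(n)
        x = np.ones(n)                    # a fixed test vector
        return x @ np.linalg.solve(A, x)  # y = x^T A^{-1} x

    for sigma2 in [1.0, 1e2, 1e4, 1e6]:
        print(sigma2, quad_form(sigma2))  # y appears to shrink like n / sigma^2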



Thanks.
linear-algebra matrices determinant quadratic-forms
asked Dec 15 '18 at 8:37 by avk255
1 Answer
It is true. What we want to show is that all the eigenvalues of $A^{-1}$ go to zero in the limit. For this we need to show that all the eigenvalues of $A$ go to infinity. We would like to say that the off-diagonal part is "small", so we can neglect it and read the eigenvalues off the diagonal. This is essentially true! Your condition on the off-diagonal implies that
$$ \Vert A - \operatorname{diag}(\sigma_1^2, \dots, \sigma_n^2) \Vert_F \leq n^2, $$
where the norm in question is the Frobenius norm. Now the Hoffman-Wielandt theorem (see "Hoffman-Wielandt Theorem Proof") tells us that if $\lambda_1(A) \leq \dots \leq \lambda_n(A)$ are the eigenvalues of $A$ and $\mu_1 \leq \dots \leq \mu_n$ are the eigenvalues of $\operatorname{diag}(\sigma_1^2, \dots, \sigma_n^2)$, then
$$ \sum_{j=1}^n \vert \lambda_j(A) - \mu_j \vert^2 \leq \Vert A - \operatorname{diag}(\sigma_1^2, \dots, \sigma_n^2) \Vert_F^2. $$
Since the right-hand side stays bounded while every $\mu_j = \sigma_j^2$ goes to infinity, all the eigenvalues of $A$ go to infinity, and therefore all the eigenvalues of $A^{-1}$ go to zero. But this implies (as $A$ is diagonalizable) that $\Vert A^{-1} \Vert \rightarrow 0$, and hence (by Cauchy-Schwarz)
$$ y = x^T A^{-1} x \leq \Vert A^{-1} \Vert \cdot \Vert x \Vert^2 \rightarrow 0. $$
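
As a sanity check, here is a minimal numerical sketch of this argument (Python/NumPy; the size $n = 5$, the value $\sigma^2 = 10^4$ and the random off-diagonal entries are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma2 = 5, 1e4

    # Symmetric test matrix: diagonal entries sigma^2, off-diagonal entries in [-1, 1]
    upper = np.triu(rng.uniform(-1.0, 1.0, (n, n)), 1)
    off = upper + upper.T
    A = sigma2 * np.eye(n) + off

    lam = np.linalg.eigvalsh(A)        # eigenvalues of A, ascending
    mu = sigma2 * np.ones(n)           # eigenvalues of the diagonal part
    # Hoffman-Wielandt: sum_j |lam_j - mu_j|^2 <= ||A - diag||_F^2
    print(np.sum((lam - mu) ** 2), "<=", np.linalg.norm(off, "fro") ** 2)

    x = rng.standard_normal(n)
    print(x @ np.linalg.solve(A, x))   # y is of order 1 / sigma^2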



It is not true if we drop the assumption on the off-diagonal (at first I missed that assumption and wrote this part of the answer, and now I cannot bring myself to delete it). If the diagonal of $A$ goes to infinity, you only get that the product of all eigenvalues of $A$ goes to infinity, but it might be that only one eigenvalue actually goes to infinity. Then some of the eigenvalues of $A^{-1}$ are bounded away from zero and $y$ might not go to zero (if the base change that diagonalizes the matrix does not depend on the parameter, we can just take an eigenvector of an eigenvalue that does not go to infinity).



I started with a diagonal matrix where one eigenvalue is fixed and one is free for me to play with. Then I searched for an orthogonal matrix such that, after conjugation, the matrix satisfied the condition you wanted, i.e. that all the diagonal elements are positive and go to infinity.



Here is the example. Take
$$ A_\lambda = \frac{1}{2}\begin{pmatrix} 1+\lambda & 1-\lambda \\ 1-\lambda & 1+\lambda \end{pmatrix}. $$

This matrix is symmetric and positive definite for $\lambda > 0$; in fact it is similar to $\operatorname{diag}(1,\lambda)$, as
$$ U \begin{pmatrix} 1 & 0 \\ 0 & \lambda \end{pmatrix} U^\star = A_\lambda, $$
where $U$ is the orthogonal matrix
$$ U = 2^{-1/2}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}. $$
We have (using that $U^\star = U^{-1}$)
$$ A_\lambda^{-1}
= \left( U \begin{pmatrix} 1 & 0 \\ 0 & \lambda \end{pmatrix} U^\star \right)^{-1}
= \left( U \begin{pmatrix} 1 & 0 \\ 0 & \lambda \end{pmatrix} U^{-1} \right)^{-1}
= U \begin{pmatrix} 1 & 0 \\ 0 & \lambda \end{pmatrix}^{-1} U^{-1}
= U \begin{pmatrix} 1 & 0 \\ 0 & 1/\lambda \end{pmatrix} U^\star
= A_{1/\lambda}. $$

Furthermore, we have
$$ y = \begin{pmatrix} 1 & 1 \end{pmatrix} A_{1/\lambda} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = 2. $$
This does not go to zero, and we are done.
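
And a quick numerical check of the counterexample (Python/NumPy; the values of $\lambda$ are arbitrary):

    import numpy as np

    def A(lam):
        # the matrix A_lambda from above
        return 0.5 * np.array([[1 + lam, 1 - lam],
                               [1 - lam, 1 + lam]])

    x = np.array([1.0, 1.0])
    for lam in [1e1, 1e3, 1e6]:
        y = x @ np.linalg.solve(A(lam), x)
        print(np.diag(A(lam)), y)      # the diagonal blows up, but y stays at 2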

answered Dec 15 '18 at 10:53, edited Dec 16 '18 at 10:39, by Severin Schraven

– avk255 (Dec 15 '18 at 17:09): Thanks a lot for the answer. Before I process the whole thing, I don't understand the latter part. Specifically, why is $A_\lambda^{-1} = A_{1/\lambda}$? It also does not seem to hold numerically.

– Severin Schraven (Dec 15 '18 at 18:26): I am sorry, I forgot the scalar factor in front of $A_\lambda$. I'll fix it.

– avk255 (Dec 15 '18 at 22:21): Thanks a lot! This is great.

– Severin Schraven (Dec 16 '18 at 10:37): I'm glad I could help you.