What is the practical impact of a matrix's condition number?

Let's say I am trying to solve a square linear system
$Ax = b$
for whatever reason. A perturbation $\delta b$ in $b$ will lead to a perturbation $\delta x$ in $x$, whose relative norm is bounded by the condition number $\kappa(A)$ of $A$ according to
$$\frac{\|\delta x\|}{\|x\|} \leq \kappa(A)\, \frac{\|\delta b\|}{\|b\|}. \tag{1}$$
If the only error/uncertainty in $b$ is due to rounding errors caused by floating point operations, then I am fine as long as $\kappa(A)$ is significantly smaller than the inverse of machine epsilon, $\epsilon_\text{mach}^{-1}$ (right?). But it seems to me that for many (most?) applications, there will be relative errors in $b$ many orders of magnitude greater than $\epsilon_\text{mach}$. An average relative error of $10^{-3}$ in $b$ seems commonplace (for some applications, even $10^{-1}$ is to be expected).
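To make the rounding-error case concrete, here is a small NumPy sketch (the Hilbert matrix is my own choice of ill-conditioned example; MATLAB's `cond` and backslash would behave the same way). With exact input data, the relative error in $x$ stays around $\kappa(A)\,\epsilon_\text{mach}$ rather than $\epsilon_\text{mach}$ alone:

```python
import numpy as np

eps = np.finfo(float).eps                      # machine epsilon, ~2.2e-16

# An ill-conditioned test matrix (my choice for illustration):
# the n x n Hilbert matrix H[i, j] = 1 / (i + j + 1).
n = 8
i = np.arange(n)
A = 1.0 / (i[:, None] + i[None, :] + 1)
kappa = np.linalg.cond(A)                      # 2-norm condition number

# Solve a system with known solution and compare the relative
# error against the kappa(A) * eps "rounding floor" suggested by (1).
x_true = np.ones(n)
b = A @ x_true
x = np.linalg.solve(A, b)
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

print(f"kappa(A)       ~ {kappa:.1e}")
print(f"kappa(A) * eps ~ {kappa * eps:.1e}")
print(f"relative error ~ {rel_err:.1e}")
```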



Condition numbers on the order of $10^3$, or even $10^6$, also seem very common, especially when the dimensions of $A$ are large. So what does this mean? If $kappa(A) > 10^3$, I have to make sure that my relative error in $b$ is less than $10^{-3}$, preferably less than $10^{-5}$, in order to get any kind of significance in my computations when solving for $x$? Or are you going to tell me that since Equation $(1)$ is only a worst-case scenario, this will usually not be a problem? After fooling around with random matrices in MATLAB however, it seems to me that the "average" scenario usually is not that far from the worst-case scenario.
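For what it's worth, that experiment can be sketched as follows (NumPy instead of MATLAB; the matrix size, perturbation level, and sample count are my own choices). The maximum observed amplification can never exceed $\kappa(A)$, and the median shows how far a "typical" random perturbation sits from the worst case:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n))                # random test matrix
kappa = np.linalg.cond(A)                      # worst-case amplification in (1)

x = rng.standard_normal(n)
b = A @ x

# Perturb b by a fixed relative error of 1e-3 in random directions
# and measure the amplification (||dx|| / ||x||) / (||db|| / ||b||).
rel_b = 1e-3
amp = []
for _ in range(200):
    db = rng.standard_normal(n)
    db *= rel_b * np.linalg.norm(b) / np.linalg.norm(db)
    dx = np.linalg.solve(A, b + db) - x
    amp.append(np.linalg.norm(dx) / np.linalg.norm(x) / rel_b)
amp = np.array(amp)

print(f"kappa(A)             ~ {kappa:.1e}")
print(f"median amplification ~ {np.median(amp):.1e}")
print(f"max amplification    ~ {amp.max():.1e}")
```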



I realize that the answer to this question is probably very problem dependent, but this seems like it would be a frequent problem in practice. Yet in engineering education it is only ever discussed in math courses, and even there, not much emphasis is placed on this seemingly severe and common problem.

  • If you look on the right part of the screen, there are about ten questions as links under "Related." Sometimes one of these is exactly what you want...in any case, each of those has its own list of ten related questions, created by the software of course. Worth looking when you have an open-ended question, "What is the significance of ____?"
    – Will Jagy, Feb 13 '14 at 20:52

numerical-linear-algebra

asked Feb 13 '14 at 19:55









Bleevoe

3 Answers

As you said, this is very problem dependent; it also depends on what kind of accuracy, in which quantity, you actually want.



Often, $A$ and $b$ contain measured data (problem coefficients, loads, etc.), which is usually stochastic in nature due to measurement errors. What the early error analysts found was that instead of $\tilde{x}$ being the exact solution of $Ax=b$, it is the exact solution of another problem, $\tilde{A}\tilde{x}=\tilde{b}$, with $\tilde{A}$ and $\tilde{b}$ very close to $A$ and $b$ relative to the measurement errors. So although the computed solution may look like garbage compared with the exact one, it still solves an acceptable problem: one within the measurement error tolerances of the inexactly constructed problem that was actually posed. And when the problem itself is stated inexactly, does it make sense to try to solve it exactly (impossible) or to a very high level of accuracy?



This is the concept of the so-called backward error, where you ask: for which data is my computed solution the exact solution? It is a central topic in numerical linear algebra and error analysis; it was pioneered by von Neumann and Turing and further developed and popularised by Jim Wilkinson. It turned out to be an important tool for analysing rounding errors in floating point computations, since backward error analysis lets you bound the forward error ($x-\tilde{x}$ in some norm) via two separate parts: an algorithm-dependent part (bounds on the backward error) and a problem-dependent part (the sensitivity of the problem, i.e., some sort of condition number).



If the backward error is small enough (of the order of $\epsilon_{\mathrm{mach}}$), you can say that you solved your problem to the greatest accuracy you could hope for on your computer. Why? Because even if you could somehow obtain "exact" values for the entries of $A$ and $b$, the relative errors incurred merely by storing them in finite precision arithmetic are already of the order of the machine precision.



Now what exactly is the backward error in the context of linear systems (nonsingular, for simplicity)? Assume you want to solve $Ax=b$ and obtain some $\tilde{x}=x+\delta x$. You then look for $\tilde{A}=A+\delta A$ and $\tilde{b}=b+\delta b$ such that $\tilde{A}\tilde{x}=\tilde{b}$, that is, such that your $\tilde{x}$ is the exact solution of a perturbed problem. Of course, we would like $\delta A$ and $\delta b$ to be as small as possible relative to the original data. So consider, e.g., the following formulation:
$$
\eta(\tilde{x})=\min\{\tau:\;(A+\delta A)\tilde{x}=(b+\delta b),\;\|\delta A\|\leq\tau\|A\|,\;\|\delta b\|\leq\tau\|b\|\}.
$$

It can be shown that the formula for $\eta$ is actually very simple:
$$
\eta(\tilde{x})=\frac{\|b-A\tilde{x}\|}{\|A\|\|\tilde{x}\|+\|b\|}.
$$

(This result was obtained some time ago in a paper by Rigal and Gaches.)
If you want a bound on the forward error, the following holds:
$$
\frac{\|x-\tilde{x}\|}{\|x\|}\leq\frac{\kappa(A)}{1-\eta(\tilde{x})\kappa(A)}\left(\frac{\|\delta A\|}{\|A\|}+\frac{\|\delta b\|}{\|b\|}\right),
$$

assuming $\eta(\tilde{x})\kappa(A)<1$.
Now this bound looks very similar to the one in the question. Note, however, that the backward error $\eta(\tilde{x})$ can be much lower than the coefficient of $\kappa(A)$ in your inequality, that is, the relative residual norm:
$$
\eta(\tilde{x})\leq\frac{\|b-A\tilde{x}\|}{\|b\|}.
$$

Depending on $A$ and $\tilde{x}$, the backward error $\eta(\tilde{x})$ can in fact be much lower than the right-hand side of the inequality above.
Note also that the inequality $\eta(\tilde{x})\kappa(A)<1$ guarantees that the perturbed system matrix $\tilde{A}$ is nonsingular. In other words, if you would like to be sure you solved some nearby meaningful problem, you want $\eta(\tilde{x})$ to be at most of the order of $1/\kappa(A)$. Also, starting from this point, decreasing $\eta$ by an order of magnitude gives you roughly one additional significant digit in your solution.
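Since $\eta(\tilde{x})$ is just a scaled residual, it is cheap to evaluate after the fact. A NumPy sketch (the helper name `backward_error` is my own), illustrating that a backward-stable solver such as LU with partial pivoting typically delivers $\eta(\tilde{x})$ of the order of $\epsilon_{\mathrm{mach}}$:

```python
import numpy as np

def backward_error(A, b, xt):
    """Normwise relative backward error of xt for Ax = b
    (Rigal-Gaches): ||b - A xt|| / (||A|| ||xt|| + ||b||)."""
    r = b - A @ xt
    return np.linalg.norm(r) / (
        np.linalg.norm(A, 2) * np.linalg.norm(xt) + np.linalg.norm(b))

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

xt = np.linalg.solve(A, b)         # LU with partial pivoting
eta = backward_error(A, b, xt)

print(f"eta ~ {eta:.1e}  (eps ~ {np.finfo(float).eps:.1e})")
```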



Much more can be said about forward/backward errors; I invite you to have a look at Higham's book if you want more details.



Anyway, if you want to solve $Ax=b$, you need to accept that your solution will not be exact no matter what algorithm you use, and you always need to ask yourself (or somebody else) what you expect from your solution and in what terms you want to measure its accuracy. This is sometimes hard to assess even in terms of backward errors, since in some cases the classical norms used in rounding error analyses, like the 2-norm or the $\infty$-norm, are not exactly what matters in the problem you want to solve.



The typical example of this issue is solving discretised partial differential equations. Often, neither the forward error norm $\|x-\tilde{x}\|$ nor the residual norm $\|b-A\tilde{x}\|$ plays a meaningful role in this context. In practice, one is usually interested in accuracy in terms of fluxes, that is, the gradients of the discrete solutions. It is not unusual for the errors in the fluxes to be much lower than the forward errors in $\tilde{x}$ itself, which usually has the meaning of coordinates of a discrete functional approximation. Instead of looking at the significant digits of $\tilde{x}$, you want to know whether the error you make by getting $\tilde{x}$ instead of $x$ is comparable to the discretisation errors in some appropriate norm (say, in terms of these gradients). A rigorous accuracy analysis of algorithms in such a context is, however, still an open, and probably very hard, problem.



Of course, if you really insist on solving $Ax=b$ "exactly" (even though the exactness of your $A$ and $b$ is at least questionable), you can use approaches based on arbitrary precision arithmetic. However, this is practical only for small problems and can be incredibly expensive for larger ones. At some conferences I have seen people working on this (even on, e.g., GPUs), but such solvers are so costly that they are mostly useless in practice. Not to mention that solving linear problems exactly is of questionable value in any context.

    In practice, a matrix with a condition number of $10^{3}$ to $10^{6}$ is not terribly problematic. For linear systems, a good rule of thumb is that you lose a digit of precision due to conditioning for every power of 10 in your condition number. Since double precision is a 15-16 digit representation, a matrix with a condition number in the range of $10^{3}$ to $10^{6}$ isn't considered problematic from the standpoint of numerical calculation with direct methods (LU decomposition, Cholesky, etc.). If you were using iterative methods, these condition numbers would be undesirable because larger condition numbers mean slower convergence, so it would behoove you to use a preconditioner to transform your linear system into an equivalent linear system with smaller condition number to speed convergence. More problematic would be extremely ill-conditioned problems, with condition numbers exceeding $10^{10}$ or so; for these, more numerical precision would be needed, such as quadruple precision, or arbitrary precision arithmetic.
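The digit-loss rule of thumb is easy to watch in action; a sketch in NumPy (the Hilbert matrices are my own choice of test family):

```python
import numpy as np

def hilbert(n):
    # Classic ill-conditioned test matrix: H[i, j] = 1 / (i + j + 1).
    i = np.arange(n)
    return 1.0 / (i[:, None] + i[None, :] + 1)

# Rule of thumb: with ~16-digit doubles, roughly 16 - log10(kappa)
# digits of the solution survive a direct solve.
digits = {}
for n in (4, 8, 12):
    A = hilbert(n)
    x_true = np.ones(n)
    x = np.linalg.solve(A, A @ x_true)
    rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    digits[n] = -np.log10(rel_err)
    print(f"n={n:2d}  log10 kappa = {np.log10(np.linalg.cond(A)):5.1f}  "
          f"correct digits ~ {digits[n]:5.1f}")
```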



As for propagating the measurement/experimental error through the calculation, the situation you describe can only be remedied by obtaining better data. Consider the case where $\kappa(A) = 1$ and $A$ is known exactly. Then the relative error in your solution is bounded by the relative error in your input data, as indicated by the error bound you cite. This error swamps the error due to numerics. You can't get a more precise solution via better numerics in that case; you can only get one via better, more precise data. In practice, people recognize this limitation and do the best they can with the data they have, which is probably why it's not mentioned much in engineering courses. The typical exercise of doing Gaussian elimination in extremely limited precision (say, 3 decimal places) is meant to emulate the consequences of data with limited precision.

      See Least square best solution



The context in which I know this is a college course, and a comment a professor made about economics. Economics talks about multicollinearity, I think. He said that people were looking at ever more subtle numerical methods for dealing with high condition numbers, meaning matrices that are very nearly singular. His very well informed response was that the matrices were just singular. This is something that economists would not want to admit, as it would imply changes in the underlying theory.



Anyway, with a high condition number, you may try to find out what happens if your matrix is perturbed to a singular one. In that case, the image of the mapping is no longer full dimensional. Next (and this is where problem dependence comes in), your target $b$ may or may not be near the subspace that is the image; I guess you could also call it the range. Anyway, you are now in the arena of Kalman filters. In the first chapter of any book on these, they discuss the problem of minimizing $\| Ax-b \|$ when it is not possible to get an exact hit.
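A minimal NumPy sketch of that minimization (the tiny singular matrix is my own illustration):

```python
import numpy as np

# A singular 2x2 matrix: the second column repeats the first,
# so the range of A is just the line spanned by (1, 2).
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([1.0, 0.0])           # b is not in the range of A

# Ax = b has no exact solution; least squares returns the x that
# minimizes ||Ax - b||_2 (here, the minimum-norm minimizer).
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print("x        =", x)                          # ≈ [0.1, 0.1]
print("rank(A)  =", rank)                       # 1
print("residual =", np.linalg.norm(A @ x - b))  # ≈ sqrt(0.8) ≈ 0.894
```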



      Well, that's a start.

  • I am not interested in the case that the matrix is (close to being) singular. To clarify, I want to find an "exact" solution to $Ax = b$ in the case of a moderate condition number of $A$ (say $10^3$ or $10^5$, but much smaller than the inverse of machine epsilon), and am surprised at the apparent requirements this puts on the accuracy of my knowledge of $b$.
    – Bleevoe, Feb 13 '14 at 22:08











      Your Answer





      StackExchange.ifUsing("editor", function () {
      return StackExchange.using("mathjaxEditing", function () {
      StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
      StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
      });
      });
      }, "mathjax-editing");

      StackExchange.ready(function() {
      var channelOptions = {
      tags: "".split(" "),
      id: "69"
      };
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function() {
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled) {
      StackExchange.using("snippets", function() {
      createEditor();
      });
      }
      else {
      createEditor();
      }
      });

      function createEditor() {
      StackExchange.prepareEditor({
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: true,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: 10,
      bindNavPrevention: true,
      postfix: "",
      imageUploader: {
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      },
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      });


      }
      });














      draft saved

      draft discarded


















      StackExchange.ready(
      function () {
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f675474%2fwhat-is-the-practical-impact-of-a-matrixs-condition-number%23new-answer', 'question_page');
      }
      );

      Post as a guest















      Required, but never shown

























      3 Answers
      3






      active

      oldest

      votes








      3 Answers
      3






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      8












      $begingroup$

      As you said, this is very problem dependent and it depends on what kind of accuracy in what quantity you actually want.



      Often, $A$ and $b$ contain some measured data (problem coefficients, loads, etc.). They have usually a stochastic nature due to the errors in measurements. What they found later was that instead of $tilde{x}$ being the solution of $Ax=b$, it solved another problem, $tilde{A}tilde{x}=tilde{b}$, with $tilde{A}$ and $tilde{b}$, respectively, very close to $A$ and $b$ in terms of the measurements errors. So instead of solving exactly the already inexactly constructed problem, they found that they solved a still-acceptable problem within the measurement error tolerances even though their computed solution could be considered as a garbage with respect to the original one. But when you have inexactly stated problem, does it have a meaning to attempt to solve it exactly (impossible) or to a very high level of accuracy?



      This is the concept of the so called backward error where you seek for what data you actually have the exact solution. It is a very popular topic in numerical linear algebra and error analysis and was pioneered by von Neumann, Turing and further developed and popularised by Jim Wilkinson. This appeared to be an important tool for analysing the rounding errors in floating point computations since the backward error analysis allows you to obtain bounds on the forward errors ($x-tilde{x}$ in some norm) in two separate parts: the algorithm-dependent part (provide bounds on the backward error) and the problem-dependent part (sensitivity of the problem, some sort of condition number).



      If the backward error is small enough (by the order of $epsilon_{mathrm{mach}}$, you can say that you solved your problem to the greatest accuracy you could hope for on your computer. Why? Because even if you could somehow obtain "exact" values of the entries of your $A$ and $b$, the relative errors due to their storage in the finite precision arithmetic are of the order of the machine precision.



      Now what exactly is the backward error in the context of the linear (nonsingular, for simplicity) systems? Assume you want to solve $Ax=b$ and obtain some $tilde{x}=x+delta x$. So you look for $tilde{A}=A+delta A$ and $tilde{b}=b+delta b$ such that $tilde{A}tilde{x}=tilde{b}$, that is, your $tilde{x}$ is the exact solution of a perturbed problem. Of course, we would like $delta A$ and $delta b$ to be as close as possible to the original data. So consider, e.g., the following formulation:
      $$
      eta(tilde{x})=min{tau:;(A+delta A)tilde{x}=(b+delta b),;|delta A|leqtau|A|,;|delta b|leqtau|b|}.
      $$

      It can be show that the formula for $eta$ is actually very simple:
      $$
      eta(tilde{x})=frac{|b-Atilde{x}|}{|A||tilde{x}|+|b|}.
      $$

      (This result was obtained some time ago in a paper by Rigal and Gaches.)
      If you want to have a bound on the forward error, then the following holds:
      $$
      frac{|x-tilde{x}|}{|x|}leqfrac{eta(tilde{x})kappa(A)}{1-eta(tilde{x})kappa(A)}left(frac{|delta A|}{|A|}+frac{|delta b|}{|b|}right)
      $$

      assuming $eta(tilde{x})kappa(A)<1$.
      Now this bound looks very similar than the one in the question. Note however that the backward error $eta(tilde{x})$ can be much lower than what you have as the coefficient by the $kappa(A)$ in your inequality, that is, the relative residual norm:
      $$
      eta(tilde{x})leqfrac{|b-Atilde{x}|}{|b|}.
      $$

      At some point, depending on $A$ and $tilde{x}$, the backward error $eta(tilde{x})$ could be even much lower than the right-hand side in the inequality above.
      Note that the inequality $eta(tilde{x})kappa(A)<1$ guarantees you that the perturbed system matrix $tilde{A}$ is nonsingular. In other words, if you'd like to be sure you solved some nearby meaningful problem, you'd like to have $eta(tilde{x})$ at least of the order of $1/kappa(A)$. Also, starting from this point, decreasing $eta$ by an order of magnitude gives you roughly an additional significant digit in your solution error.



      Much more can be said about forward/backward errors, I invite you to have a look on the Higham's book if you want to know more details.



      Anyway, if you want to solve $Ax=b$, you need to know that your solution won't be exact no matter what algorithm you use and you always need to ask yourself (or somebody else) what do you expect from your solution and in what terms you want to measure the accuracy. This is sometimes hard to assess even in terms of backward errors since in some cases the classical norms like 2-norm or $infty$-norm used in rounding error analyses are not exactly what appears in the problem you want to solve.



      The typical example of this issue is solving discretised partial differential equations. Often, the forward error norm $|x-tilde{x}|$ nor the residual norm $|b-Atilde{x}|$ have a meaningful role in this context. In practice, one is normally interested in the accuracy in terms of fluxes, that is, the gradients of the discrete solutions. It is not unusual that the errors in fluxes can be much lower than the forward errors in the actual $tilde{x}$ which has usually the meaning of some coordinates of the discrete functional approximation. Instead of looking at the significant digits of $tilde{x}$, you want to know whether the error you made by getting $tilde{x}$ instead of $x$ is somewhat related to the discretisation errors in some proper norm (say, in terms of these gradients). Some sort of accuracy analysis of the algorithms in such a context is, however, still an open, and probably very hard to handle, problem.



      Of course, if you really insist that you want to solve $Ax=b$ (even though the exactness of your $A$ and $b$ is at least questionable), you can use some approaches based on arbitrary precision arithmetic. However, this is practical only for some small problems and can be incredibly expensive for larger problems. On some conferences, I saw sometimes people that are working on that (even on, e.g., GPUs) but these kind of solvers are so costly that they're mostly useless for practice. Not mentioning that solving linear problems exactly is at least arguable in any context.






      share|cite|improve this answer











      $endgroup$


















        8












        $begingroup$

        As you said, this is very problem dependent and it depends on what kind of accuracy in what quantity you actually want.



        Often, $A$ and $b$ contain some measured data (problem coefficients, loads, etc.). They have usually a stochastic nature due to the errors in measurements. What they found later was that instead of $tilde{x}$ being the solution of $Ax=b$, it solved another problem, $tilde{A}tilde{x}=tilde{b}$, with $tilde{A}$ and $tilde{b}$, respectively, very close to $A$ and $b$ in terms of the measurements errors. So instead of solving exactly the already inexactly constructed problem, they found that they solved a still-acceptable problem within the measurement error tolerances even though their computed solution could be considered as a garbage with respect to the original one. But when you have inexactly stated problem, does it have a meaning to attempt to solve it exactly (impossible) or to a very high level of accuracy?



        This is the concept of the so called backward error where you seek for what data you actually have the exact solution. It is a very popular topic in numerical linear algebra and error analysis and was pioneered by von Neumann, Turing and further developed and popularised by Jim Wilkinson. This appeared to be an important tool for analysing the rounding errors in floating point computations since the backward error analysis allows you to obtain bounds on the forward errors ($x-tilde{x}$ in some norm) in two separate parts: the algorithm-dependent part (provide bounds on the backward error) and the problem-dependent part (sensitivity of the problem, some sort of condition number).



        If the backward error is small enough (by the order of $epsilon_{mathrm{mach}}$, you can say that you solved your problem to the greatest accuracy you could hope for on your computer. Why? Because even if you could somehow obtain "exact" values of the entries of your $A$ and $b$, the relative errors due to their storage in the finite precision arithmetic are of the order of the machine precision.



Now what exactly is the backward error in the context of linear (nonsingular, for simplicity) systems? Assume you want to solve $Ax=b$ and obtain some $\tilde{x}=x+\delta x$. You then look for $\tilde{A}=A+\delta A$ and $\tilde{b}=b+\delta b$ such that $\tilde{A}\tilde{x}=\tilde{b}$, that is, your $\tilde{x}$ is the exact solution of a perturbed problem. Of course, we would like $\delta A$ and $\delta b$ to be as small as possible relative to the original data. So consider, e.g., the following formulation:
$$
\eta(\tilde{x})=\min\{\tau:\;(A+\delta A)\tilde{x}=b+\delta b,\;\|\delta A\|\leq\tau\|A\|,\;\|\delta b\|\leq\tau\|b\|\}.
$$

It can be shown that the formula for $\eta$ is actually very simple:
$$
\eta(\tilde{x})=\frac{\|b-A\tilde{x}\|}{\|A\|\,\|\tilde{x}\|+\|b\|}.
$$

        (This result was obtained some time ago in a paper by Rigal and Gaches.)
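The Rigal–Gaches formula is cheap to evaluate, which makes it easy to check the backward error of any computed solution. A sketch in Python/NumPy (the question experimented in MATLAB; the random test matrix here is of course only illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
x_exact = rng.standard_normal(n)
b = A @ x_exact

x_tilde = np.linalg.solve(A, b)   # LU with partial pivoting

r = b - A @ x_tilde               # residual
# Normwise backward error (Rigal & Gaches), here in the 2-norm:
eta = np.linalg.norm(r) / (np.linalg.norm(A, 2) * np.linalg.norm(x_tilde)
                           + np.linalg.norm(b))
rel_residual = np.linalg.norm(r) / np.linalg.norm(b)

assert eta <= rel_residual        # eta never exceeds the relative residual
assert eta < 1e-13                # backward stable: eta is near machine epsilon
```

Note that the algorithm achieves a tiny backward error regardless of how ill-conditioned $A$ is; conditioning only enters when converting this into a forward error bound.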
If you want a bound on the forward error, the following holds:
$$
\frac{\|x-\tilde{x}\|}{\|x\|}\leq\frac{\kappa(A)}{1-\eta(\tilde{x})\kappa(A)}\left(\frac{\|\delta A\|}{\|A\|}+\frac{\|\delta b\|}{\|b\|}\right)\leq\frac{2\,\eta(\tilde{x})\,\kappa(A)}{1-\eta(\tilde{x})\,\kappa(A)},
$$
where $\delta A$ and $\delta b$ are the perturbations attaining the backward error, assuming $\eta(\tilde{x})\kappa(A)<1$.
Now this bound looks very similar to the one in the question. Note, however, that the backward error $\eta(\tilde{x})$ can be much lower than the quantity multiplying $\kappa(A)$ in your inequality, namely the relative residual norm:
$$
\eta(\tilde{x})\leq\frac{\|b-A\tilde{x}\|}{\|b\|}.
$$

Depending on $A$ and $\tilde{x}$, the backward error $\eta(\tilde{x})$ can in fact be much lower than the right-hand side of this inequality.
Note also that the condition $\eta(\tilde{x})\kappa(A)<1$ guarantees that the perturbed system matrix $\tilde{A}$ is nonsingular. In other words, if you want to be sure you solved some nearby meaningful problem, you want $\eta(\tilde{x})$ to be at most of the order of $1/\kappa(A)$. From that point on, decreasing $\eta$ by an order of magnitude gives you roughly one additional significant digit in your solution.



Much more can be said about forward/backward errors; I invite you to have a look at Higham's book (*Accuracy and Stability of Numerical Algorithms*) if you want more details.



Anyway, if you want to solve $Ax=b$, you need to accept that your solution won't be exact no matter what algorithm you use, and you always need to ask yourself (or somebody else) what you expect from the solution and in what terms you want to measure its accuracy. This is sometimes hard to assess even in terms of backward errors, since in some cases the classical norms used in rounding error analyses, like the 2-norm or the $\infty$-norm, are not quite what matters in the problem you want to solve.



A typical example of this issue is solving discretised partial differential equations. Often, neither the forward error norm $\|x-\tilde{x}\|$ nor the residual norm $\|b-A\tilde{x}\|$ plays a meaningful role in this context. In practice, one is normally interested in accuracy in terms of fluxes, that is, the gradients of the discrete solutions. It is not unusual for the errors in the fluxes to be much lower than the forward errors in $\tilde{x}$ itself, which usually represents the coordinates of some discrete functional approximation. Instead of looking at the significant digits of $\tilde{x}$, you want to know whether the error you made by getting $\tilde{x}$ instead of $x$ is comparable to the discretisation error in some appropriate norm (say, in terms of those gradients). A rigorous accuracy analysis of algorithms in this context is, however, still an open, and probably very hard, problem.



Of course, if you really insist on solving $Ax=b$ accurately (even though the exactness of your $A$ and $b$ is at best questionable), you can use approaches based on arbitrary precision arithmetic. However, this is practical only for small problems and can be incredibly expensive for larger ones. At conferences I have sometimes seen people working on this (even on, e.g., GPUs), but such solvers are so costly that they are mostly useless in practice. Not to mention that solving linear problems "exactly" is of questionable value in any context.






answered Feb 14 '14 at 13:25 by Algebraic Pavel; edited Nov 28 '18 at 9:07

              In practice, a matrix with a condition number of $10^{3}$ to $10^{6}$ is not terribly problematic. For linear systems, a good rule of thumb is that you lose a digit of precision due to conditioning for every power of 10 in your condition number. Since double precision is a 15-16 digit representation, a matrix with a condition number in the range of $10^{3}$ to $10^{6}$ isn't considered problematic from the standpoint of numerical calculation with direct methods (LU decomposition, Cholesky, etc.). If you were using iterative methods, these condition numbers would be undesirable because larger condition numbers mean slower convergence, so it would behoove you to use a preconditioner to transform your linear system into an equivalent linear system with smaller condition number to speed convergence. More problematic would be extremely ill-conditioned problems, with condition numbers exceeding $10^{10}$ or so; for these, more numerical precision would be needed, such as quadruple precision, or arbitrary precision arithmetic.



As for propagating the measurement/experimental error through the calculation, the situation you describe can only be remedied through obtaining better data. Consider the case where $\kappa(A) = 1$, and $A$ is known exactly. Then the relative error in your solution is bounded by the relative error in your input data, as indicated by the error bound you cite. This error swamps out the error due to numerics. You can't get a more precise solution via better numerics in that case; you can only get a more precise solution via better, more precise data. In practice, people recognize this limitation, and they do the best they can with the data they have, which is probably why it's not mentioned much in engineering courses. The typical exercise of doing Gaussian elimination in extremely limited precision (say, 3 decimal places) is supposed to emulate the consequences of data with limited precision.
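The rule of thumb above is easy to observe numerically. A sketch in Python/NumPy (the question experimented in MATLAB; here the matrix is synthetic, built from random orthogonal factors so that its condition number is $10^{6}$ by construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20

# Build A = U diag(s) V^T with singular values from 1 down to 1e-6,
# so kappa_2(A) = 1e6 by construction.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)
A = U @ np.diag(s) @ V.T

x_exact = rng.standard_normal(n)
b = A @ x_exact
x = np.linalg.solve(A, b)

rel_err = np.linalg.norm(x - x_exact) / np.linalg.norm(x_exact)
kappa = s[0] / s[-1]              # = 1e6
eps = np.finfo(float).eps         # ~2.2e-16

# Rule of thumb: roughly log10(kappa) = 6 of the ~16 digits are lost,
# i.e. rel_err is of the order of kappa * eps ~ 1e-10.
assert rel_err < 100 * kappa * eps
assert rel_err > eps              # clearly worse than a well-conditioned solve
```

With condition number $10^{6}$ the computed solution still has roughly ten good digits in double precision, which is why such matrices are not considered problematic for direct methods.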






                  answered Feb 13 '14 at 22:58









Geoff Oxberry

                      See Least square best solution



The context where I know this from is a college course, and a comment a professor made about economics. Economics talks about multicollinearity, I think. He said that people were looking at ever more subtle numerical methods for dealing with high condition numbers, meaning matrices that are very nearly singular. His very well informed response was that the matrices were just singular. This is something economists would not want to admit, as it would imply changes in the underlying theory.



Anyway, with a high condition number, you may try to find out what happens if your matrix is perturbed to a singular one. In that case, the image of the mapping is no longer full dimensional. Next (this is where problem dependence comes in), your target $b$ may or may not be near that image subspace; I guess you could also call it the range. Anyway, you are now in the arena of Kalman filters. In the first chapter of any book on this, they discuss the problem of minimizing $\|Ax-b\|$ when it is not possible to get an exact hit.



                      Well, that's a start.
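When $A$ actually is singular, "solving" $Ax=b$ means exactly this minimisation. A small sketch of the least-squares viewpoint in Python/NumPy (the rank-1 matrix and right-hand side are made up for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])          # second column = 2 * first, so rank(A) = 1
b = np.array([1.0, 0.0, 0.0])       # b does not lie in the range of A

# lstsq minimises ||Ax - b|| and, for rank-deficient A, returns the
# minimum-norm minimiser.
x, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)

assert rank == 1
# A @ x is the orthogonal projection of b onto range(A) = span{(1, 2, 3)}:
assert np.allclose(A @ x, np.array([1.0, 2.0, 3.0]) / 14.0)
# and x itself is the minimum-norm solution:
assert np.allclose(x, np.array([1.0, 2.0]) / 70.0)
```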






• I am not interested in the case that the matrix is (close to being) singular. To clarify, I want to find an "exact" solution to $Ax = b$ in the case of a moderate condition number of $A$ (say $10^3$ or $10^5$, but much smaller than the inverse of machine epsilon), and am surprised at the apparent requirements this puts on the accuracy of my knowledge of $b$. – Bleevoe, Feb 13 '14 at 22:08
















                      edited Apr 13 '17 at 12:21









                      Community

                      answered Feb 13 '14 at 20:42









                      Will Jagy

                      • $begingroup$
                        I am not interested in the case that the matrix is (close to being) singular. To clarify, I want to find an "exact" solution to Ax = b in the case of a moderate condition number of A (say 10^3 or 10^5, but much smaller than the inverse of machine epsilon), and am surprised at the apparent requirements this puts on the accuracy of my knowledge of b.
                        $endgroup$
                        – Bleevoe
                        Feb 13 '14 at 22:08



































