Why does $\operatorname{Var}(X) = E[X^2] - (E[X])^2$?












$$\operatorname{Var}(X) = E[X^2] - (E[X])^2$$



I have seen and understand (mathematically) the proof of this. What I want to understand is: intuitively, why is it true? What does this formula tell us? From the formula, we see that if we subtract the square of the expected value of $X$ from the expected value of $X^2$, we get a measure of dispersion in the data (or, in the case of the standard deviation, the square root of this value gives us a measure of dispersion in the data).



So it seems that there is some link between the expected value of $X^2$ and that of $X$. How do I make sense of this formula? For example, the formula



$$\sigma^2 = \frac{1}{n} \sum_{i = 1}^n (x_i - \bar{x})^2$$



makes perfect intuitive sense. It simply gives us the average of the squared deviations from the mean. What does the other formula tell us?
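(As a concrete check with the made-up sample $1, 2, 3$: here $\bar{x} = 2$, and the two computations agree,
$$\frac{1}{3}\big[(1-2)^2 + (2-2)^2 + (3-2)^2\big] = \frac{2}{3}, \qquad \frac{1+4+9}{3} - 2^2 = \frac{14}{3} - \frac{12}{3} = \frac{2}{3},$$
i.e. "mean of the squares minus square of the mean" returns the same dispersion.)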










probability statistics variance






edited Dec 4 '18 at 22:50 by Foobaz John

asked Dec 4 '18 at 22:45 by WorldGov
• But... this is... a... definition, no? – Did, Dec 4 '18 at 23:51














4 Answers

The other formula tells you exactly the same thing as the one you have given in terms of the $x_i$ and $n$. You say you understand that formula, so I assume you also see that the variance is just the average of all the squared deviations.

Now, $\mathbb{E}(X)$ is just the average of all the $x_i$'s, which is to say that it is the mean of all the $x_i$'s.

Let us now define a deviation using the expectation operator:
$$\text{Deviation} = D = X - \mathbb{E}(X)$$
and the squared deviation is
$$D^2 = (X-\mathbb{E}(X))^2.$$

Now that we have the deviation, let's find the variance. Using the above definition of variance, you should be able to see that

$$\operatorname{Var}(X) = \mathbb{E}(D^2).$$
Since $\mathbb{E}(X)$ is the average value of $X$, the above equation is just the average of the squared deviations.

Substituting the value of $D^2$, we get
$$\operatorname{Var}(X) = \mathbb{E}\big[(X-\mathbb{E}(X))^2\big] = \mathbb{E}\big[X^2+\mathbb{E}(X)^2-2X\,\mathbb{E}(X)\big] = \mathbb{E}(X^2)+\mathbb{E}(X)^2-2\,\mathbb{E}(X)^2 = \mathbb{E}(X^2)-\mathbb{E}(X)^2.$$
Hope this helps.
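A quick numerical sanity check of this identity (a minimal sketch; the simulated data and variable names are just illustrative):

```python
import random

# Simulated draws standing in for realizations of X (illustrative only).
xs = [random.gauss(10, 3) for _ in range(100_000)]

n = len(xs)
mean = sum(xs) / n                            # estimate of E(X)
mean_of_squares = sum(x * x for x in xs) / n  # estimate of E(X^2)

# Definition: average squared deviation from the mean.
var_definition = sum((x - mean) ** 2 for x in xs) / n

# Shortcut: mean of squares minus square of the mean.
var_shortcut = mean_of_squares - mean ** 2

print(var_definition, var_shortcut)  # identical up to floating-point error
```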






answered Dec 5 '18 at 0:02 by user601297

Easy! Expand by the definition. Variance is the mean squared deviation, i.e., $V(X) = E((X-\mu)^2)$. Now:

$$(X-\mu)^2 = X^2 - 2X\mu + \mu^2$$

and use the fact that $E(\cdot)$ is a linear function and that $\mu$ (the mean) is a constant.

The shortcut computes the same thing, but as the difference between the mean of the squares and the square of the mean.
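Writing the expansion out explicitly, with $\mu = E(X)$ constant (added for completeness):

$$E\big[(X-\mu)^2\big] = E(X^2) - 2\mu\,E(X) + \mu^2 = E(X^2) - 2\mu^2 + \mu^2 = E(X^2) - \mu^2 = E(X^2) - (E(X))^2.$$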






answered Dec 4 '18 at 22:50 by Sean Roberson
• How can one prove that the expected value is a linear function? – Zacky, Dec 4 '18 at 22:57

• It follows from writing it as a sum: $$E(kX + Y) = \sum (kxP(X = x) + yP(Y = y)) = k\sum xP(X = x) + \sum yP(Y = y)$$ – Sean Roberson, Dec 4 '18 at 22:59

• Just to add to this, and take this with a grain of salt since I don't know probability: that this is a good definition for variance follows from wanting to get a sense of the distance you expect values of your random variable to be from the mean; one might naively choose the absolute value, but squaring is better as a smooth operation. – qbert, Dec 4 '18 at 23:50




















Some time ago, a professor showed me this right triangle:

[image: a right triangle whose hypotenuse squared is $\mathbb{E}[X^2]$ and whose legs squared are $\text{Var}[X]$ and $\mathbb{E}^2[X]$]

The formula you reported can be seen as an application of Pythagoras' theorem:

$$P = \mathbb{E}[X^2] = \text{Var}[X] + \mathbb{E}^2[X].$$

Here, $P = \mathbb{E}[X^2]$ (which is the second uncentered moment of $X$) is read as "the power" of $X$. Indeed, there is a physical explanation.

In physics, energy and power are related to the "square" of some quantity (e.g. $X$ can be velocity for kinetic energy, or current for Joule's law).

Suppose that these quantities are random (indeed, $X$ is a random variable). Then the power $P$ is the sum of two contributions:

1. the square of the expected value of $X$;

2. its variance (i.e. how much it varies around the expected value).

It is clear that if $X$ is not random, then $\text{Var}[X] = 0$ and $\mathbb{E}^2[X] = X^2$, so that

$$P = X^2,$$

which is a typical physical definition of energy/power. When randomness is present, we must use the whole formula

$$P = \mathbb{E}[X^2] = \text{Var}[X] + \mathbb{E}^2[X]$$

to evaluate the power of the signal.

As a final remark, the power of $X$ can be seen as the squared length of a vector whose components are its expected value and its variability (standard deviation).

P.S.
A further clarification... the values $P$, $\text{Var}[X]$ and $\mathbb{E}^2[X]$ are the squares of the sides of the triangle, not their lengths...
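A small numerical illustration of this power decomposition (a sketch; the noisy "signal" below is hypothetical):

```python
import random

# Hypothetical noisy signal: DC level 3.0 plus zero-mean fluctuation.
signal = [3.0 + random.gauss(0.0, 2.0) for _ in range(200_000)]

n = len(signal)
mean = sum(signal) / n                               # E[X] ~ 3.0 (DC component)
power = sum(x * x for x in signal) / n               # P = E[X^2]
variance = sum((x - mean) ** 2 for x in signal) / n  # Var[X] ~ 4.0 (fluctuation)

# Pythagorean split: total power = squared DC component + fluctuation power.
print(power, mean ** 2 + variance)  # agree up to floating-point error
```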






edited Dec 4 '18 at 23:52, answered Dec 4 '18 at 23:47 by the_candyman

• +1, I love this interpretation! I never saw it before. – Sean Roberson, Dec 5 '18 at 0:50

One intuitive way of measuring the variation of $X$ would be to look at how far, on average, $X$ is from its mean, $E(X)=\mu$. That is, we want to compute $E(X-\mu)$. However, this quantity is mathematically "inconvenient" (indeed, $E(X-\mu) = E(X) - \mu = 0$ for every distribution, so it tells us nothing about spread), so we use the more convenient $E((X-\mu)^{2})$.

To add, the formula you gave above, $\frac{1}{n}\sum_{i=1}^{n}(x_{i}-\bar{x})^2$, is what you would use when you have finitely many data points. There is nothing random once you have your data points. $\operatorname{Var}(X)$ is for a random variable, which can take on finitely many values, countably infinitely many values, or values in an interval.
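To illustrate the random-variable case, here is a minimal sketch computing $\operatorname{Var}(X)$ exactly from a pmf (a hypothetical fair die, chosen just as an example):

```python
from fractions import Fraction

# Fair six-sided die: P(X = k) = 1/6 for k = 1..6 (illustrative example).
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

mean = sum(x * p for x, p in pmf.items())         # E(X) = 7/2
mean_sq = sum(x * x * p for x, p in pmf.items())  # E(X^2) = 91/6

# Both expressions give Var(X) = 35/12 exactly.
print(mean_sq - mean ** 2)
print(sum((x - mean) ** 2 * p for x, p in pmf.items()))
```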






edited Dec 4 '18 at 23:18, answered Dec 4 '18 at 23:10 by Live Free or π Hard












