Why does the function VarianceMLE give a different result from Variance?














Why does the function VarianceMLE give a different result from Variance?



And what is the equivalent of VarianceMLE in Mathematica 11.3?



[Screenshot of a notebook evaluating VarianceMLE and Variance on the same data.]



Please see the picture above: the MLE result is about 621 and the other is about 665.
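
Since the screenshot is not copyable, here is the data as transcribed in Sjoerd Smit's answer below (assumed to match the screenshot), together with the built-in Variance:

data = {34, 56, 28, 62, 32, 90, 20, 10, 12, 35, 63, 78, 12, 25, 68};
Variance[data] // N
(* 665.524 *)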










probability-or-statistics

asked Jan 14 at 7:24 – Facet
edited Jan 14 at 8:01 – m_goldberg












  • In Mathematica 11.3, << Statistics` produces an error message and Variance@data // N gives 665.524 – m_goldberg, Jan 14 at 7:47








  • I'm going to guess that VarianceMLE is the maximum likelihood variance estimator rather than the unbiased one (assuming normally distributed data). The difference between the two is that Variance divides by N-1 (N == Length[data]) while the MLE estimator divides by N. – Sjoerd Smit, Jan 14 at 9:22






  • And for future reference: please post copyable code in your question rather than a screenshot. This makes it much easier for someone else to copy your code and try things out. – Sjoerd Smit, Jan 14 at 9:23












  • Which book did you see this in? It looks like a scan. – Szabolcs, Jan 14 at 11:45

























2 Answers


















Answer by Sjoerd Smit (Jan 14 at 9:28, score 8):

I just checked my guess in my comment and I was right. VarianceMLE is the maximum likelihood variance estimator (see, e.g. here).



data = {34, 56, 28, 62, 32, 90, 20, 10, 12, 35, 63, 78, 12, 25, 68};
Variance[data]




13976/21





myVariance[lst_List] := Total[(lst - Mean[lst])^2]/(Length[lst] - 1);
myVarianceMLE[lst_List] := Total[(lst - Mean[lst])^2]/Length[lst];

myVariance[data]
myVarianceMLE[data]




13976/21



27952/45
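
As a quick numeric check, these two exact results agree with the values visible in the question's screenshot:

N /@ {13976/21, 27952/45}
(* {665.524, 621.156} *)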











  • Thank you for your kind help, I will remember to post code next time. – Facet, Jan 14 at 14:47










  • Wolfram Research article also mentions a replacement for VarianceMLE: reference.wolfram.com/language/Compatibility/tutorial/… – kirma, Jan 18 at 17:31



















Answer by Eric Towers (Jan 14 at 15:15, score 3):

VarianceMLE computes a biased, maximum likelihood estimate of the population variance. Variance computes an unbiased estimate of the population variance. It can be shown that VarianceMLE, on average, underestimates the population variance.



Let $\{y_i : 1 \leq i \leq n\}$ be a sample of $n$ values from a population. The variance (central second moment) of the sample is
$$ \sigma_y^2 = \frac{1}{n} \sum_{i=1}^n (y_i - \bar{y})^2 \text{,} $$
where $\bar{y} = \frac{1}{n} \sum_{i=1}^n y_i$ is the sample mean. This $\sigma_y^2$ is computed by VarianceMLE.



If $\sigma^2$ is the population variance, with some work one can show that the expected value of $\sigma_y^2$ is $\frac{n-1}{n} \sigma^2$, so the sample variance is a biased estimator of the population variance. We can make this an unbiased estimator via
$$ s^2 = \frac{n}{n-1} \sigma_y^2 = \frac{1}{n-1}\sum_{i=1}^n (y_i - \bar{y})^2 \text{.} $$
This $s^2$ is computed by Variance. From the documentation (in the Details):



"Variance[list] is equivalent to Total[(list-Mean[list])^2]/(Length[list]-1) for real-valued data."



The very sparse documentation for VarianceMLE indicates that it is implemented in terms of Variance:



"VarianceMLE[data_] := Variance[data] (Length[data] - 1)/Length[data]"





