Product of two Gaussian PDFs is a Gaussian PDF, but Product of two Gaussian Variables is not Gaussian












The product of two Gaussian random variables is not Gaussian distributed:

  • Is the product of two Gaussian random variables also a Gaussian?

  • Also Wolfram MathWorld

  • So this is saying that if $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$, then $XY \sim W$ where $W$ is some other distribution that is not Gaussian.


But the product of two Gaussian PDFs is a Gaussian PDF:

  • Calculate the product of two Gaussian PDFs

  • Full Proof

  • This tutorial, which I am trying to understand, writes: $N(\mu_1, \sigma_1^2) \times N(\mu_2, \sigma_2^2) = N\left(\frac{\sigma_1^2 \mu_2 + \sigma_2^2 \mu_1}{\sigma_1^2 + \sigma_2^2},\ \frac{1}{\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}}\right)$


What is going on here?

What am I doing when I take the product of two PDFs, vs. when I take the product of two variables drawn from those PDFs?

When (what physical situation) is described by one, and what by the other? (I think a few real-world examples would clear things up for me.)










      probability intuition






edited Oct 19 '18 at 9:15 by amWhy
asked Jan 21 '15 at 1:22 by Lyndon White






















          4 Answers


















          The product of the PDFs of two random variables $X$ and $Y$ will give the joint distribution of the vector-valued random variable $(X,Y)$ in the case that $X$ and $Y$ are independent. Therefore, if $X$ and $Y$ are normally distributed independent random variables, the product of their PDFs is bivariate normal with zero correlation.



          On the other hand, even in the case that $X$ and $Y$ are IID standard normal random variables, their product is not itself normal, as the links you provide show. The product of $X$ and $Y$ is a scalar-valued random variable, not a vector-valued one as in the above case.
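To make the distinction concrete, here is a minimal numerical sketch (assuming NumPy/SciPy; the parameters are purely illustrative). The product of the two PDFs is a function of two arguments that integrates to 1 over the plane, i.e. the joint density of $(X,Y)$, while samples of the scalar product $XY$ show clearly non-Gaussian behavior.

```python
import numpy as np
from scipy import stats

# Illustrative independent normal variables X ~ N(1, 2^2), Y ~ N(-0.5, 1.5^2)
X = stats.norm(loc=1.0, scale=2.0)
Y = stats.norm(loc=-0.5, scale=1.5)

# 1) Product of the PDFs: a function of (x, y), the joint density of the vector (X, Y).
x = np.linspace(-15.0, 15.0, 2001)
y = np.linspace(-15.0, 15.0, 2001)
joint = X.pdf(x)[:, None] * Y.pdf(y)[None, :]
dx, dy = x[1] - x[0], y[1] - y[0]
print(joint.sum() * dx * dy)            # ~ 1.0: a genuine (bivariate) density

# 2) Product of the variables: a single scalar random variable X*Y.
rng = np.random.default_rng(0)
z = X.rvs(200_000, random_state=rng) * Y.rvs(200_000, random_state=rng)
print(stats.kurtosis(z))                # clearly positive excess kurtosis, so X*Y is not normal
```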






answered Jan 21 '15 at 1:41 by heropup

  • What you say is true, but note that the sum of two normally distributed independent random variables is a scalar-valued normally distributed random variable. – Henry, Jan 21 '15 at 8:12

  • I would also note that @LyndonWhite used product of 2 PDF with their result being Scalar PDF and not Vector PDF. – Royi, Oct 1 '18 at 8:20




















          1.) The first example is already sufficient. Just to throw in another one for a sum of Gaussian variables, consider diffusion: at each step in time a particle is perturbed by a random, Gaussian-distributed step in space. At each time the distribution of its possible positions in space will be a Gaussian because the total displacement is the sum of a bunch of Gaussian-distributed displacements, and the sum of Gaussian variables is Gaussian.



2.) The second situation (product of Gaussian PDFs) is confusing because the resulting function is a Gaussian, but it is not a probability distribution because it's not normalized! Nevertheless, there are physical situations in which the product of two Gaussian PDFs is useful. See below.



          TL;DR - a physical example for a product of Gaussian PDFs comes from Bayesian probability. If our prior knowledge of a value is Gaussian, and we take a measurement which is corrupted by Gaussian noise, then the posterior distribution, which is proportional to the prior and the measurement distributions, is also Gaussian.



          For example:



Suppose you are trying to measure a constant, unknown value $X$. You can take measurements of it, corrupted by Gaussian noise; your measurement model is $\tilde{X} = X + \epsilon$. Finally, suppose you have a Gaussian prior distribution for $X$. Then the posterior distribution after taking a measurement is



$$P[X \mid \tilde{X}] = \frac{P[\tilde{X} \mid X]\, P[X]}{P[\tilde{X}]}$$



As is fashionable in Bayesian probability, we throw out the value $P[\tilde{X}]$, because it doesn't depend explicitly on $X$, so we can ignore it for now and normalize later.



Now, our assumption is that the prior, $P[X]$, is Gaussian. The measurement model tells us that $P[\tilde{X} \mid X]$ is Gaussian, in particular $P[\tilde{X} \mid X] = N[\Sigma_{\epsilon}, X]$. Since the product of two Gaussians is a Gaussian, the posterior probability is Gaussian. It is not normalized, but that is where $P[\tilde{X}]$ (which we "threw out" earlier) comes in. It must be exactly the right value to normalize this distribution, which we can now read off from the variance of the Gaussian posterior.



          What you should really take away from this is that Gaussians are magical [1]. I don't know of any other PDF which has this property. This is why, for example, Kalman filters work so darn well. Kalman filters utilize both of these properties, and that is how you get a super-efficient algorithm for state estimation for a linear dynamical system with Gaussian noise.



          [1] - Gaussians are not actually magical, but perhaps they are mathemagical.
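Here is a short numerical sketch of that update (assuming NumPy; the prior, noise level, and measured value are made-up numbers). It applies the product formula quoted in the question to the Gaussian prior and the Gaussian likelihood, then checks the posterior mean and variance by brute-force normalization of the product of the two PDFs on a grid.

```python
import numpy as np

def gaussian_pdf_product(mu1, var1, mu2, var2):
    """Mean and variance of the renormalized product of two Gaussian PDFs
    in the same variable (the formula quoted in the question)."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = (var1 * mu2 + var2 * mu1) / (var1 + var2)
    return mu, var

# Prior belief about X and one noisy measurement X~ = X + eps (illustrative numbers).
mu_prior, var_prior = 0.0, 4.0    # prior      P[X]       = N(0, 4)
x_meas, var_noise = 2.5, 1.0      # likelihood P[X~ | X]  = N(X, 1), observed X~ = 2.5

mu_post, var_post = gaussian_pdf_product(mu_prior, var_prior, x_meas, var_noise)
print(mu_post, var_post)          # 2.0, 0.8: pulled toward the data, variance shrinks

# Sanity check: normalize prior * likelihood numerically and read off mean and variance.
x = np.linspace(-20.0, 20.0, 200_001)
unnorm = np.exp(-(x - mu_prior)**2 / (2 * var_prior)) * np.exp(-(x - x_meas)**2 / (2 * var_noise))
w = unnorm / unnorm.sum()
m = (w * x).sum()
v = (w * (x - m)**2).sum()
print(m, v)                       # matches mu_post, var_post
```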






answered Jan 31 '18 at 22:45 by Gabriel Barello

  • Note that the result in the OP is for a product of Gaussian PDFs on the same variable, while the likelihood and prior aren't. However it still works because of the symmetry between the variable and the mean in a Gaussian. – guillefix, Dec 17 '18 at 15:17

  • Actually the most 'magical' is probably the central limit theorem, and that the point $(X,Y)$ where $X,Y$ are independent standard normal random variables has a perfect circular symmetry. =) – user21820, Dec 24 '18 at 14:56




















Intuition 1 (Multiplying random variables): Suppose kids save $Y$ dollars a day for $X$ days before giving up. $X$ is a normally distributed random variable ranging from about 0 to 10 days, so the average kid saves for 5 days. The average kid saves $0.30 a day (mostly between $0.10 and $0.50). What is the distribution of the total savings $XY$?

Now, are the savings of a kid normally distributed? We know the mean, median and mode of a normal distribution are the same, since it is symmetric about its mean.

The average savings are clearly $0.30 × 5 = $1.50. The maximum savings are about $5 and the minimum about $0, and the midpoint of that range is $2.50. The mean and the midpoint don't match, so the product is not normally distributed: the distribution is shifted to the left of the $2.50 mark, and the probability that a kid saved $1 is higher than the probability that he saved $4. A quick simulation of this is sketched below.

Intuition 2 (Multiplying Gaussian PDFs): Now you are multiplying not the numbers but the functions together. The multiplication is just a bunch of algebra, and the resulting function also fits the form of a Gaussian; the proof for that is given in your link. It means that if you have populations of kids, there will be a Gaussian representing their savings, with some average savings from each sub-population. Basically, a mixture of Gaussians is a Gaussian. See https://en.m.wikipedia.org/wiki/Mixture_model for how various distributions mix. It’s useful for building machine learning models. Maybe, given distributions of daily savings and total savings, you want to establish the distribution of how long the kids tend to save.
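Here is the quick simulation of Intuition 1 promised above, a sketch assuming NumPy and treating the days and the daily amount as independent normals with roughly the stated spread (those exact distributions are an assumption for illustration, not part of the setup):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Assumed for illustration: days ~ N(5, 1.5^2) (roughly 0 to 10 days),
# amount per day ~ N(0.30, 0.10^2) (mostly between $0.10 and $0.50).
days = rng.normal(5.0, 1.5, n)
per_day = rng.normal(0.30, 0.10, n)
savings = days * per_day                        # total savings of each kid

print(savings.mean())                           # ~ 1.50, not the 2.50 midpoint of the 0..5 range
print(np.mean(np.abs(savings - 1.0) < 0.25))    # fraction of kids who saved about $1 ...
print(np.mean(np.abs(savings - 4.0) < 0.25))    # ... dwarfs the fraction who saved about $4
```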






answered Nov 15 '17 at 1:33 (edited Nov 16 '17 at 2:56) by Aditya Mittal

  • Your Intuition $1$ seems to be a good explanation but your Intuition $2$ doesn't help. The other answer by @Lyndon has the reverse problem. I'd hate to be in the OPs position trying to determine which of your answers is accepted. On another matter, you should learn basic MathJax and use more of it with each new question and answer as you learn more MathJax. – Stephen Meskin, Nov 15 '17 at 2:06

  • I added more to the intuition 2. I felt the spirit of the question is really around intuition 1. I don’t think I need to repeat the impressive wording of the other answer. – Aditya Mittal, Nov 16 '17 at 2:50






















          In brief, you were confused by two totally different concepts.




1. For the first, you are calculating the distribution of a transformed random variable; here, you specify the transformation as the product $XY$.

2. For the second, you just calculate the product of two functions, $\phi(x)\phi(y)$, which happen to be the PDFs of two normal random variables.



For some details, check here. In general, if you want to calculate the PDF of $XY$, you need to figure out $F(t) = P(XY < t)$ and then $f(t) = F'(t)$. As for the product of two functions, that's easy, as you can see.
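For the special case of $X$ and $Y$ iid standard normal, carrying this calculation through gives a known closed form, $f(t) = K_0(|t|)/\pi$ (the normal product distribution; see the Wolfram MathWorld link in the question). A quick Monte Carlo check, sketched with NumPy/SciPy:

```python
import numpy as np
from scipy.special import k0   # modified Bessel function of the second kind, order 0

rng = np.random.default_rng(1)
z = rng.standard_normal(1_000_000) * rng.standard_normal(1_000_000)   # samples of XY

# Empirical density of XY versus the closed form K_0(|t|) / pi.
hist, edges = np.histogram(z, bins=200, range=(-4.0, 4.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
theory = k0(np.abs(centers)) / np.pi

for c in (0.5, 1.0, 2.0):          # compare away from t = 0, where the density diverges
    i = np.argmin(np.abs(centers - c))
    print(c, hist[i], theory[i])   # empirical and exact values agree closely
```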






answered Dec 6 '18 at 3:10 by Albert Chen