Why is $dW^2=dt$ in stochastic calculus? (without using Itô's lemma)


























I'm trying to calculate the following integral: $$\int_0^t W_s\,dW_s$$
without using Itô's lemma. I am confused about how $dW^2=dt$ arises when Itô's lemma is not used.





Hint



Let $W$ be a standard Wiener process and $t$ an arbitrary positive real number. For each $n$, set $t_i=i\,t\,2^{-n}$; then:




  1. show that $\sum_i(\Delta W(t_i))^2$ converges to $t$ as $n$ grows;

  2. show that the terms in the sum are IID and that their variance
    shrinks sufficiently fast as $n$ grows, using the fourth moment of
    a Gaussian distribution (a numerical sketch of step 1 follows below).
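
As a quick numerical check of step 1 (a minimal sketch, assuming NumPy; not a proof), the sum of squared increments over the dyadic grid approaches $t$:

    import numpy as np

    # Sum of squared Brownian increments over the dyadic grid t_i = i * t * 2**-n.
    rng = np.random.default_rng(0)
    t = 2.0
    for n in (4, 8, 12, 16):
        dt = t * 2.0**-n                              # grid spacing t * 2^-n
        dW = rng.normal(0.0, np.sqrt(dt), size=2**n)  # increments Delta W(t_i)
        print(n, np.sum(dW**2))                       # approaches t = 2.0 as n grows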




So, I am left with the following:



$$E\Big(\sum_i(\Delta W(t_i))^2\Big)=E\Big(\sum_i\big(W(t_{i+1})-W(t_i)\big)^2\Big)=\,?$$



What I have done



So, can I say: since $W(t_{i+1})-W(t_i)$ is distributed as $\sqrt{t_{i+1}-t_i}\,N(0,1)$, by the strong law of large numbers
$\sum_i(W(t_{i+1})-W(t_i))^2$ converges to $\sum_i E\big[(W(t_{i+1})-W(t_i))^2\big]=\frac12\sum_i(t_{i+1}-t_i)=t$?



Is this correct?
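
For reference: $E\big[(W(t_{i+1})-W(t_i))^2\big]=t_{i+1}-t_i$, so the sum of the means is $\sum_i(t_{i+1}-t_i)=t$ with no factor of $\frac12$, and the strong law of large numbers is not what drives the convergence. One hedged sketch of where the fourth moment enters: write $\Delta W_i=\sqrt{\Delta t}\,Z_i$ with $Z_i\sim N(0,1)$ independent and $E[Z_i^4]=3$; then

$$\operatorname{Var}\Big(\sum_i(\Delta W_i)^2\Big)=\sum_i \Delta t^2\big(E[Z_i^4]-1\big)=2\,\Delta t^2\cdot 2^n=2t^2 2^{-n}\longrightarrow 0,$$

so the sum converges to its mean $t$ in $L^2$, and almost surely along $n$ by Borel–Cantelli, since these variances are summable.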










stochastic-processes stochastic-calculus stochastic-integrals






edited Dec 12 '18 at 23:32 by Waqas · asked Dec 12 '18 at 16:53 by gloria








  • ?? What happened when you applied the (rather detailed) hint? – Did, Dec 12 '18 at 17:10










  • You will need the fourth moment of a Gaussian distribution??? – gloria, Dec 12 '18 at 17:29










  • Is this supposed to address my comment? If not, why? – Did, Dec 12 '18 at 17:32










  • I am not sure about my last step. Since the hint mentions the fourth moment, I just figured it like this. – gloria, Dec 12 '18 at 17:49










  • "According to [the] strong law of Large number[s] ... converges to ..." ?? The LLN deals with means $\frac1n\sum\limits_{k=1}^n X_k$. Where do you see one here? – Did, Dec 12 '18 at 18:02














1 Answer



















Recall that the Riemann sum approximation is:



$$X_t^m = \sum_{t_j<t} f_{t_j}\,\Delta W_j$$



where $\Delta t = 2^{-m}$, $W_t$ is a standard Brownian motion, and



$$\Delta W_j = W_{t_{j+1}}-W_{t_j}.$$



The convergence is pathwise for almost every Brownian path, and the Riemann approximation converges to a limit that is measurable with respect to the filtration up to time $t$, because $X_t$ is a continuous function of $W_{[0,t]}$. Here I will skip much of the detail on the Riemann sum approximation (and will use that $\Delta W_j^2$ has mean $\Delta t$ and variance $2\Delta t^2$, which is essentially the hint above) and go to the problem at hand, which is:



$$X_t=\int_{0}^t W_s\,dW_s$$



By the Riemann sum approximation we write



$$X_t^m=\sum_{t_j<t}W_{t_j}\big(W_{t_{j+1}}-W_{t_{j}}\big)$$
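
A minimal numerical sketch of this left-endpoint sum (assuming NumPy; illustrative only), refining one fixed path simulated at the finest grid:

    import numpy as np

    # Left-endpoint sums X_t^m = sum_{t_j<t} W_{t_j} (W_{t_{j+1}} - W_{t_j}).
    rng = np.random.default_rng(1)
    t, m_max = 1.0, 16
    dW = rng.normal(0.0, np.sqrt(2.0**-m_max), int(t * 2**m_max))
    W = np.concatenate(([0.0], np.cumsum(dW)))      # path on the finest grid
    for m in (8, 10, 12, 14, 16):
        Wc = W[::2**(m_max - m)]                    # same path on grid 2^-m
        print(m, np.sum(Wc[:-1] * np.diff(Wc)))     # stabilizes as m grows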



Noticing that:



$$W_{t_j}=\frac{1}{2}\big(W_{t_{j+1}}+W_{t_j}\big)-\frac{1}{2}\big(W_{t_{j+1}}-W_{t_j}\big)$$



So we can write:



$$X_t^m=\frac{1}{2}\underbrace{\sum_{t_j<t}\big(W_{t_{j+1}}+W_{t_j}\big)\big(W_{t_{j+1}}-W_{t_j}\big)}_{\text{first sum}}-\frac{1}{2}\underbrace{\sum_{t_j<t}\big(W_{t_{j+1}}-W_{t_j}\big)\big(W_{t_{j+1}}-W_{t_j}\big)}_{\text{second sum}}$$



Notice that
$$\big(W_{t_{j+1}}+W_{t_j}\big)\big(W_{t_{j+1}}-W_{t_j}\big)=W^2_{t_{j+1}}-W_{t_j}^2.$$



By telescoping, and letting $t_n=\max\{t_j : t_j<t\}$, the first term becomes $\frac{1}{2}\big(W^2_{t_{n+1}}-W_0^2\big)$; since $W_0=0$, this is $\frac{1}{2}W^2_{t_{n+1}}$, and $W_{t_{n+1}}\rightarrow W_t$ as $\Delta t \rightarrow 0$. The second sum is $S=\sum_{t_j<t}\Delta W_j^2$, and since $E(\Delta W_j^2)=\Delta t$,




$$E\Big(\sum_{t_j<t}\Delta W_j^2\Big)=E(S)=\sum_{t_j<t}\Delta t=t_n.$$




This answers the "?" in the question above.



Here $t_n\rightarrow t$ as $\Delta t\rightarrow 0$. Similarly, for the variance,

$$\operatorname{var}(S)=2\Delta t\sum_{t_j<t}\Delta t=2\,\Delta t\,t_n \leq 2t\,2^{-m}.$$
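
A quick Monte Carlo sketch of these two moments (assuming NumPy; the grid level and sample count are illustrative):

    import numpy as np

    # Estimate E(S) and var(S) for S = sum_{t_j<t} Delta W_j^2 on grid Delta t = 2^-m.
    rng = np.random.default_rng(3)
    t, m, reps = 1.0, 10, 4000
    dt = 2.0**-m
    dW = rng.normal(0.0, np.sqrt(dt), (reps, int(t / dt)))
    S = np.sum(dW**2, axis=1)
    print(S.mean(), t)              # E(S) is close to t_n (= t here)
    print(S.var(), 2 * dt * t)      # var(S) is close to 2 * Delta t * t_n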



Since $E(S)\rightarrow t$ and $\operatorname{var}(S)\rightarrow 0$, we get $S\rightarrow t$ in $L^2$. Thus,




$$\int_0^t W_s\,dW_s = \frac{1}{2}\big(W_t^2 - t\big),$$




which resolves the problem as stated.



Notice that if $W_t$ were differentiable we would obtain $\frac{1}{2}W_t^2$ instead; the extra $-\frac{t}{2}$ is exactly the quadratic-variation term.
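
To see that contrast numerically (a sketch, assuming NumPy): the left-endpoint sum tracks $\frac{1}{2}(W_t^2-t)$, while the trapezoid (Stratonovich-style) sum telescopes to exactly $\frac{1}{2}W_t^2$, which is what ordinary calculus would give:

    import numpy as np

    rng = np.random.default_rng(2)
    t, m = 1.0, 16
    dt = 2.0**-m
    dW = rng.normal(0.0, np.sqrt(dt), int(t / dt))
    W = np.concatenate(([0.0], np.cumsum(dW)))

    ito = np.sum(W[:-1] * dW)                    # left endpoint (Ito)
    strat = np.sum(0.5 * (W[:-1] + W[1:]) * dW)  # trapezoid (Stratonovich)
    print(ito, 0.5 * (W[-1]**2 - t))             # close to (W_t^2 - t)/2
    print(strat, 0.5 * W[-1]**2)                 # W_t^2 / 2, up to rounding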






edited Dec 13 '18 at 3:21 · answered Dec 12 '18 at 21:32 by Waqas













