Finding the positive root of $x^3 +x^2 =0.1$ by numerical methods.


























The positive root of $x^3 +x^2 =0.1$ is denoted by $A$.



$(a)$ Find the first approximation to $A$ by linear interpolation on the interval $(0,1)$



For this, I got $x_1 =0.05$
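Just to double-check my value, here is a short script of my own (using the standard one-step false-position formula, which is what I understand "linear interpolation" to mean here):

```python
def f(x):
    # The equation x^3 + x^2 = 0.1 rewritten as f(x) = 0.
    return x**3 + x**2 - 0.1

def interpolate(a, b):
    # One linear-interpolation (false position) step: the x-intercept
    # of the chord joining (a, f(a)) and (b, f(b)).
    return a - f(a) * (b - a) / (f(b) - f(a))

print(interpolate(0.0, 1.0))  # 0.05
```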



$(b)$ Indicate why linear interpolation does not give a good approximation to $A$.



All I can think of for this is that the $x_1$ from linear interpolation greatly underestimates $A$. But wouldn't numerical methods like Newton-Raphson, linear interpolation and the iteration $x_{n+1}=F(x_n)$ all give bad first approximations?



$(c)$ Find an alternative first approximation to $A$ by using the fact that, if $x$ is small, then $x^3$ is negligible compared with $x^2$.



So, for this am I supposed to use Newton-Raphson with $x_1=0$ since $x$ is small?




































      numerical-methods roots






      asked Nov 26 '18 at 13:27









Arc Neoepi






















3 Answers





































(a) Correct. (b) Yes. It is also bad because $1$ is far away from the root, as can be seen by comparing the function values, and the function is very non-linear around $x=0$, making linear approximations relatively bad.
[plot of the function, illustrating the non-linearity near $x=0$]



(c) No, you are supposed to solve $x^2=0.1$, disregarding the $x^3$ term. Or, to put it into inequalities: as $0<x<1$ you also have
$$
x^2 \le 0.1 \le 2x^2 \implies \sqrt{0.05} \le x \le \sqrt{0.1}.
$$

Then iterate further, for instance using $x_{k+1}=\sqrt{\frac{0.1}{1+x_k}}$.



          k    x[k]
          ------------------
          1 0.316227766017
          2 0.275635071544
          3 0.279986295554
          4 0.279509993492
          5 0.279562012937
          6 0.279556330208
          7 0.279556950986
          8 0.279556883172
          9 0.27955689058
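A short Python sketch of this fixed-point iteration (a reconstruction; the table above may have been produced with different code):

```python
import math

def g(x):
    # Fixed-point map from x^2 (1 + x) = 0.1, i.e. x = sqrt(0.1 / (1 + x)).
    return math.sqrt(0.1 / (1 + x))

x = math.sqrt(0.1)       # k = 1 in the table above
for k in range(2, 10):   # k = 2, ..., 9
    x = g(x)
    print(k, x)
```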





answered Nov 26 '18 at 13:53, edited Nov 26 '18 at 14:05 – LutzL
























• So, I have 2 questions: 1) How do you get from $0<x<1$ to $x^2 \le 0.1 \le 2x^2$? 2) Why use $x_{k+1}=\sqrt{\frac{0.1}{1+x_k}}$? Is $x_{k+1}=(0.1-x_k^2)^{1/3}$ okay?
  – Arc Neoepi, Nov 26 '18 at 14:01








• 1) Put $x^2+x^3$ in place of $0.1$.
  – Empy2, Nov 26 '18 at 14:06








• $0<x<1 \implies 0<x^3<x^2 \implies x^2<0.1<2x^2$. If computing with positive quantities, it makes life more predictable if all intermediate terms stay positive; some iteration formulas will be divergent. Computing with the secant method or Dekker's method will be faster than such simple iterations.
  – LutzL, Nov 26 '18 at 14:09










• Thanks for the help and the tips!
  – Arc Neoepi, Nov 26 '18 at 14:17
































Just for your curiosity.

We can approximate functions much better with Padé approximants than with Taylor series (remember that Newton's method is equivalent to an $O(x^2)$ Taylor expansion).

Since we know how to solve quadratic equations easily, let us consider the $[2,2]$ Padé approximant. It is
$$x^3+x^2 \sim \frac{x^2}{x^2-x+1}$$ If you expand the right-hand side as a Taylor series, you get $x^2+x^3-x^5+O\left(x^6\right)$ (pretty close, isn't it?).

So, for small values of $a$, the approximate solution of $x^3+x^2=a$ is given by the solution of $(1-a)x^2+ax-a=0$, that is to say
$$x_\pm=\frac{a\pm\sqrt{4a-3a^2}}{2(a-1)}$$ So, for $a=\frac{1}{10}$, the estimate is $x\approx 0.282376$, which is quite close to the "exact" solution $x\approx 0.279557$.
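A quick numerical check of this estimate (my own sketch, assuming the quadratic above; the branch with the minus sign gives the positive root since $a-1<0$):

```python
import math

def pade_root(a):
    # Positive root of (1 - a) x^2 + a x - a = 0, obtained from the
    # [2,2] Pade approximant x^2 / (x^2 - x + 1) of x^3 + x^2.
    return (a - math.sqrt(4*a - 3*a*a)) / (2*(a - 1))

estimate = pade_root(0.1)
print(estimate)  # ≈ 0.282376
```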






answered Nov 27 '18 at 10:07 – Claude Leibovici











































• Start with the iteration
  $$
  x_{n} = \sqrt{0.1 - x_{n-1}^{3}}\,,\qquad
  x_{0} = \sqrt{0.1} \approx 0.3162
  $$

  It seems that $10$ iterations are enough. Namely,
  $$
  x_{10} \approx 0.279564\quad\mbox{and}\quad f\left(x_{10}\right) \approx 5.43129 \times 10^{-6}
  $$

  where $f\left(x\right) \equiv x^{3} + x^{2} - 0.1$.

• In addition, you can refine the above result by means of a Newton-Raphson iteration:
  $$
  y_{n} = y_{n - 1} - {y_{n - 1}^{3} + y_{n - 1}^{2} - 0.1 \over
  3y_{n - 1}^{2} + 2y_{n - 1}}\,,\qquad y_{0} = x_{10} \approx 0.279564
  $$

  With about three iterations, I find
  $$
  y_{3} \approx 0.279557 \implies f\left(y_{3}\right) \approx
  2.77556 \times 10^{-17}
  $$
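The two stages can be sketched in a few lines of Python (my reconstruction, not necessarily the code behind the numbers above):

```python
def f(x):
    # f(x) = x^3 + x^2 - 0.1, whose positive zero we want.
    return x**3 + x**2 - 0.1

# Stage 1: fixed-point iteration x_n = sqrt(0.1 - x_{n-1}^3).
x = 0.1 ** 0.5
for _ in range(10):
    x = (0.1 - x**3) ** 0.5
print(x)  # close to the x_10 quoted above

# Stage 2: Newton-Raphson refinement, y_0 = x_10.
for _ in range(3):
    x -= f(x) / (3*x**2 + 2*x)
print(x)  # ≈ 0.27955689
```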









              3 Answers
              3






              active

              oldest

              votes








              3 Answers
              3






              active

              oldest

              votes









              active

              oldest

              votes






              active

              oldest

              votes









              1












              $begingroup$

              (a) Correct. (b) Yes. It is also bad because $1$ is far away from the root, as can be seen comparing the function values. And the function is very non-linear around $x=0$, making linear approximations relatively bad.
              enter image description here



              (c) No, you are supposed to solve $x^2=0.1$, disregarding the $x^3$ term. Or to put it into inequalities, as $0<x<1$ you also have
              $$
              x^2le 0.1le 2x^2implies sqrt{0.05}le x le sqrt{0.1}.
              $$

              Then iterate further, for instance using $x_{k+1}=sqrt{frac{0.1}{1+x_k}}$.



              k    x[k]
              ------------------
              1 0.316227766017
              2 0.275635071544
              3 0.279986295554
              4 0.279509993492
              5 0.279562012937
              6 0.279556330208
              7 0.279556950986
              8 0.279556883172
              9 0.27955689058





              share|cite|improve this answer











              $endgroup$













              • $begingroup$
                So, I have 2 qn. 1) How do you get from $0<x<1$ to $x^2le 0.1le 2x^2$ ? 2) Why use $x_{k+1}=sqrt{frac{0.1}{1+x_k}}$ ? Is $x_{k+1}=(0.1-(x_k)^2)^{1/3}$ okay?
                $endgroup$
                – Arc Neoepi
                Nov 26 '18 at 14:01








              • 1




                $begingroup$
                1) Put $x^2+x^3$ in place of $0.1$
                $endgroup$
                – Empy2
                Nov 26 '18 at 14:06








              • 1




                $begingroup$
                $0<x<1implies 0<x^3<x^2implies x^2<0.1<2x^2$. If computing with positive quantities it makes life more predictable if all intermediate terms stay positive. Some iteration formulas will be divergent. Computing using the secant or Dekker's method will be faster than such simple iterations.
                $endgroup$
                – LutzL
                Nov 26 '18 at 14:09










              • $begingroup$
                Thanks for the help and the tips!
                $endgroup$
                – Arc Neoepi
                Nov 26 '18 at 14:17
















              1












              $begingroup$

              (a) Correct. (b) Yes. It is also bad because $1$ is far away from the root, as can be seen comparing the function values. And the function is very non-linear around $x=0$, making linear approximations relatively bad.
              enter image description here



              (c) No, you are supposed to solve $x^2=0.1$, disregarding the $x^3$ term. Or to put it into inequalities, as $0<x<1$ you also have
              $$
              x^2le 0.1le 2x^2implies sqrt{0.05}le x le sqrt{0.1}.
              $$

              Then iterate further, for instance using $x_{k+1}=sqrt{frac{0.1}{1+x_k}}$.



              k    x[k]
              ------------------
              1 0.316227766017
              2 0.275635071544
              3 0.279986295554
              4 0.279509993492
              5 0.279562012937
              6 0.279556330208
              7 0.279556950986
              8 0.279556883172
              9 0.27955689058





              share|cite|improve this answer











              $endgroup$













              • $begingroup$
                So, I have 2 qn. 1) How do you get from $0<x<1$ to $x^2le 0.1le 2x^2$ ? 2) Why use $x_{k+1}=sqrt{frac{0.1}{1+x_k}}$ ? Is $x_{k+1}=(0.1-(x_k)^2)^{1/3}$ okay?
                $endgroup$
                – Arc Neoepi
                Nov 26 '18 at 14:01








              • 1




                $begingroup$
                1) Put $x^2+x^3$ in place of $0.1$
                $endgroup$
                – Empy2
                Nov 26 '18 at 14:06








              • 1




                $begingroup$
                $0<x<1implies 0<x^3<x^2implies x^2<0.1<2x^2$. If computing with positive quantities it makes life more predictable if all intermediate terms stay positive. Some iteration formulas will be divergent. Computing using the secant or Dekker's method will be faster than such simple iterations.
                $endgroup$
                – LutzL
                Nov 26 '18 at 14:09










              • $begingroup$
                Thanks for the help and the tips!
                $endgroup$
                – Arc Neoepi
                Nov 26 '18 at 14:17














              1












              1








              1





              $begingroup$

              (a) Correct. (b) Yes. It is also bad because $1$ is far away from the root, as can be seen comparing the function values. And the function is very non-linear around $x=0$, making linear approximations relatively bad.
              enter image description here



              (c) No, you are supposed to solve $x^2=0.1$, disregarding the $x^3$ term. Or to put it into inequalities, as $0<x<1$ you also have
              $$
              x^2le 0.1le 2x^2implies sqrt{0.05}le x le sqrt{0.1}.
              $$

              Then iterate further, for instance using $x_{k+1}=sqrt{frac{0.1}{1+x_k}}$.



              k    x[k]
              ------------------
              1 0.316227766017
              2 0.275635071544
              3 0.279986295554
              4 0.279509993492
              5 0.279562012937
              6 0.279556330208
              7 0.279556950986
              8 0.279556883172
              9 0.27955689058





              share|cite|improve this answer











              $endgroup$



              (a) Correct. (b) Yes. It is also bad because $1$ is far away from the root, as can be seen comparing the function values. And the function is very non-linear around $x=0$, making linear approximations relatively bad.
              enter image description here



              (c) No, you are supposed to solve $x^2=0.1$, disregarding the $x^3$ term. Or to put it into inequalities, as $0<x<1$ you also have
              $$
              x^2le 0.1le 2x^2implies sqrt{0.05}le x le sqrt{0.1}.
              $$

              Then iterate further, for instance using $x_{k+1}=sqrt{frac{0.1}{1+x_k}}$.



              k    x[k]
              ------------------
              1 0.316227766017
              2 0.275635071544
              3 0.279986295554
              4 0.279509993492
              5 0.279562012937
              6 0.279556330208
              7 0.279556950986
              8 0.279556883172
              9 0.27955689058






              share|cite|improve this answer














              share|cite|improve this answer



              share|cite|improve this answer








              edited Nov 26 '18 at 14:05

























              answered Nov 26 '18 at 13:53









              LutzLLutzL

              57.3k42054




              57.3k42054












              • $begingroup$
                So, I have 2 qn. 1) How do you get from $0<x<1$ to $x^2le 0.1le 2x^2$ ? 2) Why use $x_{k+1}=sqrt{frac{0.1}{1+x_k}}$ ? Is $x_{k+1}=(0.1-(x_k)^2)^{1/3}$ okay?
                $endgroup$
                – Arc Neoepi
                Nov 26 '18 at 14:01








              • 1




                $begingroup$
                1) Put $x^2+x^3$ in place of $0.1$
                $endgroup$
                – Empy2
                Nov 26 '18 at 14:06








              • 1




                $begingroup$
                $0<x<1implies 0<x^3<x^2implies x^2<0.1<2x^2$. If computing with positive quantities it makes life more predictable if all intermediate terms stay positive. Some iteration formulas will be divergent. Computing using the secant or Dekker's method will be faster than such simple iterations.
                $endgroup$
                – LutzL
                Nov 26 '18 at 14:09










              • $begingroup$
                Thanks for the help and the tips!
                $endgroup$
                – Arc Neoepi
                Nov 26 '18 at 14:17


















              • $begingroup$
                So, I have 2 qn. 1) How do you get from $0<x<1$ to $x^2le 0.1le 2x^2$ ? 2) Why use $x_{k+1}=sqrt{frac{0.1}{1+x_k}}$ ? Is $x_{k+1}=(0.1-(x_k)^2)^{1/3}$ okay?
                $endgroup$
                – Arc Neoepi
                Nov 26 '18 at 14:01








              • 1




                $begingroup$
                1) Put $x^2+x^3$ in place of $0.1$
                $endgroup$
                – Empy2
                Nov 26 '18 at 14:06








              • 1




                $begingroup$
                $0<x<1implies 0<x^3<x^2implies x^2<0.1<2x^2$. If computing with positive quantities it makes life more predictable if all intermediate terms stay positive. Some iteration formulas will be divergent. Computing using the secant or Dekker's method will be faster than such simple iterations.
                $endgroup$
                – LutzL
                Nov 26 '18 at 14:09










              • $begingroup$
                Thanks for the help and the tips!
                $endgroup$
                – Arc Neoepi
                Nov 26 '18 at 14:17
















              $begingroup$
              So, I have 2 qn. 1) How do you get from $0<x<1$ to $x^2le 0.1le 2x^2$ ? 2) Why use $x_{k+1}=sqrt{frac{0.1}{1+x_k}}$ ? Is $x_{k+1}=(0.1-(x_k)^2)^{1/3}$ okay?
              $endgroup$
              – Arc Neoepi
              Nov 26 '18 at 14:01






              $begingroup$
              So, I have 2 qn. 1) How do you get from $0<x<1$ to $x^2le 0.1le 2x^2$ ? 2) Why use $x_{k+1}=sqrt{frac{0.1}{1+x_k}}$ ? Is $x_{k+1}=(0.1-(x_k)^2)^{1/3}$ okay?
              $endgroup$
              – Arc Neoepi
              Nov 26 '18 at 14:01






              1




              1




              $begingroup$
              1) Put $x^2+x^3$ in place of $0.1$
              $endgroup$
              – Empy2
              Nov 26 '18 at 14:06






              $begingroup$
              1) Put $x^2+x^3$ in place of $0.1$
              $endgroup$
              – Empy2
              Nov 26 '18 at 14:06






              1




              1




              $begingroup$
              $0<x<1implies 0<x^3<x^2implies x^2<0.1<2x^2$. If computing with positive quantities it makes life more predictable if all intermediate terms stay positive. Some iteration formulas will be divergent. Computing using the secant or Dekker's method will be faster than such simple iterations.
              $endgroup$
              – LutzL
              Nov 26 '18 at 14:09




              $begingroup$
              $0<x<1implies 0<x^3<x^2implies x^2<0.1<2x^2$. If computing with positive quantities it makes life more predictable if all intermediate terms stay positive. Some iteration formulas will be divergent. Computing using the secant or Dekker's method will be faster than such simple iterations.
              $endgroup$
              – LutzL
              Nov 26 '18 at 14:09












              $begingroup$
              Thanks for the help and the tips!
              $endgroup$
              – Arc Neoepi
              Nov 26 '18 at 14:17




              $begingroup$
              Thanks for the help and the tips!
              $endgroup$
              – Arc Neoepi
              Nov 26 '18 at 14:17











              0












              $begingroup$

              Just for your curiosity.



              We can approximate functions using Padé approximants much better than with Taylor series (remember that Newton method is equivalent to an $O(x^2)$ Taylor expansion).



              Since we know how to solve easily quadratic equations, let us consider the $[2,2]$ Padé approximant. It will be
              $$x^3+x^2 simfrac{x^2}{x^2-x+1}$$ If you develop the rhs of the above as a Taylor series, you would get $x^2+x^3-x^5+Oleft(x^6right)$ (pretty close, isn't it ?).



              So, for small values of $a$, the approximate solution of $x^3+x^2=a$ is given by the solution of $(1-a) x^2+a x-a=0$ that is to say
              $$x_pm=frac{apmsqrt{4 a-3 a^2}}{2 (a-1)}$$ So, for $a=frac 1 {10}$, the estimate would be $xapprox 0.282376$ which is quite close to the "exact" solution $xapprox 0.279557$






              share|cite|improve this answer









              $endgroup$


















                0












                $begingroup$

                Just for your curiosity.



                We can approximate functions using Padé approximants much better than with Taylor series (remember that Newton method is equivalent to an $O(x^2)$ Taylor expansion).



                Since we know how to solve easily quadratic equations, let us consider the $[2,2]$ Padé approximant. It will be
                $$x^3+x^2 simfrac{x^2}{x^2-x+1}$$ If you develop the rhs of the above as a Taylor series, you would get $x^2+x^3-x^5+Oleft(x^6right)$ (pretty close, isn't it ?).



                So, for small values of $a$, the approximate solution of $x^3+x^2=a$ is given by the solution of $(1-a) x^2+a x-a=0$ that is to say
                $$x_pm=frac{apmsqrt{4 a-3 a^2}}{2 (a-1)}$$ So, for $a=frac 1 {10}$, the estimate would be $xapprox 0.282376$ which is quite close to the "exact" solution $xapprox 0.279557$






                share|cite|improve this answer









                $endgroup$
















                  0












                  0








                  0





                  $begingroup$

                  Just for your curiosity.



                  We can approximate functions using Padé approximants much better than with Taylor series (remember that Newton method is equivalent to an $O(x^2)$ Taylor expansion).



                  Since we know how to solve easily quadratic equations, let us consider the $[2,2]$ Padé approximant. It will be
                  $$x^3+x^2 simfrac{x^2}{x^2-x+1}$$ If you develop the rhs of the above as a Taylor series, you would get $x^2+x^3-x^5+Oleft(x^6right)$ (pretty close, isn't it ?).



                  So, for small values of $a$, the approximate solution of $x^3+x^2=a$ is given by the solution of $(1-a) x^2+a x-a=0$ that is to say
                  $$x_pm=frac{apmsqrt{4 a-3 a^2}}{2 (a-1)}$$ So, for $a=frac 1 {10}$, the estimate would be $xapprox 0.282376$ which is quite close to the "exact" solution $xapprox 0.279557$






                  share|cite|improve this answer









                  $endgroup$



                  Just for your curiosity.



                  We can approximate functions using Padé approximants much better than with Taylor series (remember that Newton method is equivalent to an $O(x^2)$ Taylor expansion).



                  Since we know how to solve easily quadratic equations, let us consider the $[2,2]$ Padé approximant. It will be
                  $$x^3+x^2 simfrac{x^2}{x^2-x+1}$$ If you develop the rhs of the above as a Taylor series, you would get $x^2+x^3-x^5+Oleft(x^6right)$ (pretty close, isn't it ?).



                  So, for small values of $a$, the approximate solution of $x^3+x^2=a$ is given by the solution of $(1-a) x^2+a x-a=0$ that is to say
                  $$x_pm=frac{apmsqrt{4 a-3 a^2}}{2 (a-1)}$$ So, for $a=frac 1 {10}$, the estimate would be $xapprox 0.282376$ which is quite close to the "exact" solution $xapprox 0.279557$







                  share|cite|improve this answer












                  share|cite|improve this answer



                  share|cite|improve this answer










                  answered Nov 27 '18 at 10:07









                  Claude LeiboviciClaude Leibovici

                  120k1157132




                  120k1157132























                      0












                      $begingroup$

                      $newcommand{bbx}[1]{,bbox[15px,border:1px groove navy]{displaystyle{#1}},}
                      newcommand{braces}[1]{leftlbrace,{#1},rightrbrace}
                      newcommand{bracks}[1]{leftlbrack,{#1},rightrbrack}
                      newcommand{dd}{mathrm{d}}
                      newcommand{ds}[1]{displaystyle{#1}}
                      newcommand{expo}[1]{,mathrm{e}^{#1},}
                      newcommand{ic}{mathrm{i}}
                      newcommand{mc}[1]{mathcal{#1}}
                      newcommand{mrm}[1]{mathrm{#1}}
                      newcommand{pars}[1]{left(,{#1},right)}
                      newcommand{partiald}[3]{frac{partial^{#1} #2}{partial #3^{#1}}}
                      newcommand{root}[2]{,sqrt[#1]{,{#2},},}
                      newcommand{totald}[3]{frac{mathrm{d}^{#1} #2}{mathrm{d} #3^{#1}}}
                      newcommand{verts}[1]{leftvert,{#1},rightvert}$




                      • Start with the iteration
                        $$
                        x_{n} = root{0.1 - x_{n}^{3}},,qquad
                        x_{0} = root{0.1} approx 0.3162
                        $$

                        It seems to be that $ds{10}$ iterations are enough. Namely, $$
                        x_{10} approx 0.279564quadmbox{and}quadmrm{f}pars{x_{10}} approx 5.43129 times 10^{-6}
                        $$

                        where $bbox[10px,#ffd,border: 1px groove navy]{ds{mrm{f}pars{x} equiv x^{3} + x^{2} - 0.1}}$.

                      • In addition, you can refine the above result by means of a Newton-Rapson Iteration:
                        $$
                        y_{n} = y_{n - 1} - {y_{n - 1}^{3} + y_{n - 1}^{2} - 0.1 over
                        3y_{n - 1}^{2} + 2y_{n - 1}},,qquad y_{0} = x_{10} approx 0.279564
                        $$

                        With about three iterations, I'll find
                        $$
                        y_{3} approx bbx{large 0.279957} implies mrm{f}pars{y_{3}} approx
                        2.77556 times10^{-17}
                        $$

                        begin{align}
                        end{align}







                      share|cite|improve this answer











                      $endgroup$



















                          edited Nov 29 '18 at 1:16

























                          answered Nov 27 '18 at 21:12









                          Felix Marin































                              Thanks for contributing an answer to Mathematics Stack Exchange!

