Convex Minimization Over the Unit Simplex























I have a simple (few variables), continuous, twice-differentiable convex function that I wish to minimize over the unit simplex. In other words, $\min f(\mathbf{x})$, $\text{s.t. } \mathbf{0} \preceq \mathbf{x}$ and $\mathbf{1}^\top \mathbf{x} = 1$. This will have to be performed multiple times.

What is a good optimization method to use? Preferably one that is fairly simple to describe and implement.

There are some really fancy methods, such as those in "Mirror descent and nonlinear projected subgradient methods for convex optimization" (Beck and Teboulle), that specifically address minimization over the unit simplex. But these methods only use the gradient, not the Hessian.
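To make the setup concrete, here is a toy instance in Python (entirely illustrative, not the actual objective from the question): taking $f$ to be a KL divergence to a fixed positive vector $p$, the minimizer over the unit simplex is simply $p$ normalized, which gives any candidate method an easy correctness check.

```python
import math

# Hypothetical objective: f(x) = sum_i x_i * log(x_i / p_i) is smooth and
# convex on the interior of the simplex; setting all partial derivatives
# equal shows its minimizer over the simplex is p / sum(p).
p = [2.0, 1.0, 1.0]

def f(x):
    return sum(xi * math.log(xi / pi) for xi, pi in zip(x, p))

x_opt = [pi / sum(p) for pi in p]  # [0.5, 0.25, 0.25]
```

Any method proposed for the problem should recover `x_opt` on this instance.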


































  • If the function is convex, why wouldn't a simple line-search method work? Since the objective function is twice differentiable, I'd think Newton's method would work fine; if it were only once differentiable, gradient descent should work. Both are extremely well documented online, with pseudocode and application-specific code abounding.
    – jedwards
    Dec 6 '11 at 3:34












  • How about the "Karush–Kuhn–Tucker conditions"? en.wikipedia.org/wiki/…
    – matt
    Dec 6 '11 at 12:35
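For this problem the KKT conditions take a particularly simple form: at an optimum, the partial derivatives of $f$ share a common value $\lambda$ on the support of $\mathbf{x}$, and are at least $\lambda$ wherever $x_i = 0$. A small numeric check of that condition might look like the following sketch (the gradient oracle and all names are illustrative):

```python
def kkt_residual(grad, x, tol=1e-8):
    """Maximum violation of the simplex KKT conditions at the feasible point x."""
    g = grad(x)
    lam = min(gi for gi, xi in zip(g, x) if xi > tol)  # multiplier on the support
    viol = max(abs(gi - lam) for gi, xi in zip(g, x) if xi > tol)
    for gi, xi in zip(g, x):
        if xi <= tol:
            viol = max(viol, lam - gi)  # off-support gradients must be >= lam
    return viol

# Example: f(x) = ||x||^2 is minimized over the simplex at the barycenter,
# where every partial derivative equals 2/n.
grad_f = lambda x: [2.0 * xi for xi in x]
```

Here `kkt_residual(grad_f, [1/3, 1/3, 1/3])` is zero, while a vertex such as `[1, 0, 0]` shows a large violation.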










  • Thanks for the kind reply. I was wondering if there is a specific algorithm for minimization over the unit simplex that uses the Hessian (since it is easily calculated and the problem size is small). (I am still new to convex optimization.) In the end I implemented an interior-point method and it works well.
    – dcm29
    Dec 12 '11 at 3:05
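As a sketch of the interior-point (log-barrier) idea mentioned here, the two-variable case lets one eliminate the equality constraint and run scalar Newton on a sequence of barrier subproblems. The objective, tolerances, and schedule below are all made up for illustration:

```python
# Two-variable toy instance: minimize f(x) = (x1 - 0.2)^2 + (x2 - 0.8)^2 over
# x1 + x2 = 1, x >= 0. Substituting x2 = 1 - x1 reduces f to 2*(t - 0.2)^2 in
# t = x1, and the barrier -mu*(log t + log(1 - t)) keeps t strictly inside (0, 1).
def solve_barrier(c1=0.2, mu=1.0, shrink=0.1, tol=1e-9):
    t = 0.5                                    # strictly feasible start
    while mu > 1e-10:
        for _ in range(50):                    # Newton on the barrier subproblem
            g = 4.0 * (t - c1) - mu * (1.0 / t - 1.0 / (1.0 - t))
            h = 4.0 + mu * (1.0 / t ** 2 + 1.0 / (1.0 - t) ** 2)
            step = g / h
            while not (0.0 < t - step < 1.0):  # damp to stay strictly feasible
                step *= 0.5
            t -= step
            if abs(step) < tol:
                break
        mu *= shrink                           # tighten the barrier
    return t, 1.0 - t

x1, x2 = solve_barrier()  # tends to (0.2, 0.8) as the barrier weight shrinks
```

In higher dimensions the scalar Newton step becomes a linear solve against the (barrier-augmented) Hessian, which is exactly where the cheap Hessian pays off.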








  • This answer probably comes too late, but have you already looked into the paper "Projected Newton methods for optimization problems with simple constraints" by Bertsekas (SIAM J. Control and Optimization, Vol. 20, No. 2, March 1982)? As the title says, a projected Newton algorithm is introduced together with an Armijo-like line search, and superlinear convergence is proved for simple constraints. The above problem is listed as an example.
    – user103342
    Oct 26 '13 at 11:12












  • See the paper "Efficient Projections onto the l1-Ball for Learning in High Dimensions"; it addresses exactly your problem.
    – Liang
    Nov 22 '13 at 13:16
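The core primitive of that paper, Euclidean projection onto the simplex, is short enough to sketch here. This is a sort-based O(n log n) variant; the variable names are mine, not the paper's:

```python
def project_simplex(v, radius=1.0):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = radius}."""
    u = sorted(v, reverse=True)
    css = 0.0    # running sum of the sorted entries
    theta = 0.0  # shift that makes the clipped vector sum to radius
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - radius) / i
        if ui > t:       # coordinate i is still in the support of the projection
            theta = t
    return [max(vi - theta, 0.0) for vi in v]
```

Points already on the simplex are fixed points of the projection, which makes for an easy sanity check.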















optimization convex-optimization nonlinear-optimization






edited May 31 at 20:44 by Royi

asked Dec 6 '11 at 2:30 by dcm29












2 Answers






























The simplest, yet pretty fast, method (in total running time, not per-iteration cost) would be the accelerated projected gradient descent method.

All you need is to calculate the gradient of the function $ f \left( x \right) $ and project each iterate onto the unit simplex.

MATLAB code would be:

simplexRadius = 1;
stopThr = 1e-5;
vX = vXInit;
vY = vX; %<! Extrapolation point for the acceleration
tK = 1;

for ii = 1:numIterations

    vG = CalcFunGrad(vY); %<! The gradient at the extrapolated point
    vXPrev = vX;
    vX = ProjectSimplex(vY - (stepSize * vG), simplexRadius, stopThr);
    tKPrev = tK;
    tK = (1 + sqrt(1 + (4 * tK ^ 2))) / 2; %<! Nesterov / FISTA momentum sequence
    vY = vX + (((tKPrev - 1) / tK) * (vX - vXPrev));

end

You can get the simplex projection function from the link I pasted (or directly from my Ball Projection GitHub repository).
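A Python transcription of the same kind of loop, with FISTA-style momentum, a compact sort-based simplex projection, and a toy objective whose constrained minimizer is known; all names and constants are illustrative:

```python
def proj_simplex(v):
    # Sort-based Euclidean projection onto {x : x >= 0, sum(x) = 1}
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui > t:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

def apgd(grad, x0, step, num_iters=500):
    x, y, t_k = list(x0), list(x0), 1.0
    for _ in range(num_iters):
        g = grad(y)
        x_new = proj_simplex([yi - step * gi for yi, gi in zip(y, g)])
        t_next = (1.0 + (1.0 + 4.0 * t_k ** 2) ** 0.5) / 2.0
        beta = (t_k - 1.0) / t_next            # FISTA momentum weight
        y = [xn + beta * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t_k = x_new, t_next
    return x

# Toy objective f(x) = ||x - c||^2; since c lies on the simplex, the
# constrained minimizer is c itself.
c = [0.6, 0.3, 0.1]
grad_f = lambda x: [2.0 * (xi - ci) for xi, ci in zip(x, c)]
x_star = apgd(grad_f, [1 / 3, 1 / 3, 1 / 3], step=0.4)
```

The step size must respect the gradient's Lipschitz constant (here $L = 2$, so any step up to $1/L = 0.5$ works).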















































The exponentiated gradient descent algorithm may be another approach that is simple to implement.

The lecture material from the University of Chicago gives a nice overview.
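A minimal sketch of the exponentiated gradient update (a multiplicative step followed by renormalization, so the iterate stays strictly inside the simplex); the step size, iteration count, and toy objective are illustrative:

```python
import math

def exp_grad(grad, x0, eta=0.5, num_iters=200):
    x = list(x0)
    for _ in range(num_iters):
        g = grad(x)
        w = [xi * math.exp(-eta * gi) for xi, gi in zip(x, g)]  # multiplicative step
        s = sum(w)
        x = [wi / s for wi in w]  # renormalize back onto the simplex
    return x

# Toy objective f(x) = ||x - c||^2 with c on the simplex, so the constrained
# minimizer is c itself.
c = [0.6, 0.3, 0.1]
x_star = exp_grad(lambda x: [2.0 * (xi - ci) for xi, ci in zip(x, c)],
                  [1 / 3, 1 / 3, 1 / 3])
```

Because the update is multiplicative, no explicit projection step is needed, which is the main practical appeal of the method on the simplex.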
























answered Aug 22 '17 at 12:40 by Royi






















              up vote
              0
              down vote













              The Exponentiated Gradient Descend algorithm may be another approach simple to implement.



              The lecture material from the University of Chicago gives a nice overview..






              share|cite|improve this answer








              New contributor




              Milan is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
              Check out our Code of Conduct.






















                up vote
                0
                down vote













                The Exponentiated Gradient Descend algorithm may be another approach simple to implement.



                The lecture material from the University of Chicago gives a nice overview..






                share|cite|improve this answer








                New contributor




                Milan is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                Check out our Code of Conduct.




















                  up vote
                  0
                  down vote










                  up vote
                  0
                  down vote









                  The Exponentiated Gradient Descend algorithm may be another approach simple to implement.



                  The lecture material from the University of Chicago gives a nice overview..






                  share|cite|improve this answer








                  New contributor




                  Milan is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                  Check out our Code of Conduct.









                  The Exponentiated Gradient Descend algorithm may be another approach simple to implement.



                  The lecture material from the University of Chicago gives a nice overview..







                  share|cite|improve this answer








                  New contributor




                  Milan is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                  Check out our Code of Conduct.









                  share|cite|improve this answer



                  share|cite|improve this answer






                  New contributor




                  Milan is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                  Check out our Code of Conduct.









answered Nov 12 at 16:23 by Milan



