Convex Minimization Over the Unit Simplex
I have a simple (few variables), continuous, twice-differentiable convex function that I wish to minimize over the unit simplex; that is, $\min f(\mathbf{x})$ subject to $\mathbf{0} \preceq \mathbf{x}$ and $\mathbf{1}^\top \mathbf{x} = 1$. The minimization will have to be performed many times.
What is a good optimization method to use? Preferably one that is fairly simple to describe and implement.
There are some fairly sophisticated methods, such as the mirror descent and nonlinear projected subgradient methods of Beck and Teboulle ("Mirror descent and nonlinear projected subgradient methods for convex optimization"), that specifically handle minimization over the unit simplex. But these methods use only the gradient, not the Hessian.
optimization convex-optimization nonlinear-optimization
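As a point of comparison, a general-purpose constrained solver handles this problem directly. Here is a sketch using SciPy's SLSQP method; the least-squares objective below is a hypothetical stand-in for the actual $f$:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth convex objective standing in for f: 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)

def f(x):
    r = A @ x - b
    return 0.5 * (r @ r)

def grad_f(x):
    return A.T @ (A @ x - b)

n = 3
res = minimize(
    f, np.full(n, 1.0 / n), jac=grad_f, method='SLSQP',
    bounds=[(0.0, None)] * n,                                    # x >= 0
    constraints=[{'type': 'eq', 'fun': lambda x: x.sum() - 1}],  # 1'x = 1
)
x_opt = res.x
```

For a few variables this is usually fast enough even when repeated many times; a specialized method only pays off if the solver becomes the bottleneck.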
If the function is convex, why wouldn't a simple line-search method work? Since the objective function is twice differentiable, I'd think Newton's method would work fine -- if only once differentiable, gradient descent should work. Both are extremely well documented online, with pseudocode and application-specific code abounding.
– jedwards
Dec 6 '11 at 3:34
How about the "Karush–Kuhn–Tucker conditions"? en.wikipedia.org/wiki/…
– matt
Dec 6 '11 at 12:35
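For this problem the KKT conditions take a simple form (a sketch, with multiplier $\nu$ for the equality constraint and $\boldsymbol{\mu} \succeq \mathbf{0}$ for the nonnegativity constraints):

```latex
\nabla f(\mathbf{x}) - \boldsymbol{\mu} + \nu \mathbf{1} = \mathbf{0}, \qquad
\boldsymbol{\mu} \succeq \mathbf{0}, \qquad
\mu_i x_i = 0 \;\; \forall i, \qquad
\mathbf{x} \succeq \mathbf{0}, \qquad
\mathbf{1}^\top \mathbf{x} = 1.
```

Equivalently, at the optimum $\partial f / \partial x_i = -\nu$ for every $i$ with $x_i > 0$, and $\partial f / \partial x_i \ge -\nu$ for every $i$ with $x_i = 0$.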
Thanks for the kind replies. I was wondering whether there is a specific algorithm for minimization over the unit simplex that uses the Hessian, since it is easily calculated and the problem size is small. (I am still new to convex optimization.) In the end I implemented an interior-point method and it works well.
– dcm29
Dec 12 '11 at 3:05
This answer probably comes too late, but have you already looked into the paper "Projected Newton methods for optimization problems with simple constraints" by Bertsekas (SIAM J. Control and Optimization, Vol. 20, No. 2, March 1982)? As the title says, a projected Newton algorithm is introduced, together with an Armijo-like line search. Superlinear convergence is also proved for simple constraints, and the above problem is listed as an example.
– user103342
Oct 26 '13 at 11:12
See the paper "Efficient Projections onto the l1-Ball for Learning in High Dimensions"; the projection it describes applies directly to your constraint set.
– Liang
Nov 22 '13 at 13:16
edited May 31 at 20:44
Royi
asked Dec 6 '11 at 2:30
dcm29
2 Answers
The simplest yet fairly fast method (in running time, not per-iteration cost) is the accelerated projected gradient descent method.
All you need is to calculate the gradient of the function $f\left(\mathbf{x}\right)$ and project each iterate onto the unit simplex.
A MATLAB sketch (plain projected gradient; Nesterov acceleration is a small extension):
simplexRadius = 1;
stopThr       = 1e-5;
stepSize      = 1e-2; %<! Problem dependent; 1 / L for an L-smooth objective is a safe choice
numIterations = 1000;
vX = vXInit;
for ii = 1:numIterations
    vG = CalcFunGrad(vX); %<! Gradient of f at vX
    vX = vX - (stepSize * vG); %<! Gradient step
    vX = ProjectSimplex(vX, simplexRadius, stopThr); %<! Project back onto the simplex
end
The simplex projection function (ProjectSimplex above) is available directly in my Ball Projection GitHub repository.
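A note on the projection step: the Euclidean projection onto the unit simplex has an exact $O(n \log n)$ sort-and-threshold algorithm. A Python sketch (the function name and interface are mine, not taken from the repository mentioned above):

```python
import numpy as np

def project_simplex(v, radius=1.0):
    # Euclidean projection onto {x : x >= 0, sum(x) = radius}.
    # Sort descending, find the largest index rho whose entry stays positive
    # after subtracting the common threshold theta, then clip at zero.
    u = np.sort(np.asarray(v, dtype=float))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```

For example, `project_simplex(np.array([2.0, 0.0]))` yields `[1.0, 0.0]`, and a point already on the simplex is returned unchanged.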
answered Aug 22 '17 at 12:40
Royi
The exponentiated gradient descent algorithm may be another approach that is simple to implement.
The lecture material from the University of Chicago gives a nice overview.
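Sketched concretely, the exponentiated gradient update multiplies each coordinate by $\exp(-\eta \, \partial f / \partial x_i)$ and renormalizes, so the iterates stay on the simplex by construction. A Python sketch; the quadratic objective and step size below are illustrative assumptions:

```python
import numpy as np

def exp_grad_minimize(grad, x0, step=0.1, iters=2000):
    # Exponentiated gradient (mirror descent with the entropy mirror map):
    # multiplicative update followed by renormalization keeps x in the simplex.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x * np.exp(-step * grad(x))
        x = x / x.sum()
    return x

# Illustrative objective: f(x) = 0.5 * ||x - c||^2 with c in the simplex,
# so the constrained minimizer is c itself.
c = np.array([0.2, 0.3, 0.5])
x_star = exp_grad_minimize(lambda x: x - c, np.full(3, 1.0 / 3.0))
```

Because the update never needs an explicit projection, this is attractive when the simplex is the only constraint, though it uses only gradient information, not the Hessian.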
answered Nov 12 at 16:23
Milan