A condition for 'similarity' of subgradients of convex proper functions











Motivated by the answer to Under which conditions does small uniform norm imply 'similarity' of subgradients, I have the following question.



Let $\mathcal{S}$ be a set of proper convex functions from $X$ to $\mathbb{R}$, where $X$ is an open and convex subset of $\mathbb{R}^{n}$. I was wondering under which conditions on $\mathcal{S}$ we have
\begin{gather}
\forall \epsilon>0 \; \exists \delta>0 \text{ such that for } f,h \in \mathcal{S},\; \|f-h\|_{\infty} < \delta \\
\Rightarrow \sup_{x \in X} \; \inf_{v \in \partial f(x),\, w \in \partial h(x)} \|v-w\|_2 < \epsilon.
\end{gather}



In particular, I am most interested in an answer to my 'additional question' below, but an answer to either of my two questions would be fine.



Motivation: Intuitively, if $\|f-h\|_{\infty}$ is small, the graphs of the two functions have similar shapes, so their supporting hyperplanes might be similar as well. Of course, this is just the picture I have in mind for the one-dimensional case.
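For smooth convex functions each subdifferential is a singleton, so the inner infimum in the display above is just a gradient difference, and the quantity is easy to probe numerically. A minimal sketch (the particular pair of quadratics is only an illustration, not a general claim):

```python
import numpy as np

# For differentiable convex f, h on X = (-1, 1), the quantity above reduces to
#   sup_x |f'(x) - h'(x)|.
# Illustrative pair: h is a slightly rescaled copy of f, so both the uniform
# distance and the gradient distance are small on the grid.
xs = np.linspace(-0.99, 0.99, 2001)   # interior grid; X is open

f,  df = lambda x: x ** 2,        lambda x: 2.0 * x
h,  dh = lambda x: 1.01 * x ** 2, lambda x: 2.02 * x

uniform_dist = np.max(np.abs(f(xs) - h(xs)))    # ~ ||f - h||_inf
subgrad_dist = np.max(np.abs(df(xs) - dh(xs)))  # ~ sup_x inf_{v,w} ||v - w||
print(uniform_dist, subgrad_dist)
```

One smooth pair proves nothing, of course; the nonsmooth case is where the implication can fail.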



Where the problem comes from: I have a map $F$ that sends a convex function to an element of its subdifferential at a given point. I would like to show that if two functions are similar, i.e. $\|f-h\|_{\infty}$ is sufficiently small, then $\|F(h)-F(f)\|_{2}$ is small.



Some references: A similar question was asked in Hausdorff Distance between Subdifferential sets. The problem is that the theory there defines more general types of convergence and analyzes the convergence of the subdifferentials in that framework. I have almost no background in functional analysis, so I would really just like to know under which conditions my type of 'convergence', as defined above, holds.



Regarding the conditions on the set $\mathcal{S}$: the set I am working with consists of positive concave functions (nothing more is assumed), and $X$ is the open unit simplex. Hence it would be great to have the implication for negative convex functions on the open unit simplex. If the implication does not hold, can we strengthen the conditions on $\mathcal{S}$ to make sure that it does?



Additional question: what if we consider two sets, i.e. the set $\mathcal{S}$ of uniformly bounded, continuously differentiable functions whose gradients are uniformly bounded at every point, and the set $\mathcal{M}$ of piecewise affine functions? It is well known that for every $f$ in the first set there exists a function in the second set such that the supremum norm of the difference is as small as we want. Would we then have the implication for the subgradients of the two functions?










  • In your final paragraph, do you mean "positive convex functions" instead of "positive concave functions"?
    – gerw
    Nov 22 at 7:56










  • No, I mean positive concave functions. Substituting $f$ with $-f$, it would be enough to show the implication for negative convex functions. This is what I meant.
    – sigmatau
    Nov 22 at 7:59















functional-analysis convex-analysis convex-optimization






edited Nov 22 at 9:13

























asked Nov 19 at 10:05









sigmatau























2 Answers

















accepted (+100)










If $\mathcal{S}$ consists of uniformly Lipschitz functions (as holds for the sets $\mathcal{S}$ and $\mathcal{M}$ you mentioned in the additional part of your question), then:



If $h_n$ monotonically approaches $f$, then $\partial h_n$ approaches $\partial f$ in terms of the following distance: $$d(\partial h, \partial f) :=
\sup_{x \in X} \; \inf_{v \in \partial f(x),\, w \in \partial h(x)} \|v-w\|_2.
$$



In other words, $$h_n \uparrow f \quad\quad \Rightarrow \quad\quad \partial h_n \to \partial f.$$



Proof. Assume $h_n \uparrow f$. Since the $h_n$ are uniformly Lipschitz, there exists $M > 0$ such that $$|h_n(x) - h_n(y)| \leq M|x - y|;$$ this implies that for all $x \in X$, $n \in \mathbb{N}$, and $v \in \partial h_n(x)$ we have $\|v\| \leq M$.



Now fix $x \in X$, pick $y \in X$, and let $\epsilon > 0$ be arbitrary. Since $h_n \uparrow f$, for large enough $n$ we have $$h_n(y) \leq f(y), \quad f(x) - \epsilon \leq h_n(x).$$



Pick $v_n \in \partial h_n(x)$; therefore
$$\langle v_n, y - x \rangle \leq h_n(y) - h_n(x) \leq f(y) - f(x) + \epsilon.$$



Since $\{v_n\}$ is a bounded sequence, we may assume without loss of generality that $v_n \to v$. Letting $n \to \infty$ above, we get



$$\langle v, y - x \rangle \leq f(y) - f(x) + \epsilon.$$



Since $\epsilon > 0$ was arbitrary, letting $\epsilon \to 0$ we arrive at $$\langle v, y - x \rangle \leq f(y) - f(x),$$ which means $v \in \partial f(x)$. $\blacksquare$
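A numerical illustration of this convergence (not part of the proof; building $h_n$ as a pointwise maximum of tangent lines is my own choice of a monotone piecewise-affine approximation to $f(x) = x^2$):

```python
import numpy as np

# Approximate f(x) = x^2 from below by the pointwise maximum of n tangent
# lines.  Each h_n is piecewise affine with h_n <= f and h_n -> f uniformly;
# the result above then says the active slopes (subgradients of h_n)
# approach f'(x) = 2x.
def tangent_max_approx(n, xs):
    t = np.linspace(-1.0, 1.0, n)                  # tangency points
    # tangent line of x^2 at t:  y = t^2 + 2t(x - t) = 2t*x - t^2
    vals = 2.0 * t[None, :] * xs[:, None] - t[None, :] ** 2
    active = np.argmax(vals, axis=1)               # active tangent at each x
    h = vals[np.arange(len(xs)), active]
    slopes = 2.0 * t[active]                       # a subgradient of h_n at x
    return h, slopes

xs = np.linspace(-0.99, 0.99, 2001)
results = []
for n in (5, 50, 500):
    h, slopes = tangent_max_approx(n, xs)
    sup_dist = np.max(xs ** 2 - h)                 # ~ ||f - h_n||_inf on the grid
    grad_dist = np.max(np.abs(slopes - 2.0 * xs))  # ~ d(∂h_n, ∂f) on the grid
    results.append((sup_dist, grad_dist))
    print(n, sup_dist, grad_dist)                  # both shrink as n grows
```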





























  • Yes! This is what I am looking for. It would be great if you could add a proof. I was not able to write one down so far, but intuitively it makes sense. The picture I have in mind is two tangents to the continuous function: if we let the distance between the tangent points go to zero, then the slopes of the two intersecting tangents should get closer and closer to the slope of the continuous function between those tangent points. But this is just a picture I have in mind...
    – sigmatau
    Nov 22 at 17:50










    @sigmatau let me know if the proof is not clear for you.
    – Red shoes
    Nov 24 at 1:41










  • yes, it is clear.
    – sigmatau
    Nov 24 at 8:06










  • I noticed that in your proof the choice of $v_n \in \partial h_n(x)$ is arbitrary. In particular, this implies (when comparing functions from the sets $\mathcal{S}$ and $\mathcal{M}$) that if $f$ is continuously differentiable, then for $n$ large enough $\sup_{x \in X} \sup_{v_n \in \partial h_n(x)} \|v_n - \nabla f(x)\| < \epsilon$, right? Hence here we have a nicer result, where we can use the $\sup$ instead of the $\inf$.
    – sigmatau
    Nov 25 at 18:16












    @sigmatau Yes, that is right. Also, monotone convergence is not really needed; you only need $h_n \leq f$ or $f \leq h_n$ for all $n$.
    – Red shoes
    Nov 25 at 22:40































Here is a small example with $X = (-1,1)$ showing that you need some assumptions on $\mathcal{S}$.



We take $f(x) = |x|$ and $h_\delta(x) = |x - \delta/2|$.
Then $\|f - h_\delta\|_\infty = \delta/2 < \delta$, but
$$\partial f(\delta/4) = \{1\}, \qquad \partial h_\delta(\delta/4) = \{-1\}.$$
Thus, for $\varepsilon < 2$ there is no $\delta > 0$ such that your conclusion holds for the set
$$\mathcal{S} = \{f,\, h_\delta \mid \delta > 0\}.$$
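The counterexample is easy to verify numerically; a small sketch (the grid resolution is arbitrary):

```python
import numpy as np

# Check: f(x) = |x| and h(x) = |x - delta/2| on X = (-1, 1) are uniformly
# delta/2 apart, yet their (sub)gradients at x = delta/4 differ by 2.
delta = 0.01
f = lambda x: np.abs(x)
h = lambda x: np.abs(x - delta / 2)

xs = np.linspace(-1.0, 1.0, 100001)
uniform_dist = np.max(np.abs(f(xs) - h(xs)))   # = delta/2

x0 = delta / 4
grad_f = np.sign(x0)               # 1.0, since x0 > 0
grad_h = np.sign(x0 - delta / 2)   # -1.0, since x0 < delta/2
print(uniform_dist, abs(grad_f - grad_h))
```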





























  • I think you mean $\partial h_{\delta}(\delta/4)$, right? This is a nice example that shows where the 'weak spot' is, but it is not a complete answer. I also edited the last section of my question. Thank you.
    – sigmatau
    Nov 22 at 8:24










  • Of course, it should be $\delta/4$.
    – gerw
    Nov 22 at 8:58











Accepted answer: answered Nov 22 at 17:34 by Red shoes; edited Nov 23 at 18:40.




4,676621












  • Yes! This is what I am looking for. Would be great if you could add a proof. I was not able to write down a proof for now but intuitively it would make sense. The picture I have in mind are two tangents at the continous function. If we let the distance between the tangent points go to zero then the slopes of the two intersecting tangents should get nearer and nearer to the slope of the continous function in between those tangent points.But this is just a picture I have in mind...
    – sigmatau
    Nov 22 at 17:50






  • 1




    @sigmatau let me know if the proof is not clear for you.
    – Red shoes
    Nov 24 at 1:41










  • yes, it is clear.
    – sigmatau
    Nov 24 at 8:06










  • I noticed that in your proof the choice of $v_n in partial h_n(x)$ is arbitrary. In particular, this implies (for the case of comparing functions from the set $S$ and $M$) that if $f$ is continously differentiable we have for $n$ large enough $underset{x in X}{sup} underset{ v_n in partial h_n(x)}{sup} || v_n - nabla f(x)||<epsilon$. right? Hence here we have a nicer result, where we can use the $sup$ instead of the $inf$.
    – sigmatau
    Nov 25 at 18:16








  • 1




    @sigmatau Yes it is right . Also Monotonically convergence is not really needed. You only need to have $h_n leq f$ or $f leq h_n$ for all $n$.
    – Red shoes
    Nov 25 at 22:40


















  • Yes! This is what I am looking for. Would be great if you could add a proof. I was not able to write down a proof for now but intuitively it would make sense. The picture I have in mind are two tangents at the continous function. If we let the distance between the tangent points go to zero then the slopes of the two intersecting tangents should get nearer and nearer to the slope of the continous function in between those tangent points.But this is just a picture I have in mind...
    – sigmatau
    Nov 22 at 17:50






  • 1




    @sigmatau let me know if the proof is not clear for you.
    – Red shoes
    Nov 24 at 1:41










  • yes, it is clear.
    – sigmatau
    Nov 24 at 8:06










  • I noticed that in your proof the choice of $v_n in partial h_n(x)$ is arbitrary. In particular, this implies (for the case of comparing functions from the set $S$ and $M$) that if $f$ is continously differentiable we have for $n$ large enough $underset{x in X}{sup} underset{ v_n in partial h_n(x)}{sup} || v_n - nabla f(x)||<epsilon$. right? Hence here we have a nicer result, where we can use the $sup$ instead of the $inf$.
    – sigmatau
    Nov 25 at 18:16








  • 1




    @sigmatau Yes it is right . Also Monotonically convergence is not really needed. You only need to have $h_n leq f$ or $f leq h_n$ for all $n$.
    – Red shoes
    Nov 25 at 22:40
















Here is a small example with $X = (-1,1)$, showing that you need some assumptions on $\mathcal{S}$.



We take $f(x) = |x|$ and $h_\delta(x) = |x - \delta/2|$.
Then $\|f - h_\delta\|_\infty = \delta/2 < \delta$, but
$$\partial f(\delta/4) = \{1\}, \qquad \partial h_\delta(\delta/4) = \{-1\}.$$
Thus, for $\varepsilon < 2$ there is no $\delta > 0$ such that your conclusion holds for the set
$$\mathcal{S} = \{f, h_\delta \mid \delta > 0\}.$$
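This counterexample is easy to check numerically. The following is my own sketch (not part of the answer); the subdifferentials of the two absolute-value functions are written out by hand:

```python
# f(x) = |x| and h_delta(x) = |x - delta/2| on X = (-1, 1).
# The uniform distance is delta/2, but at x = delta/4 the subdifferentials
# are partial f = {+1} and partial h_delta = {-1}: a gap of 2 for every delta.

def f(x):
    return abs(x)

def h(x, delta):
    return abs(x - delta / 2)

grid = [-1 + 2 * k / 10_000 for k in range(1, 10_000)]  # fine grid of (-1, 1)

for delta in (1e-1, 1e-3, 1e-6):
    uniform_dist = max(abs(f(x) - h(x, delta)) for x in grid)
    # f is differentiable at delta/4 > 0 (slope +1); h_delta is differentiable
    # at delta/4 < delta/2 (slope -1), so the subgradient gap is |1-(-1)| = 2.
    print(f"delta={delta:g}: ||f - h_delta||_inf = {uniform_dist:.3g}, subgradient gap = 2")
```

The uniform distance shrinks with $\delta$ while the subgradient gap stays at $2$, which is exactly why some extra assumption on $\mathcal{S}$ is needed.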






  • I think you mean $\partial h_{\delta}(\delta/4)$, right? This is a nice example that shows where the 'weak spot' is, but it is not a complete answer. I also edited the last section of my question. Thank you.
    – sigmatau
    Nov 22 at 8:24










  • Of course, it should be $\delta/4$.
    – gerw
    Nov 22 at 8:58















answered Nov 22 at 8:03, edited Nov 22 at 8:59
– gerw