Expected value of the $k$th order statistic of Uniform$(0, r)$ for $r<1$
Suppose that we draw $X_1, \ldots, X_n$ independently and uniformly from $(0,r)$ and let $X_{(k)}$ denote the $k$th smallest number drawn.
Denoting the pdf of $X_{(k)}$ by $f_k$, I know that
$$
f_{k}(x) = n \frac{1}{r} \binom{n-1}{k-1}(x/r)^{k-1}(1-x/r)^{n-k}.
$$
Questions
- How can I find the expected value of $X_{(k)}$?
I know that $\mathbb{E}[X_{(k)}] = \int_0^r x\, f_k(x)\,dx$, but I don't know how to evaluate it. Intuitively, I believe the answer should be $\mathbb{E}[X_{(k)}]=\frac{k}{n+1}r$, but I don't have a proof (a small simulation sketch consistent with this guess appears after these questions).
- How concentrated is $X_{(k)}$ around its expectation? More precisely, I would like an upper bound on $\Pr[|X_{(k)}-\mathbb{E}[X_{(k)}]| > \epsilon]$ for a given $\epsilon$.
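A minimal simulation sketch (assuming NumPy is available; the particular values of $n$, $k$, $r$, and $\epsilon$ below are only for illustration) that is consistent with the conjectured mean and also gives an empirical feel for the concentration:

```python
import numpy as np

# Illustrative values only (not taken from the question itself).
n, k, r = 10, 3, 0.5
m = 200_000                      # number of repeated experiments

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, r, size=(m, n))
x_k = np.sort(samples, axis=1)[:, k - 1]   # k-th smallest in each experiment

print("empirical mean :", x_k.mean())
print("k/(n+1) * r    :", k / (n + 1) * r)

eps = 0.05
dev = np.abs(x_k - k / (n + 1) * r)
print("P(|X_(k) - E[X_(k)]| > eps):", np.mean(dev > eps))
```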
asked Dec 30 '18 at 3:02 by Jeff Cooper
Hint for $\mathbb{E}[X_{(k)}]$: observe that $Y_{(k)} = X_{(k)}/r$ has a probability density function which is proportional to $y^{k-1}(1-y)^{n-k}$ for $0 < y < 1$. What distribution does $Y_{(k)}$ follow? Then, use the fact that $r$ is a known constant to find $\mathbb{E}[X_{(k)}]$.
– Clarinetist (Dec 30 '18 at 3:09)
@Clarinetist Thank you. It follows the Beta distribution, and after writing it down I realized that it indeed implies that $\mathbb{E}[X_{(k)}] = \frac{k}{n+1}r$. Do you have any suggestions for the second question?
– Jeff Cooper (Dec 30 '18 at 3:52)
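Spelling out the computation behind that hint: $Y_{(k)}\sim\operatorname{Beta}(k,\,n-k+1)$, so
$$
\mathbb{E}[Y_{(k)}] = \frac{1}{B(k,\,n-k+1)}\int_0^1 y\cdot y^{k-1}(1-y)^{n-k}\,dy = \frac{B(k+1,\,n-k+1)}{B(k,\,n-k+1)} = \frac{k}{n+1},
$$
and hence $\mathbb{E}[X_{(k)}] = r\,\mathbb{E}[Y_{(k)}] = \frac{k}{n+1}\,r$.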
For the second question, you can start off with Chebyshev's inequality.
– StubbornAtom (Dec 30 '18 at 6:15)
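Concretely, one such bound: using the variance of the $\operatorname{Beta}(k,\,n-k+1)$ distribution,
$$
\operatorname{Var}(X_{(k)}) = r^2\,\frac{k(n-k+1)}{(n+1)^2(n+2)},
$$
so Chebyshev's inequality gives
$$
\Pr\bigl[|X_{(k)}-\mathbb{E}[X_{(k)}]| > \epsilon\bigr] \le \frac{r^2\,k(n-k+1)}{\epsilon^2\,(n+1)^2(n+2)} \le \frac{r^2}{4\,\epsilon^2\,(n+2)}.
$$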
Just a remark aside: in cases like this, first solve the special case $Y_1,\dots, Y_n$ with $r=1$, and afterwards take $X_i=rY_i$. This reduces the chance of mistakes with the (awkward) parameter $r$.
– drhab (Dec 30 '18 at 9:27)
2 Answers
Here is an intuitive proof by contradiction for the first question, without any Beta distribution. I will work with the case $r=1$, since for general $r$ you can just rescale.
So suppose, without loss of generality, that $\mathbb{E}[X_{(k)}] > \frac{k}{n+1}$. This would mean that if you repeatedly draw $n$ points uniformly at random on $[0, 1]$, say $m$ times, then as $m \to \infty$ you would see the points concentrated more to the right of $\mathbb{E}[X_{(k)}]$ than to its left. But this contradicts the assumption of uniform sampling on $[0, 1]$.
For the second question, you can get various bounds through the 2nd, 3rd, 4th, etc. moments of the Beta distribution, or its moment generating function, as listed on the Wikipedia page.
answered Dec 30 '18 at 6:06 by John Jiang
Thanks John. For the second question, I prefer to get the strongest possible concentration bound, which I assume should be done via the moment generating function, but I have no experience doing it. In particular, the equations that I end up with seem very complicated and hard to deal with. Is there a standard way of achieving a simple exponential concentration bound? I do not care about constants.
– Jeff Cooper (Dec 30 '18 at 20:10)
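One standard route to a simple exponential bound, sketched here via the CDF rather than the moment generating function: for $0<t<r$, $X_{(k)}\le t$ exactly when at least $k$ of the $X_i$ fall in $(0,t]$, so
$$
\Pr[X_{(k)} \le t] = \Pr\bigl[\operatorname{Bin}(n,\,t/r) \ge k\bigr].
$$
Applying Hoeffding's inequality to both binomial tails (with $t=\mathbb{E}[X_{(k)}]\pm\epsilon$) yields
$$
\Pr\bigl[\,|X_{(k)}-\tfrac{k}{n+1}r| \ge \epsilon\,\bigr] \le 2\exp\!\left(-\frac{2n\epsilon^2}{r^2}\right).
$$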
Drawing $X_1,\dots,X_n$ independently and uniformly from $(0,2\pi)$ can be done like this:
Choose $Z_1,\dots, Z_n, Z_{n+1}$ independently and uniformly on the unit circle $S_1=\{(x,y)\mid x^2+y^2=1\}$.
Then let $X_i$ be the length of the arc that connects $Z_{n+1}$ and $Z_i$ anti-clockwise.
The circle is split up into $n+1$ disjoint arcs, and the symmetry of the setup reveals that the lengths of these arcs have equal distribution, hence equal expectation.
That gives expectation $\frac{k}{n+1}\cdot2\pi$ for $X_{(k)}$, the total arc length from $Z_{n+1}$ to the $k$th point encountered anti-clockwise.
answered Dec 31 '18 at 10:01 by drhab
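In symbols, the argument above gives: writing $L_1,\dots,L_{n+1}$ for the arc lengths taken anti-clockwise starting from $Z_{n+1}$, we have $X_{(k)} = L_1+\cdots+L_k$ and, by symmetry, $\mathbb{E}[L_i]=\frac{2\pi}{n+1}$ for every $i$, so
$$
\mathbb{E}[X_{(k)}] = \sum_{i=1}^{k}\mathbb{E}[L_i] = \frac{k}{n+1}\cdot 2\pi .
$$
Rescaling by $\frac{r}{2\pi}$ then gives $\mathbb{E}[X_{(k)}]=\frac{k}{n+1}\,r$ for the interval $(0,r)$ in the original question.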