What is the rate of growth of $M_n := \max_{1 \le i \le n} U_i^{(n)}$, where $U_i^{(n)} \sim \dots$


























On pp. 370-374 (see this previous question) of Cramer's 1946 Mathematical Methods of Statistics, the author shows that for any continuous distribution $P$, if $X_1, \dots, X_n \sim P$ are i.i.d., then the RV:



$$ \Xi_n \overset{def}{=} n (1 - F(M_n)) \,, \quad \text{where } M_n \overset{def}{=} \max_{1 \le i \le n} X_i \,, $$



has the same distribution as the maximum of $n$ i.i.d. RV's $U_1, \dots, U_n$, where $U_i \sim \operatorname{Uniform}(0,n)$, inasmuch as he shows that they have the same density (cf. p. 49 of Keener, Theoretical Statistics). (Here $F$ of course denotes the CDF of each of the observations $X_i$.)



Cramer later says that we can assume $\Xi_n$ "is bounded", but in what sense precisely is that true?




Question: What is the almost sure rate of growth of the random variables $\Xi_n$? Of $\log(\Xi_n)$?



In particular, is it true that $\log(\Xi_n) = o(\log\log n)$ almost surely?




$\log(\Xi_n) = o(\log\log n)$ a.s. appears to be sufficient to show that $\max_{1 \le i \le n} X_i \sim \sqrt{2 \log n}$ a.s. where the $X_i \sim \mathscr{N}(0,1)$, using the argument found here, hence my interest.



In a community wiki "answer" below I have posted what I believe to be a proof that $\Xi_n = o(\log n)$ a.s., at least when the $X_i \sim \mathscr{N}(0,1)$, an assumption which would be considered acceptable for any accepted answer. Note that it appears to be sufficient to show that, for all $\varepsilon > 0$, $\Xi_n = o((\log n)^{\varepsilon})$ a.s. in order to conclude $\log (\Xi_n) = o(\log \log n)$ a.s.



Note: For each $n$, $\Xi_n$ has the distribution of the maximum of $n$ i.i.d. uniforms supported on $[0,n]$ (not $[0,1]$ or $[0, \theta]$ for some fixed constant), so this problem does not seem completely trivial.



Note 2: Following the argument below, it seems it suffices to show (assuming this is even true) that for any $\epsilon > 0$ and $\delta > 0$:



$$ \sum_{n=1}^{\infty} \frac{(\log n)^\delta}{n \left( \exp ((\log n)^\delta ) \right)^\epsilon } < \infty \,.$$



I haven't been able to figure out how to usefully control (i.e. bound) the $\exp((\log n)^\delta)$ factor. (Comparing with this, the case where $\alpha = 1$ and $\beta < 1$ seems to show that this is actually false for $\delta < 1$ or small enough, which would be OK, since this condition is only sufficient, not necessary, anyway.)



Note 3: If it really is true that $\Xi_n$ is stochastically dominated by an exponential random variable (with parameter $1$) for all $n$, then maybe it's possible to get the necessary asymptotic probabilistic bound more easily for exponentials and then use the stochastic dominance to conclude.
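For concreteness, here is a minimal simulation sketch (assuming NumPy and SciPy are available; the seed and sample sizes are arbitrary choices) of $\Xi_n = n(1 - \Phi(M_n))$ for standard Gaussian samples, just to see how slowly it grows compared with $\log n$ and $\log\log n$:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    for n in (10**2, 10**4, 10**6):
        x = rng.standard_normal(n)   # X_1, ..., X_n i.i.d. N(0,1)
        m = x.max()                  # M_n = max_i X_i
        xi = n * norm.sf(m)          # Xi_n = n * (1 - F(M_n)), here F = Phi
        print(n, xi, np.log(n), np.log(np.log(n)))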

































    Excuse me if I'm being naive, but isn't $\Xi_n \sim n\,\text{Beta}(n, 1) = O_p(n)$? And $\text{Beta}(n, 1)$ converges in probability to 1.
    – Tom Chen, Feb 12 at 0:44












    If we look here: math.stackexchange.com/questions/313390/…, we see the pdf is that of a beta, and I mean $\text{Beta}(n, 1)$. Give me a moment to see where you get $(1 - \xi/n)^{n-1}$, because that is $\text{Beta}(1, n)$, so maybe something is mixed up. But if the latter is true, then we have $\Xi_n = O_p(1)$, or $\Xi_n$ is bounded in probability, stronger than $\log n$.
    – Tom Chen, Feb 12 at 2:14












    Actually, $\text{Beta}(1, n)$ converges in probability to 0. It actually turns out that $\Xi_n \sim n\,\text{Beta}(1, n)$, in contrast to the typos in the previous comments and in the post itself. Then by here: stats.stackexchange.com/questions/314782/…, this converges to $\text{Exp}(1)$, which is $O_p(1)$.
    – Tom Chen, Feb 14 at 22:04










    Yep, exactly. I wrote out our discussion as a formal answer below, and tried to fill in as many holes in definitions/notations/typos. Thank you very much! I can always discuss more.
    – Tom Chen, Feb 15 at 6:40










    I would like to point out that there is a mismatch between the title and the body of your question. For $U_n$'s i.i.d. and $\sim\operatorname{Uniform}([0,1])$, we have $$n \max\{U_1,\cdots,U_n\} \to \infty \quad \text{a.s.}$$ simply because $\max\{U_1,\cdots,U_n\} \to 1$ a.s. But your $\Xi_n = n(1 - F(\max\{X_1,\cdots,X_n\}))$ is in fact the minimum of $n$ i.i.d. uniform variables, since we can rewrite it as $$ \Xi_n = n \min\{1-F(X_1), \cdots, 1-F(X_n)\}$$ and $1-F(X_i) \sim \operatorname{Uniform}([0, 1])$. So the title is totally different from what you are asking in the body.
    – Sangchul Lee, Feb 17 at 0:12

























probability-theory asymptotics probability-limit-theorems order-statistics upper-lower-bounds






asked Dec 11 '18 at 0:10 by Chill2Macht, edited Feb 12 at 18:20




















3 Answers





































By the setting, $V_n = 1 - F(X_n)$ is a sequence of i.i.d. random variables with $V_n \sim \operatorname{Uniform}([0,1])$. Write



$$ \xi_n = \min\{V_1, \cdots, V_n\}, \qquad \Xi_n = n \xi_n.$$



Here we prove the following claim.




Claim. $\displaystyle \limsup_{n\to\infty} \frac{\Xi_n}{\log \log n} = 1 $ holds almost surely.




As usual, we show that one bounds the other.



Proof of $``\leq\text{''}$. We first fix $\epsilon \in (0, 1)$ and $\alpha > 1$, and set $n_k = \lfloor \alpha^k \rfloor$. Then from the inequality $ \mathbb{P}( \Xi_n > t) = \left( 1 - \frac{t}{n} \right)^n \leq e^{-t} $, it follows that



$$ \sum_{k=1}^{\infty} \mathbb{P}(\Xi_{n_k} > (1+\epsilon)\log\log n_k)
\leq \sum_{k=1}^{\infty} \frac{1}{\left( \log n_k \right)^{1+\epsilon}}
< \infty $$



since $(\log n_k)^{-1-\epsilon} \asymp k^{-1-\epsilon}$ as $k\to\infty$. By the Borel–Cantelli lemma, $\Xi_{n_k} \leq (1+\epsilon)\log\log n_k$ eventually holds a.s. But for each $n \in [n_k, n_{k+1}]$, we have



$$ \frac{\Xi_{n}}{\log\log n} = \frac{n \xi_n}{\log\log n} \leq \frac{n_{k+1} \xi_{n_k}}{\log\log n_k} = \frac{n_{k+1}\log\log n_{k+1}}{n_k \log\log n_k} \frac{\Xi_{n_k}}{\log\log n_{k+1}}, $$



and so, taking the limsup as $k\to\infty$ shows that



$$ \limsup_{n\to\infty} \frac{\Xi_n}{\log\log n} \leq \alpha(1+\epsilon) \quad \text{a.s.} $$



But since this is true for any $\epsilon > 0$ and $\alpha > 1$, letting $\epsilon \to 0$ and $\alpha \to 1$ along a subsequence (so that we only deal with a countable intersection of events) proves the desired inequality.



Proof of $``\geq\text{''}$. This part is much harder. Define $X_n$ and $T_n$ as follows:




  • $T_0 = 0$, and $T_{n} = \inf \{ m > T_{n-1} : \xi_m < \xi_{T_{n-1}} \}$ for $n \geq 1$. (Here, the convention $\xi_0 = 1$ is used.) In other words, the $T_n$'s record the successive times at which the running minimum $\xi_m$ becomes smaller.


  • $X_0 = 1$, and $X_{n} = \xi_{T_{n}} / \xi_{T_{n-1}}$.



Then $(X_n)_{n=1}^{\infty}$ are i.i.d. with $X_n \sim \operatorname{Uniform}([0,1])$, and $T_{n} - T_{n-1} \sim \operatorname{Geometric}(X_0 \cdots X_{n-1})$ for $n \geq 1$. Now we fix $\epsilon \in (0, 1)$ and focus on the extreme behavior of



$$\tau_n := X_1 \cdots X_{n-1} (T_n - T_{n-1}).$$



To this end, write $\mathcal{F}_n = \sigma( X_0, \cdots, X_n, T_0, \cdots, T_n)$ for the natural filtration of $(X_n)$ and $(T_n)$. Then $\tau_n$ is $\mathcal{F}_n$-measurable, and



$$
\mathbb{P}(\tau_n > (1-\epsilon)\log n \mid \mathcal{F}_{n-1})
\geq \left( 1 - X_1\cdots X_{n-1} \right)^{\frac{(1-\epsilon)\log n}{X_1 \cdots X_{n-1}}}
$$



By the strong law of large numbers, $X_1 \cdots X_n = e^{-(1+o(1))n}$, and so this lower bound is $\asymp n^{-(1-\epsilon)}$ a.s. This shows that $\sum_{n=1}^{\infty} \mathbb{P}(\tau_n > (1-\epsilon)\log n \mid \mathcal{F}_{n-1}) = \infty$ a.s., and so, by the generalized second Borel–Cantelli lemma, it follows that



$$ \tau_n > (1-\epsilon)\log n \quad \text{infinitely often} \quad \text{a.s.} \tag{1} $$



On the other hand, for $n \geq 3$,



$$
\mathbb{P}(\tau_n > 2019\log n)
\leq \mathbb{E} \left[ \left( 1 - X_1\cdots X_{n-1} \right)^{\frac{2019\log n}{X_1 \cdots X_{n-1}}-1} \right]
\leq \frac{c}{n^{2019}}
$$



where $c > 0$ is an absolute constant. (We can pick $c = \mathbb{E}[(1-X_1 X_2)^{-1}] = \pi^2/6$, for instance, although this value is not important.) So by the Borel–Cantelli lemma, $\mathbb{P}(\tau_n \leq 2019\log n \text{ eventually}) = 1$. Moreover, by the strong law of large numbers again, $\mathbb{P}(X_1 \cdots X_n \geq e^{-(1+\epsilon)n} \text{ eventually}) = 1$. Therefore



$$ T_n
= \sum_{k=1}^{n} \frac{\tau_k}{X_1 \cdots X_k}
\leq \sum_{k=1}^{n} e^{(1+\epsilon)k} \, 2019 \log k + \mathcal{O}(1)
\leq C e^{(1+\epsilon)n}\log n $$



for some random variable $C$ which is a.s. finite and positive. Using this inequality,



$$ \frac{\Xi_{T_n - 1}}{\log\log T_n}
= \frac{X_1 \cdots X_{n-1} (T_n - 1)}{\log \log T_n}
\geq \frac{\tau_n}{\log n + O(1)}. \tag{2} $$



Combining $\text{(1)}$ and $\text{(2)}$, it follows that



$$ \limsup_{n\to\infty} \frac{\Xi_n}{\log\log n}
\geq \limsup_{n\to\infty} \frac{\Xi_{T_n - 1}}{\log \log T_n}
\geq 1 - \epsilon \quad \text{a.s.}, $$



and since this is true for any $\epsilon \in (0, 1)$, the desired inequality follows. $\square$
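A quick numerical sanity check of the claim (a minimal sketch, assuming NumPy; the seed and horizon are arbitrary, and convergence of the limsup is slow) tracking $\Xi_n = n\,\xi_n$ along a single trajectory of the running minimum:

    import numpy as np

    rng = np.random.default_rng(1)
    N = 10**6
    v = rng.random(N)                     # V_1, ..., V_N i.i.d. Uniform(0,1)
    xi = np.minimum.accumulate(v)         # xi_n = min(V_1, ..., V_n)
    n = np.arange(1, N + 1)
    Xi = n * xi                           # Xi_n = n * xi_n
    mask = n >= 10                        # keep log log n positive
    ratio = Xi[mask] / np.log(np.log(n[mask]))
    print("largest value of Xi_n / loglog n observed:", ratio.max())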



















    Attempt: I don't feel comfortable that this is correct, but here we go. Because (according to Cramer) $\Xi_n$ is $n\,\text{Beta}(1,n)$, and thus has the density $(1 - \frac{\xi}{n})^n$ with support $[0,n]$, we can conclude that every $\Xi_n$ is stochastically bounded (I think) by an exponential random variable with parameter $1$.



    This is because of the inequality $-x \ge \log(1-x)$, since this implies that:



    $$ -\frac{x}{n} \ge \log \left( 1 - \frac{x}{n}\right) \iff -x \ge n \log \left(1 - \frac{x}{n}\right) \iff e^{-x} \ge \left( 1 - \frac{x}{n} \right)^n $$



    where $e^{-x}$ is the PDF of an exponential(1).
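    A minimal numerical sketch (assuming NumPy; the grid and the values of $n$ are arbitrary) of the dominance in its tail-probability form, $\mathbb{P}(\Xi_n > t) = (1 - t/n)^n \le e^{-t}$ (the identity appearing in the answer above), which is the sense in which an $\operatorname{Exp}(1)$ variable stochastically dominates $\Xi_n$:

        import numpy as np

        for n in (5, 50, 500):
            t = np.linspace(0.0, n, 1001)
            tail_xi = (1.0 - t / n) ** n   # P(Xi_n > t) on [0, n]
            tail_exp = np.exp(-t)          # P(Y > t) for Y ~ Exp(1)
            print(n, bool((tail_xi <= tail_exp + 1e-12).all()))   # expect True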



    The stochastic ordering doesn't seem to depend on the probabilistic dependence of the sequence of exponential random variables, so we can assume we have some sequence of $Y_n$ i.i.d. exponential(1) R.V.'s.



    This is where it seems iffy to me: basically I will claim that the stochastic ordering $\Xi_n \preceq Y_n$ for each individual $n$ somehow implies that the stochastic ordering transfers to the entire sequences of $\Xi_n$'s and $Y_n$'s.



    So I believe it's the case (please correct me if I'm wrong) that some sequence of RVs $X_n$ is $O_p(1)$ if and only if for all $M>0$, $$\mathbb{P}(X_n > M \text{ i.o.}) = 0$$



    So the claim is that because $\Xi_n \preceq Y_n$ for all $n$ (don't really understand how the PDF of one can uniformly dominate the PDF of another but both integrate to $1$, but whatever) that the following would hold:



    $$ \mathbb{P}(\Xi_n > M \text{ i.o.}) \le \mathbb{P}(Y_n > M \text{ i.o.}) \,. $$



    So we just have to show that $\mathbb{P}(Y_n > M \text{ i.o.}) = 0$ for all $M$, but that's easy (I think), since by our assumptions



    $$ \mathbb{P}(Y_n > M \text{ i.o.}) = C_{\{ n_k \}}\lim_{K \to \infty} \prod_{k=1}^K \mathbb{P}(Y_{n_k} > M) = C_{\{ n_k \}}\lim_{K \to \infty} (e^{-M})^K = 0$$



    where $\{ n_k\}$ is some subsequence, and $C_{\{n_k\}}$ is some finite constant depending on the particular subsequence.



    If anyone can fix this, or show why it's false and doomed to fail even with modifications, I would really appreciate it.





    First let's go through a chain of trivialities in order to rephrase the problem in a way which makes it easier to solve (note that by definition $\Xi_n \ge 0$):



    $$\Xi_n = o(\log n) \quad \iff \quad \lim_{n \to \infty} \frac{\Xi_n}{\log n} = 0 \quad \iff \quad \\ \forall \varepsilon > 0, \ \frac{\Xi_n}{\log n} > \varepsilon \text{ only finitely many times} \quad \iff \\ \forall \varepsilon >0, \quad \Xi_n > \varepsilon \log n \text{ only finitely many times} \,.$$



    One also has (writing $Z_n := M_n$ for the sample maximum) that:



    $$\Xi_n > \varepsilon \log n \quad \iff \quad n(1 - F(Z_n)) > \varepsilon \log n \quad \iff \quad 1 - F(Z_n) > \frac{\varepsilon \log n}{n} \\ \iff \quad F(Z_n) < 1 - \frac{\varepsilon \log n}{n} \quad \iff \quad Z_n \le \inf \left\{ y: F(y) \ge 1 - \frac{\varepsilon \log n}{n} \right\} \,. $$



    Correspondingly, define for all $n$:



    $$ u_n^{(\varepsilon)} = \inf \left\{ y: F(y) \ge 1 - \frac{\varepsilon \log n}{n} \right\} \,. $$



    Therefore the above steps show us that:



    $$\Xi_n = o(\log n) \text{ a.s.} \quad \iff \quad \mathbb{P}(\Xi_n = o(\log n)) = 1 \quad \iff \quad \\
    \mathbb{P}(\forall \varepsilon > 0, \ \Xi_n > \varepsilon \log n \text{ only finitely many times}) = 1 \\
    \iff \mathbb{P}(\forall \varepsilon > 0, \ Z_n \le u_n^{(\varepsilon)} \text{ only finitely many times}) = 1 \\
    \iff \mathbb{P}(\forall \varepsilon >0, \ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) = 0 \,. $$



    Notice that we can write:



    $$ \{ \forall \varepsilon >0, \ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} = \bigcap_{\varepsilon > 0} \{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} \,.$$



    The sequences $u_n^{(\varepsilon)}$ become uniformly larger as $\varepsilon$ decreases, so we can conclude that the events $$\{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} $$ are decreasing (or at least somehow monotonic) as $\varepsilon$ goes to $0$. Therefore the probability axiom regarding monotonic sequences of events allows us to conclude that:



    $$\mathbb{P}(\forall \varepsilon >0, \ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) = \\
    \mathbb{P} \left( \bigcap_{\varepsilon > 0} \{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} \right) = \\
    \mathbb{P} \left( \lim_{\varepsilon \downarrow 0} \{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} \right) = \\
    \lim_{\varepsilon \downarrow 0} \mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) \,.$$



    Therefore it suffices to show that for all $\varepsilon > 0$,



    $$\mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) = 0 $$



    because of course the limit of any constant sequence is the constant.



    Here is somewhat of a sledgehammer result:




    Theorem 4.3.1., p. 252 of Galambos, The Asymptotic Theory of Extreme Order Statistics, 2nd edition. Let $X_1, X_2, \dots$ be i.i.d. variables with common nondegenerate and continuous distribution function $F(x)$, and let $u_n$ be a nondecreasing sequence such that $n(1 - F(u_n))$ is also nondecreasing. Then, for $u_n < \sup \{ x: F(x) < 1 \}$, $$\mathbb{P}(Z_n \le u_n \text{ infinitely often}) = 0 \text{ or } 1 $$
    according as
    $$\sum_{j=1}^{+\infty}[1 - F(u_j)]\exp(-j[1-F(u_j)]) < +\infty \text{ or } =+\infty \,. $$




    The proof is technical and takes around five pages, but ultimately it turns out to be a corollary of one of the Borel–Cantelli lemmas. I may get around to trying to condense the proof to use only the part required for this analysis, as well as only the assumptions which hold in the Gaussian case, which may be shorter (but maybe it isn't), and type it up here, but holding your breath is not recommended. Note that in this case $\omega(F) = +\infty$, so that condition is vacuous, and $n(1 - F(u_n^{(\varepsilon)}))$ is $\varepsilon \log n$, thus clearly non-decreasing.



    Anyway, the point is that, appealing to this theorem, it suffices to show that:



    $$\sum_{j=1}^{+\infty}[1 - F(u_j^{(\varepsilon)})]\exp(-j[1-F(u_j^{(\varepsilon)})]) = \sum_{j=1}^{+\infty}\left[ \frac{\varepsilon \log j}{j} \right]\exp(-\varepsilon \log j) = \varepsilon \sum_{j=1}^{+\infty} \frac{\log j}{j^{1 + \varepsilon}} < + \infty \,. $$



    Note that since logarithmic growth is slower than any power law growth for any positive power law exponent (logarithms and exponentials are monotonicity preserving, so $\log \log n \le \alpha \log n \iff \log n \le n^{\alpha}$, and the former inequality can always be seen to hold for all $n$ large enough due to the fact that $\log n \le n$ and a change of variables), we have that:



    $$ \sum_{j=1}^{+\infty} \frac{\log j}{j^{1 + \varepsilon}} \le \sum_{j=1}^{+\infty} \frac{j^{\varepsilon/2}}{j^{1 + \varepsilon}} = \sum_{j=1}^{+\infty} \frac{1}{j^{1 + \varepsilon/2}} < +\infty \,,$$



    since the p-series is known to converge for all $p>1$, and $\varepsilon > 0$ of course implies $1 + \varepsilon/2 > 1$.



    Thus using the above theorem we have shown that for all $\varepsilon > 0$, $\mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ i.o.}) = 0$, which to recapitulate should mean that $\Xi_n = o(\log n)$ almost surely.
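    As a small numerical cross-check (a sketch only; the value of $\varepsilon$ and the truncation point are arbitrary), the partial sums of $\varepsilon \sum_j \log j / j^{1+\varepsilon}$ do stabilize, consistent with the convergence used above:

        import numpy as np

        eps = 0.5
        j = np.arange(2.0, 10**6)                 # start at 2 since log(1) = 0
        terms = eps * np.log(j) / j ** (1 + eps)
        print(terms.sum())                        # stabilizes near 2 for eps = 0.5
        # crude tail bound from the integral eps * int_x^inf log(t) t^(-1-eps) dt
        x = float(10**6)
        print(np.log(x) / x**eps + 1 / (eps * x**eps))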



    We still need to show that $\log \Xi_n = o(\log \log n)$. This doesn't follow from the above, since, e.g.,



    $$ \frac{1}{n} \log n = o(\log n) \,, \quad -\log n + \log \log n \not= o(\log \log n) \,. $$



    However, given a sequence $x_n$, if one can show that $x_n = o( (\log n)^{\delta})$ for arbitrary $\delta > 0$, then it does follow that $\log(x_n) = o(\log \log n)$. Ideally I would like to be able to show this for $\Xi_n$ using the above lemma (assuming it's even true), but am not able to (as of yet).



















    If $Y_n$'s are i.i.d. exponential variables of unit rate, then for each $Y_n$ there is a probability $e^{-M}$ of exceeding the level $M$. In other words, $\mathbf{1}_{\{Y_n > M\}}$ is a Bernoulli process with parameter $e^{-M}$, and so $Y_n > M$ will occur quite frequently. In fact, one can show that $\max\{Y_1,\cdots,Y_n\} = (1+o(1))\log n$ almost surely. What is interesting is that the maximum of $\Xi_n$ grows much more slowly than in the i.i.d. case, because the 'updates' of $\Xi_n / n$ occur at exponentially slow speed.
    – Sangchul Lee, Feb 16 at 4:00
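    A small sketch (assuming NumPy; the seed and checkpoints are arbitrary) of the contrast described in the comment above: the running maximum of i.i.d. $\operatorname{Exp}(1)$ variables really does track $\log n$:

        import numpy as np

        rng = np.random.default_rng(2)
        y = rng.exponential(1.0, size=10**6)   # Y_1, Y_2, ... i.i.d. Exp(1)
        running_max = np.maximum.accumulate(y)
        for n in (10**3, 10**4, 10**5, 10**6):
            print(n, running_max[n - 1], np.log(n))   # max grows like log n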






















    I was able to get my hands on a 1999 version of Cramer's Mathematical Methods of Statistics. Let me cite the relevant section for this problem, with some appropriate rewording for better flow.




    Let $M^v_n$ be the $v$th value from the top in a sample $X_1, \cdots, X_n \overset{\text{iid}}{\sim} F$. If we introduce $\Xi_n$ by the substitution
    \begin{align*}
    \Xi_n = n(1 - F(M_n^v)),
    \end{align*}

    then this variable has density
    \begin{align*}
    f_{\Xi_n}(\xi) = \binom{n-1}{v-1}\left(\frac{\xi}{n}\right)^{v-1}\left(1-\frac{\xi}{n}\right)^{n-v}.
    \end{align*}




    Hence, for the maximum order statistic, $v=1$, we have the density
    \begin{align*}
    f_{\Xi_n}(\xi) = \left(1-\frac{\xi}{n}\right)^{n-1},
    \end{align*}

    or an $n\,\text{Beta}(1, n)$ distribution. Previously, the OP and I had some confusion in the comments and chat about the formulation s/he described:
    \begin{align*}
    \Xi_n \overset{\mathcal{D}}{=} \max(U_1, \cdots, U_n), \qquad \text{where } U_i \overset{\text{iid}}{\sim} \text{Unif}(0, n),
    \end{align*}

    since this would imply that $\Xi_n$ is instead $n\,\text{Beta}(n, 1)$ (see here). Indeed, if we were to define $m_n^v$ as the $v$th value from the bottom of the sample, and define
    \begin{align*}
    H_n = n F(m_n^v),
    \end{align*}

    we would also get
    \begin{align*}
    f_{H_n}(\eta) = \binom{n-1}{v-1}\left(\frac{\eta}{n}\right)^{v-1}\left(1-\frac{\eta}{n}\right)^{n-v},
    \end{align*}

    with $v = 1$ giving us that $H_n$ is also $n\,\text{Beta}(1, n)$.



    Before we continue, let me provide a concrete definition of growth rates used in probability theory.




    A sequence of random variables $X_n$ is said to be bounded in probability or stochastically bounded if, for all $\epsilon > 0$, there exist $M, N$ such that, for all $n > N$, we have
    \begin{align*}
    P(|X_n| < M) > 1-\epsilon.
    \end{align*}

    We then say $X_n = O_p(a_n)$ if $X_n/a_n$ is stochastically bounded. Similarly, we say $X_n$ is stochastically convergent to zero if
    \begin{align*}
    \lim_{n\rightarrow \infty} P(|X_n| < M) = 1
    \end{align*}

    for all $M > 0$, and write $X_n = o_p(a_n)$ if $X_n/a_n$ is stochastically convergent to zero.




    Remark: For example, if $X_n \sim \mathcal{N}(0, \sin^2(n))$, then $X_n = O_p(1)$, since
    \begin{align*}
    P(|X_n| < M) = 2\Phi\left(\frac{M}{|\sin(n)|}\right) - 1 \ge 2\Phi(M) - 1 > 1 - \epsilon
    \end{align*}

    by choosing $M = \Phi^{-1}(1 - \frac{\epsilon}{2})+1$ and $N = 1$.



    Back to the problem at hand, it turns out that
    \begin{align*}
    n\,\text{Beta}(1, n) \overset{\mathcal{D}}{\rightarrow} \text{Exp}(1).
    \end{align*}

    In other words, there exists a random variable $\Omega \sim \text{Exp}(1)$ such that
    \begin{align*}
    \Xi_n = \Omega + o_p(1).
    \end{align*}

    Since $\Omega = O_p(1)$, we have that $\Xi_n = O_p(1) + o_p(1) = O_p(1)$ by big-O/little-o arithmetic. Indeed, looking further into Cramer's book, he stated that
    \begin{align*}
    \lim_{n\rightarrow \infty} f_{\Xi_n}(\xi) = \frac{\xi^{v-1}}{\Gamma(v)}e^{-\xi},
    \end{align*}

    which is a $\text{Gamma}(v, 1)$ density, which is also $O_p(1)$. Hence, for any fixed $v$, $\Xi_n$ would be stochastically bounded.
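    A quick sketch of this convergence (assuming NumPy; the choices of $n$ and the sample size are arbitrary), comparing empirical quantiles of $n\,\text{Beta}(1, n)$ with the exact $\text{Exp}(1)$ quantiles $-\log(1-q)$:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 50
        samples = n * rng.beta(1, n, size=10**5)   # n * Beta(1, n)
        qs = np.array([0.5, 0.9, 0.99])
        print(np.quantile(samples, qs))            # empirical quantiles
        print(-np.log(1 - qs))                     # Exp(1) quantiles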



















    OK, but to clarify: being $O_P(1)$ doesn't guarantee that the sequence is $O(1)$ a.s.? Only that it's $O(1)$ in probability? So this isn't enough to show that the maxima of Gaussians are a.s. $\sqrt{2 \log n}$? Just in probability? (Is it enough to show it in expectation?)
    – Chill2Macht, Feb 17 at 21:40












    Yes, $O_p(1)$ means $O(1)$ in probability, while what you and Sangchul were aiming for is $O(\log \log n)$ a.s. Both are strengthened in one direction, while weakened in another (i.e. in probability vs a.s., and $O(1)$ vs $O(\log \log n)$). I took Cramer's comment on "bounded" literally, so I took the in-probability route. It's pretty neat, because this parallel also occurs if we define $\Xi_n = S_n/\sqrt{n}$, where $S_n$ is a sum of i.i.d. r.v.'s. Then $\Xi_n = O_p(1)$, yet $\Xi_n/\sqrt{\log \log n}$ is $O(1)$ a.s. You can read more about this so-called "law of the iterated logarithm".
    – Tom Chen, Feb 18 at 21:38












    Yes, I know what all of those things are; the point being that if it equals $O(1)$ almost surely then it's possible to prove a lot more. Also I wasn't aiming for $O(\log \log n)$ almost surely; I was looking for $\log \Xi_n$ to be $o(\log \log n)$ almost surely, since at the point of the proof where that's needed, we're working with $\log \Xi_n$. But if $\Xi_n$ were $O(1)$ almost surely, then it would be trivial that $\log \Xi_n$ is $O(1)$ almost surely, and thus in particular $o(\log \log n)$ a.s. My question is about whether this type of proof can be extended in a more useful direction.
    – Chill2Macht, Feb 19 at 23:47













    Your Answer





    StackExchange.ifUsing("editor", function () {
    return StackExchange.using("mathjaxEditing", function () {
    StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
    StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
    });
    });
    }, "mathjax-editing");

    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "69"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: true,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: 10,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3034681%2fwhat-is-the-rate-of-growth-of-m-n-max-1-le-i-le-n-u-in-where-u%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    3 Answers
    3






    active

    oldest

    votes








    3 Answers
    3






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    2












    $begingroup$

    By the setting, $V_n = 1 - F(U_n)$ is a sequence of i.i.d. random variables with $V_n sim operatorname{Uniform}([0,1])$. Write



    $$ xi_n = min{V_1, cdots, V_n}, qquad Xi_n = n xi_n.$$



    Here we prove the following claim.




    Claim. $displaystyle limsup_{ntoinfty} frac{Xi_n}{log log n} = 1 $ holds almost surely.




    As usual, we show that one bounds the other.



    Proof of $ ``leqtext{''}$. We first fix $epsilon in (0, 1)$ and $alpha > 1$, and set $n_k = lfloor alpha^k rfloor$. Then from the inequality $ mathbb{P}( Xi_n > t) = left( 1 - frac{t}{n} right)^n leq e^{-t} $, it follows that



    $$ sum_{k=1}^{infty} mathbb{P}(Xi_{n_k} > (1+epsilon)loglog n_k)
    leq sum_{k=1}^{infty} frac{1}{left( log n_k right)^{1+epsilon}}
    < infty $$



    since $(log n_k)^{-1-epsilon} asymp k^{-1-epsilon}$ as $ktoinfty$. By Borel-Cantelli's lemma, $Xi_{n_k} leq (1+epsilon)loglog n_k$ eventually holds a.s. But for each $n in [n_k, n_{k+1}]$, we have



    $$ frac{Xi_{n}}{loglog n} = frac{n xi_n}{loglog n} leq frac{n_{k+1} xi_{n_{k+1}}}{loglog n_k} = frac{n_{k+1}loglog n_{k+1}}{n_k loglog n_k} frac{Sigma_{n_k}}{loglog n_{k+1}}, $$



    and so, taking limsup as $ktoinfty$ shows that



    $$ limsup_{ntoinfty} frac{Xi_n}{loglog n} leq alpha(1+epsilon) quad text{a.s.} $$



    But since this is true for any $epsilon > 0$ and $alpha > 1$, letting $epsilon to 0$ and $alpha to 1$ along subsequence (so that we only care about countable intersection of events) proves the desired inequality.



    Proof of $ ``geqtext{''}$. This part is much harder. Define $X_n$ and $T_n$ as follows:




    • $T_0 = 0$, and $T_{n} = inf { m > T_{n-1} : xi_m < xi_{T_{n-1}} }$ for $n geq 1$. (Here, the convention $xi_0 = 1$ is used.) In other words, $T_n$'s record the $n$-th time instance at which $xi_n$ becomes smaller.


    • $X_0 = 1$, and $X_{n} = xi_{T_{n}} / xi_{T_{n-1}}$.



    Then $(X_n)_{n=1}^{infty}$ are i.i.d. with $X_n sim operatorname{Uniform}([0,1])$, and $T_{n} - T_{n-1} sim operatorname{Geometric}(X_0 cdots X_{n-1})$ for $n geq 1$. Now we fix $epsilon in (0, 1)$ and focus on the extreme behavior of



    $$tau_n := X_1 cdots X_{n-1} (T_n - T_{n-1}).$$



    To this end, write $mathcal{F}_n = sigma( X_0, cdots, X_n, T_0, cdots, T_n)$ for the natural filtration of $(X_n)$ and $(T_n)$. Then $tau_n$ is $mathcal{F}_n$-measurable, and



    $$
    mathbb{P}(tau_n > (1-epsilon)log n mid mathcal{F}_{n-1})
    geq left( 1 - X_1cdots X_{n-1} right)^{frac{(1-epsilon)log n}{X_1 cdots X_{n-1}}}
    $$



    By the strong law of large numbers, $X_1 cdots X_n = e^{-(1+o(1))n}$, and so, this lower bound is $asymp n^{-(1-epsilon)}$ a.s. This tells that $sum_{n=1}^{infty} mathbb{P}(tau_n > (1-epsilon)log n mid mathcal{F}_{n-1}) = infty$ a.s., and so, by the generalized second Borel-Cantelli lemma, it follows that



    $$ tau_n > (1-epsilon)log nquad text{infinitely often} quad text{a.s.} tag{1} $$



    On the other hand, for $n geq 3$,



    $$
    mathbb{P}(tau_n > 2019log n)
    leq mathbb{E} left[ left( 1 - X_1cdots X_{n-1} right)^{frac{2019log n}{X_1 cdots X_{n-1}}-1} right]
    leq frac{c}{n^{2019}}
    $$



    where $c > 0$ is an absolute constant. (We can pick $c = mathbb{E}[(1-X_1 X_2)^{-1}] = pi^2/6$, for instance, although this value is not important.) So by the Borel-Cantelli lemma, $mathbb{P}(tau_n leq (1+epsilon)log n text{ i.o}) = 1$. Moreover, by the strong law of large numbers again, $mathbb{P}(X_1 cdots X_n geq e^{-(1+epsilon)n} text{ eventually}) = 1$. Therefore



    $$ T_n
    = sum_{k=1}^{n} frac{tau_k}{X_1 cdots X_k}
    leq sum_{k=1}^{n} e^{(1+epsilon)k} 2019 log k + mathcal{O}(1)
    leq C e^{(1+epsilon)n}log n $$



    for some random variable $C$ which a.s. finite and positive. Using this inequality,



    $$ frac{Xi_{T_n - 1}}{loglog T_n}
    = frac{X_1 cdots X_{n-1} (T_n - 1)}{log log T_n}
    geq frac{tau_n}{log n + O(1)}. tag{2} $$



    Combining $text{(1)}$ and $text{(2)}$, it follows that



    $$ limsup_{ntoinfty} frac{Xi_n}{loglog n}
    geq limsup_{ntoinfty} frac{Xi_{T_n - 1}}{log log T_n}
    geq 1 - epsilon quad text{a.s.}, $$



    and since this is true for any $epsilon in (0, 1)$, the desired inequality follows. $square$






    share|cite|improve this answer











    $endgroup$


















      2












      $begingroup$

      By the setting, $V_n = 1 - F(U_n)$ is a sequence of i.i.d. random variables with $V_n sim operatorname{Uniform}([0,1])$. Write



      $$ xi_n = min{V_1, cdots, V_n}, qquad Xi_n = n xi_n.$$



      Here we prove the following claim.




      Claim. $displaystyle limsup_{ntoinfty} frac{Xi_n}{log log n} = 1 $ holds almost surely.




      As usual, we show that one bounds the other.



      Proof of $ ``leqtext{''}$. We first fix $epsilon in (0, 1)$ and $alpha > 1$, and set $n_k = lfloor alpha^k rfloor$. Then from the inequality $ mathbb{P}( Xi_n > t) = left( 1 - frac{t}{n} right)^n leq e^{-t} $, it follows that



      $$ sum_{k=1}^{infty} mathbb{P}(Xi_{n_k} > (1+epsilon)loglog n_k)
      leq sum_{k=1}^{infty} frac{1}{left( log n_k right)^{1+epsilon}}
      < infty $$



      since $(log n_k)^{-1-epsilon} asymp k^{-1-epsilon}$ as $ktoinfty$. By Borel-Cantelli's lemma, $Xi_{n_k} leq (1+epsilon)loglog n_k$ eventually holds a.s. But for each $n in [n_k, n_{k+1}]$, we have



      $$ frac{Xi_{n}}{loglog n} = frac{n xi_n}{loglog n} leq frac{n_{k+1} xi_{n_{k+1}}}{loglog n_k} = frac{n_{k+1}loglog n_{k+1}}{n_k loglog n_k} frac{Sigma_{n_k}}{loglog n_{k+1}}, $$



      and so, taking limsup as $ktoinfty$ shows that



      $$ limsup_{ntoinfty} frac{Xi_n}{loglog n} leq alpha(1+epsilon) quad text{a.s.} $$



      But since this is true for any $epsilon > 0$ and $alpha > 1$, letting $epsilon to 0$ and $alpha to 1$ along subsequence (so that we only care about countable intersection of events) proves the desired inequality.



      Proof of $ ``geqtext{''}$. This part is much harder. Define $X_n$ and $T_n$ as follows:




      • $T_0 = 0$, and $T_{n} = inf { m > T_{n-1} : xi_m < xi_{T_{n-1}} }$ for $n geq 1$. (Here, the convention $xi_0 = 1$ is used.) In other words, $T_n$'s record the $n$-th time instance at which $xi_n$ becomes smaller.


      • $X_0 = 1$, and $X_{n} = xi_{T_{n}} / xi_{T_{n-1}}$.



      Then $(X_n)_{n=1}^{infty}$ are i.i.d. with $X_n sim operatorname{Uniform}([0,1])$, and $T_{n} - T_{n-1} sim operatorname{Geometric}(X_0 cdots X_{n-1})$ for $n geq 1$. Now we fix $epsilon in (0, 1)$ and focus on the extreme behavior of



      $$tau_n := X_1 cdots X_{n-1} (T_n - T_{n-1}).$$



      To this end, write $mathcal{F}_n = sigma( X_0, cdots, X_n, T_0, cdots, T_n)$ for the natural filtration of $(X_n)$ and $(T_n)$. Then $tau_n$ is $mathcal{F}_n$-measurable, and



      $$
      mathbb{P}(tau_n > (1-epsilon)log n mid mathcal{F}_{n-1})
      geq left( 1 - X_1cdots X_{n-1} right)^{frac{(1-epsilon)log n}{X_1 cdots X_{n-1}}}
      $$



      By the strong law of large numbers, $X_1 cdots X_n = e^{-(1+o(1))n}$, and so, this lower bound is $asymp n^{-(1-epsilon)}$ a.s. This tells that $sum_{n=1}^{infty} mathbb{P}(tau_n > (1-epsilon)log n mid mathcal{F}_{n-1}) = infty$ a.s., and so, by the generalized second Borel-Cantelli lemma, it follows that



      $$ tau_n > (1-epsilon)log nquad text{infinitely often} quad text{a.s.} tag{1} $$



      On the other hand, for $n geq 3$,



      $$
      mathbb{P}(tau_n > 2019log n)
      leq mathbb{E} left[ left( 1 - X_1cdots X_{n-1} right)^{frac{2019log n}{X_1 cdots X_{n-1}}-1} right]
      leq frac{c}{n^{2019}}
      $$



      where $c > 0$ is an absolute constant. (We can pick $c = mathbb{E}[(1-X_1 X_2)^{-1}] = pi^2/6$, for instance, although this value is not important.) So by the Borel-Cantelli lemma, $mathbb{P}(tau_n leq (1+epsilon)log n text{ i.o}) = 1$. Moreover, by the strong law of large numbers again, $mathbb{P}(X_1 cdots X_n geq e^{-(1+epsilon)n} text{ eventually}) = 1$. Therefore



      $$ T_n
      = sum_{k=1}^{n} frac{tau_k}{X_1 cdots X_k}
      leq sum_{k=1}^{n} e^{(1+epsilon)k} 2019 log k + mathcal{O}(1)
      leq C e^{(1+epsilon)n}log n $$



      for some random variable $C$ which a.s. finite and positive. Using this inequality,



      $$ frac{Xi_{T_n - 1}}{loglog T_n}
      = frac{X_1 cdots X_{n-1} (T_n - 1)}{log log T_n}
      geq frac{tau_n}{log n + O(1)}. tag{2} $$



      Combining $text{(1)}$ and $text{(2)}$, it follows that



      $$ limsup_{ntoinfty} frac{Xi_n}{loglog n}
      geq limsup_{ntoinfty} frac{Xi_{T_n - 1}}{log log T_n}
      geq 1 - epsilon quad text{a.s.}, $$



      and since this is true for any $epsilon in (0, 1)$, the desired inequality follows. $square$






      share|cite|improve this answer











      $endgroup$
















        2












        2








        2





        $begingroup$

        By the setting, $V_n = 1 - F(U_n)$ is a sequence of i.i.d. random variables with $V_n sim operatorname{Uniform}([0,1])$. Write



        $$ xi_n = min{V_1, cdots, V_n}, qquad Xi_n = n xi_n.$$



        Here we prove the following claim.




        Claim. $displaystyle limsup_{ntoinfty} frac{Xi_n}{log log n} = 1 $ holds almost surely.




        As usual, we show that one bounds the other.



        Proof of $ ``leqtext{''}$. We first fix $epsilon in (0, 1)$ and $alpha > 1$, and set $n_k = lfloor alpha^k rfloor$. Then from the inequality $ mathbb{P}( Xi_n > t) = left( 1 - frac{t}{n} right)^n leq e^{-t} $, it follows that



        $$ sum_{k=1}^{infty} mathbb{P}(Xi_{n_k} > (1+epsilon)loglog n_k)
        leq sum_{k=1}^{infty} frac{1}{left( log n_k right)^{1+epsilon}}
        < infty $$



        since $(log n_k)^{-1-epsilon} asymp k^{-1-epsilon}$ as $ktoinfty$. By Borel-Cantelli's lemma, $Xi_{n_k} leq (1+epsilon)loglog n_k$ eventually holds a.s. But for each $n in [n_k, n_{k+1}]$, we have



        $$ frac{Xi_{n}}{loglog n} = frac{n xi_n}{loglog n} leq frac{n_{k+1} xi_{n_{k+1}}}{loglog n_k} = frac{n_{k+1}loglog n_{k+1}}{n_k loglog n_k} frac{Sigma_{n_k}}{loglog n_{k+1}}, $$



        and so, taking limsup as $ktoinfty$ shows that



        $$ limsup_{ntoinfty} frac{Xi_n}{loglog n} leq alpha(1+epsilon) quad text{a.s.} $$



        But since this is true for any $epsilon > 0$ and $alpha > 1$, letting $epsilon to 0$ and $alpha to 1$ along subsequence (so that we only care about countable intersection of events) proves the desired inequality.



        Proof of $ ``geqtext{''}$. This part is much harder. Define $X_n$ and $T_n$ as follows:




        • $T_0 = 0$, and $T_{n} = inf { m > T_{n-1} : xi_m < xi_{T_{n-1}} }$ for $n geq 1$. (Here, the convention $xi_0 = 1$ is used.) In other words, $T_n$'s record the $n$-th time instance at which $xi_n$ becomes smaller.


        • $X_0 = 1$, and $X_{n} = xi_{T_{n}} / xi_{T_{n-1}}$.



        Then $(X_n)_{n=1}^{infty}$ are i.i.d. with $X_n sim operatorname{Uniform}([0,1])$, and $T_{n} - T_{n-1} sim operatorname{Geometric}(X_0 cdots X_{n-1})$ for $n geq 1$. Now we fix $epsilon in (0, 1)$ and focus on the extreme behavior of



        $$tau_n := X_1 cdots X_{n-1} (T_n - T_{n-1}).$$



        To this end, write $mathcal{F}_n = sigma( X_0, cdots, X_n, T_0, cdots, T_n)$ for the natural filtration of $(X_n)$ and $(T_n)$. Then $tau_n$ is $mathcal{F}_n$-measurable, and



        $$
        mathbb{P}(tau_n > (1-epsilon)log n mid mathcal{F}_{n-1})
        geq left( 1 - X_1cdots X_{n-1} right)^{frac{(1-epsilon)log n}{X_1 cdots X_{n-1}}}
        $$



        By the strong law of large numbers, $X_1 cdots X_n = e^{-(1+o(1))n}$, and so, this lower bound is $asymp n^{-(1-epsilon)}$ a.s. This tells that $sum_{n=1}^{infty} mathbb{P}(tau_n > (1-epsilon)log n mid mathcal{F}_{n-1}) = infty$ a.s., and so, by the generalized second Borel-Cantelli lemma, it follows that



        $$ tau_n > (1-epsilon)log nquad text{infinitely often} quad text{a.s.} tag{1} $$



        On the other hand, for $n geq 3$,



        $$
        mathbb{P}(tau_n > 2019log n)
        leq mathbb{E} left[ left( 1 - X_1cdots X_{n-1} right)^{frac{2019log n}{X_1 cdots X_{n-1}}-1} right]
        leq frac{c}{n^{2019}}
        $$



        where $c > 0$ is an absolute constant. (We can pick $c = mathbb{E}[(1-X_1 X_2)^{-1}] = pi^2/6$, for instance, although this value is not important.) So by the Borel-Cantelli lemma, $mathbb{P}(tau_n leq (1+epsilon)log n text{ i.o}) = 1$. Moreover, by the strong law of large numbers again, $mathbb{P}(X_1 cdots X_n geq e^{-(1+epsilon)n} text{ eventually}) = 1$. Therefore



        $$ T_n
        = sum_{k=1}^{n} frac{tau_k}{X_1 cdots X_k}
        leq sum_{k=1}^{n} e^{(1+epsilon)k} 2019 log k + mathcal{O}(1)
        leq C e^{(1+epsilon)n}log n $$



        for some random variable $C$ which a.s. finite and positive. Using this inequality,



        $$ frac{Xi_{T_n - 1}}{loglog T_n}
        = frac{X_1 cdots X_{n-1} (T_n - 1)}{log log T_n}
        geq frac{tau_n}{log n + O(1)}. tag{2} $$



        Combining $text{(1)}$ and $text{(2)}$, it follows that



        $$ limsup_{ntoinfty} frac{Xi_n}{loglog n}
        geq limsup_{ntoinfty} frac{Xi_{T_n - 1}}{log log T_n}
        geq 1 - epsilon quad text{a.s.}, $$



        and since this is true for any $epsilon in (0, 1)$, the desired inequality follows. $square$






        – Sangchul Lee (answered Feb 16 at 3:53, edited Feb 17 at 0:06)

            Attempt: I don't feel comfortable that this is correct, but here we go. Because (according to Cramer) $Xi_n$ is $ntext{Beta}(1,n)$, and thus has survival function $mathbb{P}(Xi_n > xi) = (1 - frac{xi}{n})^n$ on $[0,n]$, we can conclude (I think) that every $Xi_n$ is stochastically dominated by an exponential random variable with parameter $1$.



            This is because of the inequality $-x ge log(1-x)$, since this implies that:



            $$ -frac{x}{n} ge log ( 1 - frac{x}{n}) iff -x ge n log (1 - frac{x}{n}) iff e^{-x} ge left( 1 - frac{x}{n} right)^n $$



            where $e^{-x}$ is the survival function $mathbb{P}(Y > x)$ of an exponential(1) random variable; domination of survival functions is exactly stochastic dominance.



            The stochastic ordering doesn't seem to depend on the probabilistic dependence of the sequence of exponential random variables, so we can assume we have some sequence of $Y_n$ i.i.d. exponential(1) R.V.'s.



            This is where it seems iffy to me: basically I will claim that the stochastic ordering of $Xi_n preceq Y_n$ for each individual $n$ somehow implies that the stochastic ordering transfers to the entire sequences of $Xi_n$'s and $Y_n$'s.



            So I believe it's the case (please correct me if I'm wrong) that some sequence of RVs $X_n$ is $O_p(1)$ if and only if for all $M>0$, $$mathbb{P}(X_n > M text{ i.o.}) = 0$$



            So the claim is that because $Xi_n preceq Y_n$ for all $n$, the following would hold:



            $$ mathbb{P}(Xi_n > M text{ i.o.}) le mathbb{P}(Y_n > M text{ i.o.}) ,. $$



            So we just have to show that $mathbb{P}(Y_n > M text{ i.o.}) = 0$ for all $M$, but that's easy (I think), since by our assumptions



            $$ mathbb{P}(Y_n > M text{ i.o.}) =C_{{ n_k }}lim_{K to infty} prod_{k=1}^K mathbb{P}(Y_{n_k} > M) =C_{{ n_k }}lim_{K to infty} (e^{-M})^k =0$$



            where ${ n_k}$ is some subsequence, and $C_{{n_k}}$ is some finite constant depending on the particular subsequence.



            If anyone can fix this, or show why it's false and doomed to fail even with modifications, I would really appreciate it.





            First let's go through a chain of trivialities in order to rephrase the problem in a way which makes it easier to solve (note that by definition $Xi_n ge 0$):



            $$Xi_n = o(log n) quad iff quad lim_{n to infty} frac{Xi_n}{log n} = 0 quad iff quad \ forall varepsilon > 0, frac{Xi_n}{log n} > varepsilon text{ only finitely many times} quad iff \ forall varepsilon >0, quad Xi_n > varepsilon log n text{ only finitely many times} ,.$$



            One also has that (writing $Z_n := max_{1 le i le n} X_i$ for the sample maximum, so that $Xi_n = n(1 - F(Z_n))$):



            $$Xi_n > varepsilon log n quad iff quad n(1 - F(Z_n)) > varepsilon log n quad iff quad 1 - F(Z_n) > frac{varepsilon log n}{n} \ iff quad F(Z_n) < 1 - frac{varepsilon log n}{n} quad iff quad Z_n le inf left{ y: F(y) ge 1 - frac{varepsilon log n}{n} right} ,. $$



            Correspondingly, define for all $n$:



            $$ u_n^{(varepsilon)} = inf left{ y: F(y) ge 1 - frac{varepsilon log n}{n} right} ,. $$

            (Since $F$ is continuous, $1 - F(u_n^{(varepsilon)}) = frac{varepsilon log n}{n}$ for all $n$ large enough that $varepsilon log n < n$; this is what gets substituted into the series criterion below.)



            Therefore the above steps show us that:

            $$Xi_n = o(log n) text{ a.s.} quad iff quad mathbb{P}(Xi_n = o(log n)) = 1 quad iff quad \
            mathbb{P}(forall varepsilon > 0 , Xi_n > varepsilon log n text{ only finitely many times}) = 1 \
            iff mathbb{P}(forall varepsilon > 0, Z_n le u_n^{(varepsilon)} text{ only finitely many times}) = 1 \
            iff mathbb{P}(exists varepsilon >0 : Z_n le u_n^{(varepsilon)} text{ infinitely often}) =0 ,. $$

            Notice that we can write:

            $$ { exists varepsilon >0 : Z_n le u_n^{(varepsilon)} text{ infinitely often} } = bigcup_{varepsilon > 0} { Z_n le u_n^{(varepsilon)} text{ infinitely often} } ,.$$

            The thresholds $u_n^{(varepsilon)}$ become (weakly) larger as $varepsilon$ decreases, so the events $${ Z_n le u_n^{(varepsilon)} text{ infinitely often} } $$ increase as $varepsilon$ decreases; in particular the union above is unchanged if we only take $varepsilon = 1/k$ for $k in mathbb{N}$. Continuity of probability from below along this increasing sequence of events then gives:

            $$mathbb{P}(exists varepsilon >0 : Z_n le u_n^{(varepsilon)} text{ infinitely often}) = \
            mathbb{P} left( bigcup_{k ge 1} { Z_n le u_n^{(1/k)} text{ infinitely often} } right) = \
            lim_{varepsilon downarrow 0} mathbb{P}(Z_n le u_n^{(varepsilon)} text{ infinitely often}) ,.$$



            Therefore it suffices to show that for all $varepsilon >0$,



            $$mathbb{P}(Z_n le u_n^{(varepsilon)} text{ infinitely often}) = 0 $$



            because of course the limit of any constant sequence is the constant.



            Here is somewhat of a sledgehammer result:




            Theorem 4.3.1., p. 252 of Galambos, The Asymptotic Theory of Extreme Order Statistics, 2nd edition. Let $X_1, X_2, dots$ be i.i.d. variables with common nondegenerate and continuous distribution function $F(x)$, and let $u_n$ be a nondecreasing sequence such that $n(1 - F(u_n))$ is also nondecreasing. Then, for $u_n < sup { x: F(x) <1 }$, $$mathbb{P}(Z_n le u_n text{ infinitely often}) =0 text{ or }1 $$
            according as
            $$sum_{j=1}^{+infty}[1 - F(u_j)]exp(-j[1-F(u_j)]) < +infty text{ or }=+infty ,. $$




            The proof is technical and takes around five pages, but ultimately it turns out to be a corollary of one of the Borel-Cantelli lemmas. I may get around to trying to condense the proof to only use the part required for this analysis as well as only the assumptions which hold in the Gaussian case, which may be shorter (but maybe it isn't) and type it up here, but holding your breath is not recommended. Note that in this case $omega(F)=+infty$, so that condition is vacuous, and $n(1-F(u_n^{(varepsilon)})) = varepsilon log n$ is clearly non-decreasing.



            Anyway, the point is that, appealing to this theorem, it suffices to show that:



            $$sum_{j=1}^{+infty}[1 - F(u_j^{(varepsilon)})]exp(-j[1-F(u_j^{(varepsilon)})]) = sum_{j=1}^{+infty}left[ frac{varepsilon log j}{j} right]exp(-varepsilon log j) = varepsilon sum_{j=1}^{+infty} frac{ log j}{j^{1 + varepsilon}} < + infty ,. $$



            Note that since logarithmic growth is slower than any power-law growth with a positive exponent (indeed $frac{log log j}{log j} to 0$, so $log log j le frac{varepsilon}{2} log j$, i.e. $log j le j^{varepsilon/2}$, for all $j$ large enough), we have, up to finitely many terms, that:



            $$ sum_{j=1}^{+infty} frac{log j}{j^{1 + varepsilon}} le sum_{j=1}^{+infty} frac{j^{varepsilon/2}}{j^{1 + varepsilon}} = sum_{j=1}^{+infty} frac{1}{j^{1 + varepsilon/2}} < +infty ,,$$



            since the p-series is known to converge for all $p>1$, and $varepsilon >0$ of course implies $1 + varepsilon/2 > 1$.



            Thus using the above theorem we have shown that for all $varepsilon >0$, $mathbb{P}(Z_n le u_n^{(varepsilon)} text{ i.o.}) = 0$, which to recapitulate should mean that $Xi_n = o(log n)$ almost surely.
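
            As a rough empirical companion to this conclusion (an illustration, not a substitute for the argument), one can simulate the Gaussian case directly; the sample size, checkpoints and seed below are arbitrary choices.

```python
# Illustration only: for X_i i.i.d. standard normal, track M_n = max(X_1,...,X_n),
# form Xi_n = n * (1 - Phi(M_n)), and watch Xi_n / log n drift toward 0 (slowly).
import numpy as np
from math import erf, sqrt, log

def std_normal_sf(x: float) -> float:
    """Survival function 1 - Phi(x) of the standard normal distribution."""
    return 0.5 * (1.0 - erf(x / sqrt(2.0)))

rng = np.random.default_rng(1)

N = 2_000_000
running_max = np.maximum.accumulate(rng.standard_normal(N))   # M_n for n = 1, ..., N

for n in (10**3, 10**4, 10**5, 10**6, N):
    Xi_n = n * std_normal_sf(running_max[n - 1])
    print(f"n = {n:>8d}   Xi_n = {Xi_n:8.4f}   Xi_n / log n = {Xi_n / log(n):.4f}")
```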



            We still need to show that $log Xi_n = o(log log n)$. This doesn't follow from the above, since, e.g.,



            $$ x_n = frac{log n}{n} = o(log n) quad text{while} quad log x_n = log log n - log n not= o(log log n) ,. $$



            However, given a sequence $x_n$, if one can show that $x_n = o( (log n)^{delta})$ for arbitrary $delta >0$, then it does follow that $log(x_n) = o(log log n)$. Ideally I would like to be able to show this for $Xi_n$ using the above lemma (assuming it's even true), but am not able to (as of yet).






            – Chill2Macht (community wiki, 3 revs, edited Feb 12 at 18:17)

              If $Y_n$'s are i.i.d. exponential variables of the unit rate, then for each $Y_n$ there is a probability $e^{-M}$ of exceeding the level $M$. In other words, $mathbf{1}_{{Y_n > M}}$ is a Bernoulli process with the parameter $e^{-M}$, and so, $Y_n > M$ will occur quite frequently. In fact, one can show that $max{Y_1,cdots,Y_n} = (1+o(1))log n$ almost surely. What is interesting is that the maximum of $Xi_n$ grows much slower than in the i.i.d. case, because the 'updates' of $Xi_n / n$ occur at an exponentially slow speed.
              – Sangchul Lee
              Feb 16 at 4:00

            I was able to get my hands on a 1999 version of Cramer's Mathematical Methods of Statistics. Let me cite the relevant section for this problem, with some appropriate rewording for better flow.




            Let $M^v_n$ be the $v$th value from the top in a sample $X_1, cdots, X_n overset{text{iid}}{sim} F$. If we introduce $Xi_n$ by the substitution
            begin{align*}
            Xi_n = n(1 - F(M_n^v))
            end{align*}

            then this variable has the density
            begin{align*}
            f_{Xi_n}(xi) = binom{n-1}{v-1}left(frac{xi}{n}right)^{v-1}left(1-frac{xi}{n}right)^{n-v}
            end{align*}




            Hence, for the maximum order statistic, $v=1$, we therefore have the density
            begin{align*}
            f_{Xi_n}(xi) = left(1-frac{xi}{n}right)^{n-1}
            end{align*}

            or a $ntext{Beta}(1, n)$ distribution. Previously, the OP and I had some confusion in the comments and chat about the formulation s/he described:
            begin{align*}
            Xi_n overset{mathcal{D}}{=}max(U_1, cdots, U_n), qquad text{where } U_i overset{text{iid}}{sim} text{Unif}(0, n)
            end{align*}

            This would imply that $Xi_n$ is instead $ntext{Beta}(n, 1)$ (see here). Indeed, if we were to define $m_n^v$ as the $v$th value from the bottom of the sample, and define
            begin{align*}
            H_n = n F(m_n^v)
            end{align*}

            We would also get
            begin{align*}
            f_{H_n}(eta) = binom{n-1}{v-1}left(frac{eta}{n}right)^{v-1}left(1-frac{eta}{n}right)^{n-v}
            end{align*}

            with $v = 1$ giving us that $H_n$ is also $n text{Beta}(1, n)$.



            Before we continue, let me provide a concrete definition of growth rates used in probability theory.




            A sequence of random variables $X_n$ is said to be bounded in probability or stochastically bounded if, for all $epsilon > 0$, there exist $M, N$ such that, for all $n > N$, we have
            begin{align*}
            P(|X_n| < M) > 1-epsilon
            end{align*}

            We then say $X_n = O_p(a_n)$ if $X_n/a_n$ is stochastically bounded. Similarly, we say $X_n$ is stochastically convergent to zero if
            begin{align*}
            lim_{nrightarrow infty} P(|X_n| < M) = 1
            end{align*}

            for all $M > 0$, and write $X_n = o_p(a_n)$ if $X_n/a_n$ is stochastically convergent to zero.




            Remark: For example, if $X_n sim mathcal{N}(0, sin^2(n))$, then $X_n = O_p(1)$, since
            begin{align*}
            P(|X_n| < M) = 2Phileft(frac{M}{|sin(n)|}right) - 1 ge 2Phi(M) - 1 > 1 - epsilon
            end{align*}

            by choosing $M = Phi^{-1}(1 - frac{epsilon}{2})+1$ and $N = 1$.



            Back to the problem at hand, it turns out that
            begin{align*}
            ntext{Beta}(1, n) overset{mathcal{D}}{rightarrow} text{Exp}(1)
            end{align*}
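
            One quick way to verify this convergence (a standard computation, included here for completeness) is through survival functions: for fixed $t ge 0$,
            begin{align*}
            P(Xi_n > t) = int_t^n left(1-frac{xi}{n}right)^{n-1} dxi = left(1-frac{t}{n}right)^{n} longrightarrow e^{-t},
            end{align*}

            which is the survival function of $text{Exp}(1)$.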

            In other words (convergence in distribution already gives tightness, and via the Skorokhod representation theorem we may realize the limit on a suitable probability space), there exists a random variable $Omega sim text{Exp}(1)$ such that
            begin{align*}
            Xi_n = Omega + o_p(1)
            end{align*}

            Since $Omega = O_p(1)$, we have that $Xi_n = O_p(1) + o_p(1) = O_p(1)$ by big-O/little-o arithmetic. Indeed, looking further into Cramer's book, he stated that
            begin{align*}
            lim_{nrightarrow infty} f_{Xi_n}(xi) = frac{xi^{v-1}}{Gamma(v)}e^{-xi}
            end{align*}

            which is a $text{Gamma}(v, 1)$ distribution, which is also $O_p(1)$. Hence, for any fixed $v$, $Xi_n$ would be stochastically bounded.
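
            A small empirical check of the limiting claim (illustration only; the sample sizes and evaluation points are arbitrary choices) can be run with NumPy, comparing the empirical survival function of $ntext{Beta}(1,n)$ draws against $e^{-t}$:

```python
# Illustration only: compare the empirical survival function of n * Beta(1, n)
# draws with the Exp(1) survival function exp(-t) at a few points t.
import numpy as np

rng = np.random.default_rng(2)
ts = np.array([0.5, 1.0, 2.0, 4.0])

for n in (10, 100, 10_000):
    draws = n * rng.beta(1.0, n, size=200_000)       # samples of Xi_n ~ n * Beta(1, n)
    empirical = (draws[:, None] > ts).mean(axis=0)   # empirical P(Xi_n > t)
    print(f"n = {n:>6d}   empirical:", np.round(empirical, 4), "  exp(-t):", np.round(np.exp(-ts), 4))
```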






            – Tom Chen (answered Feb 15 at 6:36, edited Feb 15 at 6:42)

              OK, but to clarify: being $O_P(1)$ doesn't guarantee that the sequence is $O(1)$ a.s.? Only that it's $O(1)$ in probability? So this isn't enough to show that the maximum of Gaussians is a.s. $sqrt{2 log n}$? Just in probability? (Is it enough to show it in expectation?)
              – Chill2Macht
              Feb 17 at 21:40












              Yes, $O_p(1)$ means $O(1)$ in probability, while what you and Sangchul were aiming for is $O(log log n)$ a.s. Both are strengthened in one direction, while weakened in another (i.e. in probability vs a.s., and $O(1)$ vs $O(log log n)$). I took Cramer's comment on "bounded" literally, so I took the in probability route. It's pretty neat, because this parallel also occurs if we define $Xi_n = S_n/sqrt{n}$, where $S_n$ is a sum of i.i.d. r.v.'s. Then $Xi_n = O_p(1)$, yet $Xi_n/sqrt{log log n}$ is $O(1)$ a.s. You can read more about this so-called "law of the iterated logarithm".
              – Tom Chen
              Feb 18 at 21:38












              Yes, I know what all of those things are, the point being that if it equals $O(1)$ almost surely then it's possible to prove a lot more. Also I wasn't aiming for $O(log log n)$ almost surely, I was looking for $log Xi_n$ to be $o(log log n)$ almost surely, since at the point of the proof where that's needed, we are working with $log Xi_n$. But if $Xi_n$ were $O(1)$ almost surely, then it would be trivial that $log Xi_n$ is $O(1)$ almost surely, and thus in particular $o(log log n)$ a.s. My question is about whether this type of proof can be extended into a more useful direction.
              $endgroup$
              – Chill2Macht
              Feb 19 at 23:47


















            1





            +500







            $begingroup$

            I was able to get my hands on a 1999 version of Cramer's Mathematical Methods of Statistics. Let me cite the relevant section for this problem, with some appropriate rewording for better flow.




            Let $M^v_n$ be the $v$th value from the top in a sample $X_1, cdots, X_n overset{text{iid}}{sim} F$. If we introduce $Xi_n$ by the substitution
            begin{align*}
            Xi_n = n(1 - F(M_n^v))
            end{align*}

            This variable has density
            begin{align*}
            f_{Xi_n}(xi) = binom{n-1}{v-1}left(frac{xi}{n}right)^{v-1}left(1-frac{xi}{n}right)^{n-v}
            end{align*}




            Hence, for the maximum order statistic, $v=1$, we therefore have the density
            begin{align*}
            f_{Xi_n}(xi) = left(1-frac{xi}{n}right)^{n-1}
            end{align*}

            or a $ntext{Beta}(1, n)$ distribution. Previously, the OP and I had some confusion in the comments and chat about the formulation s/he described:
            begin{align*}
            Xi_n overset{mathcal{D}}{=}max(U_1, cdots, U_n), qquad text{where } U_i overset{text{iid}}{sim} text{Unif}(0, n)
            end{align*}

            Since this would imply that $Xi_n$ is instead $ntext{Beta}(n, 1)$ (see here). Indeed, if we were to define $m_n^v$ as the $v$th value from the bottom of the sample, and define
            begin{align*}
            H_n = n F(m_n^v)
            end{align*}

            We would also get
            begin{align*}
            f_{H_n}(eta) = binom{n-1}{v-1}left(frac{eta}{n}right)^{v-1}left(1-frac{eta}{n}right)^{n-v}
            end{align*}

            with $v = 1$ giving us that $H_n$ is also $n text{Beta}(1, n)$.



            Before we continue, let me provide a concrete definition of growth rates used in probability theory.




            A sequence of random variables $X_n$ are said to be bounded in probability or stochastically bounded if, for all $epsilon > 0$, there exists $M, N$ such that, for all $n > N$, we have
            begin{align*}
            P(|X_n| < M) > 1-epsilon
            end{align*}

            We then say $X_n = O_p(a_n)$ if $X_n/a_n$ is stochastically bounded. Similarly, we say $X_n$ is stochastically convergent to zero if
            begin{align*}
            lim_{nrightarrow infty} P(|X_n| < M) = 1
            end{align*}

            for all $M > 0$, and write $X_n = o_p(a_n)$ if $X_n/a_n$ is stochastically convergent to zero.




            Remark: For example, if $X_n sim mathcal{N}(0, sin^2(n))$, then $X_n = O_p(1)$, since
            begin{align*}
            P(|X_n| < M) = 2Phileft(frac{M}{|sin(n)|}right) - 1 ge 2Phi(M) - 1 > 1 - epsilon
            end{align*}

            by choosing $M = Phi^{-1}(1 - frac{epsilon}{2})+1$ and $N = 1$.



            Back to the problem at hand, it turns out that
            begin{align*}
            ntext{Beta}(1, n) overset{mathcal{D}}{rightarrow} text{Exp}(1)
            end{align*}

            In other words, there exists a random variable $Omega sim text{Exp}(1)$ such that
            begin{align*}
            Xi_n = Omega + o_p(1)
            end{align*}

            Since $Omega = O_p(1)$, we have that $Xi_n = O_p(1) + o_p(1) = O_p(1)$ by big-O/little-o arithmetic. Indeed, looking further into Cramer's book, he stated that
            begin{align*}
            lim_{nrightarrow infty} f_{Xi_n}(xi) = frac{xi^{v-1}}{Gamma(v)}e^{-xi}
            end{align*}

            which is a $text{Gamma}(n, 1)$ distribution, which is also $O_p(1)$. Hence, for any fixed $v$, $Xi_n$ would be stochastically bounded.






            share|cite|improve this answer











            $endgroup$













            • $begingroup$
              OK, but to clarify being $O_P(1)$ doesn't guarantee that the sequence is $O(1)$ a.s.? Only that it's $O(1)$ in probability? So this isn't enough to show that the maxima of Gaussians is a.s. $sqrt{2 log n}$? Just in probability? (Is it enough to show it in expectation?)
              $endgroup$
              – Chill2Macht
              Feb 17 at 21:40












            • $begingroup$
              Yes, $O_p(1)$ means $O(1)$ in probability, while what you and Sangchul were aiming for is $O(log log n)$ a.s. Both are strengthened in one direction, while weakened in another (i.e. in probability vs a.s., and $O(1)$ vs $O(log log n)$). I took Cramer's comment on "bounded" literally, so I took the in probability route. It's pretty neat, because this parallel also occurs if we define $Xi_n = S_n/sqrt{n}$, where $S_n$ is a sum of i.i.d r.v's. Then $Xi_n = O_p(1)$, yet $Xi_n/sqrt{log log n}$ is $O(1)$ a.s. You can read more about this so-called "law of the iterated logarithm".
              $endgroup$
              – Tom Chen
              Feb 18 at 21:38












            • $begingroup$
              Yes, I know what all of those things are, the point being that if it equals $O(1)$ almost surely then it's possible to prove a lot more. Also I wasn't aiming for $O(log log n)$ almost surely, I was looking for $log Xi_n$ to be $o(log log n)$ almost surely, since the point of the proof where that's needed, we're working with $log Xi_n$. But if $Xi_n$ were $O(1)$ almost surely, then it would be trivial that $log Xi_n$ is $O(1)$ almost surely, and thus in particular $o(log log n)$ a.s. My question is about whether this type of proof can be extended into a more useful direction.
              $endgroup$
              – Chill2Macht
              Feb 19 at 23:47
















            1





            +500







            1





            +500



            1




            +500



            $begingroup$

            I was able to get my hands on a 1999 version of Cramer's Mathematical Methods of Statistics. Let me cite the relevant section for this problem, with some appropriate rewording for better flow.




            Let $M^v_n$ be the $v$th value from the top in a sample $X_1, cdots, X_n overset{text{iid}}{sim} F$. If we introduce $Xi_n$ by the substitution
            begin{align*}
            Xi_n = n(1 - F(M_n^v))
            end{align*}

            This variable has density
            begin{align*}
            f_{Xi_n}(xi) = binom{n-1}{v-1}left(frac{xi}{n}right)^{v-1}left(1-frac{xi}{n}right)^{n-v}
            end{align*}




            Hence, for the maximum order statistic, $v=1$, we therefore have the density
            begin{align*}
            f_{Xi_n}(xi) = left(1-frac{xi}{n}right)^{n-1}
            end{align*}

            or a $ntext{Beta}(1, n)$ distribution. Previously, the OP and I had some confusion in the comments and chat about the formulation s/he described:
            begin{align*}
            Xi_n overset{mathcal{D}}{=}max(U_1, cdots, U_n), qquad text{where } U_i overset{text{iid}}{sim} text{Unif}(0, n)
            end{align*}

            Since this would imply that $Xi_n$ is instead $ntext{Beta}(n, 1)$ (see here). Indeed, if we were to define $m_n^v$ as the $v$th value from the bottom of the sample, and define
            begin{align*}
            H_n = n F(m_n^v)
            end{align*}

            We would also get
            begin{align*}
            f_{H_n}(eta) = binom{n-1}{v-1}left(frac{eta}{n}right)^{v-1}left(1-frac{eta}{n}right)^{n-v}
            end{align*}

            with $v = 1$ giving us that $H_n$ is also $n text{Beta}(1, n)$.



            Before we continue, let me provide a concrete definition of growth rates used in probability theory.




            A sequence of random variables $X_n$ are said to be bounded in probability or stochastically bounded if, for all $epsilon > 0$, there exists $M, N$ such that, for all $n > N$, we have
            begin{align*}
            P(|X_n| < M) > 1-epsilon
            end{align*}

            We then say $X_n = O_p(a_n)$ if $X_n/a_n$ is stochastically bounded. Similarly, we say $X_n$ is stochastically convergent to zero if
            begin{align*}
            lim_{nrightarrow infty} P(|X_n| < M) = 1
            end{align*}

            for all $M > 0$, and write $X_n = o_p(a_n)$ if $X_n/a_n$ is stochastically convergent to zero.




            Remark: For example, if $X_n sim mathcal{N}(0, sin^2(n))$, then $X_n = O_p(1)$, since
            begin{align*}
            P(|X_n| < M) = 2Phileft(frac{M}{|sin(n)|}right) - 1 ge 2Phi(M) - 1 > 1 - epsilon
            end{align*}

            by choosing $M = Phi^{-1}(1 - frac{epsilon}{2})+1$ and $N = 1$.



            Back to the problem at hand, it turns out that
            begin{align*}
            ntext{Beta}(1, n) overset{mathcal{D}}{rightarrow} text{Exp}(1)
            end{align*}

            In other words, there exists a random variable $Omega sim text{Exp}(1)$ such that
            begin{align*}
            Xi_n = Omega + o_p(1)
            end{align*}

            Since $Omega = O_p(1)$, we have that $Xi_n = O_p(1) + o_p(1) = O_p(1)$ by big-O/little-o arithmetic. Indeed, looking further into Cramer's book, he stated that
            begin{align*}
            lim_{nrightarrow infty} f_{Xi_n}(xi) = frac{xi^{v-1}}{Gamma(v)}e^{-xi}
            end{align*}

            which is a $text{Gamma}(n, 1)$ distribution, which is also $O_p(1)$. Hence, for any fixed $v$, $Xi_n$ would be stochastically bounded.






            share|cite|improve this answer











            $endgroup$



            I was able to get my hands on a 1999 version of Cramer's Mathematical Methods of Statistics. Let me cite the relevant section for this problem, with some appropriate rewording for better flow.




            Let $M^v_n$ be the $v$th value from the top in a sample $X_1, cdots, X_n overset{text{iid}}{sim} F$. If we introduce $Xi_n$ by the substitution
            begin{align*}
            Xi_n = n(1 - F(M_n^v))
            end{align*}

            This variable has density
            begin{align*}
            f_{Xi_n}(xi) = binom{n-1}{v-1}left(frac{xi}{n}right)^{v-1}left(1-frac{xi}{n}right)^{n-v}
            end{align*}




            Hence, for the maximum order statistic, $v=1$, we therefore have the density
            begin{align*}
            f_{Xi_n}(xi) = left(1-frac{xi}{n}right)^{n-1}
            end{align*}

            or a $ntext{Beta}(1, n)$ distribution. Previously, the OP and I had some confusion in the comments and chat about the formulation s/he described:
            begin{align*}
            Xi_n overset{mathcal{D}}{=}max(U_1, cdots, U_n), qquad text{where } U_i overset{text{iid}}{sim} text{Unif}(0, n)
            end{align*}

            Since this would imply that $Xi_n$ is instead $ntext{Beta}(n, 1)$ (see here). Indeed, if we were to define $m_n^v$ as the $v$th value from the bottom of the sample, and define
            begin{align*}
            H_n = n F(m_n^v)
            end{align*}

            We would also get
            begin{align*}
            f_{H_n}(eta) = binom{n-1}{v-1}left(frac{eta}{n}right)^{v-1}left(1-frac{eta}{n}right)^{n-v}
            end{align*}

            with $v = 1$ giving us that $H_n$ is also $n text{Beta}(1, n)$.



            Before we continue, let me provide a concrete definition of growth rates used in probability theory.




A sequence of random variables $X_n$ is said to be bounded in probability, or stochastically bounded, if, for all $\epsilon > 0$, there exist $M, N$ such that, for all $n > N$, we have
\begin{align*}
P(|X_n| < M) > 1-\epsilon.
\end{align*}

We then say $X_n = O_p(a_n)$ if $X_n/a_n$ is stochastically bounded. Similarly, we say $X_n$ is stochastically convergent to zero if
\begin{align*}
\lim_{n\rightarrow \infty} P(|X_n| < M) = 1
\end{align*}

            for all $M > 0$, and write $X_n = o_p(a_n)$ if $X_n/a_n$ is stochastically convergent to zero.




Remark: For example, if $X_n \sim \mathcal{N}(0, \sin^2(n))$, then $X_n = O_p(1)$, since
\begin{align*}
P(|X_n| < M) = 2\Phi\left(\frac{M}{|\sin(n)|}\right) - 1 \ge 2\Phi(M) - 1 > 1 - \epsilon
\end{align*}

by choosing $M = \Phi^{-1}\left(1 - \frac{\epsilon}{2}\right)+1$ and $N = 1$; the middle inequality holds because $|\sin(n)| \le 1$, so $\Phi(M/|\sin(n)|) \ge \Phi(M)$.
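This remark is easy to verify numerically. The sketch below is my own illustration (the tolerance $\epsilon$, the grid of $n$ values, and the exact computation of $P(|X_n| < M)$ in closed form are choices I made, not part of the original answer):

```python
import numpy as np
from scipy import stats

eps = 0.05
M = stats.norm.ppf(1 - eps / 2) + 1      # the choice M = Phi^{-1}(1 - eps/2) + 1 from the remark

# For X_n ~ N(0, sin^2(n)), P(|X_n| < M) = 2 * Phi(M / |sin n|) - 1 exactly
for n in [1, 2, 10, 100, 1000]:
    p = 2 * stats.norm.cdf(M / abs(np.sin(n))) - 1
    print(n, p, p > 1 - eps)             # should print True for every n
```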



Back to the problem at hand, it turns out that
\begin{align*}
n\,\text{Beta}(1, n) \overset{\mathcal{D}}{\rightarrow} \text{Exp}(1).
\end{align*}

In other words, $\Xi_n$ converges in distribution to a random variable $\Omega \sim \text{Exp}(1)$. A sequence that converges in distribution is tight, so $\Xi_n = O_p(1)$. Indeed, looking further into Cramer's book, he states that
\begin{align*}
\lim_{n\rightarrow \infty} f_{\Xi_n}(\xi) = \frac{\xi^{v-1}}{\Gamma(v)}e^{-\xi},
\end{align*}

which is the $\text{Gamma}(v, 1)$ density. Hence, for any fixed $v$, $\Xi_n$ is stochastically bounded.
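To see the $\text{Exp}(1)$ limit concretely, here is a small Monte Carlo sketch of mine (the seed, repetition count, and grid of $n$ values are arbitrary assumptions). It runs a Kolmogorov–Smirnov comparison of draws from $n\,\text{Beta}(1, n)$ against the $\text{Exp}(1)$ CDF; the KS distance should be small and shrink toward the Monte Carlo noise floor as $n$ grows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
reps = 50_000                      # Monte Carlo sample size per n (arbitrary)

for n in [5, 50, 500, 5000]:
    draws = n * rng.beta(1, n, size=reps)          # Xi_n ~ n * Beta(1, n)
    ks = stats.kstest(draws, stats.expon.cdf)      # distance to the Exp(1) CDF
    print(n, round(ks.statistic, 4))
```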







answered Feb 15 at 6:36 (edited Feb 15 at 6:42)
– Tom Chen
• $\begingroup$
  OK, but to clarify: being $O_P(1)$ doesn't guarantee that the sequence is $O(1)$ a.s.? Only that it's $O(1)$ in probability? So this isn't enough to show that the maxima of Gaussians is a.s. $\sqrt{2 \log n}$? Just in probability? (Is it enough to show it in expectation?)
  $\endgroup$
  – Chill2Macht
  Feb 17 at 21:40

• $\begingroup$
  Yes, $O_p(1)$ means $O(1)$ in probability, while what you and Sangchul were aiming for is $O(\log \log n)$ a.s. Each is strengthened in one direction and weakened in another (in probability vs. a.s., and $O(1)$ vs. $O(\log \log n)$). I took Cramer's comment on "bounded" literally, so I took the in-probability route. It's pretty neat, because the same parallel occurs if we define $\Xi_n = S_n/\sqrt{n}$, where $S_n$ is a sum of i.i.d. r.v.'s: then $\Xi_n = O_p(1)$, yet $\Xi_n/\sqrt{\log \log n}$ is $O(1)$ a.s. You can read more about this in the so-called "law of the iterated logarithm".
  $\endgroup$
  – Tom Chen
  Feb 18 at 21:38

• $\begingroup$
  Yes, I know what all of those things are; the point is that if the sequence were $O(1)$ almost surely, it would be possible to prove a lot more. Also, I wasn't aiming for $O(\log \log n)$ almost surely; I was looking for $\log \Xi_n$ to be $o(\log \log n)$ almost surely, since at the point of the proof where that's needed, we're working with $\log \Xi_n$. But if $\Xi_n$ were $O(1)$ almost surely, then it would be trivial that $\log \Xi_n$ is $O(1)$ almost surely, and thus in particular $o(\log \log n)$ a.s. My question is about whether this type of proof can be extended in a more useful direction.
  $\endgroup$
  – Chill2Macht
  Feb 19 at 23:47



















