How to prove that the following estimators are biased and consistent?
Let $X$ be a random variable following a geometric distribution with parameter $p$. One estimator can be obtained by considering the second moment $E[X^{2}]=\frac{2-p}{p^2}$, namely
$$\hat{p}_1=\frac{-1+\sqrt{1+\frac{8}{n}\sum_{i=1}^{n}X_i^2}}{\frac{2}{n}\sum_{i=1}^{n}X_i^2}.$$
Another family of estimators can be obtained by observing that $E[\mathbf{1}_{[k,\infty)}(X_1)] = P[X_1>k]=(1-p)^{k}$ and so
$$\hat{p}_2 = 1- \frac{\log\left(\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}_{[k,+\infty)}(X_i)\right)}{k}.$$
I want to determine whether $\hat{p}_1$ and $\hat{p}_2$ are biased and consistent, but this seems difficult since I am not able to work out the distribution of these estimators. Perhaps I have to use inequalities, but I am not sure how to proceed. Any hints would be much appreciated.
Edit:
Let $Y=\frac{1}{n}\sum_{i=1}^{n}X_i^2$; then
$$E[\hat{p}_1]=E[f(Y)]$$
where
$$f(y) = \frac{-1+\sqrt{1+8y}}{2y}.$$
Since $f''(y)>0$, Jensen's inequality gives
$$E[f(Y)]>f(E[Y])\implies E[\hat{p}_1]>p.$$
For the second estimator, we try Jensen with $Y=\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}_{[k,+\infty)}(X_i)$ and $$f(y)=1-\frac{\log(y)}{k}.$$ Then $$f''(y)=-\frac{1}{yk}<0,$$ and so we have that
$$E[\hat{p}_2]=E[f(Y)]<f(E[Y])=p.$$
I think this argument shows that $\hat{p}_1$ and $\hat{p}_2$ are biased. I am not sure how the law of large numbers applies to the random variable $Y=\frac{1}{n}\sum_{i=1}^{n}X_i^2$, since it is not exactly an average. Perhaps someone could explain this to me.
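As a numerical sanity check of the Jensen argument, here is a simulation sketch for $\hat{p}_1$ only (it assumes NumPy's geometric convention with support $\{1,2,\dots\}$, which matches $E[X^2]=\frac{2-p}{p^2}$; the values of $p$, $n$ and the replication count are arbitrary choices):

```python
import numpy as np

# Sketch: numerical check of the bias and consistency of p_hat_1.
# NumPy's geometric distribution has support {1, 2, ...}, which matches
# E[X^2] = (2 - p) / p^2. The values of p, n and reps are arbitrary.
rng = np.random.default_rng(0)
p, reps = 0.3, 20_000

def p_hat_1(x):
    # The moment estimator built from (1/n) * sum_i X_i^2.
    m2 = np.mean(x.astype(float) ** 2)
    return (-1.0 + np.sqrt(1.0 + 8.0 * m2)) / (2.0 * m2)

# Bias: at small n, the Jensen gap E[f(Y)] > f(E[Y]) = p is visible.
small = np.array([p_hat_1(rng.geometric(p, size=10)) for _ in range(reps)])
print(small.mean())        # noticeably above p = 0.3

# Consistency: a single large sample drives p_hat_1 close to p.
big = p_hat_1(rng.geometric(p, size=200_000))
print(big)                 # close to p = 0.3
```

I leave $\hat{p}_2$ out of the sketch because its indicator average can be zero for small $n$, making the logarithm undefined.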
probability-theory statistics statistical-inference probability-limit-theorems
Both are consistent by the law of large numbers. The second one is biased by Jensen's inequality. Probably the first one as well, also by convexity...
– Did
Dec 31 '18 at 22:33
Hmmm... All these arguments were already mentioned to you à propos your previous recent question. Why are you not trying to apply them in the present context?
– Did
Dec 31 '18 at 22:39
Hey, I made some edits. I hope my question makes more sense now.
– model_checker
Jan 1 at 0:33
1. In the log case, your $f''$ is wrong. 2. The random variable $Y=\frac1n\sum X_k^2$ is exactly an average, hence the LLN applies perfectly.
– Did
Jan 1 at 11:35
edited Jan 1 at 0:33
model_checker
asked Dec 31 '18 at 22:26
1 Answer
For the first estimator we can use the strong law of large numbers: since the $(X_i)_{i\geq 1}$ are i.i.d., it follows that $\sum_1^n X_i/n\stackrel{\text{a.s.}}{\to} EX=1/p$, whence
$$
\hat{p}=\frac{1}{\sum_1^n X_i/n}\stackrel{\text{a.s.}}{\to} \frac{1}{1/p}=p
$$
as $n\to \infty$. So the first estimator is strongly consistent.
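As an illustration, the SLLN argument can be seen numerically (a simulation sketch, not part of the argument; it assumes NumPy's geometric convention with support $\{1,2,\dots\}$, so $EX=1/p$, and the choices of $p$ and the sample size are arbitrary):

```python
import numpy as np

# Sketch: the SLLN argument in action. NumPy's geometric distribution
# has support {1, 2, ...}, so E[X] = 1/p. The choices p = 0.4 and
# n = 10**6 are arbitrary.
rng = np.random.default_rng(1)
p = 0.4
x = rng.geometric(p, size=1_000_000)
p_hat = 1.0 / x.mean()   # continuous image of the sample mean
print(abs(p_hat - p))    # small, and shrinks as n grows
```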
answered Dec 31 '18 at 22:35
Foobaz John