Why does the function VarianceMLE give a different result from Variance?
Why does the function VarianceMLE give a different result from Variance? And what is its equivalent in Mathematica 11.3?

Please see the picture above: the MLE result is about 621 and the other is about 665.

probability-or-statistics

asked 21 hours ago by Facet (new contributor)
– m_goldberg (21 hours ago): In Mathematica 11.3, << Statistics` produces an error message, and Variance@data // N gives 665.524.
– Sjoerd Smit (19 hours ago): I'm going to guess that VarianceMLE is the maximum likelihood variance estimator rather than the unbiased one (assuming normally distributed data). The difference between the two is that Variance divides by N - 1 (N == Length[data]) while the MLE estimator divides by N.
– Sjoerd Smit (19 hours ago): And for future reference: please post copyable code in your question rather than a screenshot. This makes it much easier for someone else to copy your code and try things out.
– Szabolcs (17 hours ago): Which book did you see this in? It looks like a scan.
2 Answers
Answer by Sjoerd Smit (19 hours ago):

I just checked my guess in my comment, and I was right: VarianceMLE is the maximum likelihood variance estimator (see, e.g., here).
data = {34, 56, 28, 62, 32, 90, 20, 10, 12, 35, 63, 78, 12, 25, 68};
Variance[data]

(* 13976/21 *)

myVariance[lst_List] := Total[(lst - Mean[lst])^2]/(Length[lst] - 1);
myVarianceMLE[lst_List] := Total[(lst - Mean[lst])^2]/Length[lst];
myVariance[data]
myVarianceMLE[data]

(* 13976/21 *)
(* 27952/45 *)
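As a quick numeric check (not part of the original answer, but using the same data and only built-in functions), the two exact results above are the 665 and 621 values mentioned in the question, and the MLE estimate is simply Variance rescaled by (n - 1)/n:

data = {34, 56, 28, 62, 32, 90, 20, 10, 12, 35, 63, 78, 12, 25, 68};
n = Length[data];
N[{Variance[data], Variance[data] (n - 1)/n}]

(* {665.524, 621.156} *)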
– Facet (14 hours ago): Thank you for your kind help, I will remember to post code next time.
Answer by Eric Towers (13 hours ago):

VarianceMLE computes a biased, maximum likelihood estimate of the population variance, while Variance computes an unbiased estimate. It can be shown that VarianceMLE underestimates the population variance on average.
Let $\{y_i : 1 \leq i \leq n\}$ be a sample of $n$ values from a population. The variance (central second moment) of the sample is
$$ \sigma_y^2 = \frac{1}{n} \sum_{i=1}^n (y_i - \bar{y})^2 \text{,} $$
where $\bar{y} = \frac{1}{n} \sum_{i=1}^n y_i$ is the sample mean. This $\sigma_y^2$ is what VarianceMLE computes.
If $\sigma^2$ is the population variance, one can show with some work that the expected value of $\sigma_y^2$ is $\frac{n-1}{n} \sigma^2$, so the sample variance is a biased estimator of the population variance. We can make it unbiased via
$$ s^2 = \frac{n}{n-1} \sigma_y^2 = \frac{1}{n-1} \sum_{i=1}^n (y_i - \bar{y})^2 \text{.} $$
This $s^2$ is what Variance computes. From the documentation (in the Details):

"Variance[list] is equivalent to Total[(list - Mean[list])^2]/(Length[list] - 1) for real-valued data."
The very sparse documentation for VarianceMLE indicates that it is implemented in terms of Variance:

"VarianceMLE[data_] := Variance[data] (Length[data] - 1)/Length[data]"