Select the two best classifiers using F1-score, recall, and precision
I have three classifiers that classify the same dataset, with these results:

classifier A:
              precision  recall  f1-score
micro avg          0.36    0.36      0.36
macro avg          0.38    0.43      0.36
weighted avg       0.36    0.36      0.32

classifier B:
              precision  recall  f1-score
micro avg          0.55    0.55      0.55
macro avg          0.60    0.60      0.56
weighted avg       0.61    0.55      0.53

classifier C:
              precision  recall  f1-score
micro avg          0.34    0.34      0.34
macro avg          0.36    0.38      0.32
weighted avg       0.39    0.34      0.32
I want to select the two best of them, and I know the F1-score is a suitable metric for comparing classifiers because it balances precision and recall.
So I first select classifier B, since it has the best F1-score. Next, A and C have the same weighted-average F1-score (0.32 for both),
so how can I choose between them?
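For reference, the averaged scores in the tables above can be reproduced from raw predictions. Below is a minimal pure-Python sketch (with hypothetical labels; scikit-learn's classification_report computes the same per-class and macro-averaged numbers) of how macro-averaged precision, recall, and F1 are derived:

```python
# Sketch: macro-averaged precision/recall/F1 from predictions.
# Labels below are hypothetical, purely for illustration.

def per_class_counts(y_true, y_pred, label):
    # True positives, false positives, false negatives for one class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    return tp, fp, fn

def macro_scores(y_true, y_pred):
    # Average the per-class scores with equal weight per class ("macro avg").
    labels = sorted(set(y_true) | set(y_pred))
    precs, recs, f1s = [], [], []
    for lab in labels:
        tp, fp, fn = per_class_counts(y_true, y_pred, lab)
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        precs.append(p); recs.append(r); f1s.append(f)
    n = len(labels)
    return sum(precs) / n, sum(recs) / n, sum(f1s) / n

y_true = ["a", "a", "b", "b", "c", "c"]
y_pred = ["a", "b", "b", "b", "c", "a"]
p, r, f1 = macro_scores(y_true, y_pred)
print(round(p, 2), round(r, 2), round(f1, 2))  # → 0.72 0.67 0.66
```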
machine-learning python classification
Why is accuracy not the best measure for assessing classification models? Everything in that thread applies equally to F1, recall and precision. See also Classification probability threshold.
– Stephan Kolassa
Dec 28 '18 at 13:31
asked Dec 28 '18 at 11:47 by Saha
2 Answers
The F1-score combines precision and recall into a single figure. Since both are fairly similar for A and C, their F1-scores are similar too.
Your choice depends on which is less harmful in your application: false positives or false negatives.
I recommend reading the third chapter of "Deep Learning: From Basics to Practice", volume 1, by Andrew Glassner. It describes the three concepts (precision, recall, and F1-score) in a very illustrative way.
answered Dec 28 '18 at 12:03 by David
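A small sketch of the point above: because F1 is the harmonic mean of precision and recall, two classifiers can share an F1-score while making opposite kinds of errors (the numbers here are hypothetical):

```python
# F1 is the harmonic mean of precision and recall; swapping the two
# values leaves it unchanged, so the error trade-off is invisible in F1.

def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

precision_heavy = f1(0.80, 0.40)  # few false positives, many false negatives
recall_heavy    = f1(0.40, 0.80)  # many false positives, few false negatives

print(round(precision_heavy, 3), round(recall_heavy, 3))  # → 0.533 0.533
```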
Are they not comparable if we don't have the FP or FN counts?
– Saha
Dec 28 '18 at 12:07
You do not need direct access to the FP or FN counts. Precision and recall differ only in the term of their formulas that involves FP or FN (precision = TP/(TP+FP), recall = TP/(TP+FN)), so the reported values already reflect that relationship. What Nga Dao and I mean is that your choice depends on the impact of false positives versus false negatives in your system; Nga Dao's example is a nice illustration.
– David
Dec 29 '18 at 8:16
It depends on your application. Suppose you design a classifier to predict whether a person has cancer. If you want to state confidently that a person has cancer, you probably prefer a classifier with high precision.
On the other hand, if you want to make sure that everyone with cancer is caught, you probably prefer a classifier with high recall.
answered Dec 28 '18 at 13:38 by Nga Dao
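When one error type is costlier, as in the cancer example above, a weighted F-beta score (beta > 1 emphasises recall) can break the tie between A and C instead of plain F1. A sketch using the macro-average numbers from the question:

```python
# F-beta generalises F1: beta > 1 weights recall more heavily than precision.

def fbeta(precision, recall, beta):
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Macro-average precision/recall from the question's tables:
# A is recall-heavier, C is precision-heavier.
a_score = fbeta(precision=0.38, recall=0.43, beta=2.0)
c_score = fbeta(precision=0.39, recall=0.34, beta=2.0)
print(a_score > c_score)  # → True: recall-heavy A wins when recall matters more
```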
Your Answer
StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "557"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdatascience.stackexchange.com%2fquestions%2f43240%2fselect-two-best-classifier-using-f1-score-recall-and-precision%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown