How do I train the Convolutional Neural Network with negative and positive elements as the input of the first...
I am just curious why I have to scale the testing set using the testing set's own statistics, and not the training set's, when I'm training a model such as a CNN.
Or am I wrong, and should I still scale it using the training set?
Also, can I train a CNN on a dataset that contains both positive and negative elements as the input to the first layer of the network?
Any answers with references will be really appreciated.
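For example, would something like this minimal, made-up sketch be valid, where the array fed to the first convolutional layer contains both negative and positive values?

    import numpy as np
    import tensorflow as tf

    # Made-up data: 1000 samples of 32x32 single-channel input,
    # standardized so the values are both negative and positive.
    x_train = np.random.randn(1000, 32, 32, 1).astype("float32")
    y_train = np.random.randint(0, 10, size=1000)

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 1)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, batch_size=32)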
python tensorflow conv-neural-network
you scale both train and test data
– Mitch Wheat
Nov 21 '18 at 3:06
I know that. My question is: why should I scale the testing data using the testing data's own statistics instead of the training data's?
– Protocol313
Nov 21 '18 at 3:12
asked Nov 21 '18 at 3:00 by Protocol313 (edited Nov 21 '18 at 3:15)
3 Answers
Only because I cannot comment (reputation barrier), I am writing this here as an answer. How you scale data depends on the requirement as well as on the data you receive. Test data gets scaled with the test data only, because the test data does not have the target variable (one feature fewer than the training data). If we scaled our training data together with new test data, our model would not be able to correlate with any target variable and would thus fail to learn. So the key difference is the existence of the target variable.
answered Nov 21 '18 at 3:23 – Random Nerd
Thanks for your answer. I knew the basic concept behind it, but I have not found any references yet. Do you have a reference for this answer? Also, do you have an answer to the second question? Thanks
– Protocol313
Nov 21 '18 at 3:37
A few worthy references that I could find are Article1, Article2 and Article3 (kindly search for retraining ML models for more references). For the second one, kindly add detail to your question about what you mean by positive/negative elements (and how they are being passed into a CNN).
– Random Nerd
Nov 21 '18 at 4:27
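A minimal sketch of the approach this answer describes, with made-up DataFrame and column names: each split's feature columns are scaled with that split's own statistics, and the target column (present only in the training data) is never scaled.

    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler

    # Made-up data: the training frame carries a 'target' column, the test frame does not.
    train_df = pd.DataFrame(np.random.randn(100, 4), columns=["f1", "f2", "f3", "target"])
    test_df = pd.DataFrame(np.random.randn(20, 3), columns=["f1", "f2", "f3"])

    feature_cols = ["f1", "f2", "f3"]

    # Each split is scaled with its own statistics, as this answer describes.
    train_X = MinMaxScaler().fit_transform(train_df[feature_cols])
    test_X = MinMaxScaler().fit_transform(test_df[feature_cols])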
We usually have 3 types of datasets for getting a model trained:
- Training Dataset
- Validation Dataset
- Test Dataset
Training Dataset
This should be an evenly distributed dataset that covers all varieties of the data. If you train for too many epochs, the model gets used to the training dataset and will only give proper predictions on the training dataset; this is called overfitting. The only way to keep a check on overfitting is to have other datasets that the model has never been trained on.
Validation Dataset
This can be used to fine-tune the model hyperparameters.
Test Dataset
This is the dataset that the model has not been trained on and that has never been part of deciding the hyperparameters; it gives the real picture of how the model is performing.
answered Nov 21 '18 at 3:46 – Jeevan
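A rough sketch of producing those three splits with scikit-learn's train_test_split (the 70/15/15 proportions and array shapes are only an example):

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Made-up dataset: 1000 samples, 20 features, binary labels.
    X = np.random.randn(1000, 20)
    y = np.random.randint(0, 2, size=1000)

    # First hold out the test set, then split the remainder into train and validation.
    X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.15, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(
        X_rest, y_rest, test_size=0.15 / 0.85, random_state=0)  # ~15% of the original data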
If scaling or normalization is used, the testing set should be transformed with the same parameters that were fitted during training.
A good answer that covers this: https://datascience.stackexchange.com/questions/27615/should-we-apply-normalization-to-test-data-as-well
Also, some models tend to require normalization and others do not.
Neural network architectures are usually quite robust and might not need normalization.
answered Nov 21 '18 at 12:38 – Pedro Torres
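A minimal sketch of reusing the training-set parameters, using scikit-learn's StandardScaler with made-up arrays:

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    # Made-up feature matrices for the train and test splits.
    X_train = np.random.rand(800, 10)
    X_test = np.random.rand(200, 10)

    scaler = StandardScaler()
    X_train_scaled = scaler.fit_transform(X_train)  # mean/std estimated on the training set only
    X_test_scaled = scaler.transform(X_test)        # the same parameters applied to the test set

Fitting the scaler only on the training data keeps test-set statistics from leaking into training, and the test set is then transformed exactly the way the model saw its inputs during training.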