Scale dummy variables in logistic regression

Let's say I have a data set that mixes categorical and continuous features, and I would like to study the relative importance of each feature in predicting a certain class.



For that, I am using logistic regression with an L1 penalty, because I want a sparse solution that maximizes the ROC AUC.



Before training the logistic regression, I first created dummy variables for my categorical features and then centered and scaled all of my features, including the dummy variables I had created.



Is it valid to center and scale the dummy variables? I ask because I want to compare the coefficients of the logistic regression trained on this data set in order to rank the features.
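For concreteness, here is a minimal sketch of the setup I describe above (assuming scikit-learn and pandas; the data frame and column names are made up for illustration):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

# Tiny made-up frame standing in for my data: "color" is categorical,
# "age" and "income" are continuous.
df = pd.DataFrame({"age":    [23, 45, 31, 52, 38, 60],
                   "income": [40., 85., 52., 91., 33., 77.],
                   "color":  ["red", "blue", "red", "green", "blue", "green"]})
y = [0, 1, 0, 1, 0, 1]

# Dummy-code the categorical feature, then center and scale *all* columns,
# including the dummies -- this is the step I am unsure about.
X = pd.get_dummies(df, columns=["color"], drop_first=True)
X_scaled = StandardScaler().fit_transform(X)

# L1-penalized logistic regression, choosing the penalty strength by
# cross-validated ROC AUC, then inspecting the coefficients.
clf = LogisticRegressionCV(penalty="l1", solver="saga", scoring="roc_auc",
                           cv=3, max_iter=5000).fit(X_scaled, y)
print(dict(zip(X.columns, clf.coef_[0])))
```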



Thanks for the help!

Tags: logistic, classification, importance

asked Dec 12 '18 at 13:15 by shzt

1 Answer

          AUROC ($c$-index; concordance probability, Somers' $D_{xy}$ rank correlation) is not a valid objective for optimization. It is fooled by a terribly miscalibrated model and is inefficient. Maximum likelihood estimation exists for a reason: optimizing the log likelihood function results in optimality properties of the estimators.



          And don't scale indicator variables. This adds confusion to the interpretation of coefficients.
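For illustration, a minimal sketch of standardizing only the continuous columns while passing the indicators through untouched (assuming scikit-learn; the column names are hypothetical, not taken from the question):

```python
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

continuous = ["age", "income"]   # hypothetical column names
categorical = ["color"]

# Standardize only the continuous features; the indicator columns
# produced by the encoder stay as plain 0/1.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), continuous),
    ("cat", OneHotEncoder(drop="first"), categorical),
])

clf = make_pipeline(preprocess,
                    LogisticRegression(penalty="l1", solver="saga",
                                       max_iter=5000))
# clf.fit(df[continuous + categorical], y)   # fit on the raw data frame
```

Left unscaled, each indicator's coefficient keeps its usual interpretation: the change in log odds for that category relative to the reference level, holding the other predictors fixed.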



          Don't rank features unless you accompany this with bootstrap confidence intervals for the ranks. You'll find that variable importance measures are volatile. The data do not have sufficient information to tell you which features of the data are most important. This is even more true when predictors are correlated.
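A rough sketch of one way to obtain such intervals (assuming scikit-learn and NumPy; this is only an illustration of the idea, not the answerer's own software or workflow):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bootstrap_rank_ci(X, y, n_boot=500, alpha=0.05, random_state=0):
    """Percentile intervals for the rank of each |coefficient| across
    bootstrap refits (rank 1 = largest absolute coefficient)."""
    rng = np.random.default_rng(random_state)
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    n, p = X.shape
    all_ranks = []
    while len(all_ranks) < n_boot:
        idx = rng.integers(0, n, n)              # resample rows with replacement
        if len(np.unique(y[idx])) < 2:           # skip degenerate resamples
            continue
        coefs = np.abs(LogisticRegression(penalty="l1", solver="saga",
                                          max_iter=5000)
                       .fit(X[idx], y[idx]).coef_[0])
        # rank 1 = largest |coef|; ties (e.g. coefficients zeroed out by
        # the L1 penalty) are broken arbitrarily
        all_ranks.append(p - np.argsort(np.argsort(coefs)))
    ranks = np.asarray(all_ranks)
    lo, hi = np.percentile(ranks, [100 * alpha / 2, 100 * (1 - alpha / 2)],
                           axis=0)
    return lo, hi
```

If the intervals for the top-ranked features overlap heavily with those further down the list, the ranking is not telling you much.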






answered Dec 12 '18 at 14:33 by Frank Harrell

• Could you possibly talk a bit more about this part: "The data do not have sufficient information to tell you which features of the data are most important."? I always thought that when two variables are z-transformed, one can say that a change of one standard deviation in x leads to a change of b(x) standard deviations in y. Therefore I would interpret the variable with the larger beta as more influential on y than the others. It would be really helpful for me if you could add a few words and/or sources. Thanks in advance.
  – TinglTanglBob, Dec 12 '18 at 16:02

• Thank you Frank for your help. Noted: I will not scale the indicator variables and will calculate the bootstrap confidence intervals. Why, then, is AUROC so commonly used if it is inefficient? Thanks again!
  – shzt, Dec 13 '18 at 9:35

• A good question. Lots of bad ideas put into use in the world. I think people find the concordance probability to be the most interpretable measure of predictive discrimination, and think it should be favored over proper scoring rules for that reason.
  – Frank Harrell, Dec 13 '18 at 12:40










