A social experiment. What is the worst that can happen? [closed]












I am a postdoc and I have been applying for jobs in both industry and academia. My h-index (~7) is good enough for junior faculty.



Alongside my academic CV I have an industry-oriented CV, and I send each out accordingly. I have had a handful of final-stage interviews (faculty/scientist) in academia, but none in industry so far.



I suspect I am being interviewed as the "token diverse female" (I'm Asian), as my area of science is dominated by white men. The whole experience, along with prior job hunts, has led me to suspect that my gender and race may be hindering my earning potential. I believe I have the required qualifications and skills.



I am thinking of reapplying as a white male to the same industry jobs I was rejected from (especially the rejections without an interview), just to see how far I would get. I would do this only for industry jobs, because those CVs don't make it to the chief scientist's table. Maybe I would make a documentary or blog about it if there are significant findings. Now put your imagination to the test: what is the worst that can happen?










postdocs job-search job gender ethnicity

asked Mar 22 at 23:42 by FrostedCentral, edited Mar 24 at 20:32 by kubanczyk















closed as off-topic by Anonymous Physicist, corey979, user3209815, Azor Ahai, Jon Custer Mar 26 at 15:06


This question appears to be off-topic. The users who voted to close gave this specific reason:


  • "This question is not within the scope of this site as defined in the help center. Our scope particularly excludes the content of research, education outside of a university setting, and undergraduate admissions, life, and culture." – user3209815, Azor Ahai, Jon Custer

If this question can be reworded to fit the rules in the help center, please edit the question.

















  • Comments are not for extended discussion; this conversation has been moved to chat.

    – StrongBad
    Mar 25 at 16:01






  • Hiring biases in industry are off-topic.

    – Azor Ahai
    Mar 25 at 17:50





















3 Answers
































I like to think I have a pretty vivid imagination, so the worst thing I can reasonably imagine happening is that you would be seen as trying to perform an experiment with human participants without proper controls, questionable experimental design, possibly a lack of appropriately rigorous analysis (if this isn't your specialty), and lack of ethical review and oversight. From an ethics perspective, you propose to use deception on uninformed participants while possibly obtaining personally identifiable information on them which could lead to them facing serious social consequences if they were identified. With the mob mentality of the viral internet as it is, the consequences could be pretty darn severe - up to loss of job, death threats, actual violence, and more. Sensitive topics require sensitive handling, which is what any functioning IRB and responsible researcher would insist on.



In reporting on the results, even if informally on a blog, you could be seen as offering a pseudo-scientific view (or worse) on a controversial and important topic. If your experiment was performed poorly or your analysis done improperly, you could lend weight to an incorrect view - either providing what could be cited as evidence that discrimination does not exist where it does (thus making it harder for people discriminated against to make changes or be taken seriously), or supporting the view that discrimination does exist where it doesn't (leading to negative consequences for people who are doing nothing wrong and deflecting attention away from more pressing, extant issues). Bad science, even done with good intentions, can easily make the world worse.



If some random blogger or journalist did this, most of us could grudgingly dismiss it as "they don't know any better", or just the world of click-bait, etc. But if you were a qualified scientist who should know better, people might not be so willing to dismiss such activity just because it wasn't intended to be scientific or intended for publication. Most people won't even know the difference between what is and isn't intended to be scientific when done by a scientist, and many who do know the difference might not consider it an excuse.



As Dawn pointed out in a comment, one version of this is called an audit study, and there is a pretty large body of literature that tries to do basically what you are suggesting in a systematic way. I cannot even try to count how many studies of this sort have been published, but I'd be surprised if the number weren't already in the thousands, looking at everything from gender to race to the impact of resume gaps of varying lengths.



Finally, the nature of this sort of field study is that it is hard to do correctly even with everything going your way. No simple analysis method works, even if you did everything right and collected all the data appropriately: there is too much randomness, too much heterogeneity, and too much structure for any simple bit of statistics to give the correct interpretation. In short, unless this is your specialty, it would be trivially easy to get everything else right and still come to exactly the wrong conclusion.



For those not familiar with this kind of statistics, a classic example of how a simple analysis can go wrong is Sex Bias in Graduate Admissions: Data from Berkeley. In a simple aggregate analysis, it looked quite clear that women were being admitted at lower rates than men, and the bias seemed so obvious that the deans of the school were concerned it could be the basis for a lawsuit. It turned out to be a nice example of Simpson's Paradox: the cause of the difference was that women were more likely to apply to departments that were crowded and competitive, and thus harder to get into for everyone, while men were more likely to apply to departments that were less competitive.
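To make the paradox concrete, here is a minimal sketch in Python with made-up admission numbers (hypothetical figures, not the actual Berkeley data); it shows two departments with identical per-gender admission rates that nevertheless produce a large aggregate gap once application patterns differ:

    # Hypothetical admission numbers, not the actual Berkeley data.
    # dept: (women_applied, women_admitted, men_applied, men_admitted)
    departments = {
        "A (competitive)":      (800, 120, 200,  30),   # 15% admitted for both genders
        "B (less competitive)": (200, 120, 800, 480),   # 60% admitted for both genders
    }

    w_app = w_adm = m_app = m_adm = 0
    for dept, (wa, wad, ma, mad) in departments.items():
        print(f"{dept}: women {wad / wa:.0%}, men {mad / ma:.0%}")
        w_app += wa; w_adm += wad; m_app += ma; m_adm += mad

    # Identical per-department rates, yet the aggregate differs sharply because
    # women apply disproportionately to the more competitive department.
    print(f"Aggregate: women {w_adm / w_app:.0%}, men {m_adm / m_app:.0%}")

With these numbers the aggregate shows men admitted at roughly twice the rate of women (51% vs 24%) even though every department treats the genders identically; the same structural effect could just as easily hide, or manufacture, an apparent hiring bias in your job-application data.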



If a similar condition existed in the employment sector, where you were applying to jobs in industry that turned out to vary in their selectivity in a way that you were not considering, this would mess up your analysis, and you cannot easily collect more information that would allow you to fix it. After all, I'm sure you weren't inclined to use random selection in your own employment search!



So, in summation, the worst that I could imagine happening is: you end up doing bad science that reflects badly on you and is not easily excused just because it wasn't intended for publication; you come to the wrong conclusions, and in a way that could hurt innocent people; you casually report information that could be used in a dangerous and damaging way; you end up being identified as the person responsible and it goes viral, so now the most famous thing you'll ever be known for is this thing you didn't intend as a serious study (and which could have gone horribly wrong); and, as a bonus, you could just end up wasting your time and the time of others for no benefit.



And since it's the worst thing that can happen, I suppose you could also end up with a headache. Things can always be made worse by adding a headache.






– answered Mar 23 at 0:55 by BrianH



















  • The question was "what's the worst that can happen", so this answer rightly applies a strict standard. But we don't generally demand scientific rigour from blog posts, journalism, or political advocacy. For example, interviewing an arbitrary sample of people in the streets is not a representative opinion poll, yet it is common practice in TV and newspaper reporting.

    – henning
    Mar 24 at 14:33








  • @henning Those organizations do pay a price for their lack of rigor, in lost reputation as well as in lawsuits.

    – A Simple Algorithm
    Mar 24 at 17:32






  • @A Simple Algorithm Not really.

    – henning
    Mar 24 at 17:41





















You're asking the wrong question.




... what is the worst that can happen?




Others have answered this. But it's the wrong question. What you should really ask is:




What's likely to happen?




You are unlikely to get any statistically significant information, and probably not even a sound hunch about why you got more offers than your friend. You are likely to get into at least a mild amount of trouble with some of the potential workplaces when you retract your application or when they call up your references, former universities, etc. IMHO, your experiment is unlikely to contribute anything, even marginally.



If you believe "affirmative-action"-type hiring is nothing but tokenism and is inappropriate or discriminatory against people like your friend, act against it where you are actually present and have access to information, such as your next workplace, and in wider social contexts (e.g. public awareness-raising campaigns, lobbying elected officials, organizing petitions and demonstrations, etc.).



PS - Please do not construe this answer as an endorsement or criticism of "affirmative-action"-type hiring practices.






– answered Mar 24 at 13:59 by einpoklum (edited Mar 24 at 23:01)

































The worst thing that can happen is that you get the study setup or the statistics wrong, and then publish a flawed interpretation on the blog. The world is already too full of oversimplified blogs and video documentaries on perceived gender biases.



There is a lot of research by people who devote their scientific careers to this, and the statistics of "who studies what" are quite influential. For example, I (as a technical team lead doing a lot of technical interviews) observe that most women who started to study engineering ten years ago (in Central Europe) thought carefully about their choice of study beforehand, while for many men it was "the default option". That alone makes it reasonable to invite them to interviews at a higher rate. I can't quantify this; read the research on it.



In the worst case (I assume you are in a STEM field), presenting a flawed analysis without peer review could cost you job opportunities.






– answered Mar 23 at 13:04 by Sascha