Is there a way to drop duplicated rows based on an unhashable column?
I have a pandas DataFrame df with one column, z, whose values are sets.

I want to drop duplicated rows, where two rows are considered duplicates of one another when they have the same value in column z (which is a set).



import pandas as pd

lnks = [('a', 'b', {'a', 'b'}), ('b', 'c', {'b', 'c'}), ('b', 'a', {'a', 'b'})]
lbls = ['x', 'y', 'z']
df = pd.DataFrame.from_records(lnks, columns=lbls)


Trying to drop duplicated rows based on the values in column z:



df.drop_duplicates(subset='z', keep='first')


I get the error message:



TypeError: unhashable type: 'set'


Is there a way to drop duplicated rows based on an unhashable column?

python pandas dataframe
  • I assume it is a typo, but there isn't actually a duplicate in column z anyway, because one b also has a space: 'b '. – n1k31t4, Mar 2 at 20:04
  • Right, I've made a correction. Thanks. – Fabrice BOUCHAREL, Mar 2 at 20:41
2 Answers
It is true that a set is not hashable (it cannot be used as a key in a hashmap, a.k.a. a dictionary). So what you can do is convert the column to a type that is hashable - I would go for a tuple.



I made a new column that is just the "z" column you had, converted to tuples. Then you can use the same method you tried, on the new column:



In [1]: import pandas as pd
   ...: lnks = [('a', 'b', {'a', 'b'}), ('b', 'c', {'b', 'c'}), ('b', 'a', {'a', 'b'})]
   ...: lbls = ['x', 'y', 'z']
   ...: df = pd.DataFrame.from_records(lnks, columns=lbls)

In [2]: df["z_tuple"] = df.z.apply(lambda x: tuple(x))

In [3]: df.drop_duplicates(subset="z_tuple", keep="first")
Out[3]:
   x  y       z z_tuple
0  a  b  {b, a}  (b, a)
1  b  c  {c, b}  (c, b)


The apply method applies a function to each item in a column and returns the results as a new pandas Series. This lets you assign it back to the original DataFrame as a new column, as I did.
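For instance, a minimal equivalent sketch (assuming the df built above): tuple can be passed to apply directly, and the intermediate result is a plain Series before it is assigned back:

z_tuple = df["z"].apply(tuple)   # same effect as the lambda version
type(z_tuple)                    # pandas.core.series.Series
df["z_tuple"] = z_tuple          # attach it as a new column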



You can then remove the "z_tuple" column if you no longer want it:



In [4]: df.drop("z_tuple", axis=1, inplace=True)

In [5]: df
Out[5]:
   x  y       z
0  a  b  {b, a}
1  b  c  {c, b}
2  b  a  {b, a}
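Note that tuple(x) keeps whatever iteration order the set happens to have. If you would rather use a key that ignores element order by construction, a frozenset is also hashable and can be used the same way - a minimal sketch under the same assumptions (the helper column name z_key is arbitrary):

df["z_key"] = df["z"].apply(frozenset)   # hashable, order-insensitive key
df.drop_duplicates(subset="z_key", keep="first").drop(columns="z_key")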





I have to admit I did not mention the reason why I was trying to drop duplicated rows based on a column containing set values. The reason is that the set {'a', 'b'} is the same as {'b', 'a'}, so two apparently different rows are considered the same with regard to the set column and can be deduplicated... but this is not possible directly, because sets are unhashable (like lists).

Tuples are hashable, but the order of their elements matters, so when I build the tuple for each row I sort it:



import pandas as pd

lnks = [('a', 'b'), ('b', 'c'), ('b', 'a'), ('a', 'd'), ('d', 'e')]
lbls = ['x', 'y']
df = pd.DataFrame.from_records(lnks, columns=lbls)


Building the tuple column (each tuple is sorted):



df['z'] = df.apply(lambda d: tuple(sorted([d['x'], d['y']])), axis=1)


Dropping duplicated rows (keeping the first occurrence) using the new tuple column:



df.drop_duplicates(subset="z", keep="first", inplace=True)
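With the sample data above, this keeps the first occurrence of each sorted pair, so only the ('b', 'a') row is dropped. A quick check (assuming the snippet was run exactly as written):

print(df)

   x  y       z
0  a  b  (a, b)
1  b  c  (b, c)
3  a  d  (a, d)
4  d  e  (d, e)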




