Pandas DataFrame apply different functions to multiple columns efficiently











I am trying to organize a dataframe by splitting the 'text' column on a pipe delimiter, as can be seen by: transcript_dataframe['text'].str.split(pat = delimiter, expand = True). However, when running this over a dataframe with a few million rows the process is extremely slow. I was wondering if there is a faster way to do this, and if it is possible to wrap a tqdm progress bar around this method to see the progress.
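
For the split itself, a plain-Python list comprehension is usually faster than going through .str.split, and tqdm can wrap the iterator directly (a minimal sketch, assuming each 'text' value is a pipe-delimited string):

import pandas as pd
from tqdm import tqdm

rows = [str(s).split('|') for s in tqdm(transcript_dataframe['text'], desc='Splitting')]
split_df = pd.DataFrame(rows, index=transcript_dataframe.index)  # ragged rows are padded with None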



Further, since I am going through a few million rows, you can see that I use apply about four times (which means I pass over those few million rows four times). Is there a way to do all this processing in one pass (see the single-pass sketch after the list below)? What I want as output is a dataframe with:



RecordID (string, removed BOM)

Content (string with blank or pipe characters removed)

call_time_seconds (end time - start time, after converting to float, np.nan if error)

count_calls (just 1 throughout)
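
For reference, a rough single-pass sketch of the whole pipeline (assumptions: the field layout described in the docstring below, and that a float() failure should become np.nan as with errors='coerce'):

import numpy as np
import pandas as pd
from tqdm import tqdm

def to_seconds(value):
    # mimic pd.to_numeric(..., errors='coerce') for a single value
    try:
        return float(value)
    except (TypeError, ValueError):
        return np.nan

records = []
for text in tqdm(transcript_dataframe['text'], desc='Cleaning Transcripts'):
    fields = str(text).split('|')
    if len(fields) < 5:
        continue  # assumption: malformed rows can be dropped
    raw_id = fields[0]
    if raw_id.startswith('M'):
        continue  # same start-anchored behaviour as re.match('M', ...)
    record_id = raw_id[:8].replace('\ufeff', '')  # take substring 0:8, strip BOM
    content = fields[4].replace(' ', '')  # pipes were already consumed by the split
    call_time = to_seconds(fields[3]) - to_seconds(fields[2])
    records.append((record_id, content, call_time))

transcripts_df = pd.DataFrame(records, columns=['RecordID', 'Content', 'call_time_seconds'])
transcripts_df = transcripts_df.groupby('RecordID').agg({'Content': 'sum', 'call_time_seconds': 'max'}).reset_index()
transcripts_df['count_calls'] = 1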


Lastly, I want to remove all RecordIDs that have 'M' inside, which I did with this line:



transcripts_df = transcripts_df[transcripts_df['RecordID'].progress_apply(lambda x: bool(re.match('M', str(x)))) != True]
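
Note that re.match only anchors at the start of the string, so this line actually drops RecordIDs that begin with 'M', not ones containing 'M' anywhere. Vectorized equivalents for both readings (a sketch):

transcripts_df = transcripts_df[~transcripts_df['RecordID'].astype(str).str.startswith('M')]  # same behaviour as the re.match line
transcripts_df = transcripts_df[~transcripts_df['RecordID'].astype(str).str.contains('M', regex=False)]  # drop IDs with 'M' anywhere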


The following is my code:



def CleanTranscripts(transcript_dataframe):
"""
Cleans up transcript dataframe by splitting 'text' column with delimiter
Data is as follows: RecordID | _(unused) | StartTime | EndTime | Content | _(unused) | _(unused)
Split data is put into the following columns:
* RecordID : removed byte order mark (BOM, \ufeff)
* Content : removed blank or | characters
* call_time_seconds : uses EndTime converted into seconds - StartTime converted into seconds
* count_calls : each unique call identified by RecordID will be 1, used in future dataframe merge with prospect table

Arguments:
* transcript_dataframe (pandas.DataFrame) -- the raw transcript dataframe acquired from ImportTranscripts() method

Returns:
* transcripts_df (pandas.DataFrame) -- cleaned up transcript dataframe with columns of interest, consolidated on a per call level
"""
tqdm.pandas(desc="Cleaning Transcripts")
delimiter = r'\|' # delimiter defined as a regex-escaped pipe character

transcripts_df = transcript_dataframe['text'].str.split(pat = delimiter, expand = True) # expand the 'text' column into one column per pipe-delimited field
transcripts_df.columns = ['RecordID', 1, 'start_time', 'end_time', 'Content', 7, 8] # rename column names
transcripts_df = transcripts_df[transcripts_df['RecordID'].progress_apply(lambda x: bool(re.match('M', str(x)))) != True] # remove rows with RecordID that has 'M'
transcripts_df['RecordID'] = transcripts_df['RecordID'].progress_apply(lambda x: re.sub('\ufeff', '', str(x)[0:8])) # remove BOM, take substring 0:8
transcripts_df['Content'] = transcripts_df['Content'].progress_apply(lambda x: re.sub(r'( )|(\|)', '', str(x))) # remove blank or pipe characters
transcripts_df.loc[:,'call_time_seconds'] = pd.to_numeric(transcripts_df['end_time'], errors='coerce') - pd.to_numeric(transcripts_df['start_time'], errors='coerce') # convert end time into seconds, subtract start time converted into seconds
transcripts_df = transcripts_df.groupby('RecordID').agg({'Content': 'sum', 'call_time_seconds': 'max'}).reset_index() # group records by RecordID, aggregate rows with same RecordID by summing the contents, and taking max call time
transcripts_df['count_calls'] = 1 # assign 1 to count_calls columns (each RecordID is one call, used for future merging)
return transcripts_df # return consolidated and cleaned transcripts dataframe
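
For reference, the three progress_apply passes in the middle could be replaced with vectorized .str methods, avoiding a Python-level function call per row (a sketch, not benchmarked here; the regex=False argument to .str.replace needs pandas 0.23+):

transcripts_df = transcripts_df[~transcripts_df['RecordID'].astype(str).str.startswith('M')]  # same start-anchored filter as re.match
transcripts_df['RecordID'] = transcripts_df['RecordID'].astype(str).str[:8].str.replace('\ufeff', '', regex=False)  # strip BOM from the 8-char prefix
transcripts_df['Content'] = transcripts_df['Content'].astype(str).str.replace(' ', '', regex=False).str.replace('|', '', regex=False)  # drop blanks and pipes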


Appreciate any help, thank you.











python pandas pandas-groupby pandas-apply







asked Nov 14 at 5:46 by Fake
edited Nov 14 at 6:02 by Aqueous Carlos

  • Hints: regex is usually slower than regular Python str methods. Pandas str methods with apply is slower than using built-in str in a list comprehension. We can't help a huge amount until we see some input data and examples of what output you need.
    – jpp
    Nov 14 at 9:32
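
    For instance, the hint applied to the Content cleanup (a rough illustration, not benchmarked):

    transcripts_df['Content'] = [str(x).replace(' ', '').replace('|', '') for x in transcripts_df['Content']]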



















