Pandas DataFrame apply different functions to multiple columns efficiently
I am trying to organize a dataframe by splitting the 'text' column on a pipe delimiter, as in: transcript_dataframe['text'].str.split(pat = delimiter, expand = True). However, when running this over a dataframe with a few million rows the process is extremely slow. Is there a faster way to do this, and is it possible to wrap a tqdm progress bar around this method to see the progress?
Further, since I am going through a few million rows, you can see that I use apply about four times (which means I pass over those few million rows four times). Is there a way to do all this processing in one pass? The output I want is a dataframe with:
RecordID (string, removed BOM)
Content (string with blank or pipe characters removed)
call_time_seconds (end time - start time, after converting to float; np.nan on error)
count_calls (just 1 throughout)
Lastly, I want to remove all RecordIDs that start with 'M', which I did with this line:
transcripts_df = transcripts_df[transcripts_df['RecordID'].progress_apply(lambda x: bool(re.match('M', str(x)))) != True]
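Since re.match only anchors at the start of the string, that filter can be expressed as a vectorized startswith test instead of a row-by-row apply. A minimal sketch with made-up RecordID values:

```python
import pandas as pd

transcripts_df = pd.DataFrame({'RecordID': ['M123', 'A456', 'B789']})

# re.match('M', x) only matches at the start of the string, so a vectorized
# startswith test selects the same rows without a Python-level loop:
mask = transcripts_df['RecordID'].astype(str).str.startswith('M')
transcripts_df = transcripts_df[~mask]
```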
The following is my code:
def CleanTranscripts(transcript_dataframe):
    """
    Cleans up the transcript dataframe by splitting the 'text' column on a pipe delimiter.
    Raw data layout: RecordID | _(unused) | StartTime | EndTime | Content | _(unused) | _(unused)
    The split data is placed into the following columns:
    * RecordID : byte order mark (BOM, '\ufeff') removed
    * Content : blank and '|' characters removed
    * call_time_seconds : EndTime in seconds minus StartTime in seconds
    * count_calls : 1 for each unique call (identified by RecordID), used in a future merge with the prospect table
    Arguments:
    * transcript_dataframe (pandas.DataFrame) -- the raw transcript dataframe acquired from ImportTranscripts()
    Returns:
    * transcripts_df (pandas.DataFrame) -- cleaned transcript dataframe with columns of interest, consolidated per call
    """
    tqdm.pandas(desc="Cleaning Transcripts")
    delimiter = r'\|'  # escaped pipe: str.split can treat the pattern as a regex, where a bare '|' means alternation
    transcripts_df = transcript_dataframe['text'].str.split(pat=delimiter, expand=True)  # expand pipe-separated fields into columns
    transcripts_df.columns = ['RecordID', 1, 'start_time', 'end_time', 'Content', 7, 8]  # rename columns
    transcripts_df = transcripts_df[~transcripts_df['RecordID'].progress_apply(lambda x: bool(re.match('M', str(x))))]  # drop rows whose RecordID starts with 'M'
    transcripts_df['RecordID'] = transcripts_df['RecordID'].progress_apply(lambda x: re.sub('\ufeff', '', str(x)[0:8]))  # remove BOM character, take substring 0:8
    transcripts_df['Content'] = transcripts_df['Content'].progress_apply(lambda x: re.sub(r'[ |]', '', str(x)))  # remove blank and pipe characters
    transcripts_df.loc[:, 'call_time_seconds'] = pd.to_numeric(transcripts_df['end_time'], errors='coerce') - pd.to_numeric(transcripts_df['start_time'], errors='coerce')  # end time minus start time in seconds, NaN on error
    transcripts_df = transcripts_df.groupby('RecordID').agg({'Content': 'sum', 'call_time_seconds': 'max'}).reset_index()  # per RecordID: concatenate contents, take max call time
    transcripts_df['count_calls'] = 1  # each RecordID is one call, used for future merging
    return transcripts_df  # return consolidated and cleaned transcripts dataframe
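For comparison, the four apply passes above can be collapsed into vectorized pandas string operations. This is a sketch, not a drop-in replacement: it assumes every 'text' row splits into exactly seven pipe-separated fields, pandas >= 1.4 (for the regex parameter of str.split), and it drops the tqdm progress bars since vectorized calls finish in a single pass:

```python
import pandas as pd

def clean_transcripts_vectorized(transcript_dataframe):
    # Split on a literal pipe; regex=False avoids treating '|' as regex alternation.
    parts = transcript_dataframe['text'].str.split('|', expand=True, regex=False)
    parts.columns = ['RecordID', 1, 'start_time', 'end_time', 'Content', 7, 8]

    # Drop rows whose RecordID starts with 'M' (vectorized, no apply).
    parts = parts[~parts['RecordID'].astype(str).str.startswith('M')].copy()

    # Strip the BOM character, then truncate to 8 characters, in one vectorized pass.
    parts['RecordID'] = (parts['RecordID'].astype(str)
                         .str.replace('\ufeff', '', regex=False)
                         .str[:8])

    # Remove spaces and stray pipes from the content.
    parts['Content'] = parts['Content'].astype(str).str.replace(r'[ |]', '', regex=True)

    # End time minus start time in seconds; NaN where conversion fails.
    parts['call_time_seconds'] = (pd.to_numeric(parts['end_time'], errors='coerce')
                                  - pd.to_numeric(parts['start_time'], errors='coerce'))

    # One row per call: concatenate contents, take the longest call time.
    out = (parts.groupby('RecordID')
                .agg({'Content': 'sum', 'call_time_seconds': 'max'})
                .reset_index())
    out['count_calls'] = 1
    return out
```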
Appreciate any help, thank you.
python pandas pandas-groupby pandas-apply
Hints: regex is usually slower than regular Python str methods. Pandas str methods with apply are slower than using built-in str in a list comprehension. We can't help a huge amount until we see some input data and examples of what output you need.
– jpp, Nov 14 at 9:32
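The comment's hint can be illustrated on the BOM-stripping step. Both variants below produce the same result; the list comprehension uses only built-in str methods and avoids per-row pandas overhead (a sketch with made-up data; actual timings depend on the dataset):

```python
import pandas as pd

s = pd.Series(['\ufeffAB123456', 'CD789012'])

# apply runs a Python function once per row through pandas machinery:
via_apply = s.apply(lambda x: str(x).replace('\ufeff', '')[:8])

# a list comprehension with built-in str methods does the same work
# with less per-row overhead:
via_listcomp = pd.Series([str(x).replace('\ufeff', '')[:8] for x in s],
                         index=s.index)
```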
asked Nov 14 at 5:46 by Fake, edited Nov 14 at 6:02 by Aqueous Carlos