Aggregate data for the last seven days for each date
I have a dataset:
app id geo date count
90 NO 2018-09-04 27
66 HK 2018-09-03 2
66 HK 2018-09-02 4
80 QA 2018-04-22 5
85 MA 2018-04-20 1
80 BR 2018-04-19 68
I am trying to generate a field that aggregates the data for each date over the last seven days. My dataset should look like this:
app id geo date count count_last_7_days
90 NO 2018-09-04 27 33
66 HK 2018-09-03 2 6
66 HK 2018-09-02 4 4
80 QA 2018-04-22 5 74
85 MA 2018-04-20 1 69
80 BR 2018-04-19 68 68
I am trying this code:
df['date'] = pd.to_datetime(df['date']) - pd.to_timedelta(7, unit='d')
df = df.groupby(['geo', 'app_id', pd.Grouper(key='date', freq='W')])['count'].sum().reset_index().sort_values('date')
But even though I use Grouper with weekly frequency (freq='W'), it treats Sunday as the start of the week, so I don't get a trailing 7-day window for non-Sunday entries.
Please suggest how I can calculate that field.
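For reference, one way to produce the desired column without Grouper is a time-based rolling sum. This is a sketch, not from the original post; it rebuilds the sample data above and assumes, as the expected output suggests, that the 7-day sum runs over all rows regardless of geo or app_id:

```python
import pandas as pd

# Sample data from the question.
df = pd.DataFrame({
    'app_id': [90, 66, 66, 80, 85, 80],
    'geo':    ['NO', 'HK', 'HK', 'QA', 'MA', 'BR'],
    'date':   ['2018-09-04', '2018-09-03', '2018-09-02',
               '2018-04-22', '2018-04-20', '2018-04-19'],
    'count':  [27, 2, 4, 5, 1, 68],
})
df['date'] = pd.to_datetime(df['date'])

# A time-based rolling window requires the date column to be sorted
# ascending; a '7d' window covers the row's own day plus the six days
# before it.
rolled = df.sort_values('date').rolling('7d', on='date')['count'].sum()

# Assignment aligns on the index, so the original row order is preserved.
df['count_last_7_days'] = rolled.astype(int)
print(df['count_last_7_days'].tolist())  # [33, 6, 4, 74, 69, 68]
```

Unlike freq='W', the rolling window is anchored at each row's own date rather than at a fixed weekday.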
python pandas date grouping
What if you change it to df = df.groupby(['geo','app_id', pd.Grouper(key='date', freq='D')])?
– pygo
Nov 14 at 17:17
asked Nov 14 at 17:04
Liza Che
163
1 Answer
A dirty one-liner would be:
import numpy as np
df['count_last_7_days'] = [np.sum(df['count'][np.logical_and(df['date'][i] - df['date'] < pd.to_timedelta(7, unit='d'), df['date'][i] - df['date'] >= pd.to_timedelta(0, unit='d'))]) for i in range(df.shape[0])]
Note that I converted the date column to datetime using pd.to_datetime() first.
What this does: for each row, it flags every row whose date falls within the trailing one-week window with a boolean mask, then sums the flagged counts.
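The same boolean-mask logic reads more clearly as an explicit loop. This is a sketch using the question's sample values, with the date and count columns pulled out as standalone objects:

```python
import pandas as pd

# Dates and counts from the question's sample data.
dates = pd.to_datetime(['2018-09-04', '2018-09-03', '2018-09-02',
                        '2018-04-22', '2018-04-20', '2018-04-19'])
counts = pd.Series([27, 2, 4, 5, 1, 68])

week = pd.Timedelta(days=7)
result = []
for d in dates:
    lag = d - dates                                      # how far back each row lies from day d
    in_window = (lag >= pd.Timedelta(0)) & (lag < week)  # same day up to six days earlier
    result.append(int(counts[in_window].sum()))

print(result)  # [33, 6, 4, 74, 69, 68]
```

Each iteration builds the same boolean mask as the one-liner's np.logical_and call, so the two produce identical results; this version is O(n²) either way, so for large frames a sorted time-based rolling window would scale better.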
answered Nov 15 at 8:54
Lukas Thaler
2399