How do I move data from RDS of one AWS account to another account
We set up our web services and database on AWS a while back, and the application is now in production. For various reasons we need to terminate the old AWS account and move everything to a newly created one. The application and the rest of the infrastructure are straightforward to recreate; the data is trickier. The current database is still receiving a lot of data daily, so it is best to migrate the data after we turn off the old application and switch on the new platform.
Both the source and target RDS instances are Postgres, and we have about 40 GB of data to transfer. I can think of three approaches, and each has drawbacks.
- Take a snapshot of the first RDS instance and restore it into the second. The problem is that I don't need to transfer all the data from source to destination; records after 10/01 are probably enough. Also, a snapshot restore works best into a freshly created, empty instance, but in our case the new RDS instance will already be receiving data after the cutoff. Only after that will the data be transferred from the old account to the new one; otherwise we would lose data.
- Dump the tables from the old RDS instance and restore them into the new one. This has the same problem as #1. Also, if I dump the data to my local machine and restore from there, the network speed becomes the bottleneck.
- Export the table data to CSV files and import them into the new RDS instance. The advantage is that this lets me pick and choose, and do some data cleaning as well, but it takes forever to export a big fact table to a local CSV file. Another problem is that some tables have surrogate row IDs declared as `serial` (auto-incrementing), so the row IDs in the exported CSV may conflict with data already in the new tables.
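On the serial-ID conflict in the CSV route: one workaround is to shift the exported IDs past the target table's current max before import. A minimal sketch of the remapping step (the `id`/`amount` column names and the offset are hypothetical; any foreign keys referencing these IDs would need the same offset, and the target sequence should be advanced with `setval` afterwards):

```python
import csv
import io

def remap_serial_ids(rows, id_column, offset):
    """Shift exported serial IDs by a fixed offset so they no longer
    collide with rows already inserted in the target table.

    `rows` yields dicts (e.g. from csv.DictReader); `offset` should be
    a value safely above the target table's current max(id)."""
    for row in rows:
        row[id_column] = str(int(row[id_column]) + offset)
        yield row

# Example: rows exported from the old RDS; ids 1..3 would collide with
# the target table, so shift them past its current max id (say 1000).
exported = io.StringIO("id,amount\n1,10\n2,20\n3,30\n")
rows = list(remap_serial_ids(csv.DictReader(exported), "id", 1000))
print([r["id"] for r in rows])  # ['1001', '1002', '1003']
```

Alternatively, drop the ID column entirely on import and let the target's sequence assign fresh values, if nothing references the old IDs.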
I wonder if there is a better way to do this. Maybe AWS has some ETL tool that can do a point-to-point transfer directly, without routing the data through my local machine.
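On the point-to-point idea: if both instances are reachable from a small EC2 host in the same region, psql can stream a filtered export straight from one database into the other, so nothing lands on a local disk and traffic never leaves AWS. A rough sketch, not a tested recipe; the endpoints, table, column, and cutoff date are placeholders:

```shell
# Run from an EC2 instance that can reach both RDS endpoints.
SRC="postgresql://user:pass@old-db.xxxx.us-east-1.rds.amazonaws.com:5432/appdb"
DST="postgresql://user:pass@new-db.yyyy.us-east-1.rds.amazonaws.com:5432/appdb"

# Stream a date-filtered slice of one table source -> target:
psql "$SRC" -c "\copy (SELECT * FROM events WHERE created_at >= '2018-10-01') TO STDOUT CSV" \
  | psql "$DST" -c "\copy events_staging FROM STDIN CSV"
```

Loading into a staging table first leaves room to remap the serial IDs before inserting into the real table.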
postgresql amazon-web-services etl amazon-rds data-migration
40GB doesn't seem like a lot of data, but take a look at Database Migration Service. It can do homogeneous migrations: aws.amazon.com/dms. Not sure how easy it would be to filter rows by date, though. – jarmod, Nov 13 at 22:41
@jarmod I tried Database Migration Service. It works well for copying data from a source table into an empty target table, but if the target table already has records, the transfer task fails due to conflicting row IDs. As I mentioned, the row ID is auto-generated from a sequence. Is there a way to work around this? – ddd, Nov 14 at 20:04
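On the date-filtering question: DMS table mappings can include a source filter per table, which may cover the cutoff. A sketch of a selection rule (the schema, table, column names, and cutoff date are assumptions) that migrates only rows created on or after the cutoff:

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "events-after-cutoff",
      "object-locator": {
        "schema-name": "public",
        "table-name": "events"
      },
      "rule-action": "include",
      "filters": [
        {
          "filter-type": "source",
          "column-name": "created_at",
          "filter-conditions": [
            { "filter-operator": "gte", "value": "2018-10-01" }
          ]
        }
      ]
    }
  ]
}
```

This would not by itself resolve the conflicting-row-ID failure; shifting or dropping the serial IDs on the target side would still be needed.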
asked Nov 13 at 22:22
ddd