Friday, August 3, 2012

How to Delete Large s3 Buckets Easily

You've got a problem: your s3 bucket is so massive it can't be deleted. It's big enough that s3cmd simply breaks when you try to run it. It's so big, in fact, that even if you deleted 10,000 keys a minute using s3nukem, you'd have to run it all day long for weeks.

Here's the easy way to delete a massive s3 bucket with a huge number of files: simply set a lifecycle expiration policy of 1 day and wait.

Make sure you really want what's inside here gone forever.
  1. Log in to the AWS dashboard, go to s3, and open the properties of the bucket you'd like to purge.
  2. Under the lifecycle tab, add an expiration rule with no prefix (so it matches every object in the bucket) that expires objects after 1 day.
Let Amazon do the dirty work.
S3 will do its own housekeeping, and after a while everything inside your bucket will be gone. Poof.
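If you'd rather skip the console, the same rule can be expressed as a lifecycle configuration document and applied with the AWS CLI's `aws s3api put-bucket-lifecycle-configuration` command. Here's a sketch of what that rule looks like; the bucket name and rule ID are placeholders:

```python
import json

# The lifecycle rule the console steps above create: an empty prefix so the
# rule matches every object, and a 1-day expiration. Save this JSON and pass
# it to `aws s3api put-bucket-lifecycle-configuration --bucket my-huge-bucket
# --lifecycle-configuration file://lifecycle.json` ("my-huge-bucket" and
# "purge-everything" are placeholder names).
lifecycle = {
    "Rules": [
        {
            "ID": "purge-everything",
            "Filter": {"Prefix": ""},   # empty prefix: applies to all objects
            "Status": "Enabled",
            "Expiration": {"Days": 1},  # expire objects one day after creation
        }
    ]
}

print(json.dumps(lifecycle, indent=2))
```

Once the rule is in place, the deletion happens on Amazon's side with no further API calls from you.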

Before finding this solution, we would actually take a cluster of 200 machines and pound Amazon with s3nukem for several days. At one point Amazon actually deactivated our s3 credentials and called us on the phone asking us what the heck we were doing. Our buckets contained a web index with many, many millions of files. It still took several days.

Good luck!


3 comments:

  1. This makes so much sense. I was so incensed after spending the entire morning with the CLI and the web console itself; it just wouldn't budge. You've helped me here in 2017, so thank you!

  2. At one point Amazon actually deactivated our s3 credentials and called us on the phone asking us what the heck we were doing. - haha. Funny :-)
