I had to download a large (~60GB) Google Takeout file today; asking Google to split the file into chunks of 10GB resulted in this:

I tried downloading the file twice in the browser; both times the download completed and then the file vanished from my disk. Then I was told I couldn’t download it again, so I had to create an entirely new Takeout.
Needless to say, this was frustrating. Copying the URL and pasting it into wget or curl doesn’t work, since the download requires your authenticated browser session. There are a bunch of now seemingly useless blog posts and Stack Overflow answers implying it should work, but I couldn’t get any of them to.
After some mucking around, here is what did work for me, as of this writing, in Chrome:
- Prepare the Takeout & go through it until you get to the ‘Download data’ image shown above.
- Start the download.
- Go to the downloads tab and copy the URL there.
- Stop the download.
- Go back to the Takeout page, open devtools, and refresh it.
- Find the first URL to load (the base page). For me it looked something like this:
https://takeout.google.com/manage?user=xxxxxx&rapt=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
- Right-click that request and choose ‘Copy as cURL’ for your OS.
- Paste that into a notepad or whatever and add a new line (don’t forget to add a line-continuation marker at the end of the previous line – \ on Linux or ^ on Windows):
-o "All mail including Spam and Trash-002.mbox"
(quote the filename, since it contains spaces)
- Paste the result into your terminal/cmd/shell and run the curl command.
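The final command from the steps above ends up shaped roughly like the sketch below. The URL, cookie header, and filename here are placeholders – the real values come from Chrome’s ‘Copy as cURL’, which includes many more headers. The sketch builds the command as a string rather than running it, just to show the quoting:

```shell
#!/bin/sh
# Placeholder values; substitute what DevTools actually copied for you.
TAKEOUT_URL='https://takeout.google.com/manage?user=xxxxxx&rapt=xxxxxxxx'
OUTFILE='All mail including Spam and Trash-002.mbox'

# The filename must be quoted because it contains spaces;
# the URL must be quoted because it contains '&'.
cmd="curl '$TAKEOUT_URL' -H 'cookie: <copied-by-devtools>' -o '$OUTFILE'"
echo "$cmd"
```

In a real run you would of course execute curl directly instead of echoing the string.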
Thank you for the detailed explanation!
For some reason, this did not work for me in August 2025.
What did work was using this script:
https://github.com/yottabit42/gtakeout_backup/blob/master/get_takeout.sh
I found it, along with a detailed video, in this Reddit thread:
https://www.reddit.com/r/googlephotos/comments/1fhuwqg/parallel_downloading_google_takeout_backups_of/