There are still a lot of people out there with SFTP (and even FTP!) based workflows. Amazon know this and have a dedicated product called AWS Transfer Family, which is basically an amazingly expensive SFTP wrapper that lives on top of S3.
If you don’t want the hassle of running SFTP on a $5/mo virtual server, then paying AWS on the order of USD$200/mo might be a good option.
There is some slightly weird directory-related behaviour, compared to standard SFTP, that caught me by surprise.
(Note: I am doing this on a client’s SFTP setup, so I don’t know what it actually looks like on the S3 side.)
- If you try to rename a file into a directory that does not exist, you will not get an error. It will actually work, and create some sort of "virtual subdirectory" in the S3 bucket. e.g., if you do `rename example.txt backup/example.txt` without the `backup/` directory existing, and then do a directory listing, you'll see there is a new `backup/` directory that was created by that rename operation.
- If you then move the file back (`rename backup/example.txt ./example.txt`), the `backup/` directory will disappear.
- If you create the `backup/` directory first, and repeat the move in and out, the directory will persist.
- If the `backup/` directory was created by the rename command, and you then try to do an `ls *` on the parent directory, it will return the files in `backup/` as well. i.e., it will act like a recursive `ls`.
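My guess (unverified, since I can't see the bucket) is that this all falls out of S3 having no real directories: keys live in a flat namespace, "directories" are just key prefixes that show up in a delimiter-based listing, and an explicitly created directory is a zero-byte marker key ending in `/`. Here's a toy Python model of that, which reproduces all three behaviours above. Everything here is a sketch of how S3 prefixes work in general, not of what Transfer Family actually does internally:

```python
# Toy model of S3's flat key namespace. A bucket is just a set of keys;
# "directories" only exist as key prefixes in a delimiter-based listing.

def list_dir(keys, prefix=""):
    """Emulate an S3 ListObjects call with Delimiter="/": return the
    entries directly under `prefix`, grouping deeper keys into a single
    "subdirectory" entry."""
    entries = set()
    for key in keys:
        if not key.startswith(prefix) or key == prefix:
            continue
        rest = key[len(prefix):]
        if "/" in rest:
            entries.add(rest.split("/", 1)[0] + "/")  # common prefix, i.e. a "directory"
        else:
            entries.add(rest)  # a plain file
    return sorted(entries)

bucket = {"example.txt"}

# rename example.txt backup/example.txt -- no mkdir needed, because the
# "directory" is just part of the new key name
bucket = {"backup/example.txt"}
print(list_dir(bucket))   # ['backup/']  -- backup/ now "exists"

# rename it back: no key carries the backup/ prefix any more, so the
# directory vanishes from listings
bucket = {"example.txt"}
print(list_dir(bucket))   # ['example.txt']

# an explicit mkdir creates a zero-byte "backup/" marker key, so the
# directory persists even when nothing is stored under it
bucket = {"backup/", "example.txt"}
print(list_dir(bucket))   # ['backup/', 'example.txt']
```

The recursive-looking `ls *` is consistent with the same idea: if the server expands `*` against raw keys rather than against one listing level, keys like `backup/example.txt` match and get returned alongside the top-level files.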
If you are trying to get behaviour closer to standard SFTP with directories, I suspect it's safer to make the directories manually first (as you normally would) instead of relying on the weird automatic directory creation you get from the rename.
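Concretely, that just means an explicit `mkdir` before the first rename. A sketch of an interactive session with standard OpenSSH `sftp` (the hostname and username are placeholders, and I haven't re-run this exact sequence):

```shell
sftp user@transfer.example.com

# Create the directory explicitly, then move the file in and out.
# Because mkdir writes a real directory entry, backup/ should persist
# even after the file is moved back out.
mkdir backup
rename example.txt backup/example.txt
rename backup/example.txt example.txt
ls    # backup/ still listed
```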