
@KartikTalwar
Last active November 18, 2024 03:36
Rsync over SSH - (40MB/s over 1GB NICs)

The fastest remote-directory rsync-over-SSH archive copy I can muster (40 MB/s over 1Gb NICs)

The command below does the following:

rsync (Everyone seems to like -z, but it is much slower for me)

  • a: archive mode - recursive; preserves owner, permissions, modification times, and group; copies symlinks as symlinks; preserves device files.
  • H: preserves hard-links
  • A: preserves ACLs
  • X: preserves extended attributes
  • x: don't cross file-system boundaries
  • v: increase verbosity
  • --numeric-ids: don't map uid/gid values by user/group name
  • --delete: delete extraneous files from dest dirs (differential clean-up during sync)
  • --progress: show progress during transfer

ssh

  • T: turn off pseudo-tty allocation to decrease CPU load on the destination.
  • c arcfour: use the weakest but fastest SSH encryption; must specify "Ciphers arcfour" in sshd_config on the destination. (arcfour has since been removed from modern OpenSSH; several comments below substitute a modern cipher such as aes128-ctr.)
  • o Compression=no: Turn off SSH compression.
  • x: turn off X forwarding if it is on by default.

Original

rsync -aHAXxv --numeric-ids --delete --progress -e "ssh -T -c arcfour -o Compression=no -x" user@<source>:<source_dir> <dest_dir>

Flip (push to a remote destination instead of pulling)

rsync -aHAXxv --numeric-ids --delete --progress -e "ssh -T -c arcfour -o Compression=no -x" <source_dir> user@<dest>:<dest_dir>
@brianlamb

This was such a great post to find!
I was about to leave my transfer running at 5-10 MB/s, but couldn't go to sleep with 1.2 TB taking 30 hours!
(This was also from a Synology NAS to macOS.)

As others mentioned earlier in this post, specific SSH options can affect the transfer rate dramatically: -e "ssh -T -c aes128-ctr -o Compression=no -x" - primarily the compression setting. I couldn't see notable differences between ciphers and didn't test more than comparing against "-c [email protected]", but got variably 50-90 MB/s.

DO use --dry-run and --itemize-changes, which give a great record of what is actually going to happen (see the sketch below).
Always be careful of SOURCE and DESTINATION.
Pause and think, before setting things in motion!
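
A minimal preview-only sketch of that advice (the host and paths here are placeholders, not taken from the post):

rsync -aHAXxv --dry-run --itemize-changes --numeric-ids --delete -e "ssh -T -o Compression=no -x" user@<source>:<source_dir> <dest_dir>

Nothing is transferred or deleted; the itemized output simply records what a real run would do.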

If you want a little help managing a collection of commands and an environment conducive to setting up rsync command lines, you could try RsyncOSX as a GUI front end on a Mac (although I still prefer to run the actual command in a standalone terminal).

@pricesgoingup

@L1so > Not recommended, I almost trashed my entire movies collection by doing this, good thing I canceled it.

Don't paste everything you see on the internet without looking the flags up first /shrugs

@nerrons

nerrons commented Mar 29, 2021

To decide which cipher is the best, I recommend using this script to benchmark for yourself: https://gist.github.com/joeharr4/c7599c52f9fad9e53f62e9c8ae690e6b
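
If you just want a rough one-off comparison rather than the full script, a loop like this gives ballpark numbers (the host is a placeholder and the cipher list is only an example; run ssh -Q cipher to see what your build supports):

for c in aes128-ctr aes256-ctr aes128-gcm@openssh.com; do
  echo "== $c =="
  # /dev/zero keeps disks out of the measurement; time the whole pipeline
  time (dd if=/dev/zero bs=1M count=1024 2>/dev/null | ssh -c "$c" -o Compression=no user@<dest> 'cat > /dev/null')
done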

@anacondaq

dunno, dunno

Git repos, tons of settings tried: 100 GB+ repos, millions of files, etc. Just for benchmarking purposes (not working dirs), but the results are pretty sad on Vultr's internal network.

@j4ys0n

j4ys0n commented May 19, 2022

Strange - I'm only getting 27-30 MB/s with rsync -aHAxv --numeric-ids --progress -e "ssh -T -c [email protected] -o Compression=no -x"
transferring a 1.6 TB file between two Epyc servers with plenty of resources and 10Gb networking. I get 500 MB/s transferring video files from my desktop to my storage server. Not sure what's up here.

Update: it increased to 52 MB/s, which is... definitely not as fast as I would like, but it's fine.

@sharkymcdongles

Not recommended, I almost trashed my entire movies collection by doing this, good thing I canceled it.

🤡

@ip-rw

ip-rw commented Sep 11, 2022

I'm not sure if people are still interested in this, but if you don't care about encryption then tar + netcat is by far the quickest way to transfer directories:

destination:
nc -l -p 7777 | tar -xpf -

source:
tar -cf - sourceDir/ | nc [dest ip] 7777

Throw in 'pv' to see the transfer speed.
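
For instance, on the source side (assuming pv is installed; the destination IP and port are the same placeholders as above):

tar -cf - sourceDir/ | pv | nc [dest ip] 7777

pv passes the stream through untouched and prints the throughput and total bytes on stderr.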

@pricesgoingup

pricesgoingup commented Sep 11, 2022 via email

@jaimehrubiks

Great discussion.

I found this to be the best option: "ssh -T -c [email protected] -o Compression=no -x". Probably aes256 was faster than arcfour due to hardware optimizations or something. Also, play with and without rsync -z depending on the quantity/size of the files to transfer; no compression was faster for an already-compressed single big tar.gz file.
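
A rough sketch of that comparison (the file names and destination are hypothetical, not from the comment above):

# already-compressed archive: skip rsync compression
rsync -av --progress -e "ssh -T -o Compression=no -x" backup.tar.gz user@<dest>:/backups/
# many small, compressible files: -z may win on a slow link
rsync -avz --progress -e "ssh -T -x" ./logs/ user@<dest>:/backups/logs/

Recompressing data that is already compressed mostly burns CPU without shrinking the stream, which is why -z loses on the tar.gz case.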

@pricesgoingup

pricesgoingup commented Sep 27, 2022 via email

@schmorp

schmorp commented Mar 3, 2024

To not let this stand as-is, some facts: compression is off by default in ssh (and always has been in OpenSSH), tty allocation is already off when ssh is run by rsync, and X forwarding does not affect bulk bandwidth in any way. Any difference in speed measured is not due to these options, but more likely to a bad test setup, such as running the first test with a cold disk cache and the next with a hot cache. The only change that can affect speed is the cipher (and not explicitly turning compression on in rsync or ssh).
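
One way to check what your own client will actually do (assumes OpenSSH 6.8 or newer, which added -G; the host is a placeholder):

ssh -G user@<dest> | grep -iE '^(compression|requesttty|forwardx11) '
ssh -Q cipher

The first command prints the effective per-host settings (compression no, requesttty auto, forwardx11 no on a stock config); the second lists the ciphers your build supports.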
