How to Transfer Data Out of Russia Under Current Network Restrictions

You have some infrastructure in Russia and want to move its data elsewhere, for example to Europe. Right now this can be problematic. During recent tests I noticed that Russia throttles long network sessions: any session lasting longer than about one minute gets limited to around 100 kB/s. This may not apply to every destination, but I saw it with several major European providers, and also with Belarus, which I tried as an intermediate hop. Belarus was even slower, at around 50 kB/s.

Torrents might be one possible solution, but I could not use them in my case and did not have time to test them. Instead I used the following approach.

First I tested a plain scp transfer directly to my server in Europe: the speed was about 100 kB/s. Rsync through an intermediate server showed the same result. At the same time, however, Speedtest still reported good bandwidth: around 15–20 MB/s to Europe and up to 100 MB/s inside Russia. In other words, HTTP traffic still works relatively well.
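This is roughly how I compared the transports: time any transfer command and print its average throughput. A minimal sketch; the host and paths in the usage comment are hypothetical.

```shell
# measure: run an arbitrary transfer command and report average throughput.
# Usage: measure <payload_size_kb> <command...>
measure() {
    size_kb=$1; shift
    start=$(date +%s)
    "$@" || return 1
    elapsed=$(( $(date +%s) - start ))
    [ "$elapsed" -lt 1 ] && elapsed=1   # avoid division by zero on very fast runs
    echo "average: $(( size_kb / elapsed )) kB/s"
}

# e.g. for a 100 MB probe file:
#   measure 102400 scp /tmp/probe.bin user@eu-server:/tmp/
```

The same wrapper works for rsync or rclone invocations, which makes the scp-vs-HTTP comparison a one-liner each.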

Because of this, you can upload through cloud storage services such as S3, Google Drive, or Dropbox and get noticeably better speed, typically around 1–2 MB/s. I used rclone.
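For reference, an S3 remote like the `s3:` one used below can be created non-interactively. The credentials, region, and remote name here are placeholders; substitute your own provider's values.

```shell
# One-time setup of an rclone remote named "s3" (values are placeholders):
rclone config create s3 s3 \
    provider AWS \
    access_key_id YOUR_KEY_ID \
    secret_access_key YOUR_SECRET \
    region eu-central-1

# Sanity check that the remote is reachable:
rclone lsd s3:
```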

Upload test with rclone

root@box:/mnt# rclone --transfers 1 --stats 10s --stats-unit bits sync /mnt/test.txt s3:bucketname/test/ 2>&1 | grep '% done'
 *     test.txt:  2% done, 117.282 MBits/s, ETA: 1m8s
 *     test.txt:  2% done, 60.200 MBits/s, ETA: 2m12s
 *     test.txt:  2% done, 30.900 MBits/s, ETA: 4m18s
 *     test.txt:  2% done, 15.861 MBits/s, ETA: 8m23s
 *     test.txt:  2% done, 8.141 MBits/s, ETA: 16m21s
 *     test.txt:  2% done, 6.291 MBits/s, ETA: 21m3s
 *     test.txt:  3% done, 5.342 MBits/s, ETA: 24m41s
 *     test.txt:  3% done, 2.742 MBits/s, ETA: 48m5s
 *     test.txt:  3% done, 1.407 MBits/s, ETA: 1h33m41s
 *     test.txt:  3% done, 739.771 kBits/s, ETA: 3h2m31s
 *     test.txt:  3% done, 379.718 kBits/s, ETA: 5h55m36s
 *     test.txt:  3% done, 2.604 MBits/s, ETA: 50m22s
 *     test.txt:  3% done, 1.337 MBits/s, ETA: 1h38m8s
 *     test.txt:  4% done, 2.416 MBits/s, ETA: 54m2s
 *     test.txt:  4% done, 1.240 MBits/s, ETA: 1h45m16s
 *     test.txt:  4% done, 651.722 kBits/s, ETA: 3h25m5s
 *     test.txt:  4% done, 334.522 kBits/s, ETA: 6h39m34s
 *     test.txt:  4% done, 2.426 MBits/s, ETA: 53m31s
 *     test.txt:  4% done, 1.245 MBits/s, ETA: 1h44m17s
 *     test.txt:  5% done, 3.220 MBits/s, ETA: 40m7s
 *     test.txt:  5% done, 1.653 MBits/s, ETA: 1h18m10s
 *     test.txt:  5% done, 868.670 kBits/s, ETA: 2h32m18s
 *     test.txt:  5% done, 445.879 kBits/s, ETA: 4h56m43s
 *     test.txt:  5% done, 2.072 MBits/s, ETA: 1h2m1s
 *     test.txt:  5% done, 1.064 MBits/s, ETA: 2h0m49s
 *     test.txt:  5% done, 559.116 kBits/s, ETA: 3h55m24s
 *     test.txt:  6% done, 2.257 MBits/s, ETA: 56m39s
 *     test.txt:  6% done, 1.158 MBits/s, ETA: 1h50m23s
 *     test.txt:  6% done, 608.803 kBits/s, ETA: 3h35m4s
 *     test.txt:  6% done, 2.281 MBits/s, ETA: 55m45s
 *     test.txt:  6% done, 1.171 MBits/s, ETA: 1h48m37s
 *     test.txt:  6% done, 615.521 kBits/s, ETA: 3h31m36s
 *     test.txt:  6% done, 315.940 kBits/s, ETA: 6h52m16s
 *     test.txt:  7% done, 1.888 MBits/s, ETA: 1h7m1s
 *     test.txt:  7% done, 992.312 kBits/s, ETA: 2h10m34s
 *     test.txt:  7% done, 2.610 MBits/s, ETA: 48m13s
...

However, after a few hours even this speed eventually degrades to around 100 kB/s. An effective workaround is to split the data into small pieces, for example 10 MB files.

Split file into 10 MB parts

split -b 10M ./test.txt "test.txt.part."

root@box:/mnt# ls -lah | tail -10
-rw-r--r-- 1 root root 10M Mar  9 13:42 test.txt.part.dp
-rw-r--r-- 1 root root 10M Mar  9 13:42 test.txt.part.dq
-rw-r--r-- 1 root root 10M Mar  9 13:42 test.txt.part.dr
-rw-r--r-- 1 root root 10M Mar  9 13:42 test.txt.part.ds
-rw-r--r-- 1 root root 10M Mar  9 13:42 test.txt.part.dt
-rw-r--r-- 1 root root 10M Mar  9 13:42 test.txt.part.du
-rw-r--r-- 1 root root 10M Mar  9 13:42 test.txt.part.dv
-rw-r--r-- 1 root root 10M Mar  9 13:42 test.txt.part.dw
-rw-r--r-- 1 root root 10M Mar  9 13:42 test.txt.part.dx
-rw-r--r-- 1 root root 4.0M Mar  9 13:42 test.txt.part.dy

Each new 10 MB file starts transferring at full speed, and at 2–4 MB/s a part completes in a few seconds, well under the roughly one-minute mark where throttling kicks in. In my case the speed stayed at a stable 2–4 MB/s for the entire transfer, which took almost a week.

Upload split parts

rclone --transfers 2 --stats 2s --stats-unit bits sync /mnt/parts/ s3:bucketname/test/parts/

...
Transferred:    20 MBytes (3.313 MBits/s)
Transferred:    40 MBytes (4.552 MBits/s)
Transferred:    60 MBytes (3.326 MBits/s)
...
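A nice property of rclone's sync is that parts already present on the remote are skipped, so rerunning the same command after a dropped connection resumes where it left off. A minimal retry wrapper (a sketch; the rclone command in the comment is the one from above, and the delay is adjustable):

```shell
# retry: rerun a command until it succeeds, pausing between attempts.
retry() {
    until "$@"; do
        echo "transfer failed, retrying in ${RETRY_DELAY:-30}s" >&2
        sleep "${RETRY_DELAY:-30}"
    done
}

# retry rclone --transfers 2 sync /mnt/parts/ s3:bucketname/test/parts/
```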

After the upload finishes, simply join the files again and verify the checksum.

Reassemble files and verify

root@box:/mnt/parts# cat test.txt.part.* > test.txt
root@box:/mnt/parts# md5sum test.txt ../test.txt
4b7b03eced07458c98465e0b3cc694dc  test.txt
4b7b03eced07458c98465e0b3cc694dc  ../test.txt
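The whole split-and-reassemble round trip can be sanity-checked locally before trusting it with real data. A self-contained sketch using a throwaway random file:

```shell
#!/bin/sh
set -e
# Split a throwaway file, reassemble it, and confirm the checksums match,
# exactly as in the session above.
workdir=$(mktemp -d)
dd if=/dev/urandom of="$workdir/orig.bin" bs=1M count=34 status=none
split -b 10M "$workdir/orig.bin" "$workdir/orig.bin.part."
cat "$workdir"/orig.bin.part.* > "$workdir/rebuilt.bin"
a=$(md5sum "$workdir/orig.bin" | cut -d' ' -f1)
b=$(md5sum "$workdir/rebuilt.bin" | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "checksums match"
rm -rf "$workdir"
```

To verify the uploaded parts on the remote before deleting local copies, `rclone check /mnt/parts/ s3:bucketname/test/parts/` compares hashes between source and destination.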

Human Logic, AI Syntax... Note on Content: I'm a Systems Engineer, not a native English writer. To ensure my technical ideas are clear and accessible, I use AI tools to polish the grammar and style. The workflow is simple: I provide the logic, the code, and the real-world experience. The AI handles the "English-to-Human" translation layer. If you find a bug, that's on me. If you find a perfectly placed comma, that's probably the AI.
