network tuning, OS X Leopard edition

I had occasion to fire up an old PPC iMac G5 (OS X 10.5.8) the other week and was appalled at how slow its network access was. So here’s what I did to fix it.

Per Scott Rolande, there are tunable values for many aspects of the TCP stack. Handily, they can be tinkered with interactively via sysctl and made persistent in a text file, /etc/sysctl.conf. Here’s his suggested sysctl.conf:

kern.ipc.maxsockbuf=4194304
kern.ipc.somaxconn=512
kern.ipc.nmbclusters=2048
net.inet.tcp.rfc1323=1
net.inet.tcp.win_scale_factor=3
net.inet.tcp.sockthreshold=16
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
net.inet.tcp.mssdflt=1440
net.inet.tcp.msl=15000
net.inet.tcp.always_keepalive=0
net.inet.tcp.delayed_ack=0
net.inet.tcp.slowstart_flightsize=4
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.icmp.icmplim=50
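
Any one of these can be checked or changed on the fly with sysctl (writes need root), no file edit or reboot required. For example:
sysctl net.inet.tcp.delayed_ack
sudo sysctl -w net.inet.tcp.delayed_ack=0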

This machine didn’t have a sysctl.conf file, so I copied his and used it to pull out the current values.

for i in `cut -d= -f1 sysctl.conf`; do sysctl $i; done
kern.ipc.maxsockbuf: 8388608
kern.ipc.somaxconn: 128
kern.ipc.nmbclusters: 32768
net.inet.tcp.rfc1323: 1
net.inet.tcp.win_scale_factor: 3
net.inet.tcp.sockthreshold: 64
net.inet.tcp.sendspace: 65536
net.inet.tcp.recvspace: 65536
net.inet.tcp.mssdflt: 512
net.inet.tcp.msl: 15000
net.inet.tcp.always_keepalive: 0
net.inet.tcp.delayed_ack: 3
net.inet.tcp.slowstart_flightsize: 1
net.inet.tcp.blackhole: 0
net.inet.udp.blackhole: 0
net.inet.icmp.icmplim: 250

A little different. Not sure why kern.ipc.maxsockbuf is so much higher on an old machine that maxes out at 2 GB of RAM…

To test throughput, I needed a test file.
hdiutil create -size 1g test.dmg
.....................................................................................................................................
created: /Users/paul/test.dmg

Over wireless G on a mixed wireless N/G network to a wired 100 Mbit host on a Gigabit switch, it managed a stately 12 Mbits/second.
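
The exact invocation isn’t shown here; the transfers are timed rsync runs along these lines, with the hostname and destination path as stand-ins:
time rsync -z test.dmg paul@wiredhost:~/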

Twelve minutes (12m19.024s) later:
sent 1073872981 bytes received 42 bytes 1452160.95 bytes/sec
total size is 1073741824 speedup is 1.00

Oy. Now to try it to a wireless destination, a 10.8.3 machine.

Hmm, interestingly, OS X handles rsync transfers a little differently: it blocks out disk space equivalent to the full size of the file up front. Checking the in-progress temp file during the transfer, du tells a different story than ls; as you can see, the size du reports never changes.
du -h .test.dmg.GsCjdW; sleep 10 ; du -h .test.dmg.GsCjdW
1.0G .test.dmg.GsCjdW
1.0G .test.dmg.GsCjdW

Using ls -l shows the actual size of the file, not the disk space set aside for it.
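
To see the contrast, check both on the in-progress temp file (the name rsync picks differs per run):
du -h .test.dmg.GsCjdW ; ls -lh .test.dmg.GsCjdW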

Still slow:
sent 1073872981 bytes received 42 bytes 1428972.75 bytes/sec

Took 12m30.961s, the extra time being because the machine went to sleep partway through (out of boredom?).

After changing the various sysctl OIDs to his recommended values, things got much worse.

This is what I have on the 10.8.3 system:
kern.ipc.maxsockbuf: 4194304
kern.ipc.somaxconn: 1024
kern.ipc.nmbclusters: 32768
net.inet.tcp.rfc1323: 1
net.inet.tcp.win_scale_factor: 3
net.inet.tcp.sockthreshold is not implemented
net.inet.tcp.sendspace: 2097152
net.inet.tcp.recvspace: 2097152
net.inet.tcp.mssdflt: 1460
net.inet.tcp.msl: 15000
net.inet.tcp.always_keepalive: 0
net.inet.tcp.delayed_ack: 0
net.inet.tcp.slowstart_flightsize: 1
net.inet.tcp.blackhole: 0
net.inet.udp.blackhole: 0
net.inet.icmp.icmplim: 250

A 1 GB transfer takes too long (which of course is the problem), so I made a couple of small changes and tried a 100 MB file. Down to 13 seconds. Hmm, not bad. The changes:
sysctl -w net.inet.tcp.sendspace=4194304
sysctl -w net.inet.tcp.recvspace=4194304
sysctl -w net.inet.tcp.mssdflt=1460

I set net.inet.tcp.[send|recv]space to be half of kern.ipc.maxsockbuf (8388608 / 2 = 4194304 on the Leopard machine) and made the net.inet.tcp.mssdflt match the receiving system.
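
To make those stick across a reboot, the same assignments belong in /etc/sysctl.conf, in the name=value form shown at the top:
net.inet.tcp.sendspace=4194304
net.inet.tcp.recvspace=4194304
net.inet.tcp.mssdflt=1460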

Now a 1 GB test file takes 53.287s. Copying from 10.8.3 to 10.5.8 took just 31.215s. After synchronizing the net.inet.tcp.mssdflt on the system I first tested with, transfers to there are down to 1m47.471s.

So some big improvements for not much effort. I’m sure there are lots of other tweaks, but given the relatively little need for more improvement and the limited potential (the old 10.5 system on wireless G is frozen in time while the newer wireless N machines will see further improvements), I don’t know that I’ll bother. A twelve-fold increase in one direction and a 24-fold boost going the other way is pretty good. If I really cared, i.e., if this were something I expected to do regularly, I’d run a Cat5 cable to it and call it done.

After a reboot to ensure the values stay put, I tested different copy methods as well, all with the same 1 GB file (the command forms are sketched after the list).

from the 100Mbit wired machine using rsync: 0m56.349s

same to/from, using scp -C for compression (since I used rsync -z): 1m40.794s

from the 10.8.3 system to the 10.5 system with scp -C: 1m35.228s

from the 10.8.3 system to the 10.5 system with rsync -z: 0m24.734s (!!)

from the 10.5 system to 10.8.3 with rsync -z: 0m38.861s
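
For reference, the scp runs are the same timed copy with compression turned on; roughly (hostname again a stand-in):
time scp -C test.dmg paul@wiredhost:~/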

So even better after the reboot. Could be other variables in there as well. I’m calling it done.

UPDATE: the morning after shows a different story. I was puzzled that SNMP monitoring wasn’t working, so I took a look this morning and things are slow again, down to 5 Mbits/second from the 12 I considered poky. At this point, I’m not sure how reliable the benchmark data was, or at least how well I was evaluating it.

I’ll have to investigate further. I created some more test media by splitting up the 1 GB file into smaller ones, so I have a pile of 100 MB and 10 MB files as well. Part of the optimization I am looking for is good throughput for large files as well as the ability to handle smaller files quickly: large buffers and access to a good-sized reserve of connections, in other words.
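
The splitting is nothing fancy; something like this produces 100 MB pieces named chunk.aa, chunk.ab, and so on (the chunk. prefix is just my choice):
split -b 100m test.dmg chunk.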
