it could be called work

Building uniconvertor on OS X, post Snow Leopard

I have been doing some work with laser cutting and to that end, I need to translate files from Inkscape’s native SVG format to other formats, like plt or eps. For quite some time now, this has been failing, as the uniconvertor team — who supply the internal translation functions — have let their code base fall into decay.

Turns out the trick isn’t building uniconvertor, but the underlying library, sk1lib, where the real work gets done. For some reason, sk1lib doesn’t come with the developer-supplied distribution, even though the SK1project offers binary packages for Windows and umpteen variants of Linux. So you can successfully install uniconvertor, but since it doesn’t check dependencies, you won’t know it’s broken until you try to run it. Annoying, that. The team hasn’t been particularly responsive, either: too busy working on 2.0, which no one will care about if 1.x is broken.

I wish I had saved my error messages and other debris to better explain all this but it came together pretty quickly before I knew I was on the right track.

You’ll need the following distributions: the Apple developer tools, FreeType2, lcms, sk1libs, and uniconvertor 1.1.5.

Install the tools, if you don’t have them. Make sure they are up to date. I found I had to rip Xcode out and replace it with this toolchain to get things working. There were some issues with llvm/clang that seemed to clear up after I did that. Next, download, build and install FreeType2. I found this symlink was needed:
ln -s /usr/local/include/freetype2/freetype/ /usr/include/freetype

If I remember where I found it, I’ll credit the poster, though it seems to be a pretty common workaround.

Next, lcms: the usual drill: ./configure && make && make install

Then, build and install sk1libs.

python ./setup.py build; python ./setup.py install --record installed-files.txt

The installed-files.txt is just a list of the files that get installed, in lieu of an actual package manager.
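In lieu of a package manager, that record file can also drive the uninstall later; a minimal sketch (the guard just skips the step if no record exists):

```shell
# installed-files.txt holds one absolute path per line, as written by
# distutils' --record. Convert newlines to NULs so xargs copes with
# spaces in filenames, then remove everything that was installed.
[ -f installed-files.txt ] && tr '\n' '\0' < installed-files.txt | xargs -0 rm -f
```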

And then this patch has to be applied to sk1libs/src/utils/, either before installation or after.

<       return ['/System/Library/Fonts', '/Library/Fonts', os.path.expanduser("~/Library/Fonts")]
>       return ['/',]

Without the patch, the program will read every file on your system into some list, for a purpose known to no man. The comments point to python 1.5 so I suspect it’s long overdue for review and refactoring. Credit for this goes to Valerio Aimale.

Credit for this discovery goes to the Inkscape developers who are trying to get the OS X releases of Inkscape in parity with Windows and Linux.

Finally, uniconvertor-1.1.5, same drill: python ./setup.py build; python ./setup.py install --record installed-files.txt

And that should do it. Test it out.


swap usage monitor

Wrote this little thing to keep an eye on how swap usage grows. I find that when it exceeds physical RAM, things get boggy.

#!/bin/sh
# Compare total swap in use (MB) against installed RAM (MB).
LAST=`who -b | cut -c18-50`
RAM=`system_profiler SPHardwareDataType | grep Memory | awk '{ print $2 * 1024 }'`
TOTAL=`du -m /var/vm/* | grep swap | awk '{total = total + $1} END {print total}'`
if [ ${TOTAL} -ge ${RAM} ]; then
    logger "swap in use = ${TOTAL}, exceeds installed RAM (${RAM}), last reboot was ${LAST}, recommend reboot"
    open /var/log/system.log # opens the log in the Console application so you can see it/can't ignore it
fi
exit 0

Though, to be fair, it’s not as bad as when I had some useless, never-looked-at Dashboard widgets. That was a performance killer. That discovery was inspired by this.

I used to check this by simply using

du -sh /var/vm
6.0G    /var/vm

but that didn’t catch that there was a hibernation/sleepimage file in there.

-rw------T  1 root  wheel   4.0G May 22 01:12 sleepimage
-rw-------  1 root  wheel    64M May 22 04:05 swapfile0
-rw-------  1 root  wheel    64M May 22 04:05 swapfile1
-rw-------  1 root  wheel   128M May 22 04:05 swapfile2
-rw-------  1 root  wheel   256M May 22 04:05 swapfile3
-rw-------  1 root  wheel   512M May 22 04:05 swapfile4
-rw-------  1 root  wheel   1.0G May 22 04:05 swapfile5

That’s why I just add up the swapfiles themselves. The one thing I would add is a more informative display of the time since reboot: getting days (?) since reboot would be more informative. But that requires more jiggery-pokery with date(1) than I care to deal with. I’m sure some clever obfuscated perl could be cooked up but I want this to use only tools I know will be available.
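That said, the days-since-reboot arithmetic itself is simple if the boot time is available as an epoch timestamp; a sketch (the kern.boottime parsing is the OS X-specific assumption here):

```shell
# Days elapsed since a given epoch timestamp (integer division).
days_since() {
    NOW=`date +%s`
    echo $(( (NOW - $1) / 86400 ))
}

# On OS X, kern.boottime reports the boot time, e.g.
#   { sec = 1369180260, usec = 0 } Tue May 21 ...
# so the epoch value can be pulled out with awk:
#   BOOT=`sysctl -n kern.boottime | awk -F'[ ,]+' '{ print $4 }'`
#   days_since ${BOOT}
```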

Update: this just went off (opened up the Console app) and displayed these messages:

May 22 21:59:04 ivoire[1] ([26487]): Exited: Killed: 9
May 22 21:59:04 ivoire kernel[0]: memorystatus_thread: idle exiting pid 26487 [xpcd]
May 22 22:04:32 ivoire[2211] ([26538]): Exited: Killed: 9
May 22 22:04:32 ivoire kernel[0]: memorystatus_thread: idle exiting pid 26538 [cfprefsd]
May 22 22:04:33 ivoire kernel[0]: (default pager): [KERNEL]: ps_select_segment - send HI_WAT_ALERT
May 22 22:04:34 ivoire kernel[0]: (default pager): [KERNEL]: ps_vstruct_transfer_from_segment - ABORTED
May 22 22:04:34 ivoire kernel[0]: (default pager): [KERNEL]: Failed to recover emergency paging segment
May 22 22:04:34 ivoire kernel[0]: macx_swapon SUCCESS
May 22 22:05:02 paul[26584]: swap in use = 4096, exceeds installed RAM (4096), last reboot was May 20 17:51 , recommend reboot

This — Failed to recover emergency paging segment — looks alarming. I doubt it is. It’s not new, in any case.


I had this idea 30+ years ago

A new emerging concept known as hybrid solar lighting may offer an effective way of routing daylight deep into buildings. Using parabolic reflectors, direct sunlight can be concentrated on a smaller mirror which after removing most of the Infra red component (which can be extracted as electricity), reflects a very focused beam of visible light on to the end of a optical fibre bundle. This cooled beam of concentrated full spectrum natural light can then be routed into the interior of buildings for illumination. The hybrid design allows this additional lighting source to be mixed with back up lighting to create a dynamic system that always maximises the amount of natural light fed into the building.

[From Solar Power | Green Energy Jobs Career Guide]

Maybe not for task lighting but an easy win for hallways or ambient lighting.

I can recall when the idea came to me, around 1982, as I was walking along a corridor in an apartment/condo building in Florida. There were no windows but there were small wall sconces that radiated heat as I passed them. Perhaps it was the realization that there was all this heat and light outdoors, surrounding this air-conditioned darkness.


network tuning, OS X Leopard edition

I had occasion to fire up an old PPC iMac G5 (OS X 10.5.8) the other week and was appalled at how slow its network access was. So here’s what I did to fix it.

Per Scott Rolande, there are tunable values for many aspects of the TCP stack. Handily, they live in a text file and can be tinkered with interactively.


This machine didn’t have a sysctl.conf file, so I copied his and used it to pull out the current values.

for i in `cut -d= -f1 sysctl.conf`; do sysctl $i; done
kern.ipc.maxsockbuf: 8388608
kern.ipc.somaxconn: 128
kern.ipc.nmbclusters: 32768
net.inet.tcp.rfc1323: 1
net.inet.tcp.win_scale_factor: 3
net.inet.tcp.sockthreshold: 64
net.inet.tcp.sendspace: 65536
net.inet.tcp.recvspace: 65536
net.inet.tcp.mssdflt: 512
net.inet.tcp.msl: 15000
net.inet.tcp.always_keepalive: 0
net.inet.tcp.delayed_ack: 3
net.inet.tcp.slowstart_flightsize: 1
net.inet.tcp.blackhole: 0
net.inet.udp.blackhole: 0
net.inet.icmp.icmplim: 250

A little different. Not sure why kern.ipc.maxsockbuf is so much higher on an old machine that maxes out at 2Gb of RAM…

To test throughput, I needed a test file.
hdiutil create -size 1g test.dmg
created: /Users/paul/test.dmg

Over wireless G on a mixed wireless N/G network to a wired 100 Mbit host on a Gigabit switch, it managed a stately 12 Mbits/second.

Twelve minutes (12m19.024s) later:
sent 1073872981 bytes received 42 bytes 1452160.95 bytes/sec
total size is 1073741824 speedup is 1.00
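As a sanity check, rsync’s bytes/sec figure converts to megabits easily:

```shell
# rsync reported 1452160.95 bytes/sec; x8 for bits, /1e6 for megabits.
echo 1452160.95 | awk '{ printf "%.1f Mbit/s\n", $1 * 8 / 1000000 }'
# prints 11.6 Mbit/s
```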

Oy. Now to try it to a wireless destination, a 10.8.3 machine.

Hmm, interestingly, OS X handles rsync transfers a little differently: it blocks out disk space equivalent to the full size of the file up front, so during a transfer du tells a different story than ls. Checking the in-progress temporary file twice, ten seconds apart, the reported size never changes:

du -h .test.dmg.GsCjdW; sleep 10 ; du -h .test.dmg.GsCjdW
1.0G .test.dmg.GsCjdW
1.0G .test.dmg.GsCjdW

Using ls -l shows the actual size of the file, not the disk space set aside for it.
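The same logical-size-versus-allocated-blocks distinction shows up, in reverse, with a sparse file on any filesystem that supports them (filename made up for the demo):

```shell
# Create a file whose logical size is just over 1 MB but which occupies
# almost no disk blocks: seek past 1 MB and write a single byte.
dd if=/dev/zero of=sparse.dat bs=1 count=1 seek=1048576 2>/dev/null

ls -l sparse.dat   # logical size: 1048577 bytes
du -k sparse.dat   # blocks actually allocated: far less
```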

Still slow: sent 1073872981 bytes received 42 bytes 1428972.75 bytes/sec

Took 12m30.961s, the difference being that the machine went to sleep mid-transfer (out of boredom?).

After changing the various sysctl OIDs, things got much worse.

This is what I have on the 10.8.3 system:
kern.ipc.maxsockbuf: 4194304
kern.ipc.somaxconn: 1024
kern.ipc.nmbclusters: 32768
net.inet.tcp.rfc1323: 1
net.inet.tcp.win_scale_factor: 3
net.inet.tcp.sockthreshold is not implemented
net.inet.tcp.sendspace: 2097152
net.inet.tcp.recvspace: 2097152
net.inet.tcp.mssdflt: 1460
net.inet.tcp.msl: 15000
net.inet.tcp.always_keepalive: 0
net.inet.tcp.delayed_ack: 0
net.inet.tcp.slowstart_flightsize: 1
net.inet.tcp.blackhole: 0
net.inet.udp.blackhole: 0
net.inet.icmp.icmplim: 250

A 1Gb transfer takes too long (which of course is the problem) so I made a couple of small changes and tried a 100Mbit file. Down to 13 seconds. Hmm, not bad. The changes:
sysctl -w net.inet.tcp.sendspace=4194304
sysctl -w net.inet.tcp.recvspace=4194304
sysctl -w net.inet.tcp.mssdflt=1460

I set net.inet.tcp.[send|recv]space to be half of kern.ipc.maxsockbuf and made the net.inet.tcp.mssdflt match the receiving system.
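To make those values stick across reboots, they go in /etc/sysctl.conf, the same file the loop above reads from; assuming the three changes above, the fragment would look like:

```
net.inet.tcp.sendspace=4194304
net.inet.tcp.recvspace=4194304
net.inet.tcp.mssdflt=1460
```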

Now a 1Gb test file takes 53.287s. Copying from 10.8.3 to 10.5.8 took just 31.215s. After synchronizing the net.inet.tcp.mssdflt on the system I first tested with, transfers to there are down to 1m47.471s.

So some big improvements for not much effort. I’m sure there are lots of other tweaks but given the relatively little need for more improvement and the limited potential (the old 10.5 system on wireless G is frozen in time while the newer wireless N machines will see further improvements), I don’t know that I’ll bother. A twelve-fold increase in one direction and a 24-fold boost going the other way is pretty good. If I really cared, i.e., this was something I expected to do regularly, I’d run a Cat5 cable to it and call it done.

After a reboot to ensure the values stay put, I tested different copy methods as well, all with the same 1Gb file.

from the 100Mbit wired machine using rsync: 0m56.349s

same to/from, using scp -C for compression (since I used rsync -z): 1m40.794s

from the 10.8.3 system to the 10.5 system with scp -C: 1m35.228s

from the 10.8.3 system to the 10.5 system with rsync -z: 0m24.734s (!!)

from the 10.5 system to 10.8.3 with rsync -z: 0m38.861s

So even better after the reboot. Could be other variables in there as well. I’m calling it done.

UPDATE: the morning after shows a different story. I was puzzled that snmp monitoring wasn’t working so I took a look this morning and things are slow again, down to 5 Mbits/second from the 12 I considered poky. At this point, I’m not sure how reliable the benchmark data was or at least how I was evaluating it.

I’ll have to investigate further. I created some more test media by splitting up the 1Gb file into smaller ones, so I have a pile of 100Mbit and 10Mbit files as well. Part of the optimization I am looking for is good throughput for large files as well as being able to handle smaller files quickly. Large buffers and access to a good sized reserve of connections, in other words.