On Tue, 2004-06-01 at 17:44, Greger Cronquist wrote:
> Hi,
>
> Has anyone considered using the rsync algorithm for loading programs on
> embedded platforms? I.e. with low cpu power (say, 16-25 MHz ARM7) and
> low memory requirements (programs are < 128 KB, and the rsync algorithm
> must use little ROM and RAM). Or is it just not worth the effort?

I've been doing some work on librsync. As I come from an embedded
background, I've always been conscious of embedded requirements as I
worked on it.

The algorithm sacrifices CPU to minimise data transfer when updating
data the receiver already has an old version of. For it to be worth it,
you must already have similar data that needs updating, and data
transfer must be expensive relative to CPU.
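For concreteness, this is roughly what that flow looks like using
librsync's whole-file calls. I'm quoting the prototypes from the 0.9.x
headers (rs_sig_file, rs_loadsig_file, rs_delta_file, rs_patch_file);
they have changed between releases, so check your librsync.h, and the
default block/strong lengths and NULL stats arguments here are just
placeholder choices with no error handling:

/*
 * Minimal sketch of the signature -> delta -> patch flow against the
 * whole-file API in the librsync 0.9.x headers.  Not production code:
 * no error checking, and prototypes may differ in other releases.
 */
#include <stdio.h>
#include <librsync.h>

int update_image(const char *old_path, const char *new_path,
                 const char *result_path)
{
    FILE *old_f   = fopen(old_path, "rb");
    FILE *new_f   = fopen(new_path, "rb");
    FILE *out_f   = fopen(result_path, "wb");
    FILE *sig_f   = tmpfile();
    FILE *delta_f = tmpfile();
    rs_signature_t *sig = NULL;

    /* Target side: summarise the image it already has.  Only this small
     * signature needs to cross the link, not the image itself. */
    rs_sig_file(old_f, sig_f, RS_DEFAULT_BLOCK_LEN, RS_DEFAULT_STRONG_LEN,
                NULL);
    rewind(sig_f);

    /* Host side: load the signature, then emit a delta describing the
     * new image as blocks the target already holds plus literal data
     * for whatever actually changed.  This is where most CPU goes. */
    rs_loadsig_file(sig_f, &sig, NULL);
    rs_build_hash_table(sig);
    rs_delta_file(sig, new_f, delta_f, NULL);
    rewind(delta_f);

    /* Target side again: rebuild the new image from its old copy plus
     * the (hopefully small) delta. */
    rewind(old_f);
    rs_patch_file(old_f, delta_f, out_f, NULL);

    rs_free_sumset(sig);
    fclose(old_f); fclose(new_f); fclose(out_f);
    fclose(sig_f); fclose(delta_f);
    return 0;
}

If the host already knows exactly which image the target is running,
you would probably pre-compute the signature host-side and leave the
target with only the relatively cheap patch step.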
Often with embedded systems you are loading programs from scratch, so
there is nothing to "update". Even if you are updating "programs",
compiled binaries are often very different for only minor source
changes. It would be worth analysing your data to see if there are more
application-specific ways to minimise "updates" (like partitioning data
into discrete parts that can be updated independently).

> I guess you'd want to at least modify the checksum calculations and use
> shorter and faster checksums (as the data to be transferred is smaller
> than on workstations), and you'd need that file overwrite patch that's
> been floating around.

The checksum size you can use is a function of the data size, the block
size, and your acceptable failure rate. There are threads where this has
been analysed, and the latest rsync incorporates dynamic checksum and
block sizing based on that discussion. librsync doesn't have it yet, but
it could easily be added.
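To give a feel for the numbers: failures come from false block matches,
and roughly every byte offset of the new data (~file_len of them) gets
tested against every block signature (~file_len/block_len of them), so
the combined weak+strong checksum needs about
2*log2(file_len) - log2(block_len) bits plus a safety margin. Here is a
rough sketch of that sizing calculation; the 32-bit weak sum, the
one-byte floor and the 16-byte MD4 cap are my assumptions, not the
exact code rsync adopted:

#include <math.h>
#include <stddef.h>

/* How many bytes of strong checksum each block needs, given the data
 * size, the block size, and a safety margin in bits.  The expected
 * number of false matches is roughly
 *   file_len * (file_len / block_len) * 2^-(checksum_bits),
 * so keeping it below 2^-safety_bits needs
 *   checksum_bits >= 2*log2(file_len) - log2(block_len) + safety_bits.
 * The 32-bit weak rolling checksum supplies part of that; the strong
 * checksum supplies the rest. */
size_t strong_sum_bytes(double file_len, double block_len, int safety_bits)
{
    double total_bits  = safety_bits + 2.0 * log2(file_len)
                         - log2(block_len);
    double strong_bits = total_bits - 32.0;   /* 32-bit weak sum assumed */

    if (strong_bits < 8.0)
        strong_bits = 8.0;                    /* at least one byte */

    size_t bytes = (size_t)ceil(strong_bits / 8.0);
    return bytes > 16 ? 16 : bytes;           /* MD4 tops out at 16 bytes */
}

For a 128 KB image with 1 KB blocks and a 10-bit safety margin that
works out to a single byte of strong checksum per block, which is why
"shorter and faster checksums" falls out of the sizing maths rather
than needing a separate decision.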
I'd be interested to hear of anyone using or contemplating the rsync
algorithm on embedded systems.

-- 
Donovan Baarda <[EMAIL PROTECTED]>
http://minkirri.apana.org.au/~abo/

-- 
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html