Okay, so you're saying that if I have a large db and I make a change, rsync will not re-transfer the whole file, it will just transfer the small portion of the file that changed? Am I correct? Does the documentation say something like that anywhere?
On Wed, Dec 3, 2008 at 8:24 AM, Jamie Lokier <[EMAIL PROTECTED]> wrote:
> alexus wrote:
>> Not quite what I need.
>>
>> Let's take another example.
>>
>> Let's say I have a MySQL db, and only a few rows get changed on a daily
>> basis, yet the data file itself is huge, so rsync checks the checksum,
>> sees that it's different, and transfers the whole file. I use a remote
>> site, so the transfer takes a long time, plus we pay for our bandwidth,
>> so it costs us money too. Instead, there is a technology out there
>> called dedupe: it will slice the file into many blocks (I think 12k
>> each), checksum each block, and only transfer the ones that were
>> changed. This way, if very little changes during the day, the transfer
>> of a large file will drop dramatically, as it will only transfer one
>> small 12k block instead of 1G, for instance...
>>
>> So I was wondering if rsync is ever going to support this kind of
>> technology, as today there is more and more data and not enough time
>> to back it up...
>
> rsync has always supported this block-checksum technology, like "dedupe".
> It was the original reason for writing rsync!
>
> rsync is smarter than just checksumming every block, though, as it can
> detect arbitrary insertions and deletions too. That is what Matt means
> by the delta-transfer algorithm.
>
> If rsync is transferring the whole file in your example, there's
> something wrong with your command-line options or your measurements.
>
> -- Jamie

--
http://alexus.org/
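
For what it's worth, here is a rough sketch of the fixed-block scheme described above: checksum each block of the old and new copies and send only the blocks whose checksums differ. This is a toy illustration, not rsync's actual code, and the file names are made up. rsync's real delta-transfer algorithm, described in the rsync(1) man page and in Tridgell and Mackerras's "The rsync algorithm" technical report, goes further: it combines a rolling weak checksum with a strong checksum so it can match blocks at arbitrary byte offsets, which is how it copes with insertions and deletions as well.

import hashlib

BLOCK_SIZE = 12 * 1024  # 12k blocks, as in the dedupe example quoted above

def block_digests(path, block_size=BLOCK_SIZE):
    """Return one strong checksum per fixed-size block of the file."""
    digests = []
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            digests.append(hashlib.md5(block).hexdigest())
    return digests

def changed_blocks(old_path, new_path, block_size=BLOCK_SIZE):
    """Indices of blocks in new_path whose checksum differs from old_path."""
    old = block_digests(old_path, block_size)
    new = block_digests(new_path, block_size)
    changed = []
    for i, digest in enumerate(new):
        if i >= len(old) or old[i] != digest:
            changed.append(i)  # only these blocks would need to be sent
    return changed

if __name__ == "__main__":
    # Hypothetical file names, just to show the idea: if only a few rows
    # changed, only a handful of 12k blocks differ between the two copies.
    blocks = changed_blocks("db_yesterday.ibd", "db_today.ibd")
    print("blocks to send:", len(blocks))

To see the same effect on a real transfer, run rsync with --stats: the "Literal data" and "Matched data" lines show how much of the file actually went over the wire versus how much was reconstructed from the existing copy on the receiver. Also note that the delta-transfer algorithm is only used by default when the transfer goes over a remote connection; for local-to-local copies --whole-file is implied, and --no-whole-file turns the delta algorithm back on.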
