faster rsync of huge directories
Hi,

I am a great fan of rsync for copying filesystems. However, I now have a filesystem which is several hundred gigabytes and apparently has a lot of small files. I have been running rsync all night and it still has not started copying, as it is still building the file list.

Is there any way to get it to start copying as it goes? Or do any of you have a better tool?

Thanks,
-tom

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il
Re: faster rsync of huge directories
2010/4/12 Tom Rosenfeld tro...@bezeqint.net:
> I am a great fan of rsync for copying filesystems. However I now have a
> filesystem which is several hundred gigabytes and apparently has a lot of
> small files. I have been running rsync all night and it still did not
> start copying as it is still building the file list. Is there any way to
> get it to start copying as it goes? Or do any of you have a better tool?

Are both servers in the same LAN? IMHO, your problem is network bandwidth between source and destination. I have ~4M files, ~800GB; rsync is very fast in the same LAN (1Gb), and slow to a remote destination.

Regards,
Vitaly
[YBA] Rage
Dear list members,

I have about 15 PCI ATi Rage graphics cards dated 1995 and 1997, some Rage II, some earlier. I will through them in the trash next week unless someone claims them.

 - yba

--
 EE 77 7F 30 4A 64 2E C5 83 5F E7 49 A6 82 29 BA ~. .~ Tk Open Systems
=}ooO--U--Ooo{=
 - y...@tkos.co.il - tel: +972.2.679.5364, http://www.tkos.co.il -
Re: [YBA] Rage
That should be "throw", not "through". (Drat those auto-correctors!)

 - yba

On Mon, 12 Apr 2010, Jonathan Ben Avraham wrote:
> I have about 15 PCI ATi Rage graphics cards dated 1995 and 1997, some Rage
> II, some earlier. I will through them in the trash next week unless
> someone claims them.
Re: faster rsync of huge directories
Tom Rosenfeld wrote:
> I have been running rsync all night and it still did not start copying as
> it is still building the file list. Is there any way to get it to start
> copying as it goes? Or do any of you have a better tool?

Yes, there is a better tool. Upgrade both ends to rsync version 3 or later. That version starts the transfer even before the file list is completely built.

Shachar

--
Shachar Shemesh
Lingnu Open Source Consulting Ltd.
http://www.lingnu.com
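[Editor's note: whether an installed rsync is new enough for this can be checked from a shell. A minimal sketch, assuming a GNU userland (`sort -V`); the `awk` one-liner in the comment is an assumption about the format of `rsync --version` output, so verify it against your build.]

```shell
# Succeeds when the given version string is at least 3.0.0, i.e.
# new enough for rsync's incremental file-list transfer.
version_at_least_3() {
    min="3.0.0"
    # sort -V orders version strings numerically; if the smaller of
    # the two is our minimum, the candidate version is >= 3.0.0.
    lowest=$(printf '%s\n%s\n' "$min" "$1" | sort -V | head -n 1)
    [ "$lowest" = "$min" ]
}

# In practice you would feed it something like:
#   rsync --version | awk 'NR==1 {print $3}'
version_at_least_3 "3.0.7" && echo "3.0.7 is new enough"
version_at_least_3 "2.6.8" || echo "2.6.8 is too old"
```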
Re: faster rsync of huge directories
On Mon, Apr 12, 2010, Shachar Shemesh wrote about "Re: faster rsync of huge directories":
> Upgrade both ends to rsync version 3 or later. That version starts the
> transfer even before the file list is completely built.

Maybe I'm missing something, but how does this help? It may find the first file to copy a little quicker, but finishing the rsync will take exactly the same time, won't it? Also, if nothing has changed, it will take exactly the same time to figure this out, won't it?

I'm not sure what his problem is, though. Is it the fact that the remote rsync takes a very long time to walk the huge directory tree, or the fact that sending the whole list over the network is slow? If it's the first problem, then maybe switching to a different filesystem, or reorganizing your directory structure (e.g., not to have more than a few hundred files per directory), will help. If it's the second problem, then maybe rsync improvements are due - i.e., to use rsync's delta protocol not only on the individual files, but also on the file list.

--
Nadav Har'El | Monday, Apr 12 2010, 28 Nisan 5770
n...@math.technion.ac.il | Phone +972-523-790466, ICQ 13349191
http://nadav.harel.org.il | Fame: when your name is in everything but the phone book.
Re: faster rsync of huge directories
Nadav Har'El wrote:
> On Mon, Apr 12, 2010, Shachar Shemesh wrote about "Re: faster rsync of
> huge directories":
>> Upgrade both ends to rsync version 3 or later. That version starts the
>> transfer even before the file list is completely built.
>
> Maybe I'm missing something, but how does this help? It may find the first
> file to copy a little quicker, but finishing the rsync will take exactly
> the same time, won't it?

Not at all. If the two are done linearly, then only after the entire directory tree is scanned will the first transfer *begin*. The total transfer time will be tree scan time + transfer time for older rsyncs, but the two overlap for newer rsyncs. How much time exactly that would save really depends on how long the second part takes (i.e. - how much data you need to actually transfer).

> Also, if nothing has changed, it will take it exactly the same time to
> figure this out, won't it?

Yes. You might still save some time, but this, definitely, is where the advantage of newer rsyncs over older ones is minimal.

> I'm not sure what his problem is, though. Is it the fact that the remote
> rsync takes a very long time to walk the huge directory tree, or the fact
> that sending the whole list over the network is slow?

From my experience, it's mostly the former.

> If it's the first problem, then maybe switching to a different filesystem,

At the time, we tested ext3, jfs and xfs, and found no significant differences between them. It was not, however, a scientific test.

> or reorganizing your directory structure (e.g., not to have more than a
> few hundred files per directory) will help.

That is likely to actually help (plug: it is why rsyncrypto has the --ne-nesting option when encrypting file names), but is not always a viable option.

> If it's the second problem, then maybe rsync improvements are due - i.e.,
> to use rsync's delta protocol not only on the individual files, but also
> on the file list.

It's not the second, typically.

Shachar

--
Shachar Shemesh
Lingnu Open Source Consulting Ltd.
http://www.lingnu.com
Nexus One
Hi,

How many list subscribers use the Nexus One? (I know Gilad does.) What are your impressions? Is it worth purchasing? Where is it worth purchasing?

Thanks

Keywords: Nexus One, Google phone, HTC, Android

--
Constantine Shulyupin
Embedded Linux Expert
TI DaVinci Expert
Tel-Aviv, Israel
http://www.LinuxDriver.co.il/
Re: faster rsync of huge directories
On Mon, Apr 12, 2010 at 9:41 AM, Tom Rosenfeld tro...@bezeqint.net wrote:
> I have been running rsync all night and it still did not start copying as
> it is still building the file list. Is there any way to get it to start
> copying as it goes? Or do any of you have a better tool?

Thanks for all the suggestions!

I realized that in my case I did not really need rsync, since it is a local disk-to-disk copy. I could have used a tar and pipe, but I like cpio:

  find $FROMDIR -depth -print | cpio -pdma $TODIR

By default cpio also will not overwrite files if the source is not newer.

It was also pointed out that version 3 of rsync now does start to copy before it indexes all the files. Unfortunately, it is not available on CentOS 5.

-tom
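[Editor's note: the tar-and-pipe alternative Tom mentions can be sketched as below; the directories are throwaway temporaries created just for the demo.]

```shell
# One tar writes the source tree to stdout; a second tar extracts
# it at the destination. Demo directories are temporary.
FROMDIR=$(mktemp -d)
TODIR=$(mktemp -d)
mkdir -p "$FROMDIR/sub"
echo "hello" > "$FROMDIR/sub/file.txt"

# -C switches directory first, so archive paths stay relative;
# -p preserves permissions on extraction.
tar -C "$FROMDIR" -cf - . | tar -C "$TODIR" -xpf -

cat "$TODIR/sub/file.txt"   # prints "hello"
```

Note that, unlike cpio -pdma or cp -u, a plain tar pipe copies everything unconditionally; it has no "only if newer" check.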
Re: faster rsync of huge directories
Tom Rosenfeld wrote:
> I realized that in my case I did not really need rsync, since it is a
> local disk-to-disk copy.

Please note that rsync from local to local is just a glorified cp. It does not do file comparisons at all.

> It was also pointed out that version 3 of rsync now does start to copy
> before it indexes all the files. Unfortunately, it is not available on
> CentOS 5.

  wget http://samba.anu.edu.au/ftp/rsync/src/rsync-3.0.7.tar.gz
  tar xvzf rsync-3.0.7.tar.gz
  cd rsync-3.0.7
  ./configure
  make
  su
  make install

Shachar

--
Shachar Shemesh
Lingnu Open Source Consulting Ltd.
http://www.lingnu.com
Re: faster rsync of huge directories
On Mon, Apr 12, 2010, Tom Rosenfeld wrote about "Re: faster rsync of huge directories":
> I realized that in my case I did not really need rsync since it is a local
> disk to disk copy. I could have used a tar and pipe, but I like cpio:

Is this quicker? If it is, then the reason for rsync's extreme slowness which you described was *not* the filesystem speed. It has to be something else. Maybe rsync simply uses tons of memory, and starts thrashing? (But this is just a guess; I didn't look at its code.) If this is the case, then the copy-while-building-the-list that Shachar described might indeed be a big win.

> find $FROMDIR -depth -print | cpio -pdma $TODIR
>
> By default cpio also will not overwrite files if the source is not newer.

I recommend you use the -print0 option to find instead of -print, and add the -0 option to cpio. These are GNU extensions to find and cpio (and a bunch of other commands as well) that use nulls, instead of newlines, to separate the file names. This allows newline characters in filenames (these aren't common, but nevertheless are legal...).

By the way, while cpio -p is indeed a good historic tool, nowadays there is little reason to use it, because GNU's cp makes it easy to do almost everything that cpio -p did: the -a option to cp is recursive and copies links, modes, timestamps and so on, and the -u option will only copy if the source is newer than the destination (or the destination is missing). So

  cp -au $FROMDIR $TODIR

is shorter and easier to remember than find | cpio -p. But please note I didn't test this command, so don't use it on your important data without thinking first!

--
Nadav Har'El | Monday, Apr 12 2010, 28 Nisan 5770
n...@math.technion.ac.il | Phone +972-523-790466, ICQ 13349191
http://nadav.harel.org.il | I have a watch cat! If someone breaks in, she'll watch.
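[Editor's note: Nadav's point about -print0 can be demonstrated without cpio. The sketch below plants a filename containing a newline and shows that line-based counting miscounts while NUL-based counting does not.]

```shell
# A filename with an embedded newline breaks line-oriented
# pipelines; NUL separators survive it.
DIR=$(mktemp -d)
nl='
'
: > "$DIR/plain.txt"
: > "$DIR/with${nl}newline.txt"

# Line-based: the newline inside the name splits one entry in two,
# so two files show up as three lines.
find "$DIR" -type f -print | wc -l

# NUL-based: count the NUL separators instead - exactly one per file.
find "$DIR" -type f -print0 | tr -cd '\0' | wc -c   # prints 2
```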
Re: faster rsync of huge directories
> By default cpio also will not overwrite files if the source is not newer.

Consider cp -ur. rsync can also --delete extraneous files from the destination dirs.

--
Constantine Shulyupin
Embedded Linux Expert
TI DaVinci Expert
Tel-Aviv, Israel
http://www.LinuxDriver.co.il/
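[Editor's note: a quick sketch of the difference being pointed out here, using temporary directories: cp -ur updates the destination but never removes files that exist only there, which is what rsync's --delete is for.]

```shell
# cp -ur copies new/updated files but leaves extraneous
# destination files in place; rsync --delete would remove them.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "v1" > "$SRC/keep.txt"
echo "old" > "$DST/stale.txt"   # exists only in the destination

cp -ur "$SRC/." "$DST/"

# keep.txt arrived, but stale.txt was not deleted.
[ -f "$DST/keep.txt" ] && [ -f "$DST/stale.txt" ] && \
    echo "cp left the stale file behind"
```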
Re: [YBA] Rage
Hopefully into the recycled-electronics trash, though these are still far too rare in Israel. See:

http://www.sviva.gov.il/bin/en.jsp?enPage=BlankPage&enDisplay=view&enDispWhat=Object&enDispWho=Articals^l1409&enZone=recycle_material

Regards,
Dov

On Mon, Apr 12, 2010 at 10:27, Jonathan Ben Avraham y...@tkos.co.il wrote:
> I have about 15 PCI ATi Rage graphics cards dated 1995 and 1997, some Rage
> II, some earlier. I will through them in the trash next week unless
> someone claims them.
RE: faster rsync of huge directories
Check out Repliweb.

From: linux-il-boun...@cs.huji.ac.il [mailto:linux-il-boun...@cs.huji.ac.il] On Behalf Of Tom Rosenfeld
Sent: Monday, April 12, 2010 9:41 AM
To: linux-il@cs.huji.ac.il
Subject: faster rsync of huge directories

> I am a great fan of rsync for copying filesystems. However I now have a
> filesystem which is several hundred gigabytes and apparently has a lot of
> small files. I have been running rsync all night and it still did not
> start copying as it is still building the file list. Is there any way to
> get it to start copying as it goes? Or do any of you have a better tool?
Re: faster rsync of huge directories
On Mon, Apr 12, 2010 at 10:04 AM, Vitaly li...@karasik.org wrote:
> Are both servers in the same LAN? IMHO, your problem is network bandwidth
> between source and destination. I have ~4M files, ~800GB; rsync is very
> fast in the same LAN (1Gb), and slow to a remote destination.

I am not even using a LAN. It is disk to disk. I have ~16M files, ~900GB. rsync has been running about 18 hours and has indexed over 8 million files, but still did not copy even one.

Thanks,
-tom
Job Offer: Experienced Embedded Linux Networking SW engineer
Hi all,

Wavion is looking for an experienced Embedded Linux Networking SW engineer. Please find below the job profile.

Job description: SW design and development for a wireless network system.

Requirements:
- Hands-on experience in SW design and development of networking applications - A MUST
- B.Sc. or B.A. in Software Engineering, Computer Science, Computer Engineering or Electrical Engineering
- At least 3 years of experience in development of networking embedded systems in an Embedded Linux environment - A MUST
- Linux kernel expert - A MUST
- Proficiency in networking systems, with proven experience in development in the following areas:
  - Radius/billing
  - QoS, SLA and Management System
  - Networking protocols
- Excellent knowledge of C/C++
- Experience with networking equipment - Access Gateway and Access Controller
- Familiarity with IEEE 802.11 standards - an advantage
- Hands-on experience in development of wireless network systems - an advantage
- Good human relations (team player)
- High level of Hebrew and English

Please send CVs to: a...@wavionnetworks.com

Please note that Wavion sits in Yoqneam, North Israel.

Thanks,

Avi Vaknin
SW Group Manager
Wavion Wireless Networks
www.wavionnetworks.com