Hi,

I've been digging a bit deeper into this. I got a tip to put LVM under my
DRBD device and mount an LVM snapshot read-only with localflocks. I've gone
this way, and the backup is even slower from this snapshot volume. I'm
pretty convinced my slowdowns are not due to the cluster locking.
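For reference, that snapshot approach looks roughly like this; a minimal
sketch, assuming a volume group named vg0 and a logical volume named scans
(the VG/LV names, snapshot size, mountpoint and backup destination are
hypothetical, not taken from the thread):

    # Create a snapshot of the LV backing the DRBD/OCFS2 device
    # (VG/LV names and snapshot size are hypothetical)
    lvcreate --snapshot --size 10G --name scans_snap /dev/vg0/scans

    # Mount it read-only with localflocks, so the backup reads
    # bypass cluster locking
    mkdir -p /mnt/scans_snap
    mount -t ocfs2 -o ro,localflocks /dev/vg0/scans_snap /mnt/scans_snap

    # Back up from the snapshot, then clean up
    rsync -a /mnt/scans_snap/ backupserver:/backup/scans/
    umount /mnt/scans_snap
    lvremove -f /dev/vg0/scans_snap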
Things I've noticed:

- Performance (on the original volume and the snapshot volume) seems to
  degrade as the number of files in a directory rises. By performance I
  mean operations where every file needs to be checked, like rsync or
  'find . -mtime -1 -print'.
- Performance for my application is great (writing a couple of thousand
  files a day and reading a couple of hundred thousand a day); dd tests
  give me 200 MB/s writes and 600 MB/s reads.

Am I missing something here, or will OCFS2 just never work for my
backups...?

Kind regards,

Dirk

On 9-4-2012 16:40, Sérgio Surkamp wrote:
> Hi Dirk,
>
> Does the application running on your servers use POSIX flocks? If it
> doesn't, you can try disabling them by appending the localflocks
> parameter to your fstab. This will prevent the propagation of flocks
> through the cluster stack and may improve performance.
>
> Another parameter you could play with is data, which changes the way
> the filesystem flushes data to disk. Try setting data=writeback and
> report back if it improves performance. Warning: setting
> data=writeback can result in deleted files reappearing if the
> operating system crashes/panics. If this behaviour could cause
> problems for your applications, then forget about changing the data
> parameter.
>
> Regards,
> Sérgio
>
> On Wed, 04 Apr 2012 18:46:39 +0200
> Dirk Bonenkamp - ProActive <d...@proactive.nl> wrote:
>
>> Hi Luis,
>>
>> Thank you for your reply. Noatime is enabled:
>>
>> /dev/drbd0 on /var/scans type ocfs2
>> (rw,_netdev,noatime,cluster_stack=pcmk)
>>
>> Without noatime, performance was even worse!
>>
>> Cheers,
>>
>> Dirk
>>
>> On 4-4-2012 18:44, Luis Freitas wrote:
>>> Dirk,
>>>
>>> Also, you should have noatime enabled on this filesystem; check
>>> your mount options. Otherwise the rsync will end up causing access
>>> times to be updated.
>>>
>>> Regards,
>>> Luis
>>>
>>> *From:* Eduardo Diaz - Gmail <ediaz...@gmail.com>
>>> *To:* Dirk Bonenkamp - ProActive <d...@proactive.nl>
>>> *Cc:* ocfs2-users@oss.oracle.com
>>> *Sent:* Tuesday, April 3, 2012 11:26 AM
>>> *Subject:* Re: [Ocfs2-users] Backup issues
>>>
>>> For backups I use a local copy made with tar, but I use rsync too.
>>>
>>> I haven't compared them, but if you send some statistics about your
>>> rsync runs we can say more. Use rsync --stats.
>>>
>>> I haven't noticed different speeds, but did you do a full rsync,
>>> and did you use --progress to see which file is transferring at
>>> what speed?
>>>
>>> Regards
>>>
>>> On Mon, Apr 2, 2012 at 9:04 PM, Dirk Bonenkamp - ProActive
>>> <d...@proactive.nl> wrote:
>>>
>>> Hi All,
>>>
>>> I'm currently testing an OCFS2 set-up, and I'm having issues with
>>> creating backups.
>>>
>>> I have a 2-node cluster, running OCFS2 on a dual-primary DRBD
>>> device.
>>>
>>> The file system is 3.7 TB, of which 211 GB is used: about 1.5
>>> million files in 95 directories.
>>>
>>> Everything works fine, except for the backups, which are taking
>>> way more time than on 'regular' file systems.
>>>
>>> I'm using rsync for my backups. When I rsync the file system
>>> above, it takes more than an hour, without any modifications to
>>> the file system.
>>>
>>> Network / disk speed is good. I can rsync a 10 GB file from the
>>> OCFS2 filesystem to the same backup server at just under 100 MB/s.
>>>
>>> I know there is some penalty to be expected from a clustered
>>> file system, but this is a lot.
>>> Rsyncing an ext3 file system double the size (in MB and file
>>> count) of this file system takes about 600 seconds...
>>>
>>> Has anybody got some advice on a backup strategy for me? Or some
>>> tuning tips?
>>>
>>> Thanks in advance,
>>>
>>> Dirk
>>>
>>> --
>>> T 023 - 5422299
>>> F 023 - 5422728
>>> www.proactive.nl
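For anyone wanting to try Sérgio's suggestions above, the fstab entry would
look something like the sketch below. The device, mountpoint and the
_netdev/noatime options come from the mount output Dirk quoted;
localflocks and data=writeback are the proposed additions, and the
data=writeback caveat about deleted files reappearing after a crash still
applies:

    # /etc/fstab -- sketch only; localflocks disables cluster-wide
    # flock propagation, data=writeback relaxes journal ordering
    /dev/drbd0  /var/scans  ocfs2  _netdev,noatime,localflocks,data=writeback  0 0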
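Likewise, Eduardo's --stats/--progress suggestion as one hedged example
(source and destination paths are placeholders):

    # Full rsync with per-file progress and an end-of-run summary of
    # files scanned vs. actually transferred (paths are placeholders)
    rsync -a --stats --progress /var/scans/ backupserver:/backup/scans/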