Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-22 Thread Bob Friesenhahn
On Mon, 22 Jun 2009, Thomas wrote: I have a raidz1 consisting of 6 5400rpm drives in this zpool. I have stored some media in one FS and about 200k files in another. Neither FS is written to much. The pool is 85% full. Could this issue also be the reason that, when I am playing (reading) some media, the

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-19 Thread Rainer Orth
Richard Elling richard.ell...@gmail.com writes: George would probably have the latest info, but there were a number of things which circled around the notorious "Stop looking and start ganging" bug report, http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6596237 Indeed: we were

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-19 Thread Roch Bourbonnais
On 18 Jun 09, at 20:23, Richard Elling wrote: Cor Beumer - Storage Solution Architect wrote: Hi Jose, Well, it depends on the total size of your zpool and how often these files are changed. ...and the average size of the files. For small files, it is likely that the default

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-18 Thread Louis Romero
Hi Dirk, How might we explain running find on a Linux client against an NFS-mounted file system under the 7000 taking significantly longer (i.e. performance behaving as though the command was run from Solaris)? I am not sure find would have the intelligence to differentiate between file system

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-18 Thread Cor Beumer - Storage Solution Architect
Hi Jose, Well, it depends on the total size of your zpool and how often these files are changed. I was at a customer, a huge internet provider, who had 40 X4500s with standard Solaris, using ZFS. All the machines were equipped with 48x 1TB disks. The machines were used to provide the

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-18 Thread Richard Elling
Cor Beumer - Storage Solution Architect wrote: Hi Jose, Well, it depends on the total size of your zpool and how often these files are changed. ...and the average size of the files. For small files, it is likely that the default recordsize will not be optimal, for several reasons. Are
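
For readers who want to act on Richard's recordsize point: the property can be tuned per dataset before the small files are written, since it only affects newly written blocks. A minimal sketch, assuming a hypothetical dataset tank/smallfiles and an 8K target size:

    # set a smaller recordsize for a dataset of small files
    # (applies to newly written files only; names are hypothetical)
    zfs set recordsize=8k tank/smallfiles
    zfs get recordsize tank/smallfiles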

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-18 Thread Gary Mills
On Thu, Jun 18, 2009 at 12:12:16PM +0200, Cor Beumer - Storage Solution Architect wrote: What they noticed on the X4500 systems was that when the zpool filled up to about 50-60%, the performance of the system dropped enormously. They claim this has to do with the fragmentation

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-18 Thread Richard Elling
Gary Mills wrote: On Thu, Jun 18, 2009 at 12:12:16PM +0200, Cor Beumer - Storage Solution Architect wrote: What they noticed on the X4500 systems was that when the zpool filled up to about 50-60%, the performance of the system dropped enormously. They claim this has to do with

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-17 Thread Roch Bourbonnais
On 16 Jun 09, at 19:55, Jose Martins wrote: Hello experts, IHAC (I have a customer) that wants to put more than 250 million files on a single mountpoint (in a directory tree with no more than 100 files in each directory). He wants to share such a filesystem over NFS and mount it on many Linux Debian clients
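
To put Jose's numbers in perspective, a quick back-of-envelope count of the directory metadata alone, assuming a hypothetical balanced tree with at most 100 entries per directory:

    import math

    # 250M files, at most 100 files per directory (per Jose's layout)
    files, fanout = 250_000_000, 100
    dirs, level = 0, math.ceil(files / fanout)  # 2,500,000 leaf directories
    while level > 1:
        dirs += level
        level = math.ceil(level / fanout)       # parents one level up
    dirs += 1                                   # the root of the tree
    print(f"{dirs:,}")                          # prints 2,525,254

So on top of the 250 million file objects, the layout implies roughly 2.5 million directories, every one of them another metadata object that the filesystem (and any full find or backup walk) has to touch.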

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-17 Thread robert ungar
Jose, I hope our OpenStorage experts weigh in on "is this a good idea"; it sounds scary to me, but I'm overly cautious anyway. I did want to raise the question of other client expectations for this opportunity: what are the intended data protection requirements, and how will they back up and

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-17 Thread Eric D. Mudama
On Wed, Jun 17 at 13:49, Alan Hargreaves wrote: Another question worth asking here is: is a find over the entire filesystem something that they would expect to be executed with sufficient regularity that the execution time would have a business impact? Exactly. That's such an odd

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-17 Thread Louis Romero
Jose, I believe the problem is endemic to Solaris. I have run into similar problems doing a simple find(1) in /etc. On Linux, a find operation in /etc is almost instantaneous. On Solaris, it has a tendency to spin for a long time. I don't know what their use of find might be, but,
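
For anyone who wants to reproduce Louis's comparison, a quick hedged check (results will vary with cache state and directory contents):

    # run twice on each system; the second, warm-cache run shows
    # how much of the gap is metadata I/O rather than the algorithm
    time find /etc > /dev/null
    time find /etc > /dev/null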

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-17 Thread Dirk Nitschke
Hi Louis! Solaris /usr/bin/find and Linux (GNU) find work differently! I experienced dramatic runtime differences some time ago. The reason is that Solaris find and GNU find use different algorithms. GNU find uses the st_nlink (number of links) field of the stat structure to
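
The optimization Dirk is describing can be sketched in a few lines. This is a minimal illustration, not GNU find's actual code: on classic Unix filesystems a directory's link count is 2 plus the number of subdirectories (one link for ".", one for each child's ".."), so once all subdirectories have been found, the remaining entries need no stat() calls at all:

    import os
    import stat

    def find(path):
        # st_nlink - 2 = number of subdirectories on classic
        # Unix filesystems ("." and each child's "..")
        subdirs_left = os.stat(path).st_nlink - 2
        for name in os.listdir(path):
            full = os.path.join(path, name)
            yield full
            # short-circuit: once subdirs_left hits 0, lstat()
            # is never called again in this directory
            if subdirs_left > 0 and stat.S_ISDIR(os.lstat(full).st_mode):
                subdirs_left -= 1
                yield from find(full)

The flip side, as the thread notes, is that the trick silently assumes those link-count semantics; a filesystem that reports st_nlink differently defeats (or breaks) the shortcut.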

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-17 Thread Casper . Dik
Hi Louis! Solaris /usr/bin/find and Linux (GNU) find work differently! I experienced dramatic runtime differences some time ago. The reason is that Solaris find and GNU find use different algorithms. GNU find uses the st_nlink (number of links) field of the stat structure to

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-17 Thread Joerg Schilling
Dirk Nitschke dirk.nitsc...@sun.com wrote: Solaris /usr/bin/find and Linux (GNU) find work differently! I experienced dramatic runtime differences some time ago. The reason is that Solaris find and GNU find use different algorithms. Correct: Solaris find honors the POSIX standard,

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-16 Thread Paisit Wongsongsarn
Hi Jose, Would enabling SSD (cache-device usage) only for metadata help? Assuming that you have a read-optimized SSD in place. I have never tried it, but it is worth trying by just turning it on. Regards, Paisit W. Jose Martins wrote: Hello experts, IHAC that wants to put more than 250 million files on a
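
What Paisit is suggesting maps onto two real knobs in ZFS releases that have L2ARC support: a cache device is added with zpool, and the secondarycache property controls whether it holds everything or metadata only. A minimal sketch, with the pool and device names hypothetical:

    # add a read-optimized SSD as an L2ARC cache device
    zpool add tank cache c1t5d0

    # cache only metadata (not file data) on the cache device
    zfs set secondarycache=metadata tank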