Re: Re[2]: [zfs-discuss] Re: Recommendation ZFS on StorEdge 3320

2006-09-08 Thread Roch - PAE
ZFS hogs all the RAM under a sustained heavy write load. This is being tracked by: 6429205 each zpool needs to monitor its throughput and throttle heavy writers -r ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

[zfs-discuss] Re: when zfs enabled java

2006-09-13 Thread Roch - PAE
Jill Manfield writes: My customer is running java on a ZFS file system. His platform is Solaris 10 x86 SF X4200. When he enabled ZFS his memory of 18 gigs drops to 2 gigs rather quickly. I had him do a # ps -e -o pid,vsz,comm | sort -n +1 and it came back: The culprit

Re: [zfs-discuss] Re: Re: Recommendation ZFS on StorEdge 3320

2006-09-14 Thread Roch - PAE
With ZFS however the in-between cache is obsolete, as individual disk caches can be used directly. The statement needs to be qualified. Storage cache, if protected, works great to reduce critical op latency. ZFS, when it writes to the disk cache, will flush data out before returning to

Re: [zfs-discuss] Re: ZFS hangs systems during copy

2006-10-26 Thread Roch - PAE
Jürgen Keil writes: ZFS 11.0 on Solaris release 06/06 hangs systems when trying to copy files from my VXFS 4.1 file system. Any ideas what this problem could be? What kind of system is that? How much memory is installed? I'm able to hang an Ultra 60 with 256 MByte of main

Re: [zfs-discuss] Re: zpool import takes too long with large numbers of file systems

2006-10-27 Thread Roch - PAE
as an alternative, I thought this would be relevant to the discussion: Bug ID: 6478980 Synopsis: zfs should support automount property In other words, do we really need to mount 1 FS in a snap, or do we just need the system to be up quickly and then mount on demand -r

Re: [zfs-discuss] thousands of ZFS file systems

2006-10-31 Thread Roch - PAE
Erblichs writes: Hi, My suggestion is to direct any command output that may print thousands of lines to a file. I have not tried that number of FSs. So, my first suggestion is to have a lot of phys mem installed. I seem to recall 64K per FS and being worked on to

RE: [zfs-discuss] ZFS Performance Question

2006-11-02 Thread Roch - PAE
Luke Lonergan writes: Robert, I believe it's not solved yet but you may want to try with the latest Nevada and see if there's a difference. It's fixed in the upcoming Solaris 10 U3 and also in Solaris Express post build 47 I think. - Luke This one is not yet fixed :

Re: [zfs-discuss] User quotas. A recurring question

2006-11-02 Thread Roch - PAE
Chris Gerhard writes: One question that keeps coming up in my discussions about ZFS is the lack of user quotas. Typically this comes from people who have many tens of thousands (30,000 - 100,000) of users where they feel that having a file system per user will not be manageable. I

Re: [zfs-discuss] ZFS Performance Question

2006-11-02 Thread Roch - PAE
How much memory in the V210 ? UFS will recycle its own pages while creating files that are big. ZFS working against a large heap of free memory will cache the data (why not?). The problem is that ZFS does not know when to stop. During the subsequent memory/cache reclaim, ZFS is potentially not

Re: [zfs-discuss] ZFS direct i/o

2006-11-07 Thread Roch - PAE
Here is my take on this http://blogs.sun.com/roch/entry/zfs_and_directio -r Marlanne DeLaSource writes: I had a look at various topics covering ZFS direct I/O, and this topic is sometimes mentioned, and it was not really clear to me. Correct me if I'm wrong Direct I/O

Re: [zfs-discuss] Some performance questions with ZFS/NFS/DNLC at snv_48

2006-11-13 Thread Roch - PAE
Tomas Ögren writes: On 13 November, 2006 - Sanjeev Bagewadi sent me these 7,1K bytes: Tomas, comments inline... arc::print struct arc { anon = ARC_anon mru = ARC_mru mru_ghost = ARC_mru_ghost mfu = ARC_mfu

Re: [zfs-discuss] poor NFS/ZFS performance

2006-11-22 Thread Roch - PAE
? Thanks Matt. Roch - PAE wrote on 11/21/06 11:28: Matthew B Sweeney - Sun Microsystems Inc. writes: Hi I have an application that uses NFS between a Thumper and a 4600. The Thumper exports 2 ZFS filesystems that the 4600 uses as an inqueue and outqueue

Re: [zfs-discuss] poor NFS/ZFS performance

2006-11-23 Thread Roch - PAE
Nope, wrong conclusion again. This large performance degradation has nothing whatsoever to do with ZFS. I have not seen data that would show a possible slowness on the part of ZFS vs AnyFS on the backend; there may well be, and that would be an entirely different discussion to the large

Re: [zfs-discuss] poor NFS/ZFS performance

2006-11-24 Thread Roch - PAE
MB/s. Not a huge difference for sure, but enough to make you think about switching. This was single stream over a 10GE link. (x4600 mounting vols from an x4500) Matt Bill Moore wrote: On Thu, Nov 23, 2006 at 03:37:33PM +0100, Roch - PAE wrote: Al Hopper writes: Hi

Re: [zfs-discuss] Re: ZFS on multi-volume

2006-12-05 Thread Roch - PAE
How about attaching the slow storage and kicking off a scrub during the nights ? Then detach in the morning ? Downside: you are running an unreplicated pool during the day. Storage-side errors won't be recoverable. -r Albert Shih writes: On 04/12/2006 at 21:24:26-0800, Anton B. Rang wrote

Re: [zfs-discuss] Limitations of ZFS

2006-12-07 Thread Roch - PAE
Why do people strongly recommend using a whole disk (not part of a disk) when creating zpools / ZFS file systems ? One thing is performance; ZFS can enable/disable the write cache in the disk at will if it has full control over the entire disk. ZFS will also flush the WC when

Re: [zfs-discuss] Re: zpool import takes too long with large numbers of file systems

2006-12-08 Thread Roch - PAE
I got around that some time ago with a little hack; Maintain a directory with soft links to disks of interest: ls -l .../mydsklist total 50 lrwxrwxrwx 1 cx158393 staff 17 Apr 29 2006 c1t0d0s1 -> /dev/dsk/c1t0d0s1 lrwxrwxrwx 1 cx158393 staff 18 Apr 29 2006 c1t16d0s1 ->
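The hack above boils down to a directory of symlinks that `zpool import -d` can then be pointed at. A minimal sketch of building such a directory — the device paths are placeholders, and the directory is created under a temp dir purely for illustration:

```python
import os
import tempfile

def build_disk_dir(target_dir, device_paths):
    """Populate target_dir with symlinks named after each device, so that
    'zpool import -d target_dir' scans only those devices."""
    os.makedirs(target_dir, exist_ok=True)
    for dev in device_paths:
        link = os.path.join(target_dir, os.path.basename(dev))
        if not os.path.islink(link):
            os.symlink(dev, link)
    return sorted(os.listdir(target_dir))

d = os.path.join(tempfile.mkdtemp(), "mydsklist")
links = build_disk_dir(d, ["/dev/dsk/c1t0d0s1", "/dev/dsk/c1t16d0s1"])
# zpool import -d .../mydsklist   would then consider only these two slices
```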

Re: [zfs-discuss] Re: ZFS Usage in Warehousing (lengthy intro)

2006-12-11 Thread Roch - PAE
Anton B. Rang writes: If your database performance is dominated by sequential reads, ZFS may not be the best solution from a performance perspective. Because ZFS uses a write-anywhere layout, any database table which is being updated will quickly become scattered on the disk, so that

Re: [zfs-discuss] How to do DIRECT IO on ZFS ?

2006-12-12 Thread Roch - PAE
Maybe this will help: http://blogs.sun.com/roch/entry/zfs_and_directio -r dudekula mastan writes: Hi All, We have the directio() system call to do direct I/O on a UFS file system. Does anyone know how to do direct I/O on a ZFS file system? Regards Masthan

Re: [zfs-discuss] Monitoring ZFS

2006-12-13 Thread Roch - PAE
The latency issue might improve with this rfe 6471212 need reserved I/O scheduler slots to improve I/O latency of critical ops -r Tom Duell writes: Group, We are running a benchmark with 4000 users simulating a hospital management system running on Solaris 10 6/06 on USIV+ based

Re: [zfs-discuss] Re: ZFS Storage Pool advice

2006-12-14 Thread Roch - PAE
Right on. And you might want to capture this in a blog for reference. The permalink will be quite useful. We did have a use case for zil synchronicity which was a big user-controlled transaction: turn zil off, do tons of things to the filesystem, big sync, turn

Re: [zfs-discuss] Re: Disappearing directories

2006-12-18 Thread Roch - PAE
Was it over NFS ? Was zil_disable set on the server ? If it's yes/yes, I still don't know for sure if that would be grounds for a causal relationship, but I would certainly be looking into it. -r Trevor Watson writes: Anton B. Rang wrote: Were there any errors reported in

Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2006-12-19 Thread Roch - PAE
Jason J. W. Williams writes: Hi Jeremy, It would be nice if you could tell ZFS to turn off fsync() for ZIL writes on a per-zpool basis. That being said, I'm not sure there's a consensus on that...and I'm sure not smart enough to be a ZFS contributor. :-) The behavior is a

Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Roch - PAE
Shouldn't there be a big warning when configuring a pool with no redundancy and/or should that not require a -f flag ? -r Al Hopper writes: On Sun, 17 Dec 2006, Ricardo Correia wrote: On Friday 15 December 2006 20:02, Dave Burleson wrote: Does anyone have a document that describes

Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Roch - PAE
Jonathan Edwards writes: On Dec 19, 2006, at 07:17, Roch - PAE wrote: Shouldn't there be a big warning when configuring a pool with no redundancy and/or should that not require a -f flag ? why? what if the redundancy is below the pool .. should we warn that ZFS isn't

Re: Re[2]: [zfs-discuss] Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Roch - PAE
Robert Milkowski writes: Hello przemolicc, Friday, December 22, 2006, 10:02:44 AM, you wrote: ppf On Thu, Dec 21, 2006 at 04:45:34PM +0100, Robert Milkowski wrote: Hello Shawn, Thursday, December 21, 2006, 4:28:39 PM, you wrote: SJ All, SJ I understand that

Re: [zfs-discuss] Re: ZFS over NFS extra slow?

2007-01-03 Thread Roch - PAE
I've just generated some data for an upcoming blog entry on the subject. This is about a small-file tar extract. All times are elapsed (single 72GB SAS disk). Local and memory based filesystems tmpfs : 0.077 sec ufs : 0.25 sec zfs : 0.12 sec NFS service

Re: [zfs-discuss] Re: Re[2]: RAIDZ2 vs. ZFS RAID-10

2007-01-04 Thread Roch - PAE
Anton B. Rang writes: In our recent experience RAID-5, due to the 2 reads, a XOR calc and a write op per write instruction, is usually much slower than RAID-10 (two write ops). Any advice is greatly appreciated. RAIDZ and RAIDZ2 do not suffer from this malady (the RAID5 write

[zfs-discuss] NFS and ZFS, a fine combination

2007-01-08 Thread Roch - PAE
Just posted: http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine Performance, Availability Architecture Engineering. Roch Bourbonnais, Sun Microsystems,

Re: [zfs-discuss] NFS and ZFS, a fine combination

2007-01-08 Thread Roch - PAE
Hans-Juergen Schnitzer writes: Roch - PAE wrote: Just posted: http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine What role does network latency play? If I understand you right, even a low-latency network, e.g. InfiniBand, would not increase performance

Re: [zfs-discuss] NFS and ZFS, a fine combination

2007-01-09 Thread Roch - PAE
Dennis Clarke writes: On Mon, Jan 08, 2007 at 03:47:31PM +0100, Peter Schuller wrote: http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine So just to confirm; disabling the zil *ONLY* breaks the semantics of fsync() and synchronous writes from the application

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-15 Thread Roch - PAE
Jonathan Edwards writes: On Jan 5, 2007, at 11:10, Anton B. Rang wrote: DIRECT IO is a set of performance optimisations to circumvent shortcomings of a given filesystem. Direct I/O as generally understood (i.e. not UFS-specific) is an optimization which allows data to

Re: [zfs-discuss] Re: Re: Heavy writes freezing system

2007-01-18 Thread Roch - PAE
If some aspect of the load is writing large amounts of data into the pool (through the memory cache, as opposed to the zil) and that leads to a frozen system, I think that a possible contributor should be: 6429205 each zpool needs to monitor its throughput and throttle heavy
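Bug 6429205 asks each zpool to monitor its throughput and throttle heavy writers rather than buffer everything in memory. As an illustration of the general idea only — this is a toy token bucket, not the mechanism ZFS itself uses — admission control for writers can be sketched as:

```python
class WriteThrottle:
    """Toy token-bucket throttle: admit at most rate_bps bytes per second,
    crediting tokens as simulated time advances. Illustrates throttling
    heavy writers instead of caching their data without bound."""
    def __init__(self, rate_bps, burst):
        self.rate = rate_bps
        self.burst = burst
        self.tokens = burst

    def advance(self, seconds):
        # Credit tokens for elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + seconds * self.rate)

    def admit(self, nbytes):
        # True if the write may proceed now; a real implementation would
        # delay the writer instead of rejecting the request.
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

t = WriteThrottle(rate_bps=100, burst=100)
first = t.admit(80)    # fits within the initial burst
second = t.admit(80)   # exceeds the remaining tokens, so throttled
t.advance(1.0)         # one second credits 100 tokens (capped at burst)
third = t.admit(80)    # admitted again after the credit
```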

Re: [zfs-discuss] Re: Heavy writes freezing system

2007-01-18 Thread Roch - PAE
Jason J. W. Williams writes: Hi Anantha, I was curious why segregating at the FS level would provide adequate I/O isolation? Since all FS are on the same pool, I assumed flogging a FS would flog the pool and negatively affect all the other FS on that pool? Best Regards, Jason

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-24 Thread Roch - PAE
[EMAIL PROTECTED] writes: Note also that for most applications, the size of their IO operations would often not match the current page size of the buffer, causing additional performance and scalability issues. Thanks for mentioning this, I forgot about it. Since ZFS's default

Re: [zfs-discuss] Re: Re: ZFS or UFS - what to do?

2007-01-29 Thread Roch - PAE
Anantha N. Srirama writes: Agreed, I guess I didn't articulate my point/thought very well. The best config is to present JBoDs and let ZFS provide the data protection. This has been a very stimulating conversation thread; it is shedding new light into how to best use ZFS. I would

Re: [zfs-discuss] Actual (cache) memory use of ZFS?

2007-01-30 Thread Roch - PAE
Bjorn Munch writes: Hello, I am doing some tests using ZFS for the data files of a database system, and ran into memory problems which has been discussed in a thread here a few weeks ago. When creating a new database, the data files are first initialized to their configured size

Re: [zfs-discuss] Thumper Origins Q

2007-01-30 Thread Roch - PAE
Nicolas Williams writes: On Tue, Jan 30, 2007 at 06:32:14PM +0100, Roch - PAE wrote: The only benefit of using a HW RAID controller with ZFS is that it reduces the I/O that the host needs to do, but the trade off is that ZFS cannot do combinatorial parity reconstruction so

Re: Re[2]: [zfs-discuss] se3510 and ZFS

2007-02-07 Thread Roch - PAE
Robert Milkowski writes: Hello Jonathan, Tuesday, February 6, 2007, 5:00:07 PM, you wrote: JE On Feb 6, 2007, at 06:55, Robert Milkowski wrote: Hello zfs-discuss, It looks like when zfs issues write cache flush commands se3510 actually honors it. I do not have right

[zfs-discuss] RealNeel : ZFS and DB performance

2007-02-09 Thread Roch - PAE
It's just a matter of time before ZFS overtakes UFS/DIO for DB loads, See Neel's new blog entry: http://blogs.sun.com/realneel/entry/zfs_and_databases_time_for -r ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] Re: NFS/ZFS performance problems - txg_wait_open() deadlocks?

2007-02-12 Thread Roch - PAE
Robert Milkowski writes: bash-3.00# dtrace -n 'fbt::txg_quiesce:return{printf("%Y ", walltimestamp);}' dtrace: description 'fbt::txg_quiesce:return' matched 1 probe CPU ID FUNCTION:NAME 3 38168 txg_quiesce:return 2007 Feb 12 14:08:15 0 38168

Re: Re[2]: [zfs-discuss] Re: NFS/ZFS performance problems - txg_wait_open() deadlocks?

2007-02-12 Thread Roch - PAE
Duh! Long syncs (which delay the next sync) are also possible on write-intensive workloads. Throttling heavy writers, I think, is the key to fixing this. Robert Milkowski writes: Hello Roch, Monday, February 12, 2007, 3:19:23 PM, you wrote: RP Robert Milkowski writes:

Re: [zfs-discuss] Not about Implementing fbarrier() on ZFS

2007-02-13 Thread Roch - PAE
Erblichs writes: Jeff Bonwick, Do you agree that there is a major tradeoff in building up a wad of transactions in memory? We lose the changes if we have an unstable environment. Thus, I don't quite understand why a 2-phase approach to commits

Re: [zfs-discuss] Implementing fbarrier() on ZFS

2007-02-13 Thread Roch - PAE
Peter Schuller writes: I agree about the usefulness of fbarrier() vs. fsync(), BTW. The cool thing is that on ZFS, fbarrier() is a no-op. It's implicit after every system call. That is interesting. Could this account for disproportionate kernel CPU usage for applications that
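The distinction in this thread: fbarrier() would only order writes, while fsync() makes them durable; on ZFS the ordering comes for free from transaction-group commit order, so only fsync() costs anything. A minimal sketch of the fsync() side using POSIX calls available from Python (the path is a throwaway temp file):

```python
import os
import tempfile

# fsync() guarantees the data has reached stable storage before returning;
# an fbarrier() would only guarantee ordering between writes. On ZFS the
# barrier is implicit, so applications that need ordering but not
# durability can skip the expensive call entirely.
path = os.path.join(tempfile.mkdtemp(), "journal")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
os.write(fd, b"record-1\n")
os.fsync(fd)            # durable point: record-1 survives a crash after here
os.write(fd, b"record-2\n")
os.close(fd)

with open(path, "rb") as f:
    data = f.read()
```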

Re: [zfs-discuss] Re: Re: ZFS vs NFS vs array caches, revisited

2007-02-13 Thread Roch - PAE
The only obvious thing would be if the exported ZFS filesystems were initially mounted at a point in time when zil_disable was non-null. The stack trace that is relevant is: sd_send_scsi_SYNCHRONIZE_CACHE sd`sdioctl+0x1770

Re: [zfs-discuss] Re: Re: Re: ZFS vs NFS vs array caches, revisited

2007-02-13 Thread Roch - PAE
On x86 try with sd_send_scsi_SYNCHRONIZE_CACHE Leon Koll writes: Hi Marion, your one-liner works only on SPARC and doesn't work on x86: # dtrace -n fbt::ssd_send_scsi_SYNCHRONIZE_CACHE:entry'[EMAIL PROTECTED] = count()}' dtrace: invalid probe specifier

Re: [zfs-discuss] Re: SPEC SFS benchmark of NFS/ZFS/B56 - please help to improve it!

2007-02-19 Thread Roch - PAE
Leon Koll writes: An update: Not sure if it is related to the fragmentation, but I can say that the serious performance degradation in my NFS/ZFS benchmarks is a result of on-disk ZFS data layout. Read operations on directories (NFS3 readdirplus) are abnormally time-consuming. That

Re: [zfs-discuss] Is ZFS file system supports short writes ?

2007-02-19 Thread Roch - PAE
dudekula mastan writes: If a write call attempted to write X bytes of data, and if the write call writes only x (where x < X) bytes, then we call that write a short write. -Masthan What kind of support do you want/need ? -r ___ zfs-discuss
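By Masthan's definition above, a short write is write() accepting x < X bytes, which POSIX permits on any filesystem. The conventional, filesystem-agnostic handling is to loop until the buffer is consumed; a sketch:

```python
import os
import tempfile

def write_all(fd, buf):
    """Handle short writes: os.write() may accept fewer bytes than offered,
    so loop over the remainder until the whole buffer has been written."""
    total = 0
    view = memoryview(buf)
    while total < len(buf):
        n = os.write(fd, view[total:])
        if n == 0:
            raise IOError("write returned 0 bytes")
        total += n
    return total

path = os.path.join(tempfile.mkdtemp(), "data")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
written = write_all(fd, b"x" * 65536)
os.close(fd)
```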

[zfs-discuss] Re: Perforce on ZFS

2007-02-20 Thread Roch - PAE
from CC: people related to Perforce benchmark (not in techtracker) is welcome. Thanks, Claude. Roch - PAE wrote: Hi Claude. For this kind of query, try zfs-discuss@opensolaris.org; Looks like a common workload to me. I know of no small-file problem with ZFS. You

Re: [zfs-discuss] Re: Perforce on ZFS

2007-02-21 Thread Roch - PAE
So Jonathan, you have a concern about the on-disk space efficiency for small file (more or less subsector). It is a problem that we can throw rust at. I am not sure if this is the basis of Claude's concern though. Creating small files, last week I did a small test. With ZFS I can create 4600

Re: [zfs-discuss] understanding zfs/thumper bottlenecks?

2007-02-27 Thread Roch - PAE
Jens Elkner writes: Currently I'm trying to figure out the best zfs layout for a thumper wrt. read AND write performance. I did some simple mkfile 512G tests and found out that on average ~ 500 MB/s seems to be the maximum one can reach (tried initial default setup, all 46

Re: [zfs-discuss] Why number of NFS threads jumps to the max value?

2007-02-28 Thread Roch - PAE
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6467988 NFSD threads are created on a demand spike (all of them waiting on I/O) but then tend to stick around servicing moderate loads. -r Leon Koll wrote: Hello, gurus I need your help. During the benchmark test

Re: [zfs-discuss] Re: Re: Efficiency when reading the same file blocks

2007-02-28 Thread Roch - PAE
Jeff Davis writes: On February 26, 2007 9:05:21 AM -0800 Jeff Davis But you have to be aware that logically sequential reads do not necessarily translate into physically sequential reads with zfs. zfs I understand that the COW design can fragment files. I'm still trying to

Re: [zfs-discuss] Re: Efficiency when reading the same file blocks

2007-02-28 Thread Roch - PAE
Frank Hofmann writes: On Tue, 27 Feb 2007, Jeff Davis wrote: Given your question are you about to come back with a case where you are not seeing this? As a follow-up, I tested this on UFS and ZFS. UFS does very poorly: the I/O rate drops off quickly when you add

Re: [zfs-discuss] Why number of NFS threads jumps to the max value?

2007-03-05 Thread Roch - PAE
Leon Koll writes: On 2/28/07, Roch - PAE [EMAIL PROTECTED] wrote: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6467988 NFSD threads are created on a demand spike (all of them waiting on I/O) but then tend to stick around servicing moderate loads

Re: [zfs-discuss] Why number of NFS threads jumps to the max value?

2007-03-05 Thread Roch - PAE
Leon Koll writes: On 3/5/07, Roch - PAE [EMAIL PROTECTED] wrote: Leon Koll writes: On 2/28/07, Roch - PAE [EMAIL PROTECTED] wrote: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6467988 NFSD threads are created on a demand spike (all

Re: [zfs-discuss] Re: ZFS stalling problem

2007-03-06 Thread Roch - PAE
Jesse, You can change txg_time with mdb echo txg_time/W0t1 | mdb -kw -r ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-08 Thread Roch - PAE
Manoj Joseph writes: Matt B wrote: Any thoughts on the best practice points I am raising? It disturbs me that it would make a statement like don't use slices for production. ZFS turns on write cache on the disk if you give it the entire disk to manage. It is good for

Re: [zfs-discuss] Re: ZFS stalling problem

2007-03-12 Thread Roch - PAE
Working with a small txg_time means we are hit by the pool sync overhead more often. This is why the per-second throughput has smaller peak values. With txg_time = 5, we have another problem, which is that depending on the timing of the pool sync, some txgs can end up with too little data in them

Re: Re[2]: [zfs-discuss] writes lost with zfs !

2007-03-12 Thread Roch - PAE
Did you run touch from a client ? ZFS and UFS are different in general, but in response to a local touch command neither needs to generate immediate I/O, and in response to a client touch both do. -r Ayaz Anjum writes: Hi ! Well, as per my actual post, I created a zfs file as part of Sun

Re: [zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-12 Thread Roch - PAE
Frank Cusack writes: On March 7, 2007 8:50:53 AM -0800 Matt B [EMAIL PROTECTED] wrote: Any thoughts on the best practice points I am raising? It disturbs me that it would make a statement like don't use slices for production. I think that's just a performance thing. Right, I

Re: [zfs-discuss] Re: Re: ZFS memory and swap usage

2007-03-19 Thread Roch - PAE
Info on tuning the ARC was just recently updated: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Memory_and_Dynamic_Reconfiguration_Recommendations -r Rainer Heilke writes: Thanks for the feedback. Please see below. ZFS should give back memory used for cache to

Re: [zfs-discuss] Re: Re: Re: ZFS memory and swap usage

2007-03-19 Thread Roch - PAE
Rainer Heilke writes: The updated information states that the kernel setting is only for the current Nevada build. We are not going to use the kernel debugger method to change the setting on a live production system (and do this every time we need to reboot). We're back to trying to

Re: [zfs-discuss] Re: Re: Re: ZFS memory and swap usage

2007-03-20 Thread Roch - PAE
Hi Mike, This is already integrated in Nevada: 6510807 ARC statistics should be exported via kstat kstat zfs:0:arcstats module: zfs instance: 0 name: arcstats class: misc c
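With arcstats exported, `kstat -p zfs:0:arcstats` prints one `module:instance:name:statistic value` pair per line. A hedged sketch of parsing that output into a dict — the sample lines and values below are made up for illustration, not taken from a real system:

```python
def parse_kstat_p(text):
    """Parse 'kstat -p' output (module:instance:name:statistic<TAB>value)
    into a dict of statistic -> int value, skipping non-numeric fields
    such as 'class'."""
    stats = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition("\t")
        stat = key.split(":")[-1]
        try:
            stats[stat] = int(value)
        except ValueError:
            pass
    return stats

# Illustrative sample of what 'kstat -p zfs:0:arcstats' might emit.
sample = """zfs:0:arcstats:c\t1073741824
zfs:0:arcstats:size\t734003200
zfs:0:arcstats:hits\t123456
zfs:0:arcstats:class\tmisc"""
arc = parse_kstat_p(sample)
```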

Re: [zfs-discuss] Re: ZFS performance with Oracle

2007-03-21 Thread Roch - PAE
JS writes: The big problem is that if you don't do your redundancy in the zpool, then the loss of a single device flatlines the system. This occurs in single device pools or stripes or concats. Sun support has said in support calls and Sunsolve docs that this is by design, but I've never

Re: [zfs-discuss] missing features?Could/should zfs support a new ioctl, constrained if neede

2007-03-26 Thread Roch - PAE
Richard L. Hamilton writes: _FIOSATIME - why doesn't zfs support this (assuming I didn't just miss it)? Might be handy for backups. Are these syscalls sufficient ? int utimes(const char *path, const struct timeval times[2]); int futimesat(int fildes, const char *path, const

Re: [zfs-discuss] ZFS and Kstats

2007-03-27 Thread Roch - PAE
See Kernel Statistics Library Functions kstat(3KSTAT) -r Atul Vidwansa writes: Peter, How do I get those stats programatically? Any clues? Regards, _Atul ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: Re[2]: [zfs-discuss] 6410 expansion shelf

2007-03-29 Thread Roch - PAE
Robert Milkowski writes: Hello Selim, Wednesday, March 28, 2007, 5:45:42 AM, you wrote: SD talking of which, SD what's the effort and consequences to increase the max allowed block SD size in zfs to higher figures like 1M... Think what would happen then if you try to read 100KB

Re: [zfs-discuss] C'mon ARC, stay small...

2007-04-02 Thread Roch - PAE
220434 861 5% Free (cachelist) 318625 1244 8% Free (freelist) 659607 2576 16% Total 4167561 16279 Physical 4078747 15932 On 3/23/07, Roch - PAE [EMAIL

Re: [zfs-discuss] query on ZFS

2007-04-11 Thread Roch - PAE
Annie Li writes: Can anyone help explain what does out-of-order issue mean in the following segment? ZFS has a pipelined I/O engine, similar in concept to CPU pipelines. The pipeline operates on I/O dependency graphs and provides scoreboarding, priority, deadline scheduling,

Re: [zfs-discuss] Re: ZFS improvements

2007-04-11 Thread Roch - PAE
Gino writes: 6322646 ZFS should gracefully handle all devices failing (when writing) Which is being worked on. Using a redundant configuration prevents this from happening. What do you mean by redundant? All our servers have 2 or 4 HBAs, 2 or 4 FC switches and

Re: [zfs-discuss] Re: storage type for ZFS

2007-04-18 Thread Roch - PAE
Richard L. Hamilton writes: Well, no; his quote did say software or hardware. The theory is apparently that ZFS can do better at detecting (and with redundancy, correcting) errors if it's dealing with raw hardware, or as nearly so as possible. Most SANs _can_ hand out raw LUNs as well as

Re: [zfs-discuss] zfs block allocation strategy

2007-04-18 Thread Roch - PAE
tester writes: Hi, quoting from zfs docs The SPA allocates blocks in a round-robin fashion from the top-level vdevs. A storage pool with multiple top-level vdevs allows the SPA to use dynamic striping to increase disk bandwidth. Since a new block may be allocated from any of the

Re: [zfs-discuss] HowTo: UPS + ZFS NFS + no fsync

2007-04-26 Thread Roch - PAE
You might set zil_disable to 1 (_then_ mount the fs to be shared). But you're still exposed to OS crashes; those would still corrupt your nfs clients. -r cedric briner writes: Hello, I wonder if the subject of this email is not self-explanatory ? okay let's say that it is not. :)

Re: Re[2]: [zfs-discuss] HowTo: UPS + ZFS NFS + no fsync

2007-04-27 Thread Roch - PAE
Robert Milkowski writes: Hello Wee, Thursday, April 26, 2007, 4:21:00 PM, you wrote: WYT On 4/26/07, cedric briner [EMAIL PROTECTED] wrote: okay let's say that it is not. :) Imagine that I setup a box: - with Solaris - with many HDs (directly attached). - use ZFS as

Re: Re[2]: [zfs-discuss] HowTo: UPS + ZFS NFS + no fsync

2007-04-27 Thread Roch - PAE
Wee Yeh Tan writes: Robert, On 4/27/07, Robert Milkowski [EMAIL PROTECTED] wrote: Hello Wee, Thursday, April 26, 2007, 4:21:00 PM, you wrote: WYT On 4/26/07, cedric briner [EMAIL PROTECTED] wrote: okay let's say that it is not. :) Imagine that I setup a box: -

Re: [zfs-discuss] cow performance penatly

2007-04-27 Thread Roch - PAE
Chad Mynhier writes: On 4/27/07, Erblichs [EMAIL PROTECTED] wrote: Ming Zhang wrote: Hi All, I wonder if anyone has an idea about the performance loss caused by COW in ZFS? If you have to read old data out before writing it to some other place, it involves disk seeks.

Re: [zfs-discuss] HowTo: UPS + ZFS NFS + no fsync

2007-04-27 Thread Roch - PAE
cedric briner writes: You might set zil_disable to 1 (_then_ mount the fs to be shared). But you're still exposed to OS crashes; those would still corrupt your nfs clients. Just to better understand ? (I know that I'm quite slow :( ) when you say _nfs clients_ are you specifically

Re: [zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-04 Thread Roch - PAE
Ian Collins writes: Roch Bourbonnais wrote: with recent bits ZFS compression is now handled concurrently with many CPUs working on different records. So this load will burn more CPUs and achieve its results (compression) faster. Would changing (selecting a smaller)

Re: [zfs-discuss] ARC, mmap, pagecache...

2007-05-04 Thread Roch - PAE
Manoj Joseph writes: Hi, I was wondering about the ARC and its interaction with the VM pagecache... When a file on a ZFS filesystem is mmaped, does the ARC cache get mapped to the process' virtual memory? Or is there another copy? My understanding is, The ARC does not get mapped

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-30 Thread Roch - PAE
Torrey McMahon writes: Toby Thain wrote: On 25-May-07, at 1:22 AM, Torrey McMahon wrote: Toby Thain wrote: On 22-May-07, at 11:01 AM, Louwtjie Burger wrote: On 5/22/07, Pål Baltzersen [EMAIL PROTECTED] wrote: What if your HW-RAID-controller dies? in say 2 years or

Re: [zfs-discuss] NFS and Tar/Star Performance

2007-06-12 Thread Roch - PAE
Hi Siegfried, just making sure you had seen this: http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine You have very fast NFS to non-ZFS runs. That seems only possible if the hosting OS did not sync the data when NFS required it, or the drive in question had some fast write caches. If

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-21 Thread Roch - PAE
Joe S writes: After researching this further, I found that there are some known performance issues with NFS + ZFS. I tried transferring files via SMB, and got write speeds on average of 25MB/s. So I will have my UNIX systems use SMB to write files to my Solaris server. This seems

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-25 Thread Roch - PAE
Sorry about that; looks like you've hit this: 6546683 marvell88sx driver misses wakeup for mv_empty_cv http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6546683 Fixed in snv_64. -r Thomas Garner writes: We have seen this behavior, but it appears to be entirely

[zfs-discuss] There is no NFS over ZFS issue

2007-06-26 Thread Roch - PAE
Regarding the bold statement There is no NFS over ZFS issue: What I mean here is that, if you _do_ encounter a performance pathology not linked to the NVRAM storage/cache-flush issue, then you _should_ complain, or better, get someone to do an analysis of the situation. One

Re: [zfs-discuss] Is ZFS efficient for large collections of small files?

2007-08-22 Thread Roch - PAE
Brandorr wrote: Is ZFS efficient at handling huge populations of tiny-to-small files - for example, 20 million TIFF images in a collection, each between 5 and 500k in size? I am asking because I could have sworn that I read somewhere that it isn't, but I can't find the

Re: [zfs-discuss] Odp: Is ZFS efficient for large collections of small files?

2007-08-22 Thread Roch - PAE
Łukasz K writes: Is ZFS efficient at handling huge populations of tiny-to-small files - for example, 20 million TIFF images in a collection, each between 5 and 500k in size? I am asking because I could have sworn that I read somewhere that it isn't, but I can't find the

Re: [zfs-discuss] Odp: Is ZFS efficient for large collections of small files?

2007-08-22 Thread Roch - PAE
Łukasz K writes: Is ZFS efficient at handling huge populations of tiny-to-small files - for example, 20 million TIFF images in a collection, each between 5 and 500k in size? I am asking because I could have sworn that I read somewhere that it isn't, but I can't find the

Re: [zfs-discuss] Supporting recordsizes larger than 128K?

2007-09-05 Thread Roch - PAE
Matty writes: Are there any plans to support record sizes larger than 128k? We use ZFS file systems for disk staging on our backup servers (compression is a nice feature here), and we typically configure the disk staging process to read and write large blocks (typically 1MB or so). This

[zfs-discuss] ZFS Evil Tuning Guide

2007-09-17 Thread Roch - PAE
Tuning should not be done in general, and best practices should be followed. So get very much acquainted with this first : http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide Then if you must, this could soothe or sting :

Re: [zfs-discuss] ZFS Evil Tuning Guide

2007-09-17 Thread Roch - PAE
Pawel Jakub Dawidek writes: On Mon, Sep 17, 2007 at 03:40:05PM +0200, Roch - PAE wrote: Tuning should not be done in general and Best practices should be followed. So get very much acquainted with this first : http://www.solarisinternals.com/wiki/index.php

Re: [zfs-discuss] question about uberblock blkptr

2007-09-20 Thread Roch - PAE
[EMAIL PROTECTED] writes: Roch - PAE wrote: [EMAIL PROTECTED] writes: Jim Mauro wrote: Hey Max - Check out the on-disk specification document at http://opensolaris.org/os/community/zfs/docs/. Page 32 illustration shows the rootbp pointing to a dnode_phys_t

Re: [zfs-discuss] zfs and small files

2007-09-21 Thread Roch - PAE
Claus Guttesen writes: I have many small - mostly jpg - files where the original file is approx. 1 MB and the thumbnail generated is approx. 4 KB. The files are currently on vxfs. I have copied all files from one partition onto a zfs-ditto. The vxfs-partition occupies 401 GB and

Re: [zfs-discuss] zfs and small files

2007-09-21 Thread Roch - PAE
Claus Guttesen writes: So the 1 MB files are stored as ~8 x 128K recordsize. Because of 5003563 use smaller tail block for last block of object the last block of your file is partially used. It will depend on your filesize distribution, but without that info we can
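The arithmetic behind the "~8 x 128K records" above, together with the tail-block point of 5003563, can be sketched as follows — the record size and file sizes are the ones quoted in the thread:

```python
def records_used(filesize, recordsize=128 * 1024):
    """Split a file into full records plus bytes in the tail record.
    Without a smaller tail block (bug 5003563), a partially used tail
    would still consume a whole record on disk."""
    full, tail = divmod(filesize, recordsize)
    return full, tail

# A 1 MB original jpg: exactly 8 full 128K records, no tail waste.
full, tail = records_used(1024 * 1024)

# A file just over 1 MB: 8 full records plus a 4 KB tail which, rounded
# up to a full 128K record, would waste most of the last block.
full2, tail2 = records_used(1024 * 1024 + 4096)
```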

Re: [zfs-discuss] ZFS (and quota)

2007-09-24 Thread Roch - PAE
Pawel Jakub Dawidek writes: I'm CCing zfs-discuss@opensolaris.org, as this doesn't look like a FreeBSD-specific problem. It looks like there is a problem with block allocation(?) when we are near the quota limit. tank/foo dataset has quota set to 10m: Without quota: FreeBSD:

Re: [zfs-discuss] ZFS ARC DNLC Limitation

2007-09-25 Thread Roch - PAE
Hi Jason, This should have helped. 6542676 ARC needs to track meta-data memory overhead Some of the lines from arc.c: 1551 1.36 if (arc_meta_used >= arc_meta_limit) { 1552/* 1553 * We are exceeding our meta-data cache

Re: [zfs-discuss] ZFS array NVRAM cache?

2007-09-26 Thread Roch - PAE
Vincent Fox writes: I don't understand. How do you set up one LUN that has all of the NVRAM on the array dedicated to it? I'm pretty familiar with the 3510 and 3310. Forgive me for being a bit thick here, but can you be more specific for the n00b? Do you mean from the firmware side or

Re: [zfs-discuss] io:::start and zfs filenames?

2007-09-26 Thread Roch - PAE
Neelakanth Nadgir writes: The io:::start probe does not seem to get zfs filenames in args[2]->fi_pathname. Any ideas how to get this info? -neel Who says an I/O is doing work for a single pathname/vnode or for a single process? There is no longer that one-to-one correspondence. Not in the

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-03 Thread Roch - PAE
Rayson Ho writes: 1) Modern DBMSs cache database pages in their own buffer pool because it is less expensive than accessing data from the OS. (IIRC, MySQL's MyISAM is the only one that relies on the FS cache, but a lot of MySQL sites use InnoDB, which has its own buffer pool) The DB

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-03 Thread Roch - PAE
Matty writes: On 10/3/07, Roch - PAE [EMAIL PROTECTED] wrote: Rayson Ho writes: 1) Modern DBMSs cache database pages in their own buffer pool because it is less expensive than to access data from the OS. (IIRC, MySQL's MyISAM is the only one that relies on the FS cache
