Re: [zfs-discuss] SSD for L2arc

2013-03-21 Thread Jim Mauro
Can I know how to configure an SSD to be used for L2ARC? Basically I want to improve read performance. Read the documentation, specifically the section titled: Creating a ZFS Storage Pool With Cache Devices. To increase write performance, will an SSD for the ZIL help? As I read on forums, ZIL
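For reference, attaching a cache device is a one-liner (a sketch; "tank" and c1t2d0 are hypothetical pool and device names):

    # zpool add tank cache c1t2d0

The L2ARC warms over time, so read latency improves as the working set migrates onto the SSD.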

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-12 Thread Jim Mauro
So try unbinding the mac threads; it may help you here. How do I do that? All I can find on interrupt fencing and the like is to simply set certain processors to no-intr, which moves all of the interrupts away, and it doesn't prevent the xcall storm from affecting those CPUs either… In
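If fencing is the route you take, psradm(1M) is the standard tool (a sketch; the CPU ID is a hypothetical example):

    # psradm -i 31        # set CPU 31 to no-intr (interrupts disabled)
    # psrinfo             # verify processor status

Note this only moves interrupt handling; as observed above, it does not stop cross-calls from being sent to those CPUs.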

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-06 Thread Jim Mauro
I can't help but be curious about something, which perhaps you verified but did not post. What the data here shows is:
- CPU 31 is buried in the kernel (100% sys).
- CPU 31 is handling a moderate-to-high rate of xcalls.
What the data does not prove empirically is that the 100% sys time of CPU
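One way to confirm empirically what that CPU is actually executing (a sketch; the CPU ID and the intervals are assumptions):

    # dtrace -n 'profile-997hz /cpu == 31/ { @[stack()] = count(); } tick-30sec { trunc(@, 10); printa(@); exit(0); }'

If segkmem_zio_free dominates the sampled stacks, that ties the 100% sys time to the free path rather than to xcall handling itself.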

Re: [zfs-discuss] webserver zfs root lock contention under heavy load

2012-03-26 Thread Jim Mauro
THE PROBLEM - Linux is 15% sys, 55% usr; Solaris is 30% sys, 70% usr, running the same workload, doing the same amount of work, delivering the same level of performance. Please validate that problem statement. On Mar 25, 2012, at 9:51 PM, Aubrey Li wrote: On Mon, Mar 26, 2012 at 4:18 AM, Jim

Re: [zfs-discuss] webserver zfs root lock contention under heavy load

2012-03-25 Thread Jim Mauro
If you're chasing CPU utilization, specifically %sys (time in the kernel), I would start with a time-based kernel profile: # dtrace -n 'profile-997hz /arg0/ { @[stack()] = count(); } tick-60sec { trunc(@, 20); printa(@); }' I would be curious to see where the CPU cycles are being consumed first,

Re: [zfs-discuss] What is my pol writing? :)

2011-06-11 Thread Jim Mauro
Does this reveal anything: dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname, fds[arg0].fi_pathname] = count(); }' On Jun 11, 2011, at 9:32 AM, Jim Klimov wrote: While looking over iostats from various programs, I see that my OS HDD is busy writing, about 2Mb/sec stream

Re: [zfs-discuss] What is my pol writing? :)

2011-06-11 Thread Jim Mauro
].fi_pathname] = count(); }' On Jun 11, 2011, at 12:34 PM, Jim Klimov wrote: 2011-06-11 19:16, Jim Mauro wrote: Does this reveal anything: dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname, fds[arg0].fi_pathname] = count(); }' Alas, not much. # time dtrace -n 'syscall

Re: [zfs-discuss] What is my pol writing? :)

2011-06-11 Thread Jim Mauro
in IOPS and/or bytes being written? On Jun 11, 2011, at 1:00 PM, Jim Klimov wrote: 2011-06-11 20:42, Jim Mauro wrote: Well we may have missed something, because that dtrace will only capture write(2) and pwrite(2) - whatever is generating the writes may be using another interface (writev(2

Re: [zfs-discuss] What is my pol writing? :)

2011-06-11 Thread Jim Mauro
This may be interesting also (still fumbling...): dtrace -n 'fbt:zfs:zio_write:entry, fbt:zfs:zio_rewrite:entry, fbt:zfs:zio_write_override:entry { @[probefunc, stack()] = count(); }' On Jun 11, 2011, at 1:00 PM, Jim Klimov wrote: 2011-06-11 20:42, Jim Mauro wrote: Well we may have missed

[zfs-discuss] detach configured log devices?

2011-03-16 Thread Jim Mauro
With ZFS, Solaris 10 Update 9, is it possible to detach configured log devices from a zpool? I have a zpool with 3 F20 mirrors for the ZIL. They're coming up corrupted. I want to detach them, remake the devices and reattach them to the zpool. Thanks /jim
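For what it's worth, on a pool version that supports log device removal (19 or later, which U9 ships), the operation would look like this (a sketch; "tank" is a hypothetical pool name, and the mirror vdev name comes from zpool status):

    # zpool status tank          # note the name of the log mirror, e.g. mirror-1
    # zpool remove tank mirror-1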

Re: [zfs-discuss] Performance problems due to smaller ZFS recordsize

2010-10-25 Thread Jim Mauro
Hi Jim - cross-posting to zfs-discuss, because 20X is, to say the least, compelling. Obviously, it would be awesome if we had the opportunity to whittle down which of the changes made this fly, or whether it was a combination of the changes. Looking at them individually set

Re: [zfs-discuss] Performance problems due to smaller ZFS recordsize

2010-10-21 Thread Jim Mauro
There is nothing in here that requires zfs-confidential. Cross-posted to zfs-discuss. On Oct 21, 2010, at 3:37 PM, Jim Nissen wrote: Cross-posting. Original Message Subject: Performance problems due to smaller ZFS recordsize Date: Thu, 21 Oct 2010 14:00:42 -0500

Re: [zfs-discuss] Solaris 10 default caching segmap/vpm size

2010-04-27 Thread Jim Mauro
ZFS does not use segmap. The ZFS ARC (Adaptive Replacement Cache) will consume what's available, memory-wise, based on the workload. There's an upper limit if zfs_arc_max has not been set, but I forget what it is. If other memory consumers (applications, other kernel subsystems) need memory,
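If you do want a hard cap, the usual mechanism is the zfs_arc_max tunable in /etc/system (a sketch; the 4GB value is an arbitrary example):

    set zfs:zfs_arc_max = 0x100000000

A reboot is required for /etc/system changes to take effect.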

Re: [zfs-discuss] Oracle Performance - ZFS vs UFS

2010-02-13 Thread Jim Mauro
ZFS can be configured to deliver very good performance for Oracle. Depending on what your priorities are in terms of critical metrics, keep in mind that the most performant solution is to use Oracle ASM on raw disk devices. That is not intended to imply anything negative about ZFS or UFS.
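One commonly cited ZFS tuning for Oracle is matching the dataset recordsize to the database block size (a sketch; "tank/oradata" and the 8K block size are assumptions - use your db_block_size):

    # zfs set recordsize=8k tank/oradata

Set this before creating the datafiles; recordsize only applies to files written after the change.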

Re: [zfs-discuss] ZFS- When do you add more memory?

2009-12-23 Thread Jim Mauro
Hi Anthony - I don't get this. How does the presence (or absence) of the ARC change the methodology for doing memory capacity planning? Memory capacity planning is all about identifying and measuring consumers. Memory consumers:
- The kernel.
- User processes.
- The ZFS ARC, which is
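Measuring the ARC consumer is straightforward with kstat (the arcstats kstat is standard; the values reported will of course vary):

    # kstat -p zfs:0:arcstats:size
    # kstat -p zfs:0:arcstats:c_max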

Re: [zfs-discuss] Performance problems with Thumper and 7TB ZFS pool using RAIDZ2

2009-10-24 Thread Jim Mauro
Posting to zfs-discuss. There's no reason this needs to be kept confidential. 5-disk RAIDZ2 - doesn't that equate to only 3 data disks? Seems pointless - they'd be much better off using mirrors, which is a better choice for random IO... Looking at this now... /jim Jeff Savit wrote: Hi all,

Re: [zfs-discuss] [dtrace-discuss] How to drill down cause of cross-calls in the kernel? (output provided)

2009-09-23 Thread Jim Mauro
I'm cross-posting to zfs-discuss, as this is now more of a ZFS query than a dtrace query at this point, and I'm not sure if all the ZFS experts are listening on dtrace-discuss (although they probably are... :^). The only thing that jumps out at me is the ARC size - 53.4GB, or most of your 64GB

Re: [zfs-discuss] [dtrace-discuss] How to drill down cause of cross-calls in the kernel? (output provided)

2009-09-23 Thread Jim Mauro
(posted to zfs-discuss) Hmmm...this is nothing in terms of load. So you say that the system becomes sluggish/unresponsive periodically, and you noticed the xcall storm when that happens, correct? Refresh my memory - what is the frequency and duration of the sluggish cycles? Could you capture a

Re: [zfs-discuss] URGENT: very high busy and average service time with ZFS and USP1100

2009-09-22 Thread Jim Mauro
Cross-posting to zfs-discuss. This does not need to be on the confidential alias. It's a performance query - there's nothing confidential in here. Other folks post performance queries to zfs-discuss. Forget %b - it's useless. It's not the bandwidth that's hurting you, it's the IOPS. One of
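To watch the IOPS and per-device latency rather than %b, something like this (standard iostat flags; the 1-second interval is arbitrary):

    # iostat -xnz 1

Watch actv (queued commands) and asvc_t (average service time) per device; high asvc_t with modest bandwidth points at an IOPS bottleneck, not a throughput one.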

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Jim Mauro
Bob - Have you filed a bug on this issue? I am not up to speed on this thread, so I cannot comment on whether or not there is a bug here, but you seem to have a test case and supporting data. Filing a bug will get the attention of ZFS engineering. Thanks, /jim Bob Friesenhahn wrote: On Mon,

[zfs-discuss] [Fwd: Re: [perf-discuss] ZFS performance issue - READ is slow as hell...]

2009-03-31 Thread Jim Mauro
Posting this back to zfs-discuss. Roland's test case (below) is a single threaded sequential write followed by a single threaded sequential read. His bandwidth goes from horrible (~2MB/sec) to expected (~30MB/sec) when prefetch is disabled. This is with relatively recent nv bits (nv110).
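For anyone wanting to reproduce the workaround, prefetch can be switched off via a tunable (a sketch; the mdb line changes the running kernel, the /etc/system line makes it persistent across reboot):

    # echo "zfs_prefetch_disable/W 1" | mdb -kw
    set zfs:zfs_prefetch_disable = 1     (in /etc/system)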

Re: [zfs-discuss] [perf-discuss] ZFS performance issue - READ is slow as hell...

2009-03-30 Thread Jim Mauro
Cross-posting to zfs-discuss. By my math, here's what you're getting:
- 4.6MB/sec on writes to ZFS.
- 2.2MB/sec on reads from ZFS.
- 90MB/sec on reads from the block device.
What is c0t1d0 - I assume it's a hardware RAID LUN, but how many disks, and what type of LUN? What version of Solaris (cat

Re: [zfs-discuss] Copying thousands of small files on an expanded ZFS pool crawl to a poor performance-not on other pools.

2009-03-23 Thread Jim Mauro
Cross-posting to the public ZFS discussion alias. There's nothing here that requires confidentiality, and the public alias is a much broader audience with a larger number of experienced ZFS users... As to the issue - what is the free space disparity across the pools? Is the one particular
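Checking the disparity is quick (standard commands; "tank" is a placeholder for the pool in question):

    # zpool list                              # CAP column shows percentage used per pool
    # zfs list -r -o name,used,avail tank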

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Jim Mauro
This problem only manifests itself when dealing with many small files over NFS. There is no throughput problem with the network. But there could be a _latency_ issue with the network. [snip] I've done my homework on this issue; I've ruled out the network as an issue, as well as the NFS

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Jim Mauro
You have SSDs for the ZIL (logzilla) enabled, and ZIL IO is what is hurting your performance... Hmmm. I'll ask the stupid question (just to get it out of the way) - is it possible that the logzilla is undersized? Did you gather data using Richard Elling's zilstat (included below)? Thanks,

Re: [zfs-discuss] strange performance drop of solaris 10/zfs

2009-01-30 Thread Jim Mauro
So granted, tank is about 77% full (not to split hairs ;^), but in this case, 23% is 640GB of free space. I mean, it's not like 15 years ago when a file system was 2GB total, and 23% free meant a measly 460MB to allocate from. 640GB is a lot of space, and our largest writes are less than 5MB.

Re: [zfs-discuss] write cache and cache flush

2009-01-29 Thread Jim Mauro
Multiple Thors (more than 2?), with performance problems. Maybe it's the common denominator - the network. Can you run local ZFS IO loads and determine if performance is as expected when NFS and the network are out of the picture? Thanks, /jim Greg Mason wrote: So, I'm still beating my head
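A simple local load to take NFS out of the picture might look like this (a sketch; the path, sizes, and counts are arbitrary, and the small-file loop mimics the NFS create workload):

    # dd if=/dev/zero of=/tank/test/bigfile bs=1024k count=8192
    # time ksh -c 'i=0; while (( i < 1000 )); do
          dd if=/dev/zero of=/tank/test/f.$i bs=8k count=1 2>/dev/null
          (( i += 1 )); done'

If the local numbers are healthy, that points back at NFS synchronous semantics or the network.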

Re: [zfs-discuss] Slow death-spiral with zfs gzip-9 compression

2008-11-29 Thread Jim Mauro
on x64 - it will panic your system. None of this has anything to do with ZFS, which uses a completely different mechanism for caching (the ZFS ARC). Thanks, /jim That is what I heard Jim Mauro tell us. I recall feeling a bit disturbed when I heard it. If it is true, perhaps it applies only

Re: [zfs-discuss] copying a ZFS

2008-07-20 Thread Jim Mauro
So I'm really exposing my ignorance here, but... You wrote /... if you wish to keep your snapshots.../... I never mentioned snapshots, thus you introduced the use of a ZFS snapshot as a method of doing what I wish to do. And yes, snapshots and send are in the manual, and I read about them. I
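For completeness, the snapshot-plus-send approach being discussed is only a few commands (a sketch; the dataset names are hypothetical):

    # zfs snapshot tank/home@copy
    # zfs send tank/home@copy | zfs recv tank/home2
    # zfs destroy tank/home@copy         # optional cleanup once the copy is verified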

Re: [zfs-discuss] ZIL controls in Solaris 10 U4?

2008-01-29 Thread Jim Mauro
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Disabling_the_ZIL_.28Don.27t.29 The above link shows how to disable the ZIL for testing purposes (it's not generally recommended to keep it disabled in production). As to the putback schedule of recent ZFS features into
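For the archives, the tuning described at that link amounts to one /etc/system line (testing only, as the guide stresses; this is the syntax for the Solaris 10 releases of that era):

    set zfs:zil_disable = 1

With the ZIL disabled, synchronous write semantics are not honored, so a crash can silently lose data the application believed was committed.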

Re: [zfs-discuss] Yager on ZFS

2007-12-13 Thread Jim Mauro
Would you two please SHUT THE F$%K UP. Dear God, my kids don't go on like this. Please - let it die already. Thanks very much. /jim can you guess? wrote: Hello can, Thursday, December 13, 2007, 12:02:56 AM, you wrote: cyg On the other hand, there's always the possibility that someone

Re: [zfs-discuss] ZFS File system and Oracle raw files compatibility

2007-10-19 Thread Jim Mauro
If the question is can Oracle files (datafiles, log files, etc) exist on a ZFS, the answer is absolutely yes. More simply put, can you configure your Oracle database on ZFS - absolutely. The question, as stated, is confusing, because the term compatible can have a pretty broad meaning. So, I

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Jim Mauro
Where does the win come from with direct I/O? Is it 1), 2), or some combination? If it's a combination, what's the percentage of each towards the win? That will vary based on workload (I know, you already knew that ... :^). Decomposing the performance win between what is gained as a

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-03 Thread Jim Mauro
Hey Roch - We do not retain 2 copies of the same data. If the DB cache is made large enough to consume most of memory, the ZFS copy will quickly be evicted to stage other I/Os on their way to the DB cache. What problem does that pose? Can't answer that question empirically, because we

Re: [zfs-discuss] io:::start and zfs filenames?

2007-09-26 Thread Jim Mauro
Hi Neel - Thanks for pushing this out. I've been tripping over this for a while. You can instrument zfs_read() and zfs_write() to reliably track filenames:

    #!/usr/sbin/dtrace -s
    #pragma D option quiet

    zfs_read:entry, zfs_write:entry
    {
            printf("%s of %s\n", probefunc,
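A complete version of the same idea (a sketch, assuming arg0 of zfs_read()/zfs_write() is the file's vnode_t and that v_path has been populated for it):

    #!/usr/sbin/dtrace -s
    #pragma D option quiet

    zfs_read:entry, zfs_write:entry
    /args[0]->v_path != NULL/
    {
            /* probefunc is zfs_read or zfs_write; v_path is the cached pathname */
            printf("%s of %s\n", probefunc, stringof(args[0]->v_path));
    }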

Re: [zfs-discuss] io:::start and zfs filenames?

2007-09-26 Thread Jim Mauro
What sayeth the ZFS team regarding the use of a stable DTrace provider with their file system? For the record, the above has a tone to it that I really did not intend (antagonistic?), so I had a good chat with Roch about this. The file pathname is derived via a translator from the

Re: [zfs-discuss] io:::start and zfs filenames?

2007-09-26 Thread Jim Mauro
files, and it seems to work /jim Neelakanth Nadgir wrote: Jim I can't use zfs_read/write as the file is mmap()'d so no read/write! -neel On Sep 26, 2007, at 5:07 AM, Jim Mauro [EMAIL PROTECTED] wrote: Hi Neel - Thanks for pushing this out. I've been tripping over this for a while

Re: [zfs-discuss] question about uberblock blkptr

2007-09-17 Thread Jim Mauro
Hey Max - Check out the on-disk specification document at http://opensolaris.org/os/community/zfs/docs/. The illustration on page 32 shows the rootbp pointing to a dnode_phys_t object (the first member of an objset_phys_t data structure). The source code indicates ub_rootbp is a blkptr_t, which

Re: [zfs-discuss] (politics) Sharks in the waters

2007-09-05 Thread Jim Mauro
About 2 years ago I was able to get a little closer to the patent litigation process, by way of giving a deposition in litigation that was filed against Sun and Apple (and has been settled). Apparently, there's an entire sub-economy built on patent litigation among the technology players.

Re: [zfs-discuss] ZFS, XFS, and EXT4 compared

2007-08-30 Thread Jim Mauro
I'll take a look at this. ZFS provides outstanding sequential IO performance (both read and write). In my testing, I can essentially sustain hardware speeds with ZFS on sequential loads. That is, assuming 30-60MB/sec per disk sequential IO capability (depending on hitting inner or out

Re: [zfs-discuss] ZFS Under the Hood Presentation Slides

2007-08-17 Thread Jim Mauro
Is the referenced Laminated Handout on slide 3 available anywhere in any form electronically? If not, I'd be happy to create an electronic copy and make it publicly available. Thanks, /jim Joy Marshall wrote: It's taken a while but at last we have been able to post the ZFS Under the

Re: Fwd: [zfs-discuss] Re: Mac OS X Leopard to use ZFS

2007-06-10 Thread Jim Mauro
Hello - I think L4 still needs to evolve. BTW, I believe microkernels are the _right_ way and L4 is a first step in that direction. Perhaps you could elaborate on this? I thought the microkernel debate ended in the 1990s, in terms of being a compelling technology direction for kernel

Re: [zfs-discuss] Re: zfs boot image conversion kit is posted

2007-04-19 Thread Jim Mauro
I'm not sure I understand the question. Virtual machines are built by either running a virtualization technology in a host operating system, such as running VMware Workstation in Linux, running Parallels in Mac OS X, Linux or Windows, etc. These are sometimes referred to as Type II VMMs, where

Re: [zfs-discuss] C'mon ARC, stay small...

2007-04-01 Thread Jim Mauro
] writes: Jim Mauro wrote: All righty... I set c_max to 512MB, c to 512MB, and p to 256MB...
    arc::print -tad
    {
        ...
        c02e29e8 uint64_t size = 0t299008
        c02e29f0 uint64_t p = 0t16588228608
        c02e29f8 uint64_t c = 0t33176457216

Re: [zfs-discuss] Re: ZFS with raidz

2007-03-20 Thread Jim Mauro
(I'm probably not the best person to answer this, but that has never stopped me before, and I need to give Richard Elling a little more time to get the Goats, Cows and Horses fed, sip his morning coffee, and offer a proper response...) Would it benefit us to have the disks be set up as a raidz

[zfs-discuss] The value of validating your backups...

2007-03-20 Thread Jim Mauro
http://www.cnn.com/2007/US/03/20/lost.data.ap/index.html

Re: [zfs-discuss] Re: ZFS with raidz

2007-03-20 Thread Jim Mauro
no data in those thousands of file systems yet. Richard Elling wrote: Jim Mauro wrote: (I'm probably not the best person to answer this, but that has never stopped me before, and I need to give Richard Elling a little more time to get the Goats, Cows and Horses fed, sip his morning coffee

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread Jim Mauro
    c02e2a08 uint64_t c_max = 0t1070318720
    . . .
Perhaps c_max does not do what I think it does? Thanks, /jim Jim Mauro wrote: Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06 (update 3). All file IO is mmap(file), read memory segment, unmap, close. Tweaked the arc

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread Jim Mauro
the ARC size because for mmap-intensive workloads, it seems to hurt more than help (although, based on experiments up to this point, it's not hurting a lot). I'll do another reboot, and run it all down for you serially... /jim Thanks, -j On Thu, Mar 15, 2007 at 06:57:12PM -0400, Jim Mauro wrote

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread Jim Mauro
Following a reboot:
    arc::print -tad
    {
        . . .
        c02e29e8 uint64_t size = 0t299008
        c02e29f0 uint64_t p = 0t16588228608
        c02e29f8 uint64_t c = 0t33176457216
        c02e2a00 uint64_t c_min = 0t1070318720
        c02e2a08 uint64_t c_max = 0t33176457216
        . . .
    }

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread Jim Mauro
another reboot, and run it all down for you serially... /jim Thanks, -j On Thu, Mar 15, 2007 at 06:57:12PM -0400, Jim Mauro wrote:
    ARC_mru::print -d size lsize
        size = 0t10224433152
        lsize = 0t10218960896
    ARC_mfu::print -d size lsize

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread Jim Mauro
, 2007 at 06:57:12PM -0400, Jim Mauro wrote:
    ARC_mru::print -d size lsize
        size = 0t10224433152
        lsize = 0t10218960896
    ARC_mfu::print -d size lsize
        size = 0t303450112
        lsize = 0t289998848
    ARC_anon::print -d size

Re: [zfs-discuss] Why number of NFS threads jumps to the max value?

2007-02-27 Thread Jim Mauro
You don't honestly, really, reasonably, expect someone, anyone, to look at the stack trace of a few hundred threads, and post something along the lines of This is what is wrong with your NFS server. Do you? Without any other information at all? We're here to help, but please reset your

Re: [zfs-discuss] A Plea for Help: Thumper/ZFS/NFS/B43

2006-12-09 Thread Jim Mauro
Could be NFS synchronous semantics on file create (followed by repeated flushing of the write cache). What kind of storage are you using (feel free to send privately if you need to) - is it a Thumper? It's not clear why NFS-enforced synchronous semantics would induce different behavior than

Re: [zfs-discuss] A Plea for Help: Thumper/ZFS/NFS/B43

2006-12-07 Thread Jim Mauro
Hey Ben - I need more time to look at this and connect some dots, but real quick: some nfsstat data that we could use to potentially correlate to the local server activity would be interesting. zfs_create() seems to be the heavy hitter, but a periodic kernel profile (especially if we can

Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Jim Mauro
I need to read through this more thoroughly to get my head around it, but on my first pass, what jumps out at me is that something significant _changed_ in terms of application behavior with the introduction of ZFS. I'm not saying that is a bad thing or a good thing, but it is an important

Re: [zfs-discuss] Big JBOD: what would you do?

2006-07-17 Thread Jim Mauro
I agree with Greg - for ZFS, I'd recommend a larger number of raidz LUNs, with a smaller number of disks per LUN, up to 6 disks per raidz LUN. This will more closely align with performance best practices, so it would be cool to find common ground in terms of a sweet-spot for performance and
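As a concrete shape for that layout (a sketch; the controller/target device names are hypothetical):

    # zpool create tank \
          raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
          raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

Two 6-disk raidz vdevs in one pool; ZFS stripes across the vdevs, so random IOPS scale with the vdev count.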

[zfs-discuss] Let's get cooking...

2006-06-21 Thread Jim Mauro
http://www.tech-recipes.com/solaris_system_administration_tips1446.html