Re: [zfs-discuss] How many disk in one pool

2012-10-08 Thread Brad Stone
Here's an example of a ZFS-based product you can buy with a large number of disks in the volume: http://www.aberdeeninc.com/abcatg/petarack.htm (360 3TB drives; a full petabyte of storage, 1080TB, in a single rack, under a single namespace or volume). On Sat, Oct 6, 2012 at 11:48 AM, Richard Elling

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-29 Thread Brad Diggs
Reducing the record size would negatively impact performance. For the rationale, see the section titled "Match Average I/O Block Sizes" in my blog post on filesystem caching: http://www.thezonemanager.com/2009/03/filesystem-cache-optimization.html  Brad  Brad Diggs | Principal Sales

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-29 Thread Brad Diggs
effectively leverage this caching potential, that won't happen. OUD far outperforms ODSEE. That said, OUD may get some focus in this area. However, time will tell on that one. For now, I hope everyone benefits from the little that I did validate. Have a great day! Brad  Brad Diggs | Principal Sales

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-29 Thread Brad Diggs
S11 FCS. Brad  Brad Diggs | Principal Sales Consultant | 972.814.3698 | eMail: brad.di...@oracle.com | Tech Blog: http://TheZoneManager.com | LinkedIn: http://www.linkedin.com/in/braddiggs  On Dec 29, 2011, at 8:11 AM, Robert Milkowski wrote: And these results are from S11 FCS I assume. On older builds or Illumos

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-12 Thread Brad Diggs
/02/directory-data-priming-strategies.html Thanks again! Brad  Brad Diggs | Principal Sales Consultant | Tech Blog: http://TheZoneManager.com | LinkedIn: http://www.linkedin.com/in/braddiggs  On Dec 8, 2011, at 4:22 PM, Mark Musante wrote: You can see the original ARC case here: http://arc.opensolaris.org

[zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-07 Thread Brad Diggs
that the L1ARC will also only require 1TB of RAM for the data. Note that I know the deduplication table will use the L1ARC as well. However, the focus of my question is on how the L1ARC would benefit from a data caching standpoint. Thanks in advance! Brad  Brad Diggs | Principal Sales Consultant | Tech Blog: http

Re: [zfs-discuss] OpenIndiana | ZFS | scrub | network | awful slow

2011-06-15 Thread Brad Stone
3G per TB would be a better ballpark estimate. On Wed, Jun 15, 2011 at 8:17 PM, Daniel Carosone d...@geek.com.au wrote: On Wed, Jun 15, 2011 at 07:19:05PM +0200, Roy Sigurd Karlsbakk wrote: Dedup is known to require a LOT of memory and/or L2ARC, and 24GB isn't really much with 34TBs of data.
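For a rough sense of what that ballpark implies for the pool discussed above (a back-of-the-envelope sketch only, assuming the 3GB-per-TB figure):

    34 TB x 3 GB/TB = ~102 GB of RAM/L2ARC for the dedup table

which is why 24GB of RAM looks thin for a 34TB deduped pool.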

Re: [zfs-discuss] Disk space size, used, available mismatch

2011-05-12 Thread Brad Kroeger
Thank you for your insight. This is a system that was handed down to me when another sysadmin went to greener pastures. There were no quotas set on the system. I used zfs destroy to free up some space and did put a quota on it. I still have 0 free space available. I think this is due to the

Re: [zfs-discuss] A few questions

2011-01-09 Thread Brad Stone
As for certified systems, it's my understanding that Nexenta themselves don't certify anything. They have systems which are recommended and supported by their network of VARs. The certified solutions listed on Nexenta's website were certified by Nexenta.

[zfs-discuss] ZFS Administration Console

2010-11-13 Thread Brad Henderson
I am new to OpenSolaris and I have been reading about and seeing screenshots of the ZFS Administration Console. I have been looking at the dates on it and every post is from about two years ago. I am just wondering: is this option not available on OpenSolaris anymore, and if it is, how do I set it

Re: [zfs-discuss] Dedup relationship between pool and filesystem

2010-09-24 Thread Brad Stone
For de-duplication to perform well you need to be able to fit the de-dup table in memory. Is a good rule of thumb for needed RAM: size = (pool capacity / avg block size) * 270 bytes? Or perhaps it's size / expected_dedup_ratio? And if you limit de-dup to certain datasets in the pool, how would this
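As a worked example of that rule of thumb (a sketch only, assuming a hypothetical 10TB pool and a 128KB average block size):

    entries = 10 TB / 128 KB     = ~84 million blocks
    DDT RAM = 84 million x 270 B = ~22 GB

Smaller average block sizes inflate the estimate quickly.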

[zfs-discuss] ZFS on solid state as disk rather than L2ARC...

2010-09-15 Thread Brad Diggs
Has anyone done much testing of just using the solid state devices (F20 or F5100) as devices for ZFS pools? Are there any concerns with running in this mode versus using solid state devices for L2ARC cache? Second, has anyone done this sort of testing with MLC-based solid state drives? What has your

Re: [zfs-discuss] zfs compression with Oracle - anyone implemented?

2010-09-15 Thread Brad
Ed, See my answers inline: I don't think your question is clear. What do you mean on oracle backed by storage luns? We'll be using LUNs from a storage array vs. ZFS-controlled disks. The LUNs are mapped to the db server and from there initialized under ZFS. Do you mean on oracle hardware? On

[zfs-discuss] zfs compression with Oracle - anyone implemented?

2010-09-13 Thread Brad
Hi! I've been scouring the forums and web for admins/users who have deployed ZFS with compression enabled on Oracle backed by storage array LUNs. Any problems with cpu/memory overhead? -- This message posted from opensolaris.org ___ zfs-discuss mailing list

Re: [zfs-discuss] Dedup - Does on imply sha256?

2010-08-24 Thread Brad Stone
Correct, but presumably for a limited time only. I would think that over time as the technology improves that the default would change. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

[zfs-discuss] OpenStorage Summit

2010-08-21 Thread Brad Stone
Just wanted to make a quick announcement that there will be an OpenStorage Summit in Palo Alto, CA in late October. The conference should have a lot of good OpenSolaris talks, with ZFS experts such as Bill Moore, Adam Leventhal, and Ben Rockwood already planning to give presentations. The

Re: [zfs-discuss] hybrid drive: flash and platters

2010-05-25 Thread Brad Diggs
to have someone do some benchmarking of MySQL in a cache-optimized server with F20 PCIe flash cards but never got around to it. So, if you want to get all of the caching benefits of DmCache, just run your app on Solaris 10 today. ;-) Have a great day! Brad  Brad Diggs | Principal Security Sales Consultant

[zfs-discuss] replaced disk...copy back completed but spare is in use

2010-05-04 Thread Brad
I yanked a disk to simulate failure to the test pool to test hot spare failover - everything seemed fine until the copy-back completed. The hot spare is still showing in use... do we need to remove the spare from the pool to get it to detach? # zpool status pool: ZPOOL.TEST state:
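If the spare stays listed as in use after the copy-back, it can be returned to the AVAIL list by detaching it manually (a sketch; the spare device name below is a placeholder):

    # zpool detach ZPOOL.TEST c1t9d0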

Re: [zfs-discuss] replaced disk...copy back completed but spare is in use

2010-05-04 Thread Brad
Thanks! -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Solaris 10 default caching segmap/vpm size

2010-04-28 Thread Brad
The reason I asked was just to understand how those attributes play with ufs/vxfs... -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] Solaris 10 default caching segmap/vpm size

2010-04-27 Thread Brad
What's the default size of the file system cache for Solaris 10 x86, and can it be tuned? I read various posts on the subject and it's confusing... -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

[zfs-discuss] not showing data in L2ARC or ZIL

2010-04-24 Thread Brad
I'm not showing any data being populated in the L2ARC or ZIL SSDs with a J4500 (48 - 500GB SATA drives). # zpool iostat -v
              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  ----  -----   ----  -----   ----  -----

Re: [zfs-discuss] not showing data in L2ARC or ZIL

2010-04-24 Thread Brad
thanks - :) -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] zpool import -F hangs system

2010-04-21 Thread Brad Stone
What build are you on? zpool import hangs for me on b134. On Wed, Apr 21, 2010 at 9:21 AM, John Balestrini j...@balestrini.netwrote: Howdy All, I have a raidz pool that hangs the system when importing. I attempted a pfexec zpool import -F pool1 (which has been importing for two days with no

Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-05 Thread Brad
I'm wondering if the author is talking about cache mirroring where the cache is mirrored between both controllers. If that is the case, is he saying that for every write to the active controller, a second write is issued on the passive controller to keep the cache mirrored? -- This message

[zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-04 Thread Brad
I had always thought that with mpxio, it load-balances IO request across your storage ports but this article http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/ has got me thinking its not true. The available bandwidth is 2 or 4Gb/s (200 or 400MB/s – FC frames are 10

Re: [zfs-discuss] j4500 cache flush

2010-03-05 Thread Brad
Marion - Do you happen to know which SAS HBA it applies to? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] j4500 cache flush

2010-03-04 Thread Brad
Since the j4500 doesn't have an internal SAS controller, would it be safe to say that ZFS cache flushes would be handled by the host's SAS HBA? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

[zfs-discuss] naming zfs disks

2010-02-17 Thread Brad
Is there any way to assign a unique name or ID to a disk that is part of a zpool? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Oracle Performance - ZFS vs UFS

2010-02-13 Thread Brad
Don't use raidz for the raid type - go with a striped set -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2010-01-27 Thread Brad
flushes from zfs? Brad -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2010-01-27 Thread Brad
We're running 10/09 on the dev box but 11/06 is prodqa. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] compression ratio

2010-01-26 Thread Brad
With the default compression scheme (LZJB), how does one calculate the ratio or amount compressed ahead of time when allocating storage? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org
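There is no built-in way to predict the LZJB ratio before the data is written; the usual approach is to load a representative sample of the data and then read back the achieved ratio from the dataset property (a sketch; the dataset name is a placeholder):

    # zfs get compressratio pool/dataset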

Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2010-01-25 Thread Brad
Hi! So after reading through this thread and checking the bug report...do we still need to tell zfs to disable cache flush? set zfs:zfs_nocacheflush=1 -- This message posted from opensolaris.org ___ zfs-discuss mailing list
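For reference, that tunable is set in /etc/system and takes effect after a reboot (a sketch; whether it is still needed at all depends on the array and the Solaris build, which is what this thread is about):

    * /etc/system
    set zfs:zfs_nocacheflush=1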

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-21 Thread Brad
Did you buy the SSDs directly from Sun? I've heard there could possibly be firmware that's vendor specific for the X25-E. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

[zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-20 Thread Brad
Can anyone recommend an optimal and redundant striped configuration for an X4500? We'll be using it for an OLTP (Oracle) database and will need the best performance. Is it also true that the reads will be load-balanced across the mirrors? Is this considered a raid 1+0 configuration? zpool create
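The layout being asked about is a stripe of two-way mirrors, which is ZFS's equivalent of RAID 1+0; ZFS stripes writes across the mirror vdevs and services reads from either side of each mirror. A minimal sketch (device names are placeholders, not an X4500 slot mapping):

    # zpool create dbpool \
        mirror c0t1d0 c1t1d0 \
        mirror c0t2d0 c1t2d0 \
        mirror c0t3d0 c1t3d0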

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-20 Thread Brad
@hortnon - ASM is not within the scope of this project. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-20 Thread Brad
Zfs does not do striping across vdevs, but its load share approach will write based on (roughly) a round-robin basis, but will also prefer a less loaded vdev when under a heavy write load, or will prefer to write to an empty vdev rather than write to an almost full one. I'm trying to visualize

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-20 Thread Brad
I was reading your old posts about load-shares http://opensolaris.org/jive/thread.jspa?messageID=294580#294580 . So between raidz and load-share striping, raidz stripes a file system block evenly across each vdev but with load sharing the file system block is written on a vdev that's not

[zfs-discuss] x4500/x4540 does the internal controllers have a bbu?

2010-01-12 Thread Brad
Has anyone worked with an x4500/x4540, and do you know if the internal raid controllers have a BBU? I'm concerned that we won't be able to turn off the write cache on the internal HDDs and SSDs to prevent data corruption in case of a power failure. -- This message posted from opensolaris.org

Re: [zfs-discuss] x4500/x4540 does the internal controllers have a bbu?

2010-01-12 Thread Brad
(Caching isn't the problem; ordering is.) Weird, I was reading about a problem where, with SSDs (Intel X25-E), if the power goes out and the data in cache is not flushed, you would have loss of data. Could you elaborate on ordering? -- This message posted from opensolaris.org

Re: [zfs-discuss] x4500/x4540 does the internal controllers have a bbu?

2010-01-12 Thread Brad
Richard, Yes, write cache is enabled by default, depending on the pool configuration. Is it enabled for a striped (mirrored configuration) zpool? I'm asking because of a concern I've read on this forum about a problem with SSDs (and disks) where if a power outage occurs any data in cache would

[zfs-discuss] raidz stripe size (not stripe width)

2010-01-04 Thread Brad
If an 8K file system block is written to a 9-disk raidz vdev, how is the data distributed (written) across all devices in the vdev, since a ZFS write is one continuous IO operation? Is it distributed evenly (1.125KB) per device? -- This message posted from opensolaris.org

Re: [zfs-discuss] raidz stripe size (not stripe width)

2010-01-04 Thread Brad
Hi Adam, From your picture, it looks like the data is distributed evenly (with the exception of parity) across each spindle and then wraps around again (final 4K) - is this one single write operation or two? | P | D00 | D01 | D02 | D03 | D04 | D05 | D06 | D07 | -one write op??

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Brad
Thanks for the suggestion! I have heard mirrored vdev configurations are preferred for Oracle, but what's the difference between a raidz mirrored vdev vs. a raid10 setup? We have tested a zfs stripe configuration before with 15 disks and our tester was extremely happy with the performance.

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Brad
@ross Because each write of a raidz is striped across the disks the effective IOPS of the vdev is equal to that of a single disk. This can be improved by utilizing multiple (smaller) raidz vdevs which are striped, but not by mirroring them. So with random reads, would it perform better on a

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Brad
@eric As a general rule of thumb, each vdev has the random performance roughly the same as a single member of that vdev. Having six RAIDZ vdevs in a pool should give roughly the performance as a stripe of six bare drives, for random IO. It sounds like we'll need 16 vdevs striped in a pool to at

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Brad
@relling For small, random read IOPS the performance of a single, top-level vdev is
    performance = performance of a disk * (N / (N - P))
                = 133 * (12 / (12 - 1)) = 133 * 12/11
where N = number of disks in the vdev, P = number of parity devices in the vdev, performance of a disk

[zfs-discuss] raidz vs raid5 clarity needed

2009-12-29 Thread Brad
Hi! I'm attempting to understand the pros/cons between raid5 and raidz after running into a performance issue with Oracle on zfs (http://opensolaris.org/jive/thread.jspa?threadID=120703tstart=0). I would appreciate some feedback on what I've understood so far: WRITES raid5 - A FS block is

Re: [zfs-discuss] raidz vs raid5 clarity needed

2009-12-29 Thread Brad
@ross If the write doesn't span the whole stripe width then there is a read of the parity chunk, a write of the block and a write of the parity chunk, which is the write hole penalty/vulnerability, and is 3 operations (if the data spans more than 1 chunk then it is written in parallel so you can

Re: [zfs-discuss] repost - high read iops

2009-12-28 Thread Brad
Try an SGA more like 20-25 GB. Remember, the database can cache more effectively than any file system underneath. The best I/O is the I/O you don't have to make. We'll be turning up the SGA size from 4GB to 16GB. The arc size will be set from 8GB to 4GB. This can be a red herring. Judging by the
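Capping the ARC at 4GB on Solaris 10 is normally done with a tunable in /etc/system followed by a reboot (a sketch; the value is in bytes):

    * limit the ZFS ARC to 4 GB
    set zfs:zfs_arc_max=4294967296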

Re: [zfs-discuss] repost - high read iops

2009-12-28 Thread Brad
This doesn't make sense to me. You've got 32 GB, why not use it? Artificially limiting the memory use to 20 GB seems like a waste of good money. I'm having a hard time convincing the dbas to increase the size of the SGA to 20GB because their philosophy is, no matter what eventually you'll have

Re: [zfs-discuss] repost - high read iops

2009-12-27 Thread Brad
Richard - the l2arc is c1t13d0. What tools can be used to show the l2arc stats?
  raidz1    2.68T   580G    543    453   4.22M   3.70M
    c1t1d0      -      -    258    102    689K    358K
    c1t2d0      -      -    256    103    684K    354K
    c1t3d0      -      -    258    102    690K    359K
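One way to watch L2ARC activity is the kstat ARC counters, which include the l2_* statistics (a sketch; the arcstat.pl script, where installed, summarizes the same numbers):

    # kstat -m zfs -n arcstats | grep l2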

[zfs-discuss] repost - high read iops

2009-12-26 Thread Brad
repost - Sorry for ccing the other forums. I'm running into an issue where there seems to be a high number of read IOPS hitting the disks and physical free memory is fluctuating between 200MB - 450MB out of 16GB total. We have the l2arc configured on a 32GB Intel X25-E SSD and the slog on another 32GB

[zfs-discuss] high read iops - more memory for arc?

2009-12-24 Thread Brad
I'm running into an issue where there seems to be a high number of read IOPS hitting the disks and physical free memory is fluctuating between 200MB - 450MB out of 16GB total. We have the l2arc configured on a 32GB Intel X25-E SSD and the slog on another 32GB X25-E SSD. According to our tester, Oracle

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-22 Thread Brad Diggs
Have you considered running your script with ZFS pre-fetching disabled altogether to see if the results are consistent between runs? Brad Brad Diggs Senior Directory Architect Virtualization Architect xVM Technology Lead Sun Microsystems, Inc. Phone x52957/+1 972-992-0002 Mail

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Brad Diggs
You might want to have a look at my blog on filesystem cache tuning... It will probably help you to avoid memory contention between the ARC and your apps. http://www.thezonemanager.com/2009/03/filesystem-cache-optimization.html Brad Brad Diggs Senior Directory Architect

Re: [zfs-discuss] zpool import hangs

2009-06-16 Thread Brad Reese
Hi Victor, Yes, you may access the system via ssh. Please contact me at bar001 at uark dot edu and I will reply with details of how to connect. Thanks, Brad -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss

Re: [zfs-discuss] zpool import hangs

2009-06-15 Thread Brad Reese
and is very long...is there anything I should be looking for? Without -t 243... this command failed on dmu_read, now it just keeps going forever. Your help is much appreciated. Thanks, Brad -- This message posted from opensolaris.org ___ zfs-discuss

Re: [zfs-discuss] zpool import hangs

2009-06-10 Thread Brad Reese
= 00bab10c
version = 4
txg = 2435911
guid_sum = 16655261404755214374
timestamp = 1240287900 UTC = Mon Apr 20 23:25:00 2009
Thanks, Brad -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss

Re: [zfs-discuss] zpool import hangs

2009-06-02 Thread Brad Reese
Hi Victor, Here's the output of 'zdb -e -bcsvL tank' (similar to above but with -c). Thanks, Brad Traversing all blocks to verify checksums ... zdb_blkptr_cb: Got error 50 reading 0, 11, 0, 0 [L0 packed nvlist] 4000L/4000P DVA[0]=0:2500014000:4000 DVA[1]=0:4400014000:4000 fletcher4

Re: [zfs-discuss] zpool import hangs

2009-06-01 Thread Brad Reese
:56 2009 Thanks for your help, Brad -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Data size grew.. with compression on

2009-03-30 Thread Brad Plecs
I've run into this too... I believe the issue is that the block size/allocation unit size in ZFS is much larger than the default size on older filesystems (ufs, ext2, ext3). The result is that if you have lots of small files smaller than the block size, they take up more total space on the

Re: [zfs-discuss] Any way to set casesensitivity=mixed on the main pool?

2009-02-04 Thread Brad
If you have an older Solaris release using ZFS and Samba, and you upgrade to a version with CIFS support, how do you ensure the file systems/pools have casesensitivity=mixed? -- This message posted from opensolaris.org ___ zfs-discuss mailing list
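One caveat: casesensitivity can only be set when a file system is created, so existing datasets can't simply be switched after the upgrade; the data would have to be migrated into newly created ones, e.g. (placeholder names):

    # zfs create -o casesensitivity=mixed pool/share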

Re: [zfs-discuss] Raidz1 faulted with single bad disk. Requesting

2009-01-28 Thread Brad Hill
if that makes any kind of difference. Thanks for the suggestions. Brad Just a thought, but have you physically disconnected the bad disk? It's not unheard of for a bad disk to cause problems with others. Failing that, it's the corrupted data bit that's worrying me, it sounds like you may

Re: [zfs-discuss] Raidz1 faulted with single bad disk. Requesting assistan

2009-01-27 Thread Brad Hill
Any ideas on this? It looks like a potential bug to me, or there is something that I'm not seeing. Thanks again! -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] Raidz1 faulted with single bad disk. Requesting

2009-01-27 Thread Brad Hill
r...@opensolaris:~# zpool import -f tank internal error: Bad exchange descriptor Abort (core dumped) Hoping someone has seen that before... the Google is seriously letting me down on that one. I guess you could try 'zpool import -f'. This is a pretty odd status, I think. I'm pretty sure

Re: [zfs-discuss] Raidz1 faulted with single bad disk. Requesting assistan

2009-01-27 Thread Brad Hill
I do, thank you. The disk that went out sounds like it had a head crash or some such - loud clicking shortly after spin-up then it spins down and gives me nothing. BIOS doesn't even detect it properly to do a firmware update. Do you know 7200.11 has firmware bugs? Go to seagate website to

Re: [zfs-discuss] Raidz1 faulted with single bad disk. Requesting assistance.

2009-01-24 Thread Brad Hill
I've seen reports of a recent Seagate firmware update bricking drives again. What's the output of 'zpool import' from the LiveCD? It sounds like more than 1 drive is dropping off. r...@opensolaris:~# zpool import pool: tank id: 16342816386332636568 state: FAULTED status: The pool

Re: [zfs-discuss] Raidz1 faulted with single bad disk. Requesting assistance.

2009-01-22 Thread Brad Hill
I would get a new 1.5 TB and make sure it has the new firmware and replace c6t3d0 right away - even if someone here comes up with a magic solution, you don't want to wait for another drive to fail. The replacement disk showed up today but I'm unable to replace the one marked UNAVAIL:
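For reference, the usual form of the command once the pool is imported is (the replacement device name here is a placeholder):

    # zpool replace tank c6t3d0 c7t0d0

but with the pool itself FAULTED, the import has to succeed before a replace can run.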

[zfs-discuss] Raidz1 p

2009-01-19 Thread Brad Hill
Greetings! I lost one out of five disks on a machine with a raidz1 and I'm not sure exactly how to recover from it. The pool is marked as FAULTED which I certainly wasn't expecting with only one bum disk. r...@blitz:/# zpool status -v tank pool: tank state: FAULTED status: One or more

Re: [zfs-discuss] Raidz1 p

2009-01-19 Thread Brad Hill
Sure, and thanks for the quick reply. Controller: Supermicro AOC-SAT2-MV8 plugged into a 64-bit PCI-X 133 bus. Drives: 5 x Seagate 7200.11 1.5TB disks for the raidz1. Single 36GB Western Digital 10krpm Raptor as system disk. Mate for this is in but not yet mirrored. Motherboard: Tyan Thunder K8W

Re: [zfs-discuss] Aggregate Pool I/O

2009-01-18 Thread Brad
Well if I do fsstat mountpoint on all the filesystems in the ZFS pool, then I guess my aggregate number for read and write bandwidth should equal the aggregate numbers for the pool? Yes? The downside is that fsstat has the same granularity issue as zpool iostat. What I'd really like is nread

[zfs-discuss] Aggregate Pool I/O

2009-01-17 Thread Brad
I'd like to track a server's ZFS pool I/O throughput over time. What's a good data source to use for this? I like zpool iostat for this, but if I poll at two points in time I would get a number since boot (e.g. 1.2M) and a current number (e.g. 1.3K). If I use the current number then I've lost
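One low-effort data source is zpool iostat run with an interval, which prints per-interval numbers after the first sample rather than since-boot totals (a sketch; the pool name and 10-second interval are placeholders):

    # zpool iostat mypool 10

The first line of output is still the since-boot average and should be discarded.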

Re: [zfs-discuss] zpool add dumping core

2009-01-10 Thread Brad Plecs
Problem solved... after the resilvers completed, the status reported that the filesystem needed an upgrade. I did a zpool upgrade -a, and after that completed and there was no resilvering going on, the zpool add ran successfully. I would like to suggest, however, that the behavior be fixed

Re: [zfs-discuss] zpool add dumping core

2009-01-10 Thread Brad Plecs
Are you sure this isn't a case of CR 6433264 which was fixed long ago, but arrived in patch 118833-36 to Solaris 10? It certainly looks similar, but this system already had 118833-36 when the error occurred, so if this bug is truly fixed, it must be something else. Then again, I wasn't

[zfs-discuss] zpool add dumping core

2009-01-09 Thread Brad Plecs
I'm trying to add some additional devices to my existing pool, but it's not working. I'm adding a raidz group of 5 300 GB drives, but the command always fails: r...@kronos:/ # zpool add raid raidz c8t8d0 c8t13d0 c7t8d0 c3t8d0 c5t8d0 Assertion failed: nvlist_lookup_string(cnv, path, path) ==

[zfs-discuss] ZFS filesystem creation during JumpStart

2008-12-15 Thread Brad Hudson
Does anyone know of a way to specify the creation of ZFS file systems for a ZFS root pool during a JumpStart installation? For example, creating the following during the install:
    Filesystem    Mountpoint
    rpool/var     /var

Re: [zfs-discuss] ZFS filesystem creation during JumpStart

2008-12-15 Thread Brad Hudson
Thanks for the response Peter. However, I'm not looking to create a different boot environment (bootenv). I'm actually looking for a way within JumpStart to separate out the ZFS filesystems from a new installation to have better control over quotas and reservations for applications that

Re: [zfs-discuss] Can't rm file when No space left on device...

2008-06-10 Thread Brad Diggs
Great point. Hadn't thought of it in that way. I haven't tried truncating a file prior to trying to remove it. Either way though, I think it is a bug if once the filesystem fills up, you can't remove a file. Brad On Thu, 2008-06-05 at 21:13 -0600, Keith Bierman wrote: On Jun 5, 2008, at 8:58

[zfs-discuss] Can't rm file when No space left on device...

2008-06-04 Thread Brad Diggs
Is there an existing bug on this that is going to address enabling the removal of a file without the pre-requisite removal of a snapshot? Thanks in advance, Brad -- Brad Diggs

Re: [zfs-discuss] Shrinking a zpool?

2008-05-06 Thread Brad Bender
Solaris 10 update 5 was released 05/2008, but no zpool shrink :-( Any update? This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] How do you determine the zfs_vdev_cache_size current value?

2008-04-29 Thread Brad Diggs
How do you ascertain the current zfs vdev cache size (e.g. zfs_vdev_cache_size) via mdb or kstat or any other cmd? Thanks in advance, Brad -- The Zone Manager http://TheZoneManager.COM http://opensolaris.org/os/project/zonemgr ___ zfs-discuss mailing
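One way to read the live value is mdb against the running kernel (a sketch; use /E instead of /D if the variable turns out to be 64-bit on your build):

    # echo "zfs_vdev_cache_size/D" | mdb -k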

[zfs-discuss] Is gzip planned to be in S10U5?

2008-02-13 Thread Brad Diggs
Hello, Is the gzip compression algorithm planned to be in Solaris 10 Update 5? Thanks in advance, Brad -- The Zone Manager http://TheZoneManager.COM http://opensolaris.org/os/project/zonemgr ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] UFS on zvol Cache Questions...

2008-02-08 Thread Brad Diggs
Hello Darren, Please find responses in line below... On Fri, 2008-02-08 at 10:52 +, Darren J Moffat wrote: Brad Diggs wrote: I would like to use ZFS but with ZFS I cannot prime the cache and I don't have the ability to control what is in the cache (e.g. like with the directio UFS

[zfs-discuss] UFS on zvol Cache Questions...

2008-02-07 Thread Brad Diggs
in advance, Brad ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS quota

2007-08-27 Thread Brad Plecs
OK, you asked for creative workarounds... here's one (though it requires that the filesystem be briefly unmounted, which may be deal-killing): That is, indeed, creative. :) And yes, the unmount make it impractical in my environment. I ended up going back to rsync, because we had more

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-23 Thread Brad Plecs
At the moment, I'm hearing that using h/w raid under my zfs may be better for some workloads and the h/w hot spare would be nice to have across multiple raid groups, but the checksum capabilities in zfs are basically nullified with single/multiple h/w LUNs, resulting in reduced protection.

[zfs-discuss] Re: Puzzling ZFS behavior with COMPRESS option

2007-04-17 Thread Brad Green
Did you find a resolution to this issue? This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] Re: ZFS over NFS extra slow?

2007-01-03 Thread Brad Plecs
write cache was enabled on all the ZFS drives, but disabling it gave a negligible speed improvement (FWIW, the pool has 50 drives):
  (write cache on)  /bin/time tar xf /tmp/vbulletin_3-6-4.tar   real 51.6  user 0.0  sys 1.0
  (write cache off) /bin/time tar xf

[zfs-discuss] ZFS over NFS extra slow?

2007-01-02 Thread Brad Plecs
I had a user report extreme slowness on a ZFS filesystem mounted over NFS over the weekend. After some extensive testing, the extreme slowness appears to only occur when a ZFS filesystem is mounted over NFS. One example is doing a 'gtar xzvf php-5.2.0.tar.gz'... over NFS onto a ZFS

[zfs-discuss] Re: ZFS over NFS extra slow?

2007-01-02 Thread Brad Plecs
Ah, thanks -- reading that thread did a good job of explaining what I was seeing. I was going nuts trying to isolate the problem. Is work being done to improve this performance? 100% of my users are coming in over NFS, and that's a huge hit. Even on single large files, writes are slower by

[zfs-discuss] Re: Tunable parameter to zfs memory use

2006-12-24 Thread Brad Diggs
What would you want to observe if your system hit the upper limit in zfs_max_phys_mem? I would want zfs to behave well and safely like every other app on which you apply boundary conditions. It is the responsibility of zfs to know its boundaries and stay within them. Otherwise, your system

[zfs-discuss] Interesting zfs destroy failure

2006-08-22 Thread Brad Plecs
Saw this while writing a script today -- while debugging the script, I was ctrl-c-ing it a lot rather than wait for the zfs create / zfs set commands to complete. After doing so, my cleanup script failed to zfs destroy the new filesystem: [EMAIL PROTECTED]:/ # zfs destroy -f

[zfs-discuss] Difficult to recursive-move ZFS filesystems to another server

2006-08-11 Thread Brad Plecs
Just wanted to point this out -- I have a large web tree that used to have UFS user quotas on it. I converted to ZFS using the model that each user has their own ZFS filesystem quota instead. I worked around some NFS/automounter issues, and it now seems to be working fine. Except now I

[zfs-discuss] Re: Proposal expand raidz

2006-08-11 Thread Brad Plecs
Just a data point -- our netapp filer actually creates additional raid groups that are added to the greater pool when you add disks, much as zfs does now. They aren't simply used to expand the one large raid group of the volume. I've been meaning to rebuild the whole thing to get use of

Re: [zfs-discuss] Re: System hangs on SCSI error

2006-08-10 Thread Brad Plecs
The core dump timed out (related to the SCSI bus reset?), so I don't have one. I can try it again, though, it's easy enough to reproduce. I was seeing errors on the fibre channel disks as well, so it's possible the whole thing was locked up. BP -- [EMAIL PROTECTED]

[zfs-discuss] Re: Quotas and Snapshots

2006-07-25 Thread Brad Plecs
I've run into this myself. (I am in a university setting). after reading bug ID 6431277 (URL below for noobs like myself who didn't know what see 6431277 meant): http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6431277 ...it's not clear to me how this will be resolved. What I'd

Re: [zfs-discuss] Re: Quotas and Snapshots

2006-07-25 Thread Brad Plecs
First, ZFS allows one to take advantage of large, inexpensive Serial ATA disk drives. Paraphrased: ZFS loves large, cheap SATA disk drives. So the first part of the solution looks (to me) as simple as adding some cheap SATA disk drives. Next, after extra storage space has been added to
