Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-20 Thread Thomas
which gap?

Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-20 Thread Thomas
http://mail.opensolaris.org/pipermail/onnv-notify/2009-July/009872.html second bug; it's the same link as in the first post.

Re: [zfs-discuss] virtualization, alignment and zfs variation stripes

2009-07-22 Thread thomas
Hmm.. I guess that's what I've heard as well. I do run compression and believe a lot of others would as well. So then, it seems to me that if I have guests that run a filesystem formatted with 4k blocks for example.. I'm inevitably going to have this overlap when using ZFS network storage? So

Re: [zfs-discuss] SSD's and ZFS...

2009-07-23 Thread thomas
I think it is a great idea, assuming the SSD has good write performance. This one claims up to 230MB/s read and 180MB/s write and it's only $196. http://www.newegg.com/Product/Product.aspx?Item=N82E16820609393 Compared to this one (250MB/s read and 170MB/s write) which is $699. Are

Re: [zfs-discuss] Opensolaris attached to 70 disk HP array

2009-07-23 Thread thomas
That is an interesting bit of kit. I wish a white box manufacturer would create something like this (hint hint supermicro)

Re: [zfs-discuss] Using consumer drives in a zraid2

2009-08-25 Thread thomas
I'll admit, I was cheap at first and my fileserver right now is consumer drives. You can bet all my future purchases will be of the enterprise grade. And guess what... none of the drives in my array are less than 5 years old, so even if they did die, and I had bought the

Re: [zfs-discuss] Using consumer drives in a zraid2

2009-08-26 Thread thomas
Hi Richard, So you have to wait for the sd (or other) driver to timeout the request. By default, this is on the order of minutes. Meanwhile, ZFS is patiently awaiting a status on the request. For enterprise class drives, there is a limited number of retries on the disk before it reports an

Re: [zfs-discuss] Status/priority of 6761786

2009-08-27 Thread thomas
For whatever it's worth to have someone post on a list.. I would *really* like to see this improved as well. The time it takes to iterate over both thousands of filesystems and thousands of snapshots makes me very cautious about taking advantage of some of the built-in zfs features in an HA

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-16 Thread thomas
Even if it might not be the best technical solution, I think what a lot of people are looking for when this comes up is a knob they can use to say I only want X IOPS per vdev (in addition to low prioritization) to be used while scrubbing. Doing so probably helps them feel more at ease that they

Re: [zfs-discuss] SSD best practices

2010-04-22 Thread thomas
Someone on this list threw out the idea a year or so ago to just setup 2 ramdisk servers, export a ramdisk from each and create a mirror slog from them. Assuming newer version zpools, this sounds like it could be even safer since there is (supposedly) less of a chance of catastrophic failure if

Re: [zfs-discuss] New SSD options

2010-05-19 Thread thomas
40k IOPS sounds like "best case, you'll never see it in the real world" marketing to me. There are a few benchmarks if you google and they all seem to indicate the performance is probably +/- 10% of an intel x25-e. I would personally trust intel over one of these drives. Is it even possible

Re: [zfs-discuss] New SSD options

2010-05-21 Thread thomas
On the PCIe side, I noticed there's a new card coming from LSI that claims 150,000 4k random writes. Unfortunately this might end up being an OEM-only card. I also notice on the ddrdrive site that they now have an opensolaris driver and are offering it in a beta program.

Re: [zfs-discuss] can you recover a pool if you lose the zil (b134+)

2010-05-25 Thread thomas
Is there a best practice on keeping a backup of the zpool.cache file? Is it possible? Does it change with changes to vdevs?
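For illustration, a minimal sketch of such a backup, assuming the default cache location /etc/zfs/zpool.cache and a hypothetical destination; the file is rewritten whenever the pool configuration changes, so the copy needs refreshing after vdev changes:
    # keep a dated copy of the current cache file (destination path is hypothetical)
    cp -p /etc/zfs/zpool.cache /export/backup/zpool.cache.$(date +%Y%m%d)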

Re: [zfs-discuss] Snapshots, txgs and performance

2010-06-06 Thread thomas
Very interesting. This could be useful for a number of us. Would you be willing to share your work?

Re: [zfs-discuss] Encryption on ZFS / Disk Usage

2006-08-22 Thread Thomas Deutsch
Hi 2006/8/22, Constantin Gonzalez [EMAIL PROTECTED]: Thomas Deutsch wrote: I'm thinking about changing from Linux/software RAID to OpenSolaris/ZFS. During this, I've got some (probably stupid) questions: don't worry, there are no stupid questions :). 1. Is ZFS able to encrypt all the data

Re: [zfs-discuss] ?: ZFS and jumpstart export race condition

2006-09-08 Thread Thomas Wagner
does a zfs filesystem get mounted? Probably a zfs legacy mount together with a lower priority lofs mount would do it. Regards, Thomas On Fri, Sep 08, 2006 at 08:18:06AM -0400, Steffen Weiberle wrote: I have a jumpstart server where the install images are on a ZFS pool. For PXE boot, several

Re: [zfs-discuss] Memory Usage

2006-09-12 Thread Thomas Burns
On Sep 12, 2006, at 2:04 PM, Mark Maybee wrote: Thomas Burns wrote: Hi, We have been using zfs for a couple of months now, and, overall, really like it. However, we have run into a major problem -- zfs's memory requirements crowd out our primary application. Ultimately, we have

Re: [zfs-discuss] Memory Usage

2006-09-12 Thread Thomas Burns
Also, where do I set arc.c_max? In /etc/system? Out of curiosity, why isn't limiting arc.c_max considered best practice (I just want to make sure I am not missing something about the effect limiting it will have)? My guess is that in our case (lots of small groups -- 50 people or less
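For reference, the ARC cap is normally set through the zfs_arc_max tunable (which becomes arc.c_max at boot); a minimal /etc/system sketch with an arbitrary 1 GB limit:
    * cap the ZFS ARC at 1 GB (value in bytes); takes effect after a reboot
    set zfs:zfs_arc_max = 0x40000000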

Re: [zfs-discuss] Re: ZFS imported simultanously on 2 systems...

2006-09-13 Thread Thomas Wagner
using a cluster-framework with heartbeats and all that great stuff ... Regards, Thomas

Re: [zfs-discuss] mkdir == zfs create

2006-09-28 Thread Thomas Wagner
filedescriptors or unmount, then operator-predefined actions will be triggered. Actions like zfs create rulebased-name, take a snapshot or zsend on a snapshot and others could be thought of. Thomas

Re: [zfs-discuss] raidz DEGRADED state

2006-11-30 Thread Thomas Garner
the original disk to complete the array? Thanks! Thomas On 11/30/06, Krzys [EMAIL PROTECTED] wrote: Ah, did not see your follow up. Thanks. Chris On Thu, 30 Nov 2006, Cindy Swearingen wrote: Sorry, Bart, is correct: If new_device is not specified, it defaults

Re: [zfs-discuss] raidz DEGRADED state

2006-12-05 Thread Thomas Garner
So there is no current way to specify the creation of a 3 disk raid-z array with a known missing disk? On 12/5/06, David Bustos [EMAIL PROTECTED] wrote: Quoth Thomas Garner on Thu, Nov 30, 2006 at 06:41:15PM -0500: I currently have a 400GB disk that is full of data on a linux system. If I buy
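One workaround sometimes suggested (a sketch only, with hypothetical device names) is to stand a sparse file in for the missing disk and take it offline right away, leaving the pool degraded until the real disk arrives:
    mkfile -n 400g /var/tmp/fakedisk        # sparse file the same size as the real disks
    zpool create tank raidz c0d0 c1d0 /var/tmp/fakedisk
    zpool offline tank /var/tmp/fakedisk    # nothing gets written to the placeholder
    # after copying the data over and adding the real third disk:
    zpool replace tank /var/tmp/fakedisk c2d0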

[zfs-discuss] .zfs snapshot directory in all directories

2007-02-25 Thread Thomas Garner
of the filesystem? Thanks! Thomas

Re: [zfs-discuss] Re: .zfs snapshot directory in all directories

2007-02-26 Thread Thomas Garner
for what purpose? Darren's correct, it's a simple case of ease of use. Not show-stopping by any means but would be nice to have. Thomas

[zfs-discuss] Re: Cluster File System Use Cases

2007-02-28 Thread Thomas Roach
I'm an Oracle DBA and we are doing ASM on SUN with RAC. I am happy with ASM's performance but am interested in Clustering. I mentioned to Bob Netherton that if Sun could make it a clustering file system, that helps them enable the grid further. Oracle wrote and gave OCFS2 to the Linux Kernel.

[zfs-discuss] Re: How to interrupt a zpool scrub?

2007-03-05 Thread Thomas Werschlein
How embarrassing is that? Pete kindly pointed me to the man page where it clearly states that I should use zpool scrub [-s] pool; -s is for "stop scrubbing". Sorry folks, I just looked in the Administration Guide where I couldn't find it. But I am sure it's in there, too.
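For anyone else landing on this thread, the syntax in question (pool name hypothetical):
    zpool scrub tank       # start a scrub
    zpool scrub -s tank    # stop the scrub currently in progress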

[zfs-discuss] ZFS checksum error detection

2007-03-16 Thread Thomas Nau
the read? Would someone please explain how the mechanism works in that case? Of course in the meantime we attached another box in mirror configuration ;) Thanks in advance Thomas

Re: [zfs-discuss] Re: ZFS checksum error detection

2007-03-17 Thread Thomas Nau
what they are about Thomas

[zfs-discuss] ZFS over iSCSI question

2007-03-23 Thread Thomas Nau
is gone forever? If so, is this a transport independent problem which can also happen if ZFS used Fibre Channel attached drives instead of iSCSI devices? Thanks for your help Thomas

Re: [zfs-discuss] ZFS over iSCSI question

2007-03-23 Thread Thomas Nau
that FC-AL, ... do better in this case Thomas

Re: [zfs-discuss] ZFS over iSCSI question

2007-03-23 Thread Thomas Nau
to see if they use that sequence. Allow me one more question: why is fflush() required prior to fsync()? Putting all the pieces together, this means that if the app doesn't do it, it suffered from the problem with UFS anyway, just with typically smaller caches, right? Thanks again Thomas

Re: [zfs-discuss] ZFS over iSCSI question

2007-03-23 Thread Thomas Nau
as possible applications but we need to have redundancy for the fileserver itself too Thomas

Re[2]: [zfs-discuss] ZFS over iSCSI question

2007-03-25 Thread Thomas Nau
on UFS or any similar FS anymore. It probably will be really slow, but everything should be consistent all the time I guess. You might be right about that. I did a quick check with dtrace on the mail server and it seems IMAP, sendmail and the others nicely sync data as they should Thomas

[zfs-discuss] pool resilver oddity

2007-04-08 Thread Thomas Garner
drives online? Shouldn't it have been writing data/parity to the replacement drive? Is this normal and the expected behavior? Thanks for any insight! Thomas

Re: [zfs-discuss] Preferred backup mechanism for ZFS?

2007-04-20 Thread Tim Thomas
My initial reaction is that the world has got by without [email|cellphone| other technology] for a long time ... so not a big deal. Well, I did say I viewed it as an indefensible position :-) Now shall we debate if the world is a better place because of cell phones :-P

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-23 Thread Thomas Garner
So it is expected behavior on my Nexenta alpha 7 server for Sun's nfsd to stop responding after 2 hours of running a bittorrent client over nfs4 from a linux client, causing zfs snapshots to hang and requiring a hard reboot to get the world back in order? Thomas There is no NFS over ZFS issue

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-24 Thread Thomas Garner
) and zfs would be having problems taking snapshots, if I hadn't disabled the hourly snapshots. Thanks! Thomas [EMAIL PROTECTED] ~]$ rpcinfo -t filer0 nfs rpcinfo: RPC: Timed out program 13 version 0 is not available echo ::pgrep nfsd | ::walk thread | ::findstack -v | mdb -k stack pointer

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-25 Thread Thomas Garner
Thanks, Roch! Much appreciated knowing what the problem is and that a fix is in a forthcoming release. Thomas On 6/25/07, Roch - PAE [EMAIL PROTECTED] wrote: Sorry about that; looks like you've hit this: 6546683 marvell88sx driver misses wakeup for mv_empty_cv http

[zfs-discuss] ZFS and SE99x0 Array Best Practices

2007-07-13 Thread Thomas McPhillips
Does anyone have a best practice for utilizing ZFS with Hitachi SE99x0 arrays? I'm curious about what type of parity-groups work best with ZFS for various application uses. Examples: OLTP, warehousing, NFS, ... Thanks!

Re: [zfs-discuss] ZFS raid is very slow???

2007-07-17 Thread Thomas Wagner
the difference being a factor of 2 between reading and writing when using a 1:1 mirror setup, I would say you hit the bottleneck of your PCI bus. Thomas On Sun, Jul 15, 2007 at 03:37:06AM -0700, Orvar Korvar wrote: I did that, and here are the results from the ZFS jury: bash-3.00$ timex dd if=/dev/zero

[zfs-discuss] ? ZFS dynamic striping over RAID-Z

2007-08-02 Thread Tim Thomas
Hi if I create a storage pool with multiple RAID-Z stripes in it does ZFS dynamically stripe data across all the RAID-Z stripes in the pool automagically ? If I relate this back to my storage array experience, this would be Plaiding which is/was creating a RAID-0 logical volume across

Re: [zfs-discuss] Samba with ZFS ACL

2007-08-30 Thread Tim Thomas

[zfs-discuss] SAS-controller recommodations

2007-09-13 Thread Thomas Liesner
Hi all, i am about to put together a one month test configuration for a graphics-production server (prepress-filer that is). I would like to test zfs on a x4200 with two sas2sata-jbods attached. Initially i wanted to use an infortrend fc2sata-jbod-enclosure but these are out of production

Re: [zfs-discuss] Best option for my home file server?

2007-09-28 Thread Thomas Wagner
mirror A B here lives the OS and userdata-one pool userdata-two mirror C D userdata-two spanning CD - XY mirror X Y Thomas On Thu, Sep 27, 2007 at 08:39:40PM +0100, Dick Davies wrote: On 26/09/2007, Christopher [EMAIL PROTECTED] wrote: I'm about to build a fileserver and I think I'm

[zfs-discuss] Fileserver performance tests

2007-10-08 Thread Thomas Liesner
Hi all, i want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) by Sun x4200 with SAS-Jbods and ZFS. The application will be the Helios UB+ fileserver suite. I installed the latest Solaris 10 on a x4200 with 8gig of ram and two Sun SAS controllers, attached two sas-jbods with 8

[zfs-discuss] Some test results: ZFS + SAMBA + Sun Fire X4500 (Thumper)

2007-10-08 Thread Tim Thomas
Hi this may be of interest: http://blogs.sun.com/timthomas/entry/samba_performance_on_sun_fire I appreciate that this is not a frightfully clever set of tests but I needed some throughput numbers and the easiest way to share the results is to blog. Rgds Tim

Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Thomas Liesner
Hi again, i did not want to compare the filebench test with the single mkfile command. Still, i was hoping to see similar numbers in the filebench stats. Any hints what i could do to further improve the performance? Would a raid1 over two stripes be faster? TIA, Tom

Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Thomas Liesner
Hi, i checked with $nthreads=20 which will roughly represent the expected load and these are the results: IO Summary: 7989 ops 7914.2 ops/s, (996/979 r/w) 142.7mb/s, 255us cpu/op, 0.2ms latency BTW, smpatch is still running and further tests will get done when the system is

Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Thomas Liesner
Hi, i checked with $nthreads=20 which will roughly represent the expected load and these are the results: IO Summary: 7989 ops 7914.2 ops/s, (996/979 r/w) 142.7mb/s, 255us cpu/op, 0.2ms latency BTW, smpatch is still running and further tests will get done when the system is rebooted. The

Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Thomas Liesner
i wanted to test some simultaneous sequential writes and wrote this little snippet: #!/bin/bash for ((i=1; i<=20; i++)); do dd if=/dev/zero of=lala$i bs=128k count=32768; done While the script was running i watched zpool iostat and measured the time between starting and stopping of the writes
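As written, the loop issues its dd commands one after another; if truly simultaneous streams were the intent, a variant (an assumption, not the poster's original script) would background each writer and wait for all of them:
    #!/bin/bash
    for ((i=1; i<=20; i++)); do
      dd if=/dev/zero of=lala$i bs=128k count=32768 &   # 4 GB per stream, in parallel
    done
    wait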

Re: [zfs-discuss] Some test results: ZFS + SAMBA + Sun Fire X4500 (Thumper)

2007-10-09 Thread Tim Thomas
://blogs.sun.com/timthomas/entry/another_samba_test_on_sun What I find nice about Thumper/X4500's is that they behave very predictably... in my experience anyway. Rgds Tim

Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread Thomas Liesner
Hi Eric, Are you talking about the documentation at: http://sourceforge.net/projects/filebench or: http://www.opensolaris.org/os/community/performance/filebench/ and: http://www.solarisinternals.com/wiki/index.php/FileBench ? i was talking about the solarisinternals wiki. I can't find any

Re: [zfs-discuss] Fileserver performance tests

2007-10-11 Thread Thomas Liesner
Hi, compression is off. I've checked rw-performance with 20 simultaneous cp and with the following... #!/usr/bin/bash for ((i=1; i<=20; i++)); do cp lala$i lulu$i; done (lala1-20 are 2gb files) ...and ended up with 546mb/s. Not too bad at all.

[zfs-discuss] Which SAS JBOD-enclosure

2007-10-11 Thread Thomas Liesner
Hi all, i am currently using two XStore XJ 1100 SAS JBOD enclosures (http://www.xtore.com/product_detail.asp?id_cat=11) attached to a x4200 for testing. So far it works rather nicely, but i am still looking for alternatives. The Infortrend JBOD-expansions are not deliverable at the moment. What

Re: [zfs-discuss] ZFS is very slow in our test, when the capacity is high

2007-10-12 Thread Thomas Liesner
Hi, did you read the following? http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide Currently, pool performance can degrade when a pool is very full and filesystems are updated frequently, such as on a busy mail server. Under these circumstances, keep pool space under 80%

Re: [zfs-discuss] Sun's storage product roadmap?

2007-10-18 Thread Thomas Liesner
Hi, from sun germany i got the info that the 2u JBODs will be officially announced in q1 2008 and the 4u JBODs in q2 2008. Both will have SAS connectors and support both SAS and SATA drives. Regards, Tom

[zfs-discuss] ? Removing a disk from a ZFS Storage Pool

2008-01-28 Thread Tim Thomas
is the status of this? Thanks Tim

[zfs-discuss] Avoiding performance decrease when pool over 80% usage

2008-02-08 Thread Thomas Liesner
Hi all, i am planning a zfs-fileserver for a larger prepress-company in Germany. Knowing that users tend to use all the space they can get, i am looking for a solution to avoid a rapid performance loss when the production-pool is more than 80% used. Would it be a practical solution to just set

Re: [zfs-discuss] Avoiding performance decrease when pool usage is over 80%

2008-02-12 Thread Thomas Liesner
bda wrote: I haven't noticed this behavior when ZFS has (as recommended) the full disk. Good to know, as i intended to use the whole disks anyway. Thanks, Tom

Re: [zfs-discuss] Avoiding performance decrease when pool usage is over 80%

2008-02-12 Thread Thomas Liesner
Ralf Ramge wrote: Quotas are applied to file systems, not pools, and a such are pretty independent from the pool size. I found it best to give every user his/her own filesystem and applying individual quotas afterwards. Does this mean, that if i have a pool of 7TB with one filesystem for

Re: [zfs-discuss] Avoiding performance decrease when pool usage is over 80%

2008-02-12 Thread Thomas Liesner
Nobody out there who ever had problems with low diskspace? Regards, Tom

Re: [zfs-discuss] We can't import pool zfs faulted

2008-02-12 Thread Thomas Liesner
If you can't use zpool status, you probably should check whether all devices needed for this pool are currently available, e.g. with format... Regards, Tom

Re: [zfs-discuss] Avoiding performance decrease when pool usage is over 80%

2008-02-13 Thread Thomas Liesner
Ralf Ramge schrieb: Thomas Liesner wrote: Does this mean, that if i have a pool of 7TB with one filesystem for all users with a quota of 6TB i'd be alright? Yep. Although I *really* recommend creating individual file systems, e.g. if you have 1,000 users on your server, I'd create 1,000
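A minimal sketch of that layout (pool, user names and quota size are hypothetical):
    zfs create tank/home
    for user in alice bob carol; do
      zfs create tank/home/$user
      zfs set quota=10G tank/home/$user
    done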

[zfs-discuss] Is the ZFS GUI in open Solaris ?

2008-02-14 Thread Tim Thomas
Hi I just loaded up opensolaris on an X4500 (Thumper) and tried to connect to the ZFS GUI (https://x:6789)...and it is not there. Is this not part of Open Solaris...or do I just need to work out how to switch it on.. Thanks Tim

Re: [zfs-discuss] Is the ZFS GUI in open Solaris ?

2008-02-14 Thread Tim Thomas
It should be there... try starting the webconsole service. On 2/14/08, Tim Thomas [EMAIL PROTECTED] wrote: Hi I just loaded up opensolaris on an X4500 (Thumper) and tried to connect to the ZFS GUI (https://x:6789)...and it is not there. Is this not part of Open Solaris

Re: [zfs-discuss] Is the ZFS GUI in open Solaris ?

2008-02-14 Thread Tim Thomas
are you sure the service is actually running? does "svcs -a | grep webconsole" say "online"? Yes, it is online

[zfs-discuss] FIXED (Re: Is the ZFS GUI in open Solaris ?)

2008-02-14 Thread Tim Thomas
A reboot did it. Tim Thomas said the following: Thanks Chris, someone else has suggested that to me but it still does not work. I also tried... # svccfg -s svc:/system/webconsole setprop options/tcp_listen = true # svcadm refresh svc:/system/webconsole
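For the archive: the sequence usually quoted for exposing the console on port 6789 is the two commands above plus a restart of the service, which a reboot also accomplishes; a sketch, not guaranteed on every build:
    svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
    svcadm refresh svc:/system/webconsole
    svcadm restart svc:/system/webconsole:console
    svcs webconsole        # should report online, then browse to https://host:6789/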

[zfs-discuss] creating ZFS mirror over iSCSI between to DELL MD3000i arrays

2008-06-09 Thread Thomas Rojsel
Hi, I've looked at ZFS for a while now and i'm wondering if it's possible on a server to create a ZFS mirror between two different iSCSI targets (two MD3000i located in two different server rooms). Or is there any setup that you guys recommend for maximal data protection? Thanks, /Thom

Re: [zfs-discuss] creating ZFS mirror over iSCSI between to DELL MD3000i arrays

2008-06-10 Thread Thomas Rojsel
Hi Tomas, I will try it myself, but it's just that if i google the subject i only find old entries describing things like kernel panics and system freezes. I'm just wondering if this problem is fixed in the newer releases, or if there is another recommended way to keep data stored on different

[zfs-discuss] Q: grow zpool build on top of iSCSI devices

2008-07-02 Thread Thomas Nau
that seems to be one root cause for not seeing the added space after recreating the targets. The latter was necessary to make sure the disk size matches the ZVOL size. Any hints are greatly appreciated! Thomas

[zfs-discuss] 'referenced' bigger than 'volsize'?

2008-07-28 Thread Thomas Pfohe
Hi, a zfs create -V 1M pool/foo followed by dd if=/dev/random of=/dev/zvol/rdsk/pool/foo bs=1k count=1k (using Nevada b94) yields in zfs get all pool/foo: used 1,09M, referenced 1,09M, volsize 1M

[zfs-discuss] problems accessing ZFS snapshots

2008-07-30 Thread Thomas Nau
on that? Thomas

[zfs-discuss] Unbalanced write patterns

2008-07-30 Thread Thomas Garner
raidz's, but does not seem to fit what I've seen empirically. What am I missing? Note that the following is a snapshot of time in the middle of a large streaming write, not the initial output from zpool iostat. Thomas zpool iostat -v tank 1 capacity operations bandwidth

Re: [zfs-discuss] problems accessing ZFS snapshots

2008-07-31 Thread Thomas Nau
in snv_96. Thanks for finding this. Thanks for fixing, but it also happens if the snapshot directory isn't empty, as /.zfs/snapshot holds the name of the snapshot that was taken. Thomas

[zfs-discuss] checksum errors after online'ing device

2008-08-02 Thread Thomas Nau
with OpenSolaris clients. Any hints? Thomas

Re: [zfs-discuss] checksum errors after online'ing device

2008-08-02 Thread Thomas Nau
Miles On Sat, 2 Aug 2008, Miles Nordin wrote: tn == Thomas Nau [EMAIL PROTECTED] writes: tn Nevertheless during the first hour of operation after onlining tn we recognized numerous checksum errors on the formerly tn offlined device. We decided to scrub the pool and after tn

Re: [zfs-discuss] ZFS on 32bit.

2008-08-06 Thread Thomas Garner
issues, but is definitely not cool when it happens. Thomas On Wed, Aug 6, 2008 at 1:31 PM, Bryan Allen [EMAIL PROTECTED] wrote: Good afternoon, I have a ~600GB zpool living on older Xeons. The system has 8GB of RAM. The pool is hanging off two LSI Logic SAS3041X-Rs (no RAID configured). When

[zfs-discuss] trouble with resilver after removing drive from 3510

2008-08-28 Thread Thomas Bleek
make a call to service? Thanks in advance, thomas

[zfs-discuss] Any experience of bulk transfers with zfs send/receive ?

2008-09-21 Thread Tim Thomas
Hi has anyone attempted transfers of large volumes of data with zfs send/receive in a production environment? I am seeing interest in zfs send/receive from people who have used rsync and similar technologies to copy data for DR purposes... but I have no idea of what to expect so far as

[zfs-discuss] Recover zpool/zfs

2008-11-07 Thread Thomas Kloeber
of the physical device I can see all the data but I can't get to it... aaarrrggghhh Has anybody successfully patched/tweaked/whatever a zpool or zfs to recover from this? I would be most and forever grateful if somebody could give me a hint. Thanx, Thomas Following is the 'zpool status': zpool status -xv

Re: [zfs-discuss] How to diagnose zfs - iscsi - nfs hang

2008-11-10 Thread Thomas Garner
Are these machines 32-bit by chance? I ran into similar seemingly unexplainable hangs, which Marc correctly diagnosed and have since not reappeared: http://mail.opensolaris.org/pipermail/zfs-discuss/2008-August/049994.html Thomas ___ zfs-discuss

Re: [zfs-discuss] Aggregate Pool I/O

2009-01-17 Thread Thomas Garner
vs disk i/o, but would love to hear how to measure it. Thomas On Sat, Jan 17, 2009 at 4:07 AM, Brad bst...@aspirinsoftware.com wrote: I'd like to track a server's ZFS pool I/O throughput over time. What's a good data source to use for this? I like zpool iostat for this, but if I poll at two
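For reference, the polling form being discussed (pool name and interval are hypothetical):
    zpool iostat tank 60       # pool-wide read/write ops and bandwidth every 60 seconds
    zpool iostat -v tank 60    # the same, broken out per vdev and per disk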

[zfs-discuss] ? Changing storage pool serial number

2009-01-27 Thread Tim Thomas
Hi I took a look at the archives and I have seen a few threads about using array block level snapshots with ZFS and how we face the old issue that we used to see with logical volumes and unique IDs (quite correctly) stopping the same volume being presented twice to the same server. IHAC

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Thomas Wagner
especially in SAN environments need this. Projects own their own pools and constantly grow and *shrink* space. And they have no downtime available for that. Give a +1 if you agree Thomas

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Thomas Wagner
redundancy. Thomas PS: think of the day when simple operator $NAME makes a typo, zfs destroy -r poolname, and all the data still sits on the disk. But no one is able to bring that valuable data back, except restoration from tape with hours of downtime. Sorry for repeating

[zfs-discuss] state of zfs pool shrinking

2009-05-14 Thread Thomas Wagner
Just wanted to ask how we make progress with zpool shrinking? Are there any prerequisite projects we are waiting on? e.g. tracked by CR 4852783 "reduce pool capacity" Thomas

[zfs-discuss] Problem with zfs mounting in b114?

2009-05-28 Thread Thomas Nau
the problem as does zfs mount -a. So far it simply worked, as said till we updated. Any hints? Thomas

Re: [zfs-discuss] Problem with zfs mounting in b114?

2009-05-28 Thread Thomas Nau
Miles, Miles Nordin wrote: tn == Thomas Nau thomas@uni-ulm.de writes: tn After updating the machine to b114 we ran into a strange tn problem. The pool gets imported (listed by 'zpool list') but tn none of its ZFS filesystems get mounted. Exporting and tn reimporting

Re: [zfs-discuss] Creating ZFS filesystem with inherited ACLs ?

2009-06-23 Thread Thomas Fili
/home was certainly created after setting the ACLs... So i think actually it is not possible, but it might be possible in the future? Or had i misunderstood your comment? Thomas

Re: [zfs-discuss] JBOD?

2009-07-02 Thread Thomas Burgess
some controllers still create jbods in the same way. A perfect example is any of the highpoint controllers. But yah, when we say JBOD we mean it as it was originally intended... just a bunch of disks On Thu, Jul 2, 2009 at 10:23 AM, Kees Nuyt k.n...@zonnet.nl wrote: On Thu, 02 Jul 2009

Re: [zfs-discuss] Slow Resilvering Performance

2009-07-11 Thread Thomas Burgess
i might be wrong because i'm kind of new but i THINK you need to disable automatic snapshots when resilvering, at least on the older version you did. if not it would restart every time a new snapshot was made... but then again, i may be wrong. On Fri, Jul 10, 2009 at 6:18 PM, Galen

Re: [zfs-discuss] Can't offline a RAID-Z2 device: no valid replica

2009-07-15 Thread Thomas Liesner
You can't replace it because this disk is still a valid member of the pool, although it is marked faulty. Put in a replacement disk, add this to the pool and replace the faulty one with the new disk. Regards, Tom
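A sketch of that replacement, with hypothetical device names:
    zpool replace tank c1t3d0 c2t0d0    # faulted device first, then its replacement
    zpool status tank                   # wait for the resilver to finish before pulling the old disk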

Re: [zfs-discuss] Can't offline a RAID-Z2 device: no valid replica

2009-07-15 Thread Thomas Liesner
You could offline the disk if *this* disk (not the pool) had a replica. Nothing wrong with the documentation. Hmm, maybe it is a little misleading here. I walked into the same trap. The pool is not using the disk anymore anyway, so (from the zfs point of view) there is no need to offline

Re: [zfs-discuss] Can't offline a RAID-Z2 device: no valid replica

2009-07-16 Thread Thomas Liesner
You're right, from the documentation it definitely should work. Still, it doesn't. At least not in Solaris 10. But i am not a zfs-developer, so this should probably be answered by them. I will give it a try with a recent OpenSolaris VM and check whether this works in newer implementations of zfs.

Re: [zfs-discuss] Can't offline a RAID-Z2 device: no valid replica

2009-07-16 Thread Thomas Liesner
FYI: In b117 it works as expected and stated in the documentation. Tom

Re: [zfs-discuss] What are the rollback tools?

2009-07-19 Thread Thomas Burgess
i'm pretty sure you're just looking for the zfs rollback command. a quick google brings up a lot of information, and also man zfs. check out this page: http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch06.html On Sun, Jul 19, 2009 at 10:29 AM, Brian Wilson
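For the archive, a minimal sketch (dataset and snapshot names are hypothetical):
    zfs list -t snapshot                  # pick the snapshot to return to
    zfs rollback tank/home@yesterday      # discard changes made since that snapshot
    zfs rollback -r tank/home@lastweek    # -r also destroys snapshots newer than the target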

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-27 Thread Thomas Burgess
i was under the impression it was virtualbox and its default setting that ignored the command, not the hard drive On Mon, Jul 27, 2009 at 1:27 PM, Eric D. Mudama edmud...@bounceswoosh.org wrote: On Sun, Jul 26 at 1:47, David Magda wrote: On Jul 25, 2009, at 16:30, Carson Gaspar wrote:

Re: [zfs-discuss] ZFS Mirror : drive unexpectedly unplugged

2009-07-28 Thread Thomas Burgess
I don't have an answer to your question exactly because i'm a noob and i'm not using mac, but i can say that on FreeBSD, which i'm using atm, there is a method to name devices ahead of time, so if the drive letters change you avoid this exact problem. I'm sure opensolaris and mac have something

Re: [zfs-discuss] ZFS Mirror : drive unexpectedly unplugged

2009-07-28 Thread Thomas Burgess
sometimes the disk will be busy just from being in the directory or if something is trying to connect to it. Again, i'm no expert so i'm going to refrain from commenting on your issue further. 2009/7/28 Avérous Julien-Pierre no-re...@opensolaris.org There is a little mistake : If I do a

[zfs-discuss] How to mirror an entire zfs pool to another pool

2009-07-28 Thread Thomas Walker
filesystems to the new filesystems, but it seems like there should be a way to mirror or replicate the pool itself rather than doing it at the filesystem level. Thomas Walker -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs
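One common approach (a sketch with hypothetical pool names) is a recursive snapshot plus a replication stream, which carries every dataset in the pool in one pass:
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs receive -F -d backup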
