Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Marc Bevand
Bob Friesenhahn bfriesen at simple.dallas.tx.us writes: [...] X25-E's write cache is volatile), the X25-E has been found to offer a bit more than 1000 write IOPS. I think this is incorrect. On paper, the X25-E offers 3300 random 4 kB write IOPS (and Intel is known to be very conservative

Re: [zfs-discuss] Performance of ZFS and UFS inside local/global zone

2009-10-21 Thread Orvar Korvar
So is there a Change Request on this?

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Tristan Ball
What makes you say that the X25-E's cache can't be disabled or flushed? The net seems to be full of references to people who are disabling the cache, or flushing it frequently, and then complaining about the performance! T Frédéric VANNIERE wrote: The ZIL is a write-only log that is only

[zfs-discuss] Unable to destroy/rollback snapshot

2009-10-21 Thread Andrew Robert Nicols
I've been trying to rollback a snapshot but seem to be unable to do so. Can anyone shed some light on what I may be doing wrong? I'm trying to rollback from thumperpool/m...@200908271200 to thumperpool/m...@200908270100. 344 r...@thumper1:~ zfs list -t snapshot | tail
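A minimal sketch of the usual fix; the dataset name below is hypothetical, since the post elides it (thumperpool/m... could be thumperpool/mail, for example). zfs rollback refuses to cross intermediate snapshots unless given -r, which destroys the snapshots newer than the target:

# zfs rollback thumperpool/mail@200908270100
  (fails if snapshots newer than the target exist)
# zfs rollback -r thumperpool/mail@200908270100
  (-r destroys the newer snapshots first, then rolls back)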

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Meilicke, Scott
Thank you Bob and Richard. I will go with A, as it also keeps things simple. One physical device per pool. -Scott On 10/20/09 6:46 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Tue, 20 Oct 2009, Richard Elling wrote: The ZIL device will never require more space than RAM. In
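Rough arithmetic behind that sizing claim, assuming the common rule of thumb that the ZIL only ever holds a few transaction groups of in-flight synchronous writes:

  max ZIL usage ~= sync write bandwidth x txg commit interval
  e.g. 100 MB/s x 10 s ~= 1 GB, comfortably less than RAM on this class of machine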

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Meilicke, Scott
Thanks Ed. It sounds like you have run in this mode? No issues with the perc? -- Scott Meilicke On Oct 20, 2009, at 9:59 PM, Edward Ned Harvey sola...@nedharvey.com wrote: System: Dell 2950 16G RAM 16 1.5T SATA disks in a SAS chassis hanging off of an LSI 3801e, no extra drive slots, a

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-21 Thread Robert Dupuy
There is a debate tactic known as complex argument, where so many false and misleading statements are made at once that it overwhelms the respondent. I'm just going to respond this way. I am very disappointed in this discussion group. The response is not genuine. The idea that latency is not

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-21 Thread Brian Hechinger
Please don't feed the troll. :) -brian On Wed, Oct 21, 2009 at 06:32:42AM -0700, Robert Dupuy wrote: There is a debate tactic known as complex argument, where so many false and misleading statements are made at once, that it overwhelms the respondent. I'm just going to respond this way.

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Edward Ned Harvey
Thanks Ed. It sounds like you have run in this mode? No issues with the perc? You can do JBOD with the PERC. It might technically be a RAID0 or RAID1 with a single disk in it, but that would be functionally equivalent to JBOD. The only time I did this was ... I have a Windows server, on
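A sketch of that single-disk-RAID0 workaround; the omconfig invocation is an assumption (Dell OpenManage syntax varies by version) and all device names are hypothetical:

# omconfig storage controller action=createvdisk controller=0 raid=r0 size=max pdisk=0:0:4
  (repeat per drive: one single-disk RAID0 virtual disk each)
# zpool create tank raidz c2t0d0 c2t1d0 c2t2d0
  (each virtual disk then appears to ZFS as an ordinary device)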

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Mark Horstman
I'm seeing the same [b]lucreate[/b] error on my fresh SPARC sol10u8 install (and on my SPARC sol10u7 machine, which I keep patched up to date), but I don't have a separate /var: # zfs list NAME USED AVAIL REFER MOUNTPOINT pool00 3.36G 532G 20K none

[zfs-discuss] The iSCSI-backed zpool for my zone hangs.

2009-10-21 Thread Jacob Ritorto
My goal is to have a big, fast, HA filer that holds nearly everything for a bunch of development services, each running in its own Solaris zone. So when I need a new service, test box, etc., I provision a new zone and hand it to the dev requesters and they load their stuff on it and go.

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Mark Horstman
Neither the virgin SPARC sol10u8 nor the up-to-date patched SPARC sol10u7 has any local zones.

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Enda O'Connor
Hi, this looks OK to me; the message is not an indicator of an issue. Could you post cat /etc/lu/ICF.1 and cat /etc/lu/ICF.2 (the foobar BE), and also lumount foobar /a and cat /a/etc/vfstab. Enda Mark Horstman wrote: I'm seeing the same [b]lucreate[/b] error on my fresh SPARC sol10u8 install (and my

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread dick hoogendijk
Mark Horstman wrote: I don't see anything wrong with my /etc/vfstab. Until I get this resolved, I'm afraid to patch and use the new BE. It's the vfstab file in the newly created ABE that is wrongly written to. Try to mount this new ABE and check out for yourself. -- Dick Hoogendijk --

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Scott Meilicke
sigh Thanks Frédéric, that is a very interesting read. So my options as I see them now: 1. Keep the X25-E and disable the cache. Performance should still be improved, but not by a *whole* lot, right? I will google for an expectation, but if anyone knows off the top of their head, I would
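For option 1, a sketch of disabling the drive's volatile write cache from Solaris, assuming format's expert-mode cache menu supports this device:

# format -e
  (select the X25-E from the disk list)
format> cache
cache> write_cache
write_cache> disable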

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Scott Meilicke
Ed, your comment: If Solaris is able to install at all, I would have to acknowledge that I have to shut down anytime I need to change the PERC configuration, including replacing failed disks. Replacing failed disks is easy when the PERC is doing the RAID. Just remove the failed drive and replace with a

[zfs-discuss] fault.fs.zfs.vdev.io

2009-10-21 Thread Matthew C Aycock
I have several of these messages from fmdump:
# fmdump -v -u 98abae95-8053-4cdc-d91a-dad89b125db4
TIME                 UUID                                 SUNW-MSG-ID
Sep 18 00:45:23.7621 98abae95-8053-4cdc-d91a-dad89b125db4 ZFS-8000-FD
      100%  fault.fs.zfs.vdev.io

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Mark Horstman
More input:
# cat /etc/lu/ICF.1
sol10u8:-:/dev/zvol/dsk/rpool/swap:swap:67108864
sol10u8:/:rpool/ROOT/sol10u8:zfs:0
sol10u8:/appl:pool00/global/appl:zfs:0
sol10u8:/home:pool00/global/home:zfs:0
sol10u8:/rpool:rpool:zfs:0
sol10u8:/install:pool00/shared/install:zfs:0

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Mark Horstman
more input:
# lumount foobar /mnt
/mnt
# cat /mnt/etc/vfstab
#live-upgrade:Wed Oct 21 09:36:20 CDT 2009 updated boot environment foobar
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options

Re: [zfs-discuss] fault.fs.zfs.vdev.io

2009-10-21 Thread Cindy Swearingen
Hi Matthew, You can use various forms of fmdump to decode this output. It might be easier to use fmdump -eV and look for the device info in the vdev path entry, like the one below. Also see if the errors on these vdevs are reported in your zpool status output. Thanks, Cindy # fmdump -eV |
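A sketch of that triage; the grep pattern just narrows the verbose error report to the vdev path fields:

# fmdump -eV | grep vdev_path       (map the fault to a device path)
# zpool status -xv                  (check whether the same vdevs show read/write/cksum errors)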

Re: [zfs-discuss] zvol used apparently greater than volsize for sparse volume

2009-10-21 Thread Cindy Swearingen
Hi Stuart, I ran various forms of the zdb command to see if I could glean the metadata accounting stuff but it is beyond my mere mortal skills. Maybe someone else can provide the right syntax. Cindy On 10/20/09 10:17, Stuart Anderson wrote: Cindy, Thanks for the pointer. Until this is

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Richard Elling
On Oct 20, 2009, at 10:24 PM, Frédéric VANNIERE wrote: The ZIL is a write-only log that is only read after a power failure. Several GB is large enough for most workloads. You can't use the Intel X25-E because it has a 32 or 64 MB volatile cache that can be neither disabled nor flushed by

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Bob Friesenhahn
On Wed, 21 Oct 2009, Marc Bevand wrote: Bob Friesenhahn bfriesen at simple.dallas.tx.us writes: [...] X25-E's write cache is volatile), the X25-E has been found to offer a bit more than 1000 write IOPS. I think this is incorrect. On paper, the X25-E offers 3300 random 4 kB write IOPS (and

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread David Dyer-Bennet
On Wed, October 21, 2009 12:21, Bob Friesenhahn wrote: Device performance should be specified as a minimum assured level of performance and not as meaningless peak (up to) values. I repeat: peak values are meaningless. Seems a little pessimistic to me. Certainly minimum assured values are

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Enda O'Connor
Hi, this will boot OK in my opinion; not seeing any issues there. Enda Mark Horstman wrote: more input: # lumount foobar /mnt /mnt # cat /mnt/etc/vfstab #live-upgrade:Wed Oct 21 09:36:20 CDT 2009 updated boot environment foobar #device device mount

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Enda O'Connor
Mark Horstman wrote: Then why the warning on lucreate? It hasn't done that in the past. This is from the vfstab processing code in ludo.c; in your case it is not causing any issue, but it shall be fixed. Enda Mark On Oct 21, 2009, at 12:41 PM, Enda O'Connor enda.ocon...@sun.com wrote: Hi, this

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Bob Friesenhahn
On Wed, 21 Oct 2009, David Dyer-Bennet wrote: Device performance should be specified as a minimum assured level of performance and not as meaningless peak (up to) values. I repeat: peak values are meaningless. Seems a little pessimistic to me. Certainly minimum assured values are the basic

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread David Dyer-Bennet
On Wed, October 21, 2009 12:53, Bob Friesenhahn wrote: On Wed, 21 Oct 2009, David Dyer-Bennet wrote: Device performance should be specified as a minimum assured level of performance and not as meaningless peak (up to) values. I repeat: peak values are meaningless. Seems a little

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Enda O'Connor
Hi, yes, sorry; remove that line from the vfstab in the new BE. Enda Mark wrote: Ok. Thanks. Why does '/' show up in the newly created /BE/etc/vfstab but not in the current /etc/vfstab? Should '/' be in the /BE/etc/vfstab? btw, thank you for responding so quickly to this. Mark On Wed, Oct 21, 2009
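A sketch of that cleanup, using the BE name from this thread:

# lumount foobar /mnt
# vi /mnt/etc/vfstab      (delete the '/' line that lucreate wrote for rpool/ROOT/...)
# luumount foobar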

[zfs-discuss] importing pool with missing/failed log device

2009-10-21 Thread Paul B. Henson
I've had a case open for a while (SR #66210171) regarding the inability to import a pool whose log device failed while the pool was off line. I was told this was CR #6343667, which was supposedly fixed in patches 141444-09/141445-09. However, I recently upgraded a system to U8 which includes
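For reference, a sketch of the import the fix is supposed to allow; the -m (accept a missing log device) flag is an assumption here, since it exists only in ZFS versions with log-device removal support, which is exactly what is in question:

# zpool import -m tank      ('tank' is a hypothetical pool name)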

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Paul B. Henson
On Tue, 20 Oct 2009, Frédéric VANNIERE wrote: You can't use the Intel X25-E because it has a 32 or 64 MB volatile cache that can be neither disabled nor flushed by ZFS. Say what? My understanding is that the officially supported Sun SSD for the x4540 is an OEM'd Intel X25-E, so I don't

[zfs-discuss] Exported zpool cannot be imported or deleted.

2009-10-21 Thread Stacy Maydew
I have an exported zpool that had several drives incur errors at the same time and as a result became unusable. The pool was exported at the time the drives had problems and now I can't find a way to either delete or import the pool. I've tried relabeling the disks and using dd to write

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-21 Thread Dupuy, Robert
My take on the responses I've received over the last few days is that they aren't genuine. From: Tim Cook [mailto:t...@cook.ms] Sent: 2009-10-20 20:57 To: Dupuy, Robert Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] Sun Flash Accelerator F20 On

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-21 Thread Dupuy, Robert
I've already explained how you can scale up IOPS numbers, and unless that is your real workload, you won't see them in practice. See, running a high number of parallel jobs spread evenly across. I don't find the conversation genuine, so I'm not going to continue it. -Original Message- From:

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-21 Thread Dupuy, Robert
This is one of the skimpiest specification sheets that I have ever seen for an enterprise product. At least it shows the latency. This is some kind of technology cult I've wandered into. I won't respond further. -Original Message- From: Bob Friesenhahn

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Mark Horstman
Then why the warning on lucreate? It hasn't done that in the past. Mark On Oct 21, 2009, at 12:41 PM, Enda O'Connor enda.ocon...@sun.com wrote: Hi, this will boot OK in my opinion; not seeing any issues there. Enda Mark Horstman wrote: more input: # lumount foobar /mnt /mnt # cat

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Mark
Ok. Thanks. Why does '/' show up in the newly created /BE/etc/vfstab but not in the current /etc/vfstab? Should '/' be in the /BE/etc/vfstab? btw, thank you for responding so quickly to this. Mark On Wed, Oct 21, 2009 at 12:49 PM, Enda O'Connor enda.ocon...@sun.comwrote: Mark Horstman wrote:

Re: [zfs-discuss] Exported zpool cannot be imported or deleted.

2009-10-21 Thread Victor Latushkin
Stacy Maydew wrote: I have an exported zpool that had several drives incur errors at the same time and as a result became unusable. The pool was exported at the time the drives had problems and now I can't find a way to either delete or import the pool. I've tried relabeling the disks and

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-21 Thread Richard Elling
On Oct 21, 2009, at 6:14 AM, Dupuy, Robert wrote: This is one of the skimpiest specification sheets that I have ever seen for an enterprise product. At least it shows the latency. STORAGEsearch has been trying to wade through the spec muck for years.

Re: [zfs-discuss] Exported zpool cannot be imported or deleted.

2009-10-21 Thread Cindy Swearingen
Hi Stacy, Can you try to forcibly create a new pool using the devices from the corrupted pool, like this: # zpool create -f newpool disk1 disk2 ... Then, destroy this pool, which will release the devices. This CR has been filed to help resolve the pool cruft problem: 6893282 Allow the zpool
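The same workaround as a sketch, with hypothetical device names:

# zpool create -f newpool c3t0d0 c3t1d0    (forcibly claim the devices from the dead pool)
# zpool destroy newpool                    (release them with clean labels)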

[zfs-discuss] Disk locating in OpenSolaris/Solaris 10

2009-10-21 Thread SHOUJIN WANG
Hi there, What I am trying to do is: build a NAS storage server based on the following hardware architecture: Server--SAS HBA---SAS JBOD. I plug 2 SAS HBA cards into an x86 box, and I also have 2 SAS I/O Modules on the SAS JBOD. From each HBA card, I have one SAS cable which connects to the SAS JBOD.
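One common approach, sketched under the assumption that Solaris MPxIO supports these HBAs and this enclosure:

# stmsboot -e          (enable multipathing; the two paths collapse into one device name)
# mpathadm list lu     (show each logical unit with its operational path count)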

[zfs-discuss] Trouble testing hot spares

2009-10-21 Thread Ian Allison
Hi, I've been looking at a raidz using OpenSolaris snv_111b and I've come across something I don't quite understand. I have 5 disks (fixed-size disk images defined in VirtualBox) in a raidz configuration, with 1 disk marked as a spare. The disks are 100m in size and I wanted to simulate data
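The same experiment can be reproduced with file-backed vdevs; a sketch with hypothetical paths:

# mkfile 100m /test/d1 /test/d2 /test/d3 /test/d4 /test/sp
# zpool create tank raidz /test/d1 /test/d2 /test/d3 /test/d4 spare /test/sp
# dd if=/dev/urandom of=/test/d1 bs=1024k seek=20 count=20 conv=notrunc
  (corrupt the middle of one vdev, leaving its labels intact)
# zpool scrub tank ; zpool status tank
  (checksum errors appear; the spare kicks in only if ZFS actually faults the vdev)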

Re: [zfs-discuss] Stupid to have 2 disk raidz?

2009-10-21 Thread Marty Scholes
Erik Trimble wrote: As always, the devil is in the details. In this case, the primary problem I'm having is maintaining two different block mapping schemes (one for the old disk layout, and one for the new disk layout) and still being able to interrupt the expansion process. My primary

Re: [zfs-discuss] Trouble testing hot spares

2009-10-21 Thread Richard Elling
On Oct 21, 2009, at 5:18 PM, Ian Allison wrote: Hi, I've been looking at a raidz using opensolaris snv_111b and I've come across something I don't quite understand. I have 5 disks (fixed size disk images defined in virtualbox) in a raidz configuration, with 1 disk marked as a spare. The

Re: [zfs-discuss] Disk locating in OpenSolaris/Solaris 10

2009-10-21 Thread Trevor Pretty
Have a look at this thread: http://mail.opensolaris.org/pipermail/zfs-discuss/2009-September/032349.html We discussed this a while back. SHOUJIN WANG wrote: Hi there, What I am trying to do is: build a NAS storage server based on the following hardware architecture: Server--SAS

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-21 Thread Jake Caferilla
Clearly a lot of people don't understand latency, so I'll talk about latency, breaking it down into simpler components. Sometimes it helps to use made-up numbers to simplify a point. Imagine a non-real system that had these 'ridiculous' performance characteristics: The system has a 60 second

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-21 Thread Jake Caferilla
Now let's talk about the 'latency deniers'. First of all, they say there is no standard measurement of latency. That isn't complicated: Sun includes the transfer time in its latency figures; other companies do not. Then the latency deniers say there is no way to compare the numbers. That's what I'm

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-21 Thread Tim Cook
On Wed, Oct 21, 2009 at 9:15 PM, Jake Caferilla j...@tanooshka.com wrote: Clearly a lot of people don't understand latency, so I'll talk about latency, breaking it down into simpler components. Sometimes it helps to use made-up numbers to simplify a point. Imagine a non-real system that had

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Marc Bevand
Bob Friesenhahn bfriesen at simple.dallas.tx.us writes: The Intel specified random write IOPS are with the cache enabled and without cache flushing. For random write I/O, caching improves I/O latency, not sustained I/O throughput (which is what random write IOPS usually refers to). So Intel
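Rough arithmetic behind that distinction, assuming one outstanding I/O at a time:

  sustained IOPS ~= 1 / per-op latency
  3300 IOPS -> ~0.3 ms per 4 kB write (cache enabled, no flush per write)
  1000 IOPS -> ~1.0 ms per write (cache flushed on every commit, as ZFS does for the ZIL)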

Re: [zfs-discuss] Trouble testing hot spares

2009-10-21 Thread Victor Latushkin
On Oct 22, 2009, at 4:18, Ian Allison i...@pims.math.ca wrote: Hi, I've been looking at a raidz using OpenSolaris snv_111b and I've come across something I don't quite understand. I have 5 disks (fixed-size disk images defined in VirtualBox) in a raidz configuration, with 1 disk marked

[zfs-discuss] strange results ...

2009-10-21 Thread Jens Elkner
Hmmm, wondering about some (IMHO) strange ZFS results ... X4440: 4x6 2.8 GHz cores (Opteron 8439 SE), 64 GB RAM, 6x Sun STK RAID INT V1.0 (Hitachi H103012SCSUN146G SAS), Nevada b124. Started with a simple test using ZFS on c1t0d0s0: cd /var/tmp (1) time sh -c 'mkfile 32g bla ; sync'
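One way to see where the time goes while such a test runs, sketched with the device name from the post:

# zpool iostat -v 5      (per-vdev bandwidth while mkfile runs)
# iostat -xnz 5          (service times on the underlying c1t0d0 device)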