Re: [zfs-discuss] Very sick iSCSI pool
On Tue, Jul 3, 2012 at 11:08 AM, Ian Collins wrote:

>>>> I'm assuming the pool is hosed?
>>>
>>> Before making that assumption, I'd try something simple first:
>>> - reading from the imported iscsi disk (e.g. with dd) to make sure
>>>   it's not an iscsi-related problem
>>> - importing the disk on another host, and trying to read it again, to
>>>   make sure it's not a client-specific problem
>>> - possibly restarting the iscsi server, just to make sure
>>
>> Booting the initiator host from a live DVD image and attempting to
>> import the pool gives the same error report.
>
> The pool's data appears to be recoverable when I import it read only.

That's good.

> The storage appliance is so full they can't delete files from it!

Hahaha :D

> Now that shouldn't have caused problems with a fixed sized volume, but
> who knows?

AFAIK you'll always need some free space, e.g. to replay/roll back
transactions during pool import. The best fix is, of course, to sort out
the appliance. Sometimes something simple like deleting snapshots/datasets
will do the trick.

--
Fajar

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
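[Editorial aside: Fajar's first check (reading the whole LUN, e.g. with dd) can be scripted if you want a pass/fail answer rather than eyeballing dd's output. Below is a minimal Python sketch of the same idea; the function name is made up for illustration, and the device path in the comment is only a hypothetical example.]

```python
def check_readable(path, nbytes=100 * 1024 * 1024, bs=1024 * 1024):
    """Read up to nbytes from path in bs-sized chunks, stopping at EOF.

    Returns the number of bytes actually read; raises OSError if the
    device (or the iSCSI transport underneath it) fails mid-read.
    """
    total = 0
    with open(path, "rb") as f:
        while total < nbytes:
            chunk = f.read(min(bs, nbytes - total))
            if not chunk:          # EOF reached before nbytes
                break
            total += len(chunk)
    return total

# Example invocation (device path is hypothetical):
# check_readable("/dev/dsk/c0t600144F0XXXXXXXXd0s0")
```

For the read-only import itself, the relevant command is: zpool import -o readonly=on <pool>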
Re: [zfs-discuss] Very sick iSCSI pool
On 07/01/12 08:57 PM, Ian Collins wrote:
> On 07/01/12 10:20 AM, Fajar A. Nugraha wrote:
>> On Sun, Jul 1, 2012 at 4:18 AM, Ian Collins wrote:
>>> On 06/30/12 03:01 AM, Richard Elling wrote:
>>>> Hi Ian,
>>>> Chapter 7 of the DTrace book has some examples of how to look at
>>>> iSCSI target and initiator behaviour.
>>>
>>> Thanks Richard, I'll have a look.
>>>
>>> I'm assuming the pool is hosed?
>>
>> Before making that assumption, I'd try something simple first:
>> - reading from the imported iscsi disk (e.g. with dd) to make sure
>>   it's not an iscsi-related problem
>> - importing the disk on another host, and trying to read it again, to
>>   make sure it's not a client-specific problem
>> - possibly restarting the iscsi server, just to make sure
>
> Booting the initiator host from a live DVD image and attempting to
> import the pool gives the same error report.

The pool's data appears to be recoverable when I import it read only.

The storage appliance is so full they can't delete files from it! Now that
shouldn't have caused problems with a fixed sized volume, but who knows?

--
Ian.
Re: [zfs-discuss] HP Proliant DL360 G7
On Jul 2, 2012, at 2:40 PM, Edmund White wrote:
> This depends upon what you want to do. I've used G6 and G7 ProLiants
> extensively in ZFS deployments (Nexenta, mostly). I'm assuming you'd be
> using an external JBOD enclosure?

When I was at Nexenta, we qualified the DL380 G7, D2600, and D2700. These
are some of the better boxes on the market.

> All works well. I disable the onboard Smart Array P410 RAID controller
> and replace it with an LSI SAS HBA. If using internal disks, I'll use
> the 9211-8i. If external, the 9205-8e. Or sometimes, both.

FYI, HP also sells an 8-port IT-style HBA (SC-08Ge), but it is hard to
locate with their configurators. There might be a more modern equivalent
cleverly hidden somewhere difficult to find.

-- richard

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422
Re: [zfs-discuss] Has anyone used a Dell with a PERC H310?
On 05/29/12 08:42 AM, Richard Elling wrote:
> On May 28, 2012, at 2:48 AM, Ian Collins wrote:
>> On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
>>> ..
>>> If the drives show up at all, chances are you only need to work around
>>> the power-up issue in Dell HDD firmware. Here's what I had to do to
>>> get the drives going in my R515, in /kernel/drv/sd.conf:
>>>
>>>   sd-config-list =
>>>     "SEAGATE ST3300657SS", "power-condition:false",
>>>     "SEAGATE ST2000NM0001", "power-condition:false";
>>>
>>> (that's for Seagate 300GB 15k SAS and 2TB 7k2 SAS drives; depending on
>>> your drive model the strings might differ)
>>
>> How would that work when the drive type is unknown (to format)? I
>> assumed if sd knows the type, so will format.
>
> I haven't looked at the code recently, but if it is the same parser as
> used elsewhere, then a partial match should work. Can someone try it out
> and report back to the list?
>
>   sd-config-list = "SEAGATE ST", "power-condition:false";

Well I finally got back to testing this box... Yes, that shorthand fixes
the power-up issue (tested from a cold start).

--
Ian.
Re: [zfs-discuss] HP Proliant DL360 G7
No, I'll install using the internal drive bays mapped to an LSI
controller. Using the internal disk bays is also handy for ZIL/L2ARC
devices.

-- Ed

On 7/2/12 5:07 PM, "Anh Quach" wrote:
> Yes, planning on attaching multiple DataOn JBODs.
>
> When you say you've been replacing the onboard Smart Array controller
> (in external disk setups), do you also mean that your root pool is
> configured on the JBOD(s), essentially completely bypassing any of the
> built-in drive bays?
>
> Thanks, Edmund!
>
> -Anh
>
> [earlier quoted messages trimmed]
Re: [zfs-discuss] HP Proliant DL360 G7
Yes, planning on attaching multiple DataOn JBODs.

When you say you've been replacing the onboard Smart Array controller (in
external disk setups), do you also mean that your root pool is configured
on the JBOD(s), essentially completely bypassing any of the built-in
drive bays?

Thanks, Edmund!

-Anh

On Jul 2, 2012, at 2:40 PM, Edmund White wrote:
> This depends upon what you want to do. I've used G6 and G7 ProLiants
> extensively in ZFS deployments (Nexenta, mostly). I'm assuming you'd be
> using an external JBOD enclosure?
>
> All works well. I disable the onboard Smart Array P410 RAID controller
> and replace it with an LSI SAS HBA. If using internal disks, I'll use
> the 9211-8i. If external, the 9205-8e. Or sometimes, both.
>
> I think the DL380 G7 is a better choice for PCIe flexibility, though.
> The DL360 is pretty limited in expansion space.
>
> -- Edmund White
>
> [earlier quoted messages trimmed]
Re: [zfs-discuss] Interaction between ZFS intent log and mmap'd files
On Mon, Jul 2, 2012 at 3:32 PM, Bob Friesenhahn wrote:
> On Mon, 2 Jul 2012, Iwan Aucamp wrote:
>> I'm interested in some more detail on how the ZFS intent log behaves
>> for updates done via a memory-mapped file - i.e. will the ZIL log
>> updates done to an mmap'd file or not?
>
> I would expect these writes to go into the intent log unless msync(2) is
> used on the mapping with the MS_SYNC option.

You can't count on any writes to mmap(2)ed files hitting disk until you
msync(2) with MS_SYNC. The system will generally wait as long as possible
before committing mmap(2)ed file writes to disk. Conversely, you can't
expect that no writes will hit disk until you msync(2) or munmap(2).

Nico
--
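[Editorial aside: the msync(2)/MS_SYNC contract Nico describes is visible from userland; below is a minimal Python sketch of it (Python's mmap.flush() issues msync with MS_SYNC on POSIX systems). This only illustrates the durability contract at the API level; whether and when the ZIL logs the write is ZFS-internal behaviour, as discussed above.]

```python
import mmap
import os
import tempfile

# Stores through an mmap'd region sit in the page cache until an explicit
# flush; only a synchronous msync guarantees they reach stable storage.
fd, path = tempfile.mkstemp()
os.write(fd, b"\0" * 4096)      # the file must cover the region being mapped
os.fsync(fd)

with mmap.mmap(fd, 4096) as m:  # shared mapping (MAP_SHARED is the default)
    m[0:5] = b"hello"           # plain memory store: no syscall, no durability
    m.flush()                   # msync(addr, len, MS_SYNC): now durable

with open(path, "rb") as f:
    data = f.read(5)            # page cache is coherent: reads back the store

os.close(fd)
os.remove(path)
```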
Re: [zfs-discuss] HP Proliant DL360 G7
This depends upon what you want to do. I've used G6 and G7 ProLiants
extensively in ZFS deployments (Nexenta, mostly). I'm assuming you'd be
using an external JBOD enclosure?

All works well. I disable the onboard Smart Array P410 RAID controller and
replace it with an LSI SAS HBA. If using internal disks, I'll use the
9211-8i. If external, the 9205-8e. Or sometimes, both.

I think the DL380 G7 is a better choice for PCIe flexibility, though. The
DL360 is pretty limited in expansion space.

-- Edmund White

On 7/2/12 4:29 PM, "Anh Quach" wrote:
> Hello,
>
> Has anyone out there been able to qualify the Proliant DL360 G7 for your
> Solaris/OI/Nexenta environments? Any pros/cons/gotchas (vs. previous
> generation HP servers) would be greatly appreciated.
>
> Thanks in advance!
>
> -Anh
[zfs-discuss] HP Proliant DL360 G7
Hello,

Has anyone out there been able to qualify the Proliant DL360 G7 for your
Solaris/OI/Nexenta environments? Any pros/cons/gotchas (vs. previous
generation HP servers) would be greatly appreciated.

Thanks in advance!

-Anh
Re: [zfs-discuss] Interaction between ZFS intent log and mmap'd files
On Mon, 2 Jul 2012, Iwan Aucamp wrote:
> I'm interested in some more detail on how the ZFS intent log behaves for
> updates done via a memory-mapped file - i.e. will the ZIL log updates
> done to an mmap'd file or not?

I would expect these writes to go into the intent log unless msync(2) is
used on the mapping with the MS_SYNC option.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
[zfs-discuss] Interaction between ZFS intent log and mmap'd files
I'm interested in some more detail on how the ZFS intent log behaves for
updates done via a memory-mapped file - i.e. will the ZIL log updates done
to an mmap'd file or not?