Re: [zfs-discuss] Trying to determine if this box will be compatible with Opensolaris or Solaris

2009-03-12 Thread mike
Yeah, I really wish the HCL was easier to work with, and allowed comments. For instance, that HCL entry was updated sometime in 2007; since then, like you've said, support could have gotten better or been dropped altogether. Some sort of more community-oriented aspect might help beef it up some. Also making the

Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-12 Thread Blake
I start the cp, and then, with prstat -a, watch the cpu load for the cp process climb to 25% on a 4-core machine. Load, measured for example with 'uptime', climbs steadily until the reboot. Note that the machine does not dump properly, panic or hang - rather, it reboots. I attached a screenshot
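
A rough sketch of the kind of monitoring described here, run alongside the copy (the paths, interval, and log file are placeholders, not anything from the original report):

    # watch per-process CPU for the cp while it runs
    prstat -a 5
    # log the load average so the climb can be reviewed after the reset
    while true; do uptime >> /var/tmp/load.log; sleep 30; done
    # the large copy itself (hypothetical source and destination)
    cp -rp /tank/data /backup/data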

Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-12 Thread Nathan Kroenert
Definitely time to bust out some mdb -k and see what it's moaning about. I did not see the screenshot earlier... sorry about that. Nathan. Blake wrote: I start the cp, and then, with prstat -a, watch the cpu load for the cp process climb to 25% on a 4-core machine. Load, measured for example

Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-12 Thread Nathan Kroenert
Definitely time to bust out some mdb -K or boot -k and see what it's moaning about. I did not see the screenshot earlier... sorry about that. Nathan. Blake wrote: I start the cp, and then, with prstat -a, watch the cpu load for the cp process climb to 25% on a 4-core machine. Load, measured

Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-12 Thread Blake
So, if I boot with the -k boot flags (to load the kernel debugger?) what do I need to look for? I'm no expert at kernel debugging. I think this is a pci error judging by the console output, or at least is i/o related... thanks for your feedback, Blake On Thu, Mar 12, 2009 at 2:18 AM, Nathan
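
A sketch of what that could look like, assuming the -k flag loads kmdb and the box drops into the debugger instead of silently resetting (the GRUB kernel line shown is the stock OpenSolaris one):

    # in GRUB, edit the existing kernel$ line and append -k (and -v for a verbose boot)
    kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -k -v
    # if kmdb catches the fault, some starting points at the debugger prompt:
    ::status        # why we stopped
    ::msgbuf        # recent kernel messages, including any PCI error chatter
    $C              # stack trace of the current thread
    $<systemdump    # force a crash dump and reboot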

[zfs-discuss] Encryption through compression?

2009-03-12 Thread Monish Shah
Hello everyone, My understanding is that the ZFS crypto framework will not release until 2010. In light of that, I'm wondering if the following approach to encryption could make sense for some subset of users: The idea is to use the compression framework to do both compression and
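
For context, per-dataset compression is already just a property today, which is the hook the proposal would piggyback on (a sketch; 'tank/secure' is a placeholder dataset):

    # compression is enabled per dataset via a property
    zfs set compression=gzip-9 tank/secure
    zfs get compression tank/secure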

Re: [zfs-discuss] Encryption through compression?

2009-03-12 Thread Darren J Moffat
Monish Shah wrote: Hello everyone, My understanding is that the ZFS crypto framework will not release until 2010. That is incorrect information. Where did you get that from? In light of that, I'm wondering if the following approach to encryption could make sense for some subset of

Re: [zfs-discuss] Encryption through compression?

2009-03-12 Thread Darren J Moffat
Monish Shah wrote: Hello Darren, Monish Shah wrote: Hello everyone, My understanding is that the ZFS crypto framework will not release until 2010. That is incorrect information. Where did you get that from? It was in Mike Shapiro's presentation at the Open Solaris Storage Summit that

Re: [zfs-discuss] Encryption through compression?

2009-03-12 Thread Monish Shah
Hello Darren, Monish Shah wrote: Hello everyone, My understanding is that the ZFS crypto framework will not release until 2010. That is incorrect information. Where did you get that from? It was in Mike Shapiro's presentation at the Open Solaris Storage Summit that took place a couple

Re: [zfs-discuss] Export ZFS via ISCSI to Linux - Is it stable for production use now?

2009-03-12 Thread howard chen
Hi, On Thu, Mar 12, 2009 at 1:19 AM, Darren J Moffat darr...@opensolaris.org wrote: That is all that has to be done on the OpenSolaris side to make a 10g lun available over iSCSI.  The rest of it is all how Linux sets up its iSCSI client side which I don't know but I know on Solaris it is
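
For reference, a minimal sketch of one way this looks end to end, using the legacy shareiscsi path on OpenSolaris and open-iscsi on Linux (pool, volume, and address are placeholders; a COMSTAR setup uses a different set of commands):

    # OpenSolaris: carve out a 10g zvol and share it as an iSCSI target
    zfs create -V 10g tank/linuxlun
    zfs set shareiscsi=on tank/linuxlun
    iscsitadm list target
    # Linux: discover and log in with open-iscsi
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node --login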

Re: [zfs-discuss] ZFS on a SAN

2009-03-12 Thread Sriram Narayanan
On Thu, Mar 12, 2009 at 2:12 AM, Erik Trimble erik.trim...@sun.com wrote: [snip] On the SAN, create (2) LUNs - one for your primary data, and one for your snapshots/backups. On hostA, create a zpool on the primary data LUN (call it zpool A), and another zpool on the backup LUN (zpool B).
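
A sketch of the layout being described, with placeholder LUN device names:

    # one pool per LUN: primary data on poolA, backups on poolB
    zpool create poolA c4t600A0B800029AAAAd0
    zpool create poolB c4t600A0B800029BBBBd0
    zfs create poolA/data
    # periodically snapshot the primary and copy the snapshot into the backup pool
    zfs snapshot poolA/data@2009-03-12
    zfs send poolA/data@2009-03-12 | zfs recv poolB/data-backup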

[zfs-discuss] CLI grinds to a halt during backups

2009-03-12 Thread Marius van Vuuren
Hi, I have an X4150 with a J4200 connected, populated with 12 x 1 TB disks (SATA). I run backup_pc as my software for backing up. Is there anything I can do to make the command line more responsive during backup windows? At the moment it grinds to a complete standstill. Thanks

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Bob Friesenhahn
On Thu, 12 Mar 2009, Jorgen Lundman wrote: User-land will then have a daemon; whether it is one daemon per file-system or just one daemon does not matter. This process will open '/dev/quota' and empty the transaction log entries constantly. Take the uid,gid entries and update

Re: [zfs-discuss] ZFS on a SAN

2009-03-12 Thread Grant Lowe
Hi Erik, A couple of questions about what you said in your email. In synopsis 2, if hostA has gone belly up and is no longer accessible, then a step that is implied (or maybe I'm just inferring it) is to go to the SAN and reassign the LUN from hostA to hostB. Correct? - Original

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Eric Schrock
Note that 6501037 (want user/group quotas on ZFS) is already committed to be fixed in build 113 (i.e. in the next month). - Eric On Thu, Mar 12, 2009 at 12:04:04PM +0900, Jorgen Lundman wrote: In the style of a discussion over a beverage, and talking about user-quotas on ZFS, I recently
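
For the curious, the interface this work introduces looks roughly like the following once it lands (dataset and user names are placeholders):

    # set and inspect a per-user quota, then list per-user space consumption
    zfs set userquota@alice=10G tank/home
    zfs get userquota@alice tank/home
    zfs userspace tank/home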

Re: [zfs-discuss] Export ZFS via ISCSI to Linux - Is it stable for production use now?

2009-03-12 Thread Darren J Moffat
howard chen wrote: Hi, On Thu, Mar 12, 2009 at 1:19 AM, Darren J Moffat darr...@opensolaris.org wrote: That is all that has to be done on the OpenSolaris side to make a 10g lun available over iSCSI. The rest of it is all how Linux sets up its iSCSI client side which I don't know but I know

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Blake
That is pretty freaking cool. On Thu, Mar 12, 2009 at 11:38 AM, Eric Schrock eric.schr...@sun.com wrote: Note that 6501037 (want user/group quotas on ZFS) is already committed to be fixed in build 113 (i.e. in the next month). - Eric On Thu, Mar 12, 2009 at 12:04:04PM +0900, Jorgen

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Matthew Ahrens
Jorgen Lundman wrote: In the style of a discussion over a beverage, and talking about user-quotas on ZFS, I recently pondered a design for implementing user quotas on ZFS after having far too little sleep. It is probably nothing new, but I would be curious what you experts think of the

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Tomas Ögren
On 12 March, 2009 - Matthew Ahrens sent me these 5,0K bytes: Jorgen Lundman wrote: In the style of a discussion over a beverage, and talking about user-quotas on ZFS, I recently pondered a design for implementing user quotas on ZFS after having far too little sleep. It is probably

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Matthew Ahrens
Bob Friesenhahn wrote: On Thu, 12 Mar 2009, Jorgen Lundman wrote: User-land will then have a daemon; whether it is one daemon per file-system or just one daemon does not matter. This process will open '/dev/quota' and empty the transaction log entries constantly. Take the

Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-12 Thread Miles Nordin
maj == Maidak Alexander J maidakalexand...@johndeere.com writes: If you're having issues with a disk controller or disk IO driver it's highly likely that a savecore to disk after the panic will fail. I'm not sure how to work around this, not in Solaris, but as a concept for
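
One common mitigation, sketched here on the assumption that a slice behind a different controller is available for dumps:

    # point the dump device at a slice that does not sit behind the suspect controller
    dumpadm -d /dev/dsk/c1t0d0s1
    # show the current dump configuration and savecore directory
    dumpadm
    # after the next unexpected reset, try to pull whatever made it onto the dump device
    savecore -v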

Re: [zfs-discuss] usedby* properties for datasets created before v13

2009-03-12 Thread Matthew Ahrens
Gavin Maltby wrote: Hi, The manpage says Specifically, used = usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots. These properties are only available for datasets created on zpool version 13 pools. ... and I now realize that
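
Those properties can be pulled together in one query; a quick sketch with a placeholder dataset:

    zfs get used,usedbychildren,usedbydataset,usedbyrefreservation,usedbysnapshots tank/fs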

Re: [zfs-discuss] ZFS on a SAN

2009-03-12 Thread Scott Lawson
Grant, Yes, this is correct. If host A goes belly up, you can deassign the LUN from host A and assign it to host B. Because host A has not gracefully exported its zpool, you will need to 'zpool import -f poolname' to force the pool to be imported, since it was not exported prior to import
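
A sketch of that failover step on host B once the LUN has been remapped ('poolA' is a placeholder pool name):

    # without the force flag the import refuses, since the pool still looks active to host A
    zpool import
    zpool import -f poolA
    zpool status poolA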

Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-12 Thread Blake
I've managed to get the data transfer to work by rearranging my disks so that all of them sit on the integrated SATA controller. So, I feel pretty certain that this is either an issue with the Supermicro aoc-sat2-mv8 card, or with PCI-X on the motherboard (though I would think that the integrated

Re: [zfs-discuss] CLI grinds to a halt during backups

2009-03-12 Thread Jeff Williams
Maybe you're also seeing this one? 6586537 (async zio taskqs can block out userland commands) -Jeff Blake wrote: I think we need some data to look at to find out what's being slow. Try some commands like this to get data: prstat -a, iostat -x 5, zpool iostat 5 (if you are using ZFS) and then
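
A sketch of capturing those samples to files for the length of a backup window, so the slow period can be compared with a quiet one afterwards:

    # leave these running across the backup window, then review the logs
    prstat -c -a 10 > /var/tmp/prstat.out &
    iostat -x 10 > /var/tmp/iostat.out &
    zpool iostat 10 > /var/tmp/zpool-iostat.out &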

Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-12 Thread Tim
On Thu, Mar 12, 2009 at 2:22 PM, Blake blake.ir...@gmail.com wrote: I've managed to get the data transfer to work by rearranging my disks so that all of them sit on the integrated SATA controller. So, I feel pretty certain that this is either an issue with the Supermicro aoc-sat2-mv8 card,

Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-12 Thread Dave
Tim wrote: On Thu, Mar 12, 2009 at 2:22 PM, Blake blake.ir...@gmail.com wrote: I've managed to get the data transfer to work by rearranging my disks so that all of them sit on the integrated SATA controller. So, I feel pretty certain that this is

Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-12 Thread Miles Nordin
b == Blake blake.ir...@gmail.com writes: http://www.provantage.com/lsi-logic-lsi00117~7LSIG03X.htm I'm having trouble matching up chips, cards, drivers, platforms, and modes with the LSI stuff. The more I look at it the more confused I get. Platforms: x86 SPARC Drivers: mpt

Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-12 Thread Nathan Kroenert
For what it's worth, I have been running Nevada (so, same kernel as OpenSolaris) for ages (at least 18 months) on a Gigabyte board with the MCP55 chipset and it's been flawless. I liked it so much, I bought its newer brother, based on the Nvidia 750SLI chipset... M750SLI-DS4 Cheers!

Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-12 Thread Will Murnane
On Thu, Mar 12, 2009 at 18:30, Miles Nordin car...@ivy.net wrote: I love the way they use the numbers 3800 and 3080, so you are constantly transposing them, thus leaving Google littered with all this confusingly wrong information. Think of the middle two digits as (number of external ports,

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Jorgen Lundman
Bob Friesenhahn wrote: In order for this to work, ZFS data blocks need to somehow be associated with a POSIX user ID. To start with, the ZFS POSIX layer is implemented on top of a non-POSIX layer which does not need to know about POSIX user IDs. ZFS also supports snapshots and clones.

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Jorgen Lundman
Eric Schrock wrote: Note that 6501037 (want user/group quotas on ZFS) is already committed to be fixed in build 113 (i.e. in the next month). - Eric Wow, that would be fantastic. We have the Sun vendors camped out at the data center trying to apply fresh patches. I believe 6798540 fixed

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Jorgen Lundman
As it turns out, I'm working on zfs user quotas presently, and expect to integrate in about a month. My implementation is in-kernel, integrated with the rest of ZFS, and does not have the drawbacks you mention below. I merely suggested my design as it may have been something I _could_

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Matthew Ahrens
Jorgen Lundman wrote: Great! Will there be any particular limits on how many UIDs, or the size of UIDs, in your implementation? UFS generally does not, but I did note that if UIDs go over 1000 it flips out and changes the quotas file to 128GB in size. All UIDs, as well as SIDs (from the SMB

Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-12 Thread Miles Nordin
wm == Will Murnane will.murn...@gmail.com writes: * SR = Software RAID, IT = Integrated Target mode. IR mode is not supported. Integrated target mode lets you export some storage attached to the host system (through another adapter, presumably) as a storage

Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-12 Thread James C. McPherson
On Thu, 12 Mar 2009 22:24:12 -0400 Miles Nordin car...@ivy.net wrote: wm == Will Murnane will.murn...@gmail.com writes: * SR = Software RAID, IT = Integrated Target mode. IR mode is not supported. Integrated target mode lets you export some storage attached to