Re: [zfs-discuss] Checksum question.

2008-07-02 Thread Bob Friesenhahn
On Tue, 1 Jul 2008, Brian McBride wrote: Customer: I would like to know more about zfs's checksum feature. I'm guessing it is something that is applied to the data and not the disks (as in raid-5). Data and metadata. For performance reasons, I turned off checksum on our zfs filesystem
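The per-dataset checksum property Bob is discussing can be inspected and toggled as below; a minimal sketch, assuming a hypothetical dataset name "tank/data". Note that turning the property off only affects user data; ZFS always checksums its own metadata.

```shell
# Show the current checksum setting (dataset name "tank/data" is hypothetical)
zfs get checksum tank/data

# Disable checksums on user data only; metadata remains checksummed
zfs set checksum=off tank/data

# Re-enable; only blocks written after this point carry checksums
zfs set checksum=on tank/data
```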

[zfs-discuss] Changing GUID

2008-07-02 Thread Peter Pickford
Hi, How difficult would it be to write some code to change the GUID of a pool? Thanks Peter ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Changing GUID

2008-07-02 Thread Jeff Bonwick
How difficult would it be to write some code to change the GUID of a pool? As a recreational hack, not hard at all. But I cannot recommend it in good conscience, because if the pool contains more than one disk, the GUID change cannot possibly be atomic. If you were to crash or lose power in
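The atomicity problem Jeff describes stems from where the pool GUID lives: it is embedded in the vdev labels on every member disk, so a rewrite must touch each disk in turn. The labels can be inspected with zdb; the device path below is hypothetical.

```shell
# Each disk carries four copies of the vdev label, all embedding the pool
# GUID; changing the GUID means rewriting every label on every disk,
# which cannot happen atomically on a multi-disk pool.
zdb -l /dev/rdsk/c0t2d0s0 | grep -i guid
```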

Re: [zfs-discuss] Some basic questions about getting the best performance for database usage

2008-07-02 Thread Christiaan Willemsen
Let ZFS deal with the redundancy part. I'm not counting redundancy offered by traditional RAID as you can see by just posts in this forums that - 1. It doesn't work. 2. It bites when you least expect it to. 3. You can do nothing but resort to tapes and LOT of aspirin when you get bitten.

Re: [zfs-discuss] zpool i/o error

2008-07-02 Thread Bryan Wagoner
Can you try just deleting the zpool.cache file and let it rebuild on import? I would guess a listing of your old devices was in there when the system came back up with new stuff. The OS stayed the same.
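The cache-rebuild approach Bryan suggests can be sketched as follows; the pool name used in the final import is hypothetical.

```shell
# Remove the stale cache so the next import rediscovers devices by scanning
rm /etc/zfs/zpool.cache

# List pools that can be imported from the attached devices
zpool import

# Import the desired pool by name (pool name "tank" is hypothetical)
zpool import tank
```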

Re: [zfs-discuss] Changing GUID

2008-07-02 Thread Cyril Plisko
On Wed, Jul 2, 2008 at 9:55 AM, Peter Pickford [EMAIL PROTECTED] wrote: Hi, How difficult would it be to write some code to change the GUID of a pool? Not too difficult - I did it some time ago for a customer, who wanted it badly. I guess you are trying to import pools cloned by the storage

Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume - updated proposal

2008-07-02 Thread jan damborsky
Dave Miner wrote: jan damborsky wrote: ... [2] dump and swap devices will be considered optional during fresh installation and will be created only if there is appropriate space available on the disk provided. Minimum disk space required will

Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread jan damborsky
Jeff Bonwick wrote: To be honest, it is not quite clear to me, how we might utilize dumpadm(1M) to help us to calculate/recommend size of dump device. Could you please elaborate more on this ? dumpadm(1M) -c specifies the dump content, which can be kernel, kernel plus current process, or all
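The dumpadm(1M) usage Jeff refers to looks roughly like this sketch; the content settings are kernel, curproc (kernel plus current process), and all, which drive how large the dump device needs to be.

```shell
# Show the current dump configuration: dump device, content, savecore dir
dumpadm

# -c selects what gets written to the dump device:
#   kernel  - kernel memory only
#   curproc - kernel plus the current process
#   all     - all of physical memory
dumpadm -c kernel
```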

Re: [zfs-discuss] swap dump on ZFS volume - updated proposal

2008-07-02 Thread jan damborsky
Hi Robert, you are quite welcome ! Thank you very much for your comments. Jan Robert Milkowski wrote: Hello jan, Tuesday, July 1, 2008, 11:09:54 AM, you wrote: jd Hi all, jd Based on the further comments I received, following jd approach would be taken as far as calculating default

[zfs-discuss] HELP changing concat to a mirror

2008-07-02 Thread Mark McDonald
Hi I have managed to get this: HOSTNAME$ zpool status pool: zp01 state: ONLINE scrub: resilver completed with 0 errors on Wed Jul 2 11:55:27 2008 config: NAME STATE READ WRITE CKSUM zp01 ONLINE 0 0 0 c0t2d0 ONLINE 0 0

[zfs-discuss] J4200/J4400 Array

2008-07-02 Thread Ben B.
Hi, According to the Sun Handbook, there is a new array : SAS interface 12 disks SAS or SATA ZFS could be used nicely with this box. There is an another version called J4400 with 24 disks. Doc is here : http://docs.sun.com/app/docs/coll/j4200 Does someone know price and availability for these

Re: [zfs-discuss] J4200/J4400 Array

2008-07-02 Thread Ed Saipetch
This array has not been formally announced yet and information on general availability is not available as far as I know. I saw the docs last week and the product was supposed to be launched a couple of weeks ago. Unofficially this is Sun's continued push to develop cheaper storage

Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-02 Thread Enda O'Connor
Hi Tommaso Have a look at the man page for zfs and the attach section in particular, it will do the job nicely. Enda Tommaso Boccali wrote: Ciao, the root filesystem of my thumper is a ZFS with a single disk: bash-3.2# zpool status rpool pool: rpool state: ONLINE scrub: none
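The attach operation Enda points at converts a single-disk pool into a two-way mirror; a minimal sketch, with hypothetical device names (a boot pool may additionally need the boot blocks installed on the new disk).

```shell
# Attach a second device of at least the same size to the existing disk;
# ZFS resilvers the new side automatically (device names are hypothetical)
zpool attach rpool c5t0d0s0 c5t1d0s0

# Watch the resilver finish before relying on the new redundancy
zpool status rpool
```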

Re: [zfs-discuss] Checksum question.

2008-07-02 Thread Richard Elling
Brian McBride wrote: I have some questions from a customer about zfs checksums. Could anyone answer some of these? Thanks. Brian Customer: I would like to know more about zfs's checksum feature. I'm guessing it is something that is applied to the data and not the disks (as in raid-5).

Re: [zfs-discuss] HELP changing concat to a mirror

2008-07-02 Thread Tomas Ögren
On 02 July, 2008 - Mark McDonald sent me these 0,7K bytes: Hi I have managed to get this: HOSTNAME$ zpool status pool: zp01 state: ONLINE scrub: resilver completed with 0 errors on Wed Jul 2 11:55:27 2008 config: NAME STATE READ WRITE CKSUM zp01

Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread David Magda
On Jun 30, 2008, at 19:19, Jeff Bonwick wrote: Dump is mandatory in the sense that losing crash dumps is criminal. Swap is more complex. It's certainly not mandatory. Not so long ago, swap was typically larger than physical memory. These two statements kind of imply that dump and swap are

Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread Kyle McDonald
David Magda wrote: Quite often swap and dump are the same device, at least in the installs that I've worked with, and I think the default for Solaris is that if dump is not explicitly specified it defaults to swap, yes? Is there any reason why they should be separate? I believe

Re: [zfs-discuss] HELP changing concat to a mirror

2008-07-02 Thread Cindy . Swearingen
Mark, If you don't want to back up the data, destroy the pool, and recreate the pool as a mirrored configuration, then another option is to attach two more disks to create 2 mirrors of 2 disks. See the output below. Cindy # zpool create zp01 c1t3d0 c1t4d0 # zpool status pool: zp01 state:
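Cindy's two-mirror conversion can be sketched as below: one new disk is attached to each existing top-level disk, turning both into two-way mirrors without destroying the pool. The existing device names follow her example; the two new disks are hypothetical.

```shell
# zp01 currently stripes across c1t3d0 and c1t4d0; attach one new disk
# to each so both top-level vdevs become two-way mirrors
zpool attach zp01 c1t3d0 c1t5d0
zpool attach zp01 c1t4d0 c1t6d0

# Verify that both vdevs now show as mirrors and resilvering completes
zpool status zp01
```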

Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread Darren J Moffat
David Magda wrote: On Jun 30, 2008, at 19:19, Jeff Bonwick wrote: Dump is mandatory in the sense that losing crash dumps is criminal. Swap is more complex. It's certainly not mandatory. Not so long ago, swap was typically larger than physical memory. These two statements kind of imply

Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread Mike Gerdts
On Wed, Jul 2, 2008 at 10:08 AM, David Magda [EMAIL PROTECTED] wrote: Quite often swap and dump are the same device, at least in the installs that I've worked with, and I think the default for Solaris is that if dump is not explicitly specified it defaults to swap, yes? Is there any reason why

Re: [zfs-discuss] zfs on top of 6140 FC array

2008-07-02 Thread Mertol Ozyoney
Depends on what benefit you are looking for. If you are looking for ways to improve redundancy you can still benefit from ZFS a) ZFS snapshots will give you the ability to withstand soft/user errors. b) ZFS checksums... c) ZFS can mirror (sync or async) a 6140 LUN to another

Re: [zfs-discuss] J4200/J4400 Array

2008-07-02 Thread Mertol Ozyoney
Availability may depend on where you are located, but J4200 and J4400 are available for most regions. These arrays are engineered to go well with Sun open storage components like ZFS. Besides the price advantage, J4200 and J4400 offer unmatched bandwidth to hosts or to stacking units. You can get

[zfs-discuss] Q: grow zpool build on top of iSCSI devices

2008-07-02 Thread Thomas Nau
Hi all. We currently move out a number of iSCSI servers based on Thumpers (x4500) running both Solaris 10 and OpenSolaris build 90+. The targets on the machines are based on ZVOLs. Some of the clients use those iSCSI disks to build mirrored zpools. As the volume size on the x4500 can easily
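The grow path Thomas is asking about starts on the target side by enlarging the backing ZVOL; a rough sketch, with hypothetical dataset, pool, and device names (whether the initiator picks up the new size automatically, or needs an explicit expand such as `zpool online -e`, depends on the build in use).

```shell
# On the x4500 target: grow the ZVOL backing the iSCSI LUN
# (dataset name and size are hypothetical)
zfs set volsize=200G tank/iscsi/vol0

# On the initiator, once the larger LUN is visible, newer builds can
# expand the vdev to use the extra space (pool/device names hypothetical)
zpool online -e datapool c2t1d0
```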

Re: [zfs-discuss] Some basic questions about getting the best performance for database usage

2008-07-02 Thread Richard Elling
Christiaan Willemsen wrote: Hi Richard, Richard Elling wrote: It should cost less than a RAID array... Advertisement: Sun's low-end servers have 16 DIMM slots. Sadly, those are by far more expensive than what I have here from our own server supplier... ok, that pushed a button. Let's

Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-02 Thread Richard Elling
Tommaso Boccali wrote: Ciao, the root filesystem of my thumper is a ZFS with a single disk: bash-3.2# zpool status rpool pool: rpool state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0

Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread sanjay nadkarni (Laptop)
Mike Gerdts wrote: On Wed, Jul 2, 2008 at 10:08 AM, David Magda [EMAIL PROTECTED] wrote: Quite often swap and dump are the same device, at least in the installs that I've worked with, and I think the default for Solaris is that if dump is not explicitly specified it defaults to swap, yes?

Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread Kyle McDonald
sanjay nadkarni (Laptop) wrote: Mike Gerdts wrote: On Wed, Jul 2, 2008 at 10:08 AM, David Magda [EMAIL PROTECTED] wrote: Quite often swap and dump are the same device, at least in the installs that I've worked with, and I think the default for Solaris is that if dump is not

Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread George Wilson
Kyle McDonald wrote: David Magda wrote: Quite often swap and dump are the same device, at least in the installs that I've worked with, and I think the default for Solaris is that if dump is not explicitly specified it defaults to swap, yes? Is there any reason why they should be

Re: [zfs-discuss] Streaming video and audio over CIFS lags.

2008-07-02 Thread Juho Mäkinen
A few things to try: put in a different ethernet card if you have one, on one or more ends. Realtek works, but I've been unimpressed with their performance in the past. An Intel x1 pci express card will only run you around $40, and I've seen much better results with them. I first

Re: [zfs-discuss] Streaming video and audio over CIFS lags.

2008-07-02 Thread Will Murnane
On Wed, Jul 2, 2008 at 13:16, Juho Mäkinen [EMAIL PROTECTED] wrote: Then I went and bought an Intel PCI Gigabit Ethernet card for 25€ which seems to have solved the problem. I still need to do some testing though to verify. Glad to hear it. Is hardware checksum offloading enabled on either

Re: [zfs-discuss] J4200/J4400 Array

2008-07-02 Thread Albert Chin
On Wed, Jul 02, 2008 at 04:49:26AM -0700, Ben B. wrote: According to the Sun Handbook, there is a new array : SAS interface 12 disks SAS or SATA ZFS could be used nicely with this box. Doesn't seem to have any NVRAM storage on board, so seems like JBOD. There is an another version called

Re: [zfs-discuss] J4200/J4400 Array

2008-07-02 Thread Tim
So when are they going to release MSRP? On 7/2/08, Mertol Ozyoney [EMAIL PROTECTED] wrote: Availability may depend on where you are located but J4200 and J4400 are available for most regions. These arrays are engineered to go well with Sun open storage components like ZFS. Besides price

[zfs-discuss] /var/log as a single zfs filesystem -- problems at boot

2008-07-02 Thread Dan McDonald
I created a filesystem dedicated to /var/log so I could keep compression on the logs. Unfortunately, this caused problems at boot time because my log ZFS dataset couldn't be mounted because /var/log already contained bits. Some of that, to be fair, could be fixed by having some SMF services
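One workaround for the boot-time failure Dan describes, when the mountpoint directory already contains files, is an overlay mount; a sketch assuming a hypothetical dataset name (alternatively, the stale files under /var/log can be cleaned out once while the dataset is unmounted).

```shell
# Dedicated compressed dataset for /var/log (dataset name is hypothetical)
zfs set compression=on rpool/varlog
zfs set mountpoint=/var/log rpool/varlog

# -O performs an overlay mount on top of the non-empty /var/log, hiding
# (not deleting) whatever was written there before the dataset mounted
zfs mount -O rpool/varlog
```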

Re: [zfs-discuss] /var/log as a single zfs filesystem -- problems at boot

2008-07-02 Thread Kyle McDonald
Dan McDonald wrote: I created a filesystem dedicated to /var/log so I could keep compression on the logs. Unfortunately, this caused problems at boot time because my log ZFS dataset couldn't be mounted because /var/log already contained bits. Some of that, to be fair, could be fixed by

Re: [zfs-discuss] /var/log as a single zfs filesystem -- problems at boot

2008-07-02 Thread Richard Elling
Dan McDonald wrote: I created a filesystem dedicated to /var/log so I could keep compression on the logs. Unfortunately, this caused problems at boot time because my log ZFS dataset couldn't be mounted because /var/log already contained bits. Some of that, to be fair, could be fixed by

[zfs-discuss] evil tuning guide updates

2008-07-02 Thread Mike Gerdts
I was making my way through the evil tuning guide and noticed a couple updates that seem appropriate. I tried to create an account to be able to add this into the discussion tab but account creation seems to be a NOP. http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#RFEs -

Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-02 Thread Orvar Korvar
Remember, you cannot delete a device, so be careful what you add.

Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-02 Thread Richard Elling
Orvar Korvar wrote: Remember, you cannot delete a device, so be careful what you add. You can detach disks from mirrors. -- richard

Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-02 Thread dick hoogendijk
On Wed, 02 Jul 2008 13:41:18 -0700 Richard Elling [EMAIL PROTECTED] wrote: Orvar Korvar wrote: Remember, you cannot delete a device, so be careful what you add. You can detach disks from mirrors. So, a mirror of two disks becomes a system of two separate disks? -- Dick Hoogendijk --

Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-02 Thread Richard Elling
dick hoogendijk wrote: On Wed, 02 Jul 2008 13:41:18 -0700 Richard Elling [EMAIL PROTECTED] wrote: Orvar Korvar wrote: Remember, you cannot delete a device, so be careful what you add. You can detach disks from mirrors. So, a mirror of two disks becomes a system of
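The detach operation Richard mentions is the inverse of attach: it removes one side of a mirror and leaves the pool running on the remaining disk. A minimal sketch, with hypothetical pool and device names.

```shell
# Remove one side of a two-way mirror; the pool keeps running on the
# remaining disk, and the detached disk leaves the pool entirely
zpool detach tank c1t1d0

# The vdev now shows as a plain single disk rather than a mirror
zpool status tank
```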

Re: [zfs-discuss] /var/log as a single zfs filesystem -- problems at boot

2008-07-02 Thread Akhilesh Mritunjai
Can't say about /var/log, but I have a system here with /var on zfs. My assumption was that, not just /var/log, but essentially all of /var is supposed to be runtime cruft, and so can be treated equally.

Re: [zfs-discuss] /var/log as a single zfs filesystem -- problems at boot

2008-07-02 Thread Richard Elling
Akhilesh Mritunjai wrote: Can't say about /var/log, but I have a system here with /var on zfs. My assumption was that, not just /var/log, but essentially all of /var is supposed to be runtime cruft, and so can be treated equally. Not really. Please see the man page for filesystem for

Re: [zfs-discuss] Accessing zfs partitions on HDD from LiveCD

2008-07-02 Thread Benjamin Ellison
to answer my own question -- yes, it worked beautifully (zpool import -f tank). Now to figure out why my network connection doesn't want to work after being set up the exact same way again :(

Re: [zfs-discuss] zpool i/o error

2008-07-02 Thread Victor Pajor
# rm /etc/zfs/zpool.cache # zpool import pool: zfs id: 3801622416844369872 state: FAULTED status: One or more devices contains corrupted data. action: The pool cannot be imported due to damaged devices or data. The pool may be active on another system, but can be imported using the '-f' flag.

Re: [zfs-discuss] zpool i/o error

2008-07-02 Thread Bryan Wagoner
I'll have to do some thunkin' on this. We just need to get back one of the disks, both would be great, but one more would do the trick. After all other avenues have been tried, one thing that you can try is to use the 2008.05 livecd and boot into the livecd without installing the OS. Import