Re: [zfs-discuss] Checksum question.

2008-07-02 Thread Bob Friesenhahn
On Tue, 1 Jul 2008, Brian McBride wrote:

 Customer:
 I would like to know more about zfs's checksum feature.  I'm guessing
 it is something that is applied to the data and not the disks (as in
 raid-5).

Data and metadata.

 For performance reasons, I turned off checksum on our zfs filesystem
 (along with atime updates).  Because of a concern for possible data
 corruption (silent data corruption), I'm interested in turning checksum
 back on.  When I do so, will it create checksums for existing files or
 will they need to be rewritten?  And can you tell me the overhead
 involved with having checksum active (CPU time, additional space)?

Turning the checksums off only disables them for user data.  They are 
still enabled for filesystem metadata.  I doubt that checksums will be 
computed for existing files until a block is copied/modified, but 
perhaps scrub can do that (I don't know).  On modern AMD Opteron 
hardware it seems that CPU overhead for checksums is very low (e.g. < 5%).
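
For reference, a minimal sketch of the commands in question (pool and dataset names are hypothetical). Note that a scrub verifies only blocks that already carry checksums; data written while checksum=off gains checksums when it is rewritten:

  zfs set checksum=on tank/data    # new writes get checksummed again
  zpool scrub tank                 # verify whatever checksums already exist
  zpool status -v tank             # watch scrub progress and any errors found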

I don't see much value in disabling both atime and checksums in the 
filesystem.  There is more value in disabling atime updates on NFS 
mounts.  ZFS is pretty lazy about updates, so atime just adds slightly 
to total I/O without noticeably increasing latency.

In a benchmark I did using iozone with 8k I/O blocks in ZFS 
filesystems with a 128K block size, I see that with atime the random 
writers test results in 834.79 ops/sec, but without it this increases to 
853.56 ops/sec.  This is a very small performance improvement. 
Likewise, with checksums disabled (but atime enabled) I see 839.78 
ops/sec.  Using 8K I/O blocks in a filesystem with 8K block size 
resulted in a huge performance difference but unfortunately I failed 
to record the result.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Changing GUID

2008-07-02 Thread Peter Pickford
Hi,

How difficult would it be to write some code to change the GUID of a pool?



Thanks

Peter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Changing GUID

2008-07-02 Thread Jeff Bonwick
 How difficult would it be to write some code to change the GUID of a pool?

As a recreational hack, not hard at all.  But I cannot recommend it
in good conscience, because if the pool contains more than one disk,
the GUID change cannot possibly be atomic.  If you were to crash or
lose power in the middle of the operation, your data would be gone.

What problem are you trying to solve?

Jeff
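
For anyone curious where that GUID lives, a hedged sketch (the device path is hypothetical): every disk in the pool carries copies of the label containing the pool GUID, which is why the change cannot be atomic across disks.

  zdb -l /dev/dsk/c0t0d0s0 | grep -i guid    # dump the on-disk labels and show the pool/vdev GUIDs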
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Some basic questions about getting the best performance for database usage

2008-07-02 Thread Christiaan Willemsen
 Let ZFS deal with the redundancy part. I'm not
 counting redundancy offered by traditional RAID as
 you can see just from posts in this forum that -
 1. It doesn't work.
 2. It bites when you least expect it to.
 3. You can do nothing but resort to tapes and LOT of
 aspirin when you get bitten.

Thanks, that's exactly what I was asking about.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool i/o error

2008-07-02 Thread Bryan Wagoner
Can you try just deleting the zpool.cache file and letting it rebuild on import? I 
would guess a listing of your old devices was in there when the system came 
back up with the new hardware, while the OS stayed the same.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Changing GUID

2008-07-02 Thread Cyril Plisko
On Wed, Jul 2, 2008 at 9:55 AM, Peter Pickford [EMAIL PROTECTED] wrote:
 Hi,

 How difficult would it be to write some code to change the GUID of a pool?

Not too difficult - I did it some time ago for a customer, who wanted it badly.
I guess you are trying to import pools cloned by the storage itself.
Am I close ?

-- 
Regards,
 Cyril
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume - updated proposal

2008-07-02 Thread jan damborsky
Dave Miner wrote:
 jan damborsky wrote:
 ...
 [2] dump and swap devices will be considered optional

 dump and swap devices will be considered optional during
 fresh installation and will be created only if there is
 appropriate space available on disk provided.

 Minimum disk space required will not take into account
 dump and swap, thus allowing user to install on small disks.
 This will need to be documented (e.g. as part of release notes),
 so that user is aware of such behavior.


 I'd like to at least consider whether a warning should be displayed in 
 the GUI about the lack of dump space if it won't be created, since it 
 does represent a serviceability issue.


This is a good suggestion - I think we might display the warning message
on the Summary screen before the user actually starts the installation process.
At that point there is still the possibility to go back and change the disk size.
Or we might display the warning dialog earlier - when the user decides to
leave the Disk screen.
I will check with Niall and Frank in order to work out the right solution
from a UI point of view.

Thank you,
Jan

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread jan damborsky
Jeff Bonwick wrote:
 To be honest, it is not quite clear to me, how we might utilize
 dumpadm(1M) to help us to calculate/recommend size of dump device.
 Could you please elaborate more on this ?

 dumpadm(1M) -c specifies the dump content, which can be kernel, kernel plus
 current process, or all memory.  If the dump content is 'all', the dump space
 needs to be as large as physical memory.  If it's just 'kernel', it can be
 some fraction of that.

I see - thanks a lot for clarification.

Jan

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] swap dump on ZFS volume - updated proposal

2008-07-02 Thread jan damborsky
Hi Robert,

you are quite welcome !

Thank you very much for your comments.

Jan


Robert Milkowski wrote:
 Hello jan,

 Tuesday, July 1, 2008, 11:09:54 AM, you wrote:

 jd Hi all,

 jd Based on the further comments I received, following
 jd approach would be taken as far as calculating default
 jd size of swap and dump devices on ZFS volumes in Caiman
 jd installer is concerned.

 jd [1] Following formula would be used for calculating
 jd swap and dump sizes:

 jd size_of_swap = MAX(512 MiB, MIN(physical_memory/2, 32 GiB))
 jd size_of_dump = MAX(256 MiB, MIN(physical_memory/4, 16 GiB))

 jd User can reconfigure this after installation is done on live
 jd system by zfs set command.

 jd [2] dump and swap devices will be considered optional

 jd dump and swap devices will be considered optional during
 jd fresh installation and will be created only if there is
 jd appropriate space available on disk provided.

 jd Minimum disk space required will not take into account
 jd dump and swap, thus allowing user to install on small disks.
 jd This will need to be documented (e.g. as part of release notes),
 jd so that user is aware of such behavior.

 jd Recommended disk size (which now covers one full upgrade plus
 jd 2GiB space for additional software) will take into account dump
 jd and swap.

 jd Dump and swap devices will be then created if user dedicates
 jd at least recommended disk space for installation.

 jd Thank you very much all for this valuable input.
 jd Jan


 I like your approach, and I like even more that you've listened to the
 community - thank you.
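
As a rough illustration only, a hedged ksh sketch of how the quoted formula works out on a live system (it assumes prtconf reports a line of the form "Memory size: N Megabytes"; the volsize commands at the end show the post-install reconfiguration mentioned above, using the conventional rpool/swap and rpool/dump names):

  phys=$(prtconf | awk '/^Memory size/ {print $3}')   # physical memory in MiB
  swap=$((phys / 2)); ((swap < 512)) && swap=512; ((swap > 32768)) && swap=32768
  dump=$((phys / 4)); ((dump < 256)) && dump=256; ((dump > 16384)) && dump=16384
  echo "swap=${swap}m dump=${dump}m"
  zfs set volsize=${swap}m rpool/swap
  zfs set volsize=${dump}m rpool/dump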



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] HELP changing concat to a mirror

2008-07-02 Thread Mark McDonald
Hi

I have managed to get this:
HOSTNAME$ zpool status
  pool: zp01
 state: ONLINE
 scrub: resilver completed with 0 errors on Wed Jul  2 11:55:27 2008
config:

        NAME        STATE     READ WRITE CKSUM
        zp01        ONLINE       0     0     0
          c0t2d0    ONLINE       0     0     0
          c0t3d0    ONLINE       0     0     0

But I wanted to get these in a mirror - I am unable to remove c0t3d0 from the 
pool. There is already data in the pool with filesystems mounted. So I do not 
wish to destroy the pool.

Please help
Mark
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] J4200/J4400 Array

2008-07-02 Thread Ben B.
Hi,

According to the Sun Handbook, there is a new array :
SAS interface
12 disks SAS or SATA

ZFS could be used nicely with this box.

There is another version, called the
J4400, with 24 disks.

Doc is here :
http://docs.sun.com/app/docs/coll/j4200

Does anyone know the price and availability of these products?

Best Regards,
Ben
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] J4200/J4400 Array

2008-07-02 Thread Ed Saipetch
This array has not been formally announced yet and information on  
general availability is not available as far as I know.  I saw the  
docs last week and the product was supposed to be launched a couple of  
weeks ago.

Unofficially this is Sun's continued push to develop cheaper storage  
options that can be combined with Solaris and the Open Storage  
initiative to provide customers with options they don't have today.   
I'd expect the price-point to be quite a bit cheaper than the LC 24XX  
series of arrays.

On Jul 2, 2008, at 7:49 AM, Ben B. wrote:

 Hi,

 According to the Sun Handbook, there is a new array :
 SAS interface
 12 disks SAS or SATA

 ZFS could be used nicely with this box.

 There is an another version called
 J4400 with 24 disks.

 Doc is here :
 http://docs.sun.com/app/docs/coll/j4200

 Does someone know price and availability for these products ?

 Best Regards,
 Ben


 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-02 Thread Enda O'Connor
Hi Tommaso
Have a look at the zpool(1M) man page, and the attach section in 
particular; it will do the job nicely.

Enda



Tommaso Boccali wrote:
 Ciao, 
 the root filesystem of my thumper is a ZFS pool with a single disk:
 
 bash-3.2# zpool status rpool
   pool: rpool
  state: ONLINE
  scrub: none requested
 config:
 
  NAME        STATE     READ WRITE CKSUM
  rpool       ONLINE       0     0     0
    c5t0d0s0  ONLINE       0     0     0
  spares
    c0t7d0    AVAIL
    c1t6d0    AVAIL
    c1t7d0    AVAIL
 
 
 is it possible to add a mirror to it? I seem to be able only to add a 
 new PAIR of disks in mirror, but not to add a mirror to the existing 
 disk ...
 
 thanks
 
 tommaso
 
 
 Tommaso Boccali - CMS Experiment - INFN Pisa
 iChat/AIM/Skype/Gizmo:  tomboc73
 Mail: mailto:[EMAIL PROTECTED]
 Pisa: +390502214216  Portable: +393472563154
 CERN: +41227671545   Portable:  +41762310208
 
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Checksum question.

2008-07-02 Thread Richard Elling
Brian McBride wrote:
 I have some questions from a customer about zfs checksums.
 Could anyone answer some of these? Thanks.

 Brian

 Customer:
  I would like to know more about zfs's checksum feature.  I'm guessing 
 it is something that is applied to the data and not the disks (as in 
 raid-5).
   

RAID-5 does not do checksumming.  It does a parity calculation,
but many RAID-5 implementations do not actually check the
parity unless a disk reports an error. ZFS always checks the
checksum, unless you disable it.

At this point, I usually explain how people find faults in
their SAN because ZFS's checksum works end-to-end.

  For performance reasons, I turned off checksum on our zfs filesystem 
 (along with atime updates).  Because of a concern for possible data 
 corruption (silent data corruption), I'm interested in turning checksum 
 back on.  When I do so, will it create checksums for existing files or 
 will they need to be rewritten?  And can you tell me the overhead 
 involved with having checksum active (CPU time, additional space)?

   

To put this in perspective, in general, the time it takes to read the data
from disk is much larger than the time required to calculate the
checksum.  But, you can also use different checksum algorithms, with
varying strength and computational requirements.  By default, ZFS
uses a Fletcher-2 algorithm, but you can enable Fletcher-4 or SHA-256.
If you are planning to characterize the computational cost of checksums,
please add these to your test plan and report back to us :-)
 -- richard
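
A hedged sketch of switching algorithms (dataset name is hypothetical; the new setting applies only to blocks written afterwards):

  zfs set checksum=sha256 tank/data    # or fletcher4; checksum=on selects the default (fletcher2 here)
  zfs get checksum tank/data           # confirm the setting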


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HELP changing concat to a mirror

2008-07-02 Thread Tomas Ögren
On 02 July, 2008 - Mark McDonald sent me these 0,7K bytes:

 Hi
 
 I have managed to get this:
 HOSTNAME$ zpool status
   pool: zp01
  state: ONLINE
  scrub: resilver completed with 0 errors on Wed Jul  2 11:55:27 2008
 config:
 
  NAME        STATE     READ WRITE CKSUM
  zp01        ONLINE       0     0     0
    c0t2d0    ONLINE       0     0     0
    c0t3d0    ONLINE       0     0     0
 
 But I wanted to get these in a mirror - I am unable to remove c0t3d0
 from the pool. There is already data in the pool with filesystems
 mounted. So I do not wish to destroy the pool.

Currently your only option is to copy the data somewhere else, destroy the pool,
and create a new one. Disk removal is being worked on, I believe, but it
gets kinda complex when you have a bunch of snapshots, clones, etc.

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread David Magda
On Jun 30, 2008, at 19:19, Jeff Bonwick wrote:

 Dump is mandatory in the sense that losing crash dumps is criminal.

 Swap is more complex.  It's certainly not mandatory.  Not so long ago,
 swap was typically larger than physical memory.

These two statements kind of imply that dump and swap are two  
different slices. They certainly can be, but how often are they?

 On my desktop, which has 16GB of memory, the default OpenSolaris  
 swap partition is 2GB.
 That's just stupid.  Unless swap space significantly expands the
 amount of addressable virtual memory, there's no reason to have it.

Quite often swap and dump are the same device, at least in the  
installs that I've worked with, and I think the default for Solaris  
is that if dump is not explicitly specified it defaults to swap, yes?  
Is there any reason why they should be separate?

Having two just seems like a waste to me, even with disk sizes being  
what they are (and growing). A separate dump device is only really  
needed if something goes completely wrong, otherwise it's just  
sitting there doing nothing. If you're panicking, then whatever is  
in swap is no longer relevant, so overwriting it is no big deal.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread Kyle McDonald
David Magda wrote:

 Quite often swap and dump are the same device, at least in the  
 installs that I've worked with, and I think the default for Solaris  
 is that if dump is not explicitly specified it defaults to swap, yes?  
 Is there any reason why they should be separate?

   
I believe there are technical limitations with ZFS boot that stop them 
from sharing the same zvol.
 Having two just seems like a waste to me, even with disk sizes being  
 what they are (and growing). A separate dump device is only really  
 needed if something goes completely wrong, otherwise it's just  
 sitting there doing nothing. If you're panicing, then whatever is  
 in swap is now no longer relevant, so over writing it is no big deal.
   
That said, with all the talk of dynamic sizing: if, during normal 
operation, the swap zvol has space allocated and the dump zvol is sized 
to 0, then during a panic could the swap volume be shrunk to 0 and the 
dump volume expanded to whatever size is needed?

This, while still requiring two zvols, would at least allow (even when the 
rest of the pool is short on space) a close approximation of the old 
behavior of sharing the same slice for both swap and dump.

  -Kyle

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HELP changing concat to a mirror

2008-07-02 Thread Cindy . Swearingen
Mark,

If you don't want to back up the data, destroy the pool, and
recreate the pool as a mirrored configuration, then another
option is to attach two more disks to create two mirrors of two
disks each.

See the output below.

Cindy

# zpool create zp01 c1t3d0 c1t4d0
# zpool status
   pool: zp01
  state: ONLINE
  scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zp01        ONLINE       0     0     0
          c1t3d0    ONLINE       0     0     0
          c1t4d0    ONLINE       0     0     0

errors: No known data errors
# zpool attach zp01 c1t3d0 c1t5d0
# zpool attach zp01 c1t4d0 c1t6d0
# zpool status zp01
   pool: zp01
  state: ONLINE
  scrub: resilver completed after 0h0m with 0 errors on Wed Jul  2 
09:21:33 2008
config:

        NAME        STATE     READ WRITE CKSUM
        zp01        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0

errors: No known data errors


Mark McDonald wrote:
 Hi
 
 I have managed to get this:
 HOSTNAME$ zpool status
   pool: zp01
  state: ONLINE
  scrub: resilver completed with 0 errors on Wed Jul  2 11:55:27 2008
 config:
 
  NAME        STATE     READ WRITE CKSUM
  zp01        ONLINE       0     0     0
    c0t2d0    ONLINE       0     0     0
    c0t3d0    ONLINE       0     0     0
 
 But I wanted to get these in a mirror - I am unable to remove c0t3d0 from the 
 pool. There is already data in the pool with filesystems mounted. So I do not 
 wish to destroy the pool.
 
 Please help
 Mark
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread Darren J Moffat
David Magda wrote:
 On Jun 30, 2008, at 19:19, Jeff Bonwick wrote:
 
 Dump is mandatory in the sense that losing crash dumps is criminal.

 Swap is more complex.  It's certainly not mandatory.  Not so long ago,
 swap was typically larger than physical memory.
 
 These two statements kind of imply that dump and swap are two  
 different slices. They certainly can be, but how often are they?

If they are ZVOLs then they are ALWAYS different.

 Quite often swap and dump are the same device, at least in the  
 installs that I've worked with, and I think the default for Solaris  
 is that if dump is not explicitly specified it defaults to swap, yes?  

Correct.

 Is there any reason why they should be separate?

You might want dump but not swap.

They may be connected via completely different types of storage 
interconnect.  For dump, ideally you want the simplest possible route to 
the disk.

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread Mike Gerdts
On Wed, Jul 2, 2008 at 10:08 AM, David Magda [EMAIL PROTECTED] wrote:
 Quite often swap and dump are the same device, at least in the
 installs that I've worked with, and I think the default for Solaris
 is that if dump is not explicitly specified it defaults to swap, yes?
 Is there any reason why they should be separate?

Aside from what Kyle just said...

If they are separate you can avoid doing savecore if you are never
going to read it.  For most people, my guess is that savecore just
means that they cause a bunch of thrashing during boot (swap/dump is
typically on the same spindles as /var/crash), waste some space in
/var/crash, and never look at the crash dump.  If you come across a
time when you actually do want to look at it, you can manually run
savecore at some time in the future.

Also, last time I looked (and I've not seen anything to suggest it is
fixed) proper dependencies do not exist to prevent paging activity
after boot from trashing the crash dump in a shared swap+dump device -
even when savecore is enabled.  It is only by luck that you get
anything out of it.  Arguably this should be fixed by proper SMF
dependencies.
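
A hedged sketch of that arrangement, using the stock crash-dump paths (the directory argument is just the conventional /var/crash/`hostname` location):

  dumpadm -n                           # keep the dump device, but do not run savecore automatically on reboot
  savecore -v /var/crash/`hostname`    # run later, by hand, only if a particular dump is worth keeping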

--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on top of 6140 FC array

2008-07-02 Thread Mertol Ozyoney
Depends on what benefit you are looking for. 

If you are looking for ways to improve redundancy, you can still benefit from ZFS:

a)  ZFS snapshots will give you the ability to withstand soft/user errors.

b)  ZFS checksums...

c)  ZFS can mirror (sync or async) a 6140 LUN to another storage array for increased redundancy.

d)  You can put another level of RAID on top of the 6140's internal RAID to increase redundancy.

e)  Use ZFS send/receive to back up data to some other place (see the sketch below).
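
A hedged sketch of option (e), with hypothetical pool, dataset and host names (a full initial stream; later runs would use 'zfs send -i' for incrementals):

  zfs snapshot tank/data@nightly
  zfs send tank/data@nightly | ssh backuphost zfs receive backup/data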

Best regards

 

Mertol 

 

 



Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL PROTECTED]

 

 

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Justin Vassallo
Sent: Wednesday, July 02, 2008 3:23 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] zfs on top of 6140 FC array

 

When set up with  multi-pathing to dual redundant controllers, is layering
zfs on top of the 6140 of any benefit? AFAIK this array does have internal
redundant paths up to the disk connection.

 

justin

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] J4200/J4400 Array

2008-07-02 Thread Mertol Ozyoney
Availability may depend on where you are located, but the J4200 and J4400 are
available for most regions. 
This equipment is engineered to work well with Sun open storage components
like ZFS. 
Besides the price advantage, the J4200 and J4400 offer unmatched bandwidth to hosts
or to stacked units. 

You can get the price from your Sun account manager.

Best regards
Mertol 



Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL PROTECTED]


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ben B.
Sent: Wednesday, July 02, 2008 2:49 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] J4200/J4400 Array

Hi,

According to the Sun Handbook, there is a new array :
SAS interface
12 disks SAS or SATA

ZFS could be used nicely with this box.

There is an another version called
J4400 with 24 disks.

Doc is here :
http://docs.sun.com/app/docs/coll/j4200

Does someone know price and availability for these products ?

Best Regards,
Ben
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Q: grow zpool build on top of iSCSI devices

2008-07-02 Thread Thomas Nau
Hi all.

We currently move out a number of iSCSI servers based on Thumpers 
(x4500) running both Solaris 10 and OpenSolaris build 90+.  The 
targets on the machines are based on ZVOLs.  Some of the clients use those 
iSCSI disks to build mirrored zpools.  As the volume size on the x4500 
can easily be grown, I would like to know if that growth in space can be 
propagated to the client zpools without the need for resyncing.  Destroying 
the iSCSI targets is just fine, as is importing/exporting pools.

So far I have managed to track the problem down to the good old partition 
table created by fdisk, which seems to be one root cause for not seeing the 
added space after recreating the targets.  The latter was necessary to make 
sure the disk size matches the ZVOL size.

Any hints are greatly appreciated!
Thomas

-
GPG fingerprint: B1 EE D2 39 2C 82 26 DA  A5 4D E0 50 35 75 9E ED
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Some basic questions about getting the best performance for database usage

2008-07-02 Thread Richard Elling
Christiaan Willemsen wrote:
 Hi Richard,

 Richard Elling wrote:
 It should cost less than a RAID array...
 Advertisement: Sun's low-end servers have 16 DIMM slots.

 Sadly, those are by far more expensive than what I have here from our 
 own server supplier...


ok, that pushed a button.  Let's see...  I just surfed the 4 major
server vendors for their online store offerings without logging in
(no discounts applied).  I was looking for a 64 GByte 1-2U rack
server with 8 internal disk drives.  Due to all of the vendors having
broken stores, in some form or another, it was difficult to actually
get an exact, orderable configuration, but I was able to come close.
Requirements: redundant power supplies, 8x 146 GByte 10k rpm
disks, 64 GBytes of RAM, 4 cores of some type, no OS (I'll use
OpenSolaris, thank you :-)  All prices in USD.

IBM - no 1-2U product with 64 GByte memory capacity; the x3650
line has only 12 slots available until you get to the 4U servers. 
Didn't make the first cut.  But for the record, if you want it at
48 GBytes, $10,748, and if you could add 16 GBytes more, it
would come in at around $12,450... not bad.

HP - DL380 G5 looks promising. Site had difficulty calculating
the price, but it cruised in at $23,996.

Dell - PowerEdge 2970 seemed to be the most inexpensive, at
first.  But once configured (store showed configuration errors,
but it seems to be a bug in the error reporting itself). $21,825.

Sun - X4150 is actually 1U while the others are 2U.  Store
would only allow me to configure 60 GBytes -- 1 pair of
DIMMs were 2 GByte.  I'm sure I could get it fully populated
with a direct quote.  $12,645.

The way I see it, for your solution Sun offers the best value
by far.  But more importantly, it really helps to shop around
for these x64 boxes.  I was quite surprised to find Sun's price
to be nearly half of HP and Dell...
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-02 Thread Richard Elling
Tommaso Boccali wrote:
 Ciao, 
 the rot filesystem of my thumper is a ZFS with a single disk:

 bash-3.2# zpool status rpool
   pool: rpool
  state: ONLINE
  scrub: none requested
 config:

  NAME        STATE     READ WRITE CKSUM
  rpool       ONLINE       0     0     0
    c5t0d0s0  ONLINE       0     0     0
  spares
    c0t7d0    AVAIL
    c1t6d0    AVAIL
    c1t7d0    AVAIL


 is it possible to add a mirror to it? I seem to be able only to add a 
 new PAIR of disks in mirror, but not to add a mirror to the existing 
 disk ...


As Enda and Robert mentioned, zpool attach will add the mirror.
But note that the X4500 has only two possible boot devices:
c5t0d0 and c5t4d0.  This is a BIOS limitation.  So you will want
to mirror with c5t4d0 and configure the disks for boot.  See the
docs on ZFS boot for details on how to configure the boot sectors
and grub.
 -- richard
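
A hedged sketch of that sequence, using the device names above (the boot-block step shown is the x86/x64 installgrub form; check the ZFS boot docs for the exact invocation on your release):

  zpool attach rpool c5t0d0s0 c5t4d0s0    # attach the second disk as a mirror of the first
  zpool status rpool                      # wait here until the resilver completes
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0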

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread sanjay nadkarni (Laptop)
Mike Gerdts wrote:
 On Wed, Jul 2, 2008 at 10:08 AM, David Magda [EMAIL PROTECTED] wrote:
   
 Quite often swap and dump are the same device, at least in the
 installs that I've worked with, and I think the default for Solaris
 is that if dump is not explicitly specified it defaults to swap, yes?
 Is there any reason why they should be separate?
 

 Aside from what Kyle just said...

 If they are separate you can avoid doing savecore if you are never
 going to read it.  For most people, my guess is that savecore just
 means that they cause a bunch of thrashing during boot (swap/dump is
 typically on same spindles as /var/crashh), waste some space in
 /var/crash, and never look at the crash dump.  If you come across a
 time where you actually do want to look at it, you can manually run
 savecore at some time in the future.

 Also, last time I looked (and I've not seen anything to suggest it is
 fixed) proper dependencies do not exist to prevent paging activity
 after boot from trashing the crash dump in a shared swap+dump device -
 even when savecore is enabled.  It is only by luck that you get
 anything out of it.  Arguably this should be fixed by proper SMF
 dependencies.
   
Really? Back when I looked at it, dumps were written to the back end of 
the swap device.  This would prevent paging from writing on top of a 
valid dump.  Furthermore, when the system is coming up, savecore was 
run very early to grab the core so that paging would not trash it. 


-Sanjay

 --
 Mike Gerdts
 http://mgerdts.blogspot.com/
 ___
 caiman-discuss mailing list
 [EMAIL PROTECTED]
 http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread Kyle McDonald
sanjay nadkarni (Laptop) wrote:
 Mike Gerdts wrote:
   
 On Wed, Jul 2, 2008 at 10:08 AM, David Magda [EMAIL PROTECTED] wrote:
   
 
 Quite often swap and dump are the same device, at least in the
 installs that I've worked with, and I think the default for Solaris
 is that if dump is not explicitly specified it defaults to swap, yes?
 Is there any reason why they should be separate?
 
   
 Aside from what Kyle just said...

 If they are separate you can avoid doing savecore if you are never
 going to read it.  For most people, my guess is that savecore just
 means that they cause a bunch of thrashing during boot (swap/dump is
 typically on same spindles as /var/crashh), waste some space in
 /var/crash, and never look at the crash dump.  If you come across a
 time where you actually do want to look at it, you can manually run
 savecore at some time in the future.

 Also, last time I looked (and I've not seen anything to suggest it is
 fixed) proper dependencies do not exist to prevent paging activity
 after boot from trashing the crash dump in a shared swap+dump device -
 even when savecore is enabled.  It is only by luck that you get
 anything out of it.  Arguably this should be fixed by proper SMF
 dependencies.
   
 
 Really ? Back when I looked at it, dumps were written to the back end of 
 the swap device.  This would prevent paging from writing on top of a 
 valid dump.  Furthermore  when the system is  coming up, savecore was 
 run very early to grab core so that paging would not trash the core. 

   
I'm guessing Mike is suggesting that making the swap device available 
for paging should be dependent on savecore having already completed its 
job.

-Kyle

 -Sanjay

   
 --
 Mike Gerdts
 http://mgerdts.blogspot.com/
 ___
 caiman-discuss mailing list
 [EMAIL PROTECTED]
 http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
   
 

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread George Wilson
Kyle McDonald wrote:
 David Magda wrote:
   
 Quite often swap and dump are the same device, at least in the  
 installs that I've worked with, and I think the default for Solaris  
 is that if dump is not explicitly specified it defaults to swap, yes?  
 Is there any reason why they should be separate?

   
 
 I beleive there are technical limitations with ZFS Boot that stop them 
 from sharing the same Zvol..
   
Yes, there is. Swap zvols are ordinary zvols which still COW their 
blocks and leverage checksumming, etc. Dump zvols don't have this luxury 
because when the system crashes you are limited in the number of tasks 
that you can perform. So we solved this by changing the personality of a 
zvol when it's added as a dump device. In particular, we needed to make 
sure that all the blocks that the dump device cared about were available 
at the time of a system crash. So we preallocate the dump device when it 
gets created. We also follow a different I/O path when writing to a dump 
device allowing us to behave as if we were a separate partition on the 
disk. The dump subsystem doesn't know the difference which is exactly 
what we wanted. :-)
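
A hedged sketch of what this looks like from the admin side (the size and names are only the conventional ones):

  zfs create -V 2g rpool/dump            # the volume is preallocated once it becomes a dump device
  dumpadm -d /dev/zvol/dsk/rpool/dump    # dedicate it as the dump device
  dumpadm                                # show the resulting configuration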

 Having two just seems like a waste to me, even with disk sizes being  
 what they are (and growing). A separate dump device is only really  
 needed if something goes completely wrong, otherwise it's just  
 sitting there doing nothing. If you're panicing, then whatever is  
 in swap is now no longer relevant, so over writing it is no big deal.
   
 
 That said, with all the talk of dynamic sizing, If, during normal 
 operation the swap Zvol has space allocated, and the Dump Zvol is sized 
 to 0. Then during a panic, could the swap volume be sized to 0 and the 
 dump volume expanded to whatever size.
   

Unfortunately that's not possible for the reasons I mentioned. You can 
resize the dump zvol to a smaller size, but unfortunately you can't make 
it size 0, as there is a minimum size requirement.

Thanks,
George
 This at least while still requireing 2 Zvol's would allow (even when the 
 rest of the pool is short on space) a close approximation of the old 
 behavior of sharing the same slice for both swap and dump.

   -Kyle

   
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   
 

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Streaming video and audio over CIFS lags.

2008-07-02 Thread Juho Mäkinen
 A few things to try: put in a different ethernet card
 if you have one,
 on one or more ends.  Realtek works, but I've been
 unimpressed with
 their performance in the past.  An Intel x1 pci
 express card will only
 run you around $40, and I've seen much better results
 with them.  

I first replaced my cheapest-from-the-shop-cheap 1Gbps network switch with a 
much better HP unmanaged switch, which didn't solve the problem. 

Then I went and bought an Intel PCI Gigabit Ethernet card for 25€ which seems 
to have solved the problem. I still need to do some testing though to verify.

 Is hardware checksum offloading enabled on either end?
  How does another
 application (e.g., scp) behave between the two
 machines?
What's this?

Thanks for the responses so far; hopefully my Intel network card has solved 
my problems =)

 - Juho Mäkinen
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Streaming video and audio over CIFS lags.

2008-07-02 Thread Will Murnane
On Wed, Jul 2, 2008 at 13:16, Juho Mäkinen [EMAIL PROTECTED] wrote:
 Then I went and bought an Intel PCI Gigabit Ethernet card for 25€ which seems 
 to have solved the problem. I still need to do some testing though to verify.
Glad to hear it.

 Is hardware checksum offloading enabled on either end?
  How does another
 application (e.g., scp) behave between the two
 machines?
 What's this?
Hardware checksum offloading is when the card calculates the checksum
of the outgoing packets in hardware rather than letting the OS do it.
It can give better performance than software (depending on
configuration) but some cards do the checksums improperly, which can
lead to poor performance since bad packets are dropped.  On Solaris
you can try the program here:
http://www.opensolaris.org/jive/thread.jspa?threadID=31058&tstart=150
to see if it's enabled; on Linux run ethtool -k ifname, or
netstat -ant on Windows.

scp is secure copy; it transfers files from one machine to another
over an ssh tunnel.  It may be processor-bound, especially since its
encryption is single-threaded, but it's a good thing to compare to
when CIFS is misbehaving.  It's included with Linux and Solaris, and
you can try WinSCP on Windows.
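
A hedged sketch of both checks, assuming a Linux client with a hypothetical interface name eth0 and a server reachable as fileserver:

  ethtool -k eth0 | grep -i checksum                                    # is TX/RX checksum offload on?
  dd if=/dev/zero bs=1M count=1024 | ssh fileserver 'cat > /dev/null'   # rough ssh throughput, to compare against CIFS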

That said, if it's working with the replacement card I wouldn't worry
about it too much ;)

 So far thanks for the responses so far, hopefully my Intel network card 
 solved my problems =)
You're welcome, and I hope it keeps working for you.

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] J4200/J4400 Array

2008-07-02 Thread Albert Chin
On Wed, Jul 02, 2008 at 04:49:26AM -0700, Ben B. wrote:
 According to the Sun Handbook, there is a new array :
 SAS interface
 12 disks SAS or SATA
 
 ZFS could be used nicely with this box.

Doesn't seem to have any NVRAM storage on board, so seems like JBOD.

 There is an another version called
 J4400 with 24 disks.
 
 Doc is here :
 http://docs.sun.com/app/docs/coll/j4200

-- 
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] J4200/J4400 Array

2008-07-02 Thread Tim
So when are they going to release the MSRP?



On 7/2/08, Mertol Ozyoney [EMAIL PROTECTED] wrote:
 Availibilty may depend on where you are located but J4200 and J4400 are
 available for most regions.
 Those equipment is engineered to go well with Sun open storage components
 like ZFS.
 Besides price advantage, J4200 and J4400 offers unmatched bandwith to hosts
 or to stacking units.

 You can get the price from your sun account manager

 Best regards
 Mertol



 Mertol Ozyoney
 Storage Practice - Sales Manager

 Sun Microsystems, TR
 Istanbul TR
 Phone +902123352200
 Mobile +905339310752
 Fax +90212335
 Email [EMAIL PROTECTED]


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Ben B.
 Sent: Wednesday, July 02, 2008 2:49 PM
 To: zfs-discuss@opensolaris.org
 Subject: [zfs-discuss] J4200/J4400 Array

 Hi,

 According to the Sun Handbook, there is a new array :
 SAS interface
 12 disks SAS or SATA

 ZFS could be used nicely with this box.

 There is an another version called
 J4400 with 24 disks.

 Doc is here :
 http://docs.sun.com/app/docs/coll/j4200

 Does someone know price and availability for these products ?

 Best Regards,
 Ben


 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] /var/log as a single zfs filesystem -- problems at boot

2008-07-02 Thread Dan McDonald
I created a filesystem dedicated to /var/log so I could keep compression on the 
logs.  Unfortunately, this caused problems at boot time: my log ZFS 
dataset couldn't be mounted because /var/log already contained bits.  Some of 
that, to be fair, could be fixed by having some SMF services explicitly depend 
on svc:/system/filesystem/local:default that don't already, but I get the 
feeling there's more to it than that.

Any recent insights into /var/log being its own filesystem?

Thanks!
Dan
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] /var/log as a single zfs filesystem -- problems at boot

2008-07-02 Thread Kyle McDonald
Dan McDonald wrote:
 I created a filesystem dedicated to /var/log so I could keep compression on 
 the logs.  Unfortunately, this caused problems at boot time because my log 
 ZFS dataset couldn't be mounted because /var/log already contained bits.  
 Some of that, to be fair, could be fixed by having some SMF services 
 explicitly depend on svc:/system/filesystem/local:default that don't already, 
 but I get the feeling there's more to it than that.

 Any recent insights into /var/log being its own filesystem?

   
I don't know if the order has changed with ZFS boot, but I had found a 
way to put lofi mounts of ISO files in the vfstab, and then had a 
similar problem when I moved those ISOs so that they lived on a 
ZFS filesystem - the /etc/vfstab was processed much earlier than the ZFS 
mounts.

So one solution you might try (it worked for me) would be to set the ZFS 
mount point to legacy, and then create an entry in /etc/vfstab so that 
it gets mounted earlier in the boot process.

  -Kyle
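
A hedged sketch of that workaround (the dataset name rpool/varlog is hypothetical):

  zfs set mountpoint=legacy rpool/varlog
  # then add a vfstab line so it is mounted early in boot:
  # rpool/varlog  -  /var/log  zfs  -  yes  -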

 Thanks!
 Dan
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] /var/log as a single zfs filesystem -- problems at boot

2008-07-02 Thread Richard Elling
Dan McDonald wrote:
 I created a filesystem dedicated to /var/log so I could keep compression on 
 the logs.  Unfortunately, this caused problems at boot time because my log 
 ZFS dataset couldn't be mounted because /var/log already contained bits.  
 Some of that, to be fair, could be fixed by having some SMF services 
 explicitly depend on svc:/system/filesystem/local:default that don't already, 
 but I get the feeling there's more to it than that.

 Any recent insights into /var/log being its own filesystem?
   

I think this is a symptom of another problem.  But you should be able
to set up the installation on a separate /var/log (using JumpStart or
LiveUpgrade) to get past this chicken-and-egg problem.  Alternatively,
you should be able to boot from CD/DVD and relocate the stuff in
/var/log to a /var/log filesystem.  Once the original /var/log directory
contents are gone, it should work ok.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] evil tuning guide updates

2008-07-02 Thread Mike Gerdts
I was making my way through the evil tuning guide and noticed a couple
updates that seem appropriate.  I tried to create an account to be
able to add this into the discussion tab but account creation seems
to be a NOP.


http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#RFEs

- 6533726 fixed in snv_79.  No sign of it in S10 yet.

http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Solaris_10_8.2F07_and_Solaris_Nevada_.28snv_51.29_Releases

- No need to convert to hex.  According to system(4):

 set [module:]symbol {=, |, &} [~][-]value

 . . .

 Operations that  are  supported  for  modifying  integer
 variables  are: simple assignment, inclusive bitwise OR,
 bitwise AND, one's complement, and  negation.  Variables
 in a specific loadable module can be targeted for modif-
 ication by specifying the variable  name  prefixed  with
 the kernel module name and a colon (:) separator. Values
 can be specified as hexadecimal (0x10), Octal (046),  or
 Decimal (5).
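
For example, these /etc/system lines are equivalent; zfs_arc_max is just the usual Evil Tuning Guide example, and both values cap the ARC at 4 GiB:

  * either form is accepted by system(4)
  set zfs:zfs_arc_max = 4294967296
  set zfs:zfs_arc_max = 0x100000000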

http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#RFEs_2

- 6429205 fixed in snv_87.
- Good explanation at
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-April/046937.html


-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-02 Thread Orvar Korvar
Remember, you can not delete a device, so be careful what you add.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-02 Thread Richard Elling
Orvar Korvar wrote:
 Remember, you can not delete a device, so be careful what you add.
   

You can detach disks from mirrors.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-02 Thread dick hoogendijk
On Wed, 02 Jul 2008 13:41:18 -0700
Richard Elling [EMAIL PROTECTED] wrote:

 Orvar Korvar wrote:
  Remember, you can not delete a device, so be careful what you add.
 
 You can detach disks from mirrors.

 So, a mirror of two disks becomes a system of two separate disks?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv91 ++
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-02 Thread Richard Elling
dick hoogendijk wrote:
 On Wed, 02 Jul 2008 13:41:18 -0700
 Richard Elling [EMAIL PROTECTED] wrote:

   
 Orvar Korvar wrote:
 
 Remember, you can not delete a device, so be careful what you add.
   
 You can detach disks from mirrors.
 

 So, a mirror of two disks becomes a system of two seperate disks?

   

zpool detach will detach a disk from a mirror, but it will no longer
be part of the pool.  If you want to add the detached disk back to the
pool as a stripe, then that would be a zpool detach followed by a
zpool add.

Please take a careful look at the zpool add, attach, and detach options.
 -- richard
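
A hedged sketch with hypothetical names, following the description above:

  zpool detach tank c1t5d0    # c1t5d0 leaves the mirror and is no longer part of the pool
  zpool add tank c1t5d0       # optional: add it back as a separate, non-redundant top-level vdev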

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] /var/log as a single zfs filesystem -- problems at boot

2008-07-02 Thread Akhilesh Mritunjai
Can't say about /var/log, but I have a system here with /var on zfs.

My assumption was that, not just /var/log, but essentially all of /var is 
supposed to be runtime cruft, and so can be treated equally.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] /var/log as a single zfs filesystem -- problems at boot

2008-07-02 Thread Richard Elling
Akhilesh Mritunjai wrote:
 Can't say about /var/log, but I have a system here with /var on zfs.

 My assumption was that, not just /var/log, but essentially all of /var is 
 supposed to be runtime cruft, and so can be treated equally.
   

Not really.  Please see the man page for filesystem for the details.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Accessing zfs partitions on HDD from LiveCD

2008-07-02 Thread Benjamin Ellison
To answer my own question -- yes, it worked beautifully (zpool import -f tank).

Now to figure out why my network connection doesn't want to work after being 
set up the exact same way again :(
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool i/o error

2008-07-02 Thread Victor Pajor
# rm /etc/zfs/zpool.cache
# zpool import
pool: zfs
id: 3801622416844369872
state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
see: http://www.sun.com/msg/ZFS-8000-5E
config:

        zfs         FAULTED  corrupted data
          raidz1    ONLINE
            c5t1d0  ONLINE
            c7t0d0  UNAVAIL  corrupted data
            c7t1d0  UNAVAIL  corrupted data
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool i/o error

2008-07-02 Thread Bryan Wagoner
I'll have to do some thunkin' on this.  We just need to get back one of the 
disks; both would be great, but one more would do the trick. 

After all other avenues have been tried, one thing that you can try is to use 
the 2008.05 LiveCD and boot into it without installing the OS.  Import 
the pool and see if you have any better luck.  If not, you can try the zdb -l 
again under the LiveCD, as there have been bugs with that in the past on older 
versions of the ZFS code. 

Will edit this message if I can think of something else to try.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss