Re: [zfs-discuss] ashift and vdevs

2010-12-01 Thread Roch

Brandon High writes:
  On Tue, Nov 23, 2010 at 9:55 AM, Krunal Desai mov...@gmail.com wrote:
   What is the upgrade path like from this? For example, currently I
  
  The ashift is set in the pool when it's created and will persist
  through the life of that pool. If you set it at pool creation, it will
  stay regardless of OS upgrades.
  

It is indeed persistent, but each top-level vdev (mirror, raid-z group,
or drive in a stripe) will have its own value, based on the sector size
at the time the vdev was added to the pool. The sector size of a vdev
that is already part of a pool should not increase (otherwise the vdev
will be faulted).
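
A quick way to see the per-top-level-vdev value is to pull it out of the
pool config with zdb, for example (a sketch; the exact output layout
varies by build, and "mypool" is a made-up pool name):

  # zdb -C mypool | grep ashift
          ashift: 9

One ashift line is printed per top-level vdev; 9 means 512-byte sectors,
12 means 4KByte.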

-r

  -B
  
  --
  Brandon High : bh...@freaks.com


Re: [zfs-discuss] zpool does not like iSCSI ?

2010-12-01 Thread Markus Kovero

  Do you know if these bugs are fixed in Solaris 11 Express ?

 It says it was fixed in snv_140, and S11E is based on snv_151a, so it
 should be in:

 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6907687


I can confirm it works; iSCSI zpools seem to be working happily now.

Yours
Markus Kovero



[zfs-discuss] ZFS snapshot limit?

2010-12-01 Thread f...@ll
Hi,

I must send a zfs snapshot from one server to another. The snapshot is 130GB
in size. My question: does zfs have any limit on the size of the file/stream
it sends?

f...@ll



Re: [zfs-discuss] ZFS snapshot limit?

2010-12-01 Thread Darren J Moffat

On 01/12/2010 13:36, f...@ll wrote:

I must send a zfs snapshot from one server to another. The snapshot is 130GB
in size. My question: does zfs have any limit on the size of the file/stream
it sends?


No.

--
Darren J Moffat


Re: [zfs-discuss] ZFS snapshot limit?

2010-12-01 Thread Menno Lageman

f...@ll wrote:

Hi,

I must send a zfs snapshot from one server to another. The snapshot is 130GB
in size. My question: does zfs have any limit on the size of the file/stream
it sends?


If you are sending the snapshot to another zpool (i.e. using 'zfs send | 
zfs recv') then no, there is no limit. If, however, you send the snapshot 
to a file on the other system (i.e. 'zfs send > somefile') then you are 
limited by what the file system you are creating the file on supports.
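
For example (a sketch; the host, pool and snapshot names here are made up):

  # zfs send tank/data@snap1 | ssh otherhost zfs recv backup/data    # pool to pool: no limit
  # zfs send tank/data@snap1 > /backup/data-snap1.stream             # plain file: limited by that filesystem

so a 130GB stream into a file only works if the target file system can hold
a single 130GB file.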


Menno


Re: [zfs-discuss] ZFS snapshot limit?

2010-12-01 Thread Albert

On 2010-12-01 15:19, Menno Lageman wrote:

f...@ll wrote:

Hi,

I must send a zfs snapshot from one server to another. The snapshot is 130GB
in size. My question: does zfs have any limit on the size of the file/stream
it sends?


If you are sending the snapshot to another zpool (i.e. using 'zfs send |
zfs recv') then no, there is no limit. If, however, you send the snapshot
to a file on the other system (i.e. 'zfs send > somefile') then you are
limited by what the file system you are creating the file on supports.

Menno



Hi,

In my situation it is the first option: I send the snapshot to another server 
using zfs send | zfs recv. The problem is that when the data transfer is 
complete and the machine is rebooted, the zpool has errors or is in the 
FAULTED state.
The first server is physical; the second is a virtual machine running under 
XenServer 5.6.


f...@ll



Re: [zfs-discuss] ZFS snapshot limit?

2010-12-01 Thread Casper . Dik


In my situation it is the first option: I send the snapshot to another server 
using zfs send | zfs recv. The problem is that when the data transfer is 
complete and the machine is rebooted, the zpool has errors or is in the 
FAULTED state.
First server is physical; the second is a virtual machine running under 
XenServer 5.6.


What is the underlying data storage?

Typically what can happen here is that while zfs itself is safe, it needs to 
trust the hardware not to lie to the kernel.  If you write data and then 
reboot/restart the VM, the data should still be there.  If that is not the 
case, the storage has lied to you and you may need to change something in the host.
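
A quick way to test that on the receiving VM (a sketch; 'tank' stands for
whatever the received pool is called, and nothing here is Xen-specific):

  # zpool scrub tank        # re-read every block and verify checksums
  # zpool status -v tank    # look for CKSUM errors / FAULTED vdevs

If a pool that scrubbed clean before the VM restart shows checksum errors
afterwards, the virtual disk layer is losing or reordering writes (e.g.
ignoring cache flushes), and that is what needs fixing on the host.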

Casper



[zfs-discuss] zfs send receive problem/questions

2010-12-01 Thread Don Jackson
Hello, 

I am attempting to move a bunch of zfs filesystems from one pool to another.

Mostly this is working fine, but one collection of file systems is causing me 
problems, and repeated re-reading of man zfs and the ZFS Administrators Guide 
is not helping.  I would really appreciate some help/advice.

Here is the scenario:
I have a nested hierarchy of zfs file systems.
Some of the deeper filesystems are snapshotted.
All of this exists on the source zpool.
First I recursively snapshotted the whole subtree:

   zfs snapshot -r nasp...@xfer-11292010 

Here is a subset of the source zpool:

# zfs list -r naspool
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
naspool                                   1.74T  42.4G  37.4K  /naspool
nasp...@xfer-11292010                         0      -  37.4K  -
naspool/openbsd                            113G  42.4G  23.3G  /naspool/openbsd
naspool/open...@xfer-11292010                 0      -  23.3G  -
naspool/openbsd/4.4                       21.6G  42.4G  2.33G  /naspool/openbsd/4.4
naspool/openbsd/4...@xfer-11292010            0      -  2.33G  -
naspool/openbsd/4.4/ports                  592M  42.4G   200M  /naspool/openbsd/4.4/ports
naspool/openbsd/4.4/po...@patch000        52.5M      -   169M  -
naspool/openbsd/4.4/po...@patch006        54.7M      -   194M  -
naspool/openbsd/4.4/po...@patch007        54.9M      -   194M  -
naspool/openbsd/4.4/po...@patch013        55.1M      -   194M  -
naspool/openbsd/4.4/po...@patch016        35.1M      -   200M  -
naspool/openbsd/4.4/po...@xfer-11292010       0      -   200M  -

Now I want to send this whole hierarchy to a new pool.

# zfs create npool/openbsd
# zfs send -R naspool/open...@xfer-11292010 | zfs receive -Fv npool/openbsd
receiving full stream of naspool/open...@xfer-11292010 into npool/open...@xfer-11292010
received 23.5GB stream in 883 seconds (27.3MB/sec)
cannot receive new filesystem stream: destination has snapshots (eg. npool/open...@xfer-11292010)
must destroy them to overwrite it

What am I doing wrong?  What is the proper way to accomplish my goal here?

And I have a follow up question:

I had to snapshot the source zpool filesystems in order to zfs send them.

Once they are received on the new zpool, I really don't need or want this 
snapshot on the receiving side.
Is it OK to zfs destroy that snapshot?

I've been pounding my head against this problem for a couple of days, and I 
would definitely appreciate any tips/pointers/advice.

Don


Re: [zfs-discuss] zfs send receive problem/questions

2010-12-01 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Don Jackson
 
 # zfs send -R naspool/open...@xfer-11292010 | zfs receive -Fv npool/openbsd
 receiving full stream of naspool/open...@xfer-11292010 into npool/open...@xfer-11292010
 received 23.5GB stream in 883 seconds (27.3MB/sec)
 cannot receive new filesystem stream: destination has snapshots (eg. npool/open...@xfer-11292010)
 must destroy them to overwrite it

Somewhere in either the ZFS admin guide, the ZFS troubleshooting guide, or
the ZFS best practices guide, I vaguely recall a note about a bug with -R
prior to some zpool version, where the workaround was to send each
filesystem individually.

Prior to Solaris 10u9, I simply assumed -R was broken, and I always did
individual filesystems.  10u9 is not a magic number, and maybe it was fixed
earlier; I'm just saying that, out of black magic and superstition, I never
trusted -R until 10u9.

I notice your mention of OpenBSD.  I presume you're running an old version
of ZFS.


 What am I doing wrong?  What is the proper way to accomplish my goal
 here?

You might not be doing anything wrong.  But I will suggest doing the
filesystems individually anyway.  You might get a different (more
successful) result.
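
A rough, untested sketch of what that loop could look like, using the pool
and snapshot names from your post (an already-existing destination such as
npool/openbsd will need receive -F):

  #!/bin/ksh
  SNAP=xfer-11292010
  for fs in $(zfs list -H -o name -r naspool/openbsd); do
      dest=npool/${fs#naspool/}       # e.g. naspool/openbsd/4.4 -> npool/openbsd/4.4
      zfs send ${fs}@${SNAP} | zfs receive -v $dest || break
  done

Note this only transfers the @xfer-11292010 snapshot of each filesystem; the
older per-patch snapshots would need incremental (zfs send -i) runs per
filesystem if you want to keep them.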


 Once they are received on the new zpool, I really don't need nor want this
 snapshot on the receiving side.
 Is it OK to zfs destroy that snapshot?

Yes.  It is safe to destroy snapshots, and you don't lose the filesystem.
When I script this, I just grep for the presence of '@' in the thing which
is scheduled for destruction, and then I know I can't possibly destroy the
latest version of the filesystem.
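
In script form that check is just something like (a sketch; $target is
whatever is about to be destroyed):

  echo "$target" | grep @ > /dev/null && zfs destroy "$target"

so a bare filesystem name (no '@') can never reach zfs destroy.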



Re: [zfs-discuss] zfs send receive problem/questions

2010-12-01 Thread Don Jackson
Here is some more info on my system:

This machine is running Solaris 10 U9, with all the patches as of 11/10/2010.

The source zpool I am attempting to transfer from was originally created on an 
older OpenSolaris (specifically Nevada) release; I think it was build 111.
I did a zpool export on that zpool, physically transferred those drives to 
the new machine, did a zpool import there, and then upgraded the ZFS 
version on the imported zpool, so now:

# zpool upgrade
This system is currently running ZFS pool version 22.
All pools are formatted using this version.
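
(For reference, the move described above was roughly the following -- a
sketch; zpool import may need -f if the pool was not exported cleanly:

  old# zpool export naspool
      ... move the drives to the new machine ...
  new# zpool import naspool
  new# zpool upgrade naspool

zpool upgrade with no arguments, as in the output above, only reports
versions; given a pool name it performs the upgrade.)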

The reference to OpenBSD in the directory paths in the listings I provided 
refers only to the data stored therein; the actual OS I am running here 
is Solaris 10.

# zpool status naspool npool
  pool: naspool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
naspool ONLINE   0 0 0
  raidz2-0  ONLINE   0 0 0
c0t1d0  ONLINE   0 0 0
c0t2d0  ONLINE   0 0 0
c0t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0

errors: No known data errors

  pool: npool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
npool   ONLINE   0 0 0
  raidz3-0  ONLINE   0 0 0
c0t4d0  ONLINE   0 0 0
c0t5d0  ONLINE   0 0 0
c0t6d0  ONLINE   0 0 0
c0t7d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0
c1t5d0  ONLINE   0 0 0
c1t6d0  ONLINE   0 0 0
c1t7d0  ONLINE   0 0 0

errors: No known data errors


Re: [zfs-discuss] ashift and vdevs

2010-12-01 Thread Miles Nordin
 kd == Krunal Desai mov...@gmail.com writes:

kd http://support.microsoft.com/kb/whatever

dude.seriously?

This is worse than a waste of time.  Don't read a URL that starts this
way.

kd Windows 7 (even with SP1) has no support for 4K-sector
kd drives.

NTFS has 4KByte allocation units, so all you have to do is make sure
the NTFS partition starts at an LBA that's a multiple of 8, and you
have full performance.  NTFS is probably the reason WD chose 4KByte.
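
(Quick arithmetic check: a start LBA in 512-byte units is aligned exactly
when it divides evenly by 8, e.g.

  $ echo $((2048 % 8))    # 2048 * 512B = 1MiB boundary -> aligned
  0
  $ echo $((63 % 8))      # classic CHS-style offset of 63 -> misaligned
  7

-- just an illustration; any shell with arithmetic expansion will do.)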

Linux XFS is also locked to a 4KByte block size, because that's the VM
page size and XFS cannot use a block size other than the page size.
So 4KByte is good (except for ZFS).

kd can you explicate further about these drives and their
kd emulation (or lack thereof), I'd appreciate it!

further explication: all drives will have the emulation, or else you
wouldn't be able to boot from them.  The world of peecees isn't as
clean as you imagine.

kd which 4K sector drives offer a jumper or other method to
kd completely disable any form of emulation and appear to the
kd host OS as a 4K-sector drive?

None that I know of.  It's probably simpler and less silly to leave
the emulation in place forever than start adding jumpers and modes and
more secret commands.

It doesn't matter what sector size the drive presents to the host OS,
because you can get the same performance characteristics by always
writing an aligned set of 8 sectors at once, which is what people are
trying to force ZFS to do by adding 3 to ashift.  Whether the number is
reported by some messy newly-invented SCSI command, input by the
operator, or derived by a mini-benchmark added to
format/fmthard/zpool/whatever-applies-the-label, this is done once for
the life of the disk; after that, whenever the OS needs this number it
gets it by issuing READ on the label.  Day-to-day, the drive doesn't
need to report it.  Therefore, it is the ``ability to accommodate a
minimum aligned write size'' which people badly want added to their
operating systems, and no one sane really cares about automatic
electronic reporting of the true sector size.
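
(For what it's worth, other ZFS implementations have since grown exactly
that operator-supplied knob -- an ashift property settable when a vdev is
created -- which looks roughly like:

  # zpool create -o ashift=12 tank mirror disk1 disk2

a sketch with made-up disk names; the Solaris builds discussed in this
thread do not expose it.)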

Unfortunately (but predictably) it sounds like if you 'zfs replace' a
512-byte drive with a 4096-byte drive you are screwed.  Therefore even
people with 512-byte drives might want to set their ashift for
4096-byte drives right now.  This is another reason it's a waste of
time to worry about reporting/querying a drive's ``true'' sector size:
for a pool of redundant disks, the needed planning is more complicated
than query-report-obey.

Also, did anyone ever clarify whether the slog has an ashift?  Or is it
forced to 512?  Or derived from whatever vdev will eventually contain the
separately-logged data?  I would expect generalized immediate Caring
about that, since no slogs except ACARD and DDRDrive will have 512-byte
sectors.




Re: [zfs-discuss] Seagate ST32000542AS and ZFS perf

2010-12-01 Thread Miles Nordin
 t == taemun  tae...@gmail.com writes:

 t I would note that the Seagate 2TB LP has a 0.32% Annualised
 t Failure Rate.

bullshit.




Re: [zfs-discuss] Seagate ST32000542AS and ZFS perf

2010-12-01 Thread taemun
On 2 December 2010 16:17, Miles Nordin car...@ivy.net wrote:

  t == taemun  tae...@gmail.com writes:

 t I would note that the Seagate 2TB LP has a 0.32% Annualised
 t Failure Rate.

 bullshit.


Apologies, should have read: Specified Annualised Failure Rate.


Re: [zfs-discuss] ashift and vdevs

2010-12-01 Thread Neil Perrin

On 12/01/10 22:14, Miles Nordin wrote:

Also did anyone ever clarify whether the slog has an ashift?  or is it
forced-512?  or derived from whatever vdev will eventually contain the
separately-logged data?  I would expect generalized immediate Caring
about that since no slogs except ACARD and DDRDrive will have 512-byte
sectors.
  

The minimum slog write is

#define ZIL_MIN_BLKSZ 4096

and all writes are also rounded to multiples of ZIL_MIN_BLKSZ.
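
The round-up is the usual power-of-two trick, e.g. (just an illustration,
not the actual zil.c code; works in any shell with C-style arithmetic):

  $ echo $(( (5000 + 4096 - 1) & ~(4096 - 1) ))
  8192

i.e. a 5000-byte log write is padded out to two 4KB blocks.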

Neil.