Re: [zfs-discuss] Compatibility of Hitachi Deskstar 7K3000 HDS723030ALA640 with ZFS

2012-03-08 Thread Jim Klimov

2012-03-07 17:21, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of luis Johnstone



As far as I can tell, the Hitachi Deskstar 7K3000 (HDS723030ALA640) uses
512B sectors and so I presume does not suffer from such issues (because it
doesn't lie about the physical layout of sectors on-platter).


I think what you mean to ask is "Is the HD7K3000 a piece of junk?"  Because
any disk which lies about its physical sectors is a piece of junk,
regardless of what filesystem is going to be on it.

This isn't a ZFS question.  (Nothing wrong with asking - I'm not trying to
discourage having the discussion, but please don't associate such problems
with ZFS as if ZFS is unique in that way.)



Well, of the currently-used filesystems, ZFS does have certain issues
with 4 KB-sector drives, which do make it rather unique in this respect.

Table/bitmap-based filesystems like NTFS or FAT preallocate their
file-allocation tables, so they carry no extra metadata overhead
compared to 512-byte-sectored disks. In part this is because 4 KB
clusters have been their default for quite a while, so users'
expectations don't change.

The ZFS tree has many small nodes as well as file tails, so it
can effectively utilize small 512-byte blocks on disk. When you
migrate the same pool to ashift=12, it balloons to require more
disk space (from several percent up to ten or twenty, according
to internet rumours). Users often don't expect that, so it is
something of an issue.
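
For the curious, one (unofficial) way to see what block size a given
pool was created with is to look at its ashift value, e.g.:

  # zdb -C mypool | grep ashift
          ashift: 9

ashift=9 means 512-byte blocks, ashift=12 means 4 KB; "mypool" is just
a placeholder here, and zdb output details vary between builds.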

While it can be speculated that other filesystems already take this
slack space from users on any drive, this change of behaviour in ZFS
across drive types can be seen by some as a drawback.

//Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-08 Thread Paul Kraus
Lots of suggestions (not included here), but ...

With the exception of Cindy's suggestion of using 4 disks and
mirroring (zpool attach two new disks to existing vdevs), I would
absolutely NOT do anything unless I had a known good backup of the
data! I have seen too many cases described here on this list of people
trying complicated procedures with ZFS, making one small mistake,
and losing their data, or spending weeks or months trying to recover
it.
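
For reference, that attach approach would look roughly like this,
assuming the pool already contains two single-disk vdevs after the
accidental add (the device names below are purely hypothetical):

  # zpool attach export c0t0d0 c0t2d0   # mirror the original disk
  # zpool attach export c0t1d0 c0t3d0   # mirror the accidentally-added disk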

Regarding IMPORT / EXPORT, these functions have two real
purposes in my mind (rough example commands follow the list):

1. you want to move a zpool from one host to another. You EXPORT from
the first host, physically move the disks, then IMPORT on the new
host.

2. You want (or need) to physically change the connectivity between
the disks and the host, and implicit in that is that the device paths
will change. EXPORT, change connectivity, IMPORT. Once again I have
seen many cases described on this list of folks who moved disks
around, which ZFS is _supposed_ to handle, but then had a problem.
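
In both cases the commands themselves are trivial; roughly (the pool
name here is only an example):

  oldhost# zpool export mypool
  ... physically move or recable the disks ...
  newhost# zpool import           # list pools visible to the new host
  newhost# zpool import mypool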

I use ZFS first for reliability and second for performance. With
that in mind, one of my primary rules for ZFS is to NOT move disks
around without first exporting the zpool. I have done some pretty rude
things involving devices underlying vdevs disappearing and then much
later reappearing (mostly in test, but occasionally in production),
and I have yet to lose any data, BUT none of the devices changed path
in the process.

On Wed, Mar 7, 2012 at 4:38 PM, Bob Doolittle bob.doolit...@oracle.com wrote:
 Hi,

 I had a single-disk zpool (export) and was given two new disks for expanded
 storage. All three disks are identically sized, no slices/partitions. My
 goal is to create a raidz1 configuration of the three disks, containing the
 data in the original zpool.

 However, I got off on the wrong foot by doing a zpool add of the first
 disk. Apparently this has simply increased my storage without creating a
 raidz config.

 Unfortunately, there appears to be no simple way to just remove that disk
 now and do a proper raidz create of the other two. Nor am I clear on how
 import/export works and whether that's a good way to copy content from one
 zpool to another on a single host.

 Can somebody guide me? What's the easiest way out of this mess, so that I
 can move from what is now a simple two-disk zpool (less than 50% full) to a
 three-disk raidz configuration, starting with one unused disk? In the end I
 want the three-disk raidz to have the same name (and mount point) as the
 original zpool. There must be an easy way to do this.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
- Technical Advisor, Troy Civic Theatre Company
- Technical Advisor, RPI Players
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-08 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
Just note that the new pool can have a different zpool name but keep
the same old mount point as export.
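
For instance (device names purely hypothetical), once the data has been
moved off and the old pool is out of the way, something like

  # zpool create -m /export newpool raidz c1t0d0 c1t1d0 c1t2d0

or, for an already-existing pool,

  # zfs set mountpoint=/export newpool

would leave the data mounted at the old location even though the pool
name changed.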

-LT


On 3/8/2012 8:40 AM, Paul Kraus wrote:

[quoted text of Paul Kraus's message above trimmed]


--
Hung-Sheng Tsao Ph D.
Founder & Principal
HopBit GridComputing LLC
cell: 9734950840

http://laotsao.blogspot.com/
http://laotsao.wordpress.com/
http://blogs.oracle.com/hstsao/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Receive failing with invalid backup stream error

2012-03-08 Thread Ian Collins

On 03/ 3/12 11:57 AM, Ian Collins wrote:

Hello,

I am having problems sending some snapshots between two fully up-to-date
Solaris 11 systems:

zfs send -i tank/live/fs@20120226_0705 tank/live/fs@20120226_1105 | ssh
remote zfs receive -vd fileserver/live
receiving incremental stream of tank/live/fs@20120226_1105 into
fileserver/live/fs@20120226_1105
cannot receive incremental stream: invalid backup stream

Both pools and filesystems are at the latest revision.  Most of the other
filesystems in the pool can be sent without issues.

The filesystem was upgraded yesterday, which is when the problems
started.  The snapshots are from 26/02.

Other filesystems that were upgraded yesterday receive fine, so I don't
think the problem is directly related to the upgrade.

Any ideas?

I haven't had a solution from support yet, but I do have a workaround if 
anyone else encounters the same problem.


I sent the snapshot to a file, copied the file to the remote host and
piped the file into zfs receive.  That worked, and I was able to send
further snapshots with ssh.
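
The rough shape of it, for anyone who hits the same thing (the
intermediate file name is just an example):

  # zfs send -i tank/live/fs@20120226_0705 tank/live/fs@20120226_1105 > /var/tmp/fs.incr
  # scp /var/tmp/fs.incr remote:/var/tmp/
  remote# zfs receive -vd fileserver/live < /var/tmp/fs.incr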


Odd.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss