Re: [zfs] [developer] Re: [smartos-discuss] an interesting survey -- the zpool with most disks you have ever built

2016-03-07 Thread Fred Liu
2016-03-08 4:55 GMT+08:00 Liam Slusser :

> I don't have a 2000-drive array (that's amazing!) but I do have two
> 280-drive arrays which are in production.  Here are the generic stats:
>
> server setup:
> OpenIndiana oi_151
> 1 server rack
> Dell r720xd 64g ram with mirrored 250g boot disks
> 5 x LSI 9207-8e dualport SAS pci-e host bus adapters
> Intel 10g fibre ethernet (dual port)
> 2 x SSD for log cache
> 2 x SSD for cache
> 23 x Dell MD1200 with 3T, 4T, or 6T NLSAS disks (a mix of Toshiba, Western
> Digital, and Seagate drives - basically whatever Dell sends)
>
> zpool setup:
> 23 x 12-disk raidz2 glued together.  276 total disks.  Basically each new
> 12-disk MD1200 is a new raidz2 added to the pool.
>
> Total size: ~797T
>
> We have an identical server to which we replicate changes via zfs snapshots
> every few minutes.  The whole setup has been up and running for a few years
> now with no issues.  As we run low on space we purchase two additional MD1200
> shelves (one for each system) and add the new raidz2 into the pool on-the-fly.
>
> The only real issue we've had is that sometimes a disk fails in such a way
> (think Monty Python and the Holy Grail: "I'm not dead yet") that the disk
> hasn't failed outright but is timing out and slows the whole array to a
> standstill until we can manually find and remove the disk.  Another problem
> is that once a disk has been replaced, the resilver process can sometimes
> take an eternity.  We have also found that the snapshot replication process
> can interfere with the resilver process - the resilver gets stuck at 99% and
> never ends - so we end up stopping replication, or doing only one replication
> a day, until the resilver is done.
>
> The last helpful hint I have is lowering all the drive timeouts; see
> http://everycity.co.uk/alasdair/2011/05/adjusting-drive-timeouts-with-mdb-on-solaris-or-openindiana/
> for details.
>
> [Fred]: A zpool with 280 drives in production is pretty big! I think the
> 2000 drives were just in testing. It is true that huge pools bring lots of
> operational challenges. I have run into a similar sluggishness issue caused
> by a dying disk. Just curious, what cluster software is used in
> http://everycity.co.uk/alasdair/2011/05/adjusting-drive-timeouts-with-mdb-on-solaris-or-openindiana/ ?

Thanks.

Fred



[developer] Re: [zfs] an interesting survey -- the zpool with most disks you have ever built

2016-03-07 Thread Fred Liu
2016-03-07 13:55 GMT+08:00 Julian Elischer :
> On 6/03/2016 9:30 PM, Fred Liu wrote:
>>
>> 2016-03-05 0:01 GMT+08:00 Freddie Cash :
>>
>>> On Mar 4, 2016 2:05 AM, "Fred Liu"  wrote:

 2016-03-04 13:47 GMT+08:00 Freddie Cash :
>
> Currently, I just use a simple coordinate system. Columns are letters,
>>>
>>> rows are numbers.
>
> "smartos-disc...@lists.smartos.org" >>>
 、
>>>
>>> developer 、
>>>
>>> illumos-developer 、
>>>
>>> omnios-discuss 、
>>>
>>> Discussion list for OpenIndiana 、
>>>
>>> illumos-zfs 、
>>>
>>> "zfs-disc...@list.zfsonlinux.org" 、
>>>
>>> "freebsd...@freebsd.org" 、
>>>
>>> "zfs-de...@freebsd.org" 
>>>
> Each disk is partitioned using GPT with the first (only) partition
>>>
>>> starting at 1 MB and covering the whole disk, and labelled with the
>>> column/row where it is located (disk-a1, disk-g6, disk-p3, etc).

 [Fred]: So you manually pull off all the drives one by one to locate
>>>
>>> them?
>>>
>>> When putting the system together for the first time, I insert each disk
>>> one at a time, wait for it to be detected, partition it, then label it
>>> based on physical location.  Then do the next one.  It's just part of the
>>> normal server build process, whether it has 2 drives, 20 drives, or 200
>>> drives.
>>>
>>> We build all our own servers from off-the-shelf parts; we don't buy
>>> anything pre-built from any of the large OEMs.
>>>
>> [Fred]: Gotcha!
>>
>>
> The pool is created using the GPT labels, so the label shows in "zpool
>>>
>>> list" output.

 [Fred]:  What will the output look like?
>>>
>>> From our smaller backups server, with just 24 drive bays:
>>>
>>> $ zpool status storage
>>>   pool: storage
>>>  state: ONLINE
>>> status: Some supported features are not enabled on the pool. The pool can
>>>         still be used, but some features are unavailable.
>>> action: Enable all features using 'zpool upgrade'. Once this is done,
>>>         the pool may no longer be accessible by software that does not
>>>         support the features. See zpool-features(7) for details.
>>>   scan: scrub canceled on Wed Feb 17 12:02:20 2016
>>> config:
>>>
>>>         NAME             STATE     READ WRITE CKSUM
>>>         storage          ONLINE       0     0     0
>>>           raidz2-0       ONLINE       0     0     0
>>>             gpt/disk-a1  ONLINE       0     0     0
>>>             gpt/disk-a2  ONLINE       0     0     0
>>>             gpt/disk-a3  ONLINE       0     0     0
>>>             gpt/disk-a4  ONLINE       0     0     0
>>>             gpt/disk-a5  ONLINE       0     0     0
>>>             gpt/disk-a6  ONLINE       0     0     0
>>>           raidz2-1       ONLINE       0     0     0
>>>             gpt/disk-b1  ONLINE       0     0     0
>>>             gpt/disk-b2  ONLINE       0     0     0
>>>             gpt/disk-b3  ONLINE       0     0     0
>>>             gpt/disk-b4  ONLINE       0     0     0
>>>             gpt/disk-b5  ONLINE       0     0     0
>>>             gpt/disk-b6  ONLINE       0     0     0
>>>           raidz2-2       ONLINE       0     0     0
>>>             gpt/disk-c1  ONLINE       0     0     0
>>>             gpt/disk-c2  ONLINE       0     0     0
>>>             gpt/disk-c3  ONLINE       0     0     0
>>>             gpt/disk-c4  ONLINE       0     0     0
>>>             gpt/disk-c5  ONLINE       0     0     0
>>>             gpt/disk-c6  ONLINE       0     0     0
>>>           raidz2-3       ONLINE       0     0     0
>>>             gpt/disk-d1  ONLINE       0     0     0
>>>             gpt/disk-d2  ONLINE       0     0     0
>>>             gpt/disk-d3  ONLINE       0     0     0
>>>             gpt/disk-d4  ONLINE       0     0     0
>>>             gpt/disk-d5  ONLINE       0     0     0
>>>             gpt/disk-d6  ONLINE       0     0     0
>>>         cache
>>>           gpt/cache0     ONLINE       0     0     0
>>>           gpt/cache1     ONLINE       0     0     0
>>>
>>> errors: No known data errors
>>>
>>> The 90-bay systems look the same, just that the letters go all the way to
>>> p (so disk-p1 through disk-p6).  And there's one vdev that uses 3 drives
>>> from each chassis (7x 6-disk vdev only uses 42 drives of the 45-bay
>>> chassis, so there's lots of spares if using a single chassis; using two
>>> chassis, there's enough drives to add an extra 6-disk vdev).
>>>
>> [Fred]: It looks like the gpt label shown in "zpool status" only works in
>> FreeBSD/FreeNAS. Are you using FreeBSD/FreeNAS? I can't find a similar
>> possibility in illumos/Linux.
>
>
> Ah, that's a trick... FreeBSD exports an actual /dev/gpt/{your-label-goes-here}
> for each labeled partition it finds.
> So it's not ZFS doing anything special... it's what FreeBSD is calling the
> partition.
>

Super cool!

Fred
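
For anyone who wants to reproduce the labelling scheme described above, here is
a minimal sketch of the FreeBSD side (the device name da0 is a placeholder; the
label and pool name follow the convention in the zpool status output above):

  # partition a disk with GPT and label the partition by physical slot
  gpart create -s gpt da0
  gpart add -t freebsd-zfs -a 1m -l disk-a1 da0

  # GEOM then exposes the label as /dev/gpt/disk-a1, which can be used
  # directly when building the pool, so it shows up in zpool status
  zpool create storage raidz2 gpt/disk-a1 gpt/disk-a2 gpt/disk-a3 \
      gpt/disk-a4 gpt/disk-a5 gpt/disk-a6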



Re: [zfs] [developer] Re: [smartos-discuss] an interesting survey -- the zpool with most disks you have ever built

2016-03-07 Thread Liam Slusser
I don't have a 2000-drive array (that's amazing!) but I do have two
280-drive arrays which are in production.  Here are the generic stats:

server setup:
OpenIndiana oi_151
1 server rack
Dell r720xd 64g ram with mirrored 250g boot disks
5 x LSI 9207-8e dualport SAS pci-e host bus adapters
Intel 10g fibre ethernet (dual port)
2 x SSD for log cache
2 x SSD for cache
23 x Dell MD1200 with 3T, 4T, or 6T NLSAS disks (a mix of Toshiba, Western
Digital, and Seagate drives - basically whatever Dell sends)

zpool setup:
23 x 12-disk raidz2 glued together.  276 total disks.  Basically each new
12-disk MD1200 is a new raidz2 added to the pool.

Total size: ~797T
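
A rough sketch of what adding one of those 12-disk shelves as a new raidz2
vdev looks like (the pool name "tank" and the cXtYdZ device names below are
placeholders, not from the thread; real names would come from format/cfgadm):

  # dry-run first: top-level vdevs cannot be removed once added,
  # so verify the layout before committing
  zpool add -n tank raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 \
      c5t6d0 c5t7d0 c5t8d0 c5t9d0 c5t10d0 c5t11d0

  # then add the new 12-disk raidz2 to the pool for real
  zpool add tank raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 \
      c5t6d0 c5t7d0 c5t8d0 c5t9d0 c5t10d0 c5t11d0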

We have an identical server to which we replicate changes via zfs snapshots
every few minutes.  The whole setup has been up and running for a few years
now with no issues.  As we run low on space we purchase two additional MD1200
shelves (one for each system) and add the new raidz2 into the pool on-the-fly.
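
The replication described above is typically built on incremental zfs
send/receive; a hand-rolled sketch of one cycle (the dataset, snapshot names
and standby host are hypothetical, and the thread does not say how this
particular setup schedules it):

  # take a new recursive snapshot, then ship only the delta since the
  # previous snapshot that already exists on the standby
  zfs snapshot -r tank/data@2016-03-07-1205
  zfs send -R -i @2016-03-07-1200 tank/data@2016-03-07-1205 | \
      ssh standby-host zfs receive -F tank/data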

The only real issue we've had is that sometimes a disk fails in such a way
(think Monty Python and the Holy Grail: "I'm not dead yet") that the disk
hasn't failed outright but is timing out and slows the whole array to a
standstill until we can manually find and remove the disk.  Another problem
is that once a disk has been replaced, the resilver process can sometimes
take an eternity.  We have also found that the snapshot replication process
can interfere with the resilver process - the resilver gets stuck at 99% and
never ends - so we end up stopping replication, or doing only one replication
a day, until the resilver is done.

The last helpful hint I have is lowering all the drive timeouts; see
http://everycity.co.uk/alasdair/2011/05/adjusting-drive-timeouts-with-mdb-on-solaris-or-openindiana/
for details.
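
On illumos/OpenIndiana the tunable usually adjusted for this is the sd driver's
per-command timeout; a hedged sketch (defaults and exact values can vary by
release, so treat the numbers as examples only):

  # show the current SCSI command timeout (seconds) on the running kernel
  echo "sd_io_time/D" | mdb -k

  # lower it from the 60-second default to 10 seconds on the live system
  # (the 0t prefix marks a decimal value)
  echo "sd_io_time/W 0t10" | mdb -kw

  # make the change persistent across reboots
  echo "set sd:sd_io_time = 10" >> /etc/system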

thanks,
liam






On Sun, Mar 6, 2016 at 10:18 PM, Fred Liu  wrote:

>
>
> 2016-03-07 14:04 GMT+08:00 Richard Elling :
>
>>
>> On Mar 6, 2016, at 9:06 PM, Fred Liu  wrote:
>>
>>
>>
>> 2016-03-06 22:49 GMT+08:00 Richard Elling <
>> richard.ell...@richardelling.com>:
>>
>>>
>>> On Mar 3, 2016, at 8:35 PM, Fred Liu  wrote:
>>>
>>> Hi,
>>>
>>> Today when I was reading the introduction to Jeff's new nuclear weapon,
>>> DSSD D5's CUBIC RAID, the interesting survey question -- the zpool with
>>> the most disks you have ever built -- popped into my mind.
>>>
>>>
>>> We test to 2,000 drives. Beyond 2,000 there are some scalability issues
>>> that impact failover times.
>>> We’ve identified these and know what to fix, but need a real customer at
>>> this scale to bump it to
>>> the top of the priority queue.
>>>
>>> [Fred]: Wow! 2000 drives almost need 4~5 whole racks!
>>
>>>
>>> Since ZFS doesn't support nested vdevs, the maximum fault tolerance should
>>> be three (from raidz3).
>>>
>>>
>>> Pedantically, it is N, because you can have N-way mirroring.
>>>
>>
>> [Fred]: Yeah. That is just pedantic. N-way mirroring of every disk works
>> in theory and rarely happens in reality.
>>
>>>
>>> That is a constraint if you want to build a very large pool.
>>>
>>>
>>> Scaling redundancy by increasing parity improves data loss protection by
>>> about 3 orders of
>>> magnitude. Adding capacity by striping reduces data loss protection by
>>> 1/N. This is why there is
>>> not much need to go beyond raidz3. However, if you do want to go there,
>>> adding raidz4+ is
>>> relatively easy.
>>>
>>
>> [Fred]: I assume you used striped raidz3 vdevs in your storage mesh of
>> 2000 drives. If that is true, the probability of a four-disk failure among
>> 2000 drives will not be so low. Plus, resilvering takes longer as
>> single-disk capacity grows. Furthermore, over-provisioning spare disks vs.
>> raidz4+ would be a worthwhile trade-off when the storage mesh is at the
>> scale of 2000 drives.
>>
>>
>> Please don't assume, you'll just hurt yourself :-)
>> For example, do not assume the only option is striping across raidz3
>> vdevs. Clearly, there are many
>> different options.
>>
>
> [Fred]: Yeah. Assumptions often stray far from the facts! ;-) Is designing
> a storage mesh with 2000 drives a business secret? Or is it just too
> complicated to elaborate on?
> Never mind. ;-)
>
> Thanks.
>
> Fred
>
>


Re: [developer] Re: [smartos-discuss] an interesting survey -- the zpool with most disks you have ever built

2016-03-07 Thread Jort Bloem

We currently have 84 disks; 72 internal, 12 in a jbod.

The internal drives are named: (Front|Back)Row([1-6])Column([1-4])(Near|Far)

(e.g. FrontRow1Column4Far )

We use /etc/zfs/vdev_id.conf to name them that way, by path, to ensure that 
they stay where they are put.

Our jbod disks are named JBOD(\d+)Disk(\d+)  - currently we have 1 jbod with 12 
disks.
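
A minimal sketch of what such a vdev_id.conf can look like (the by-path values
below are invented placeholders; real ones come from "ls -l /dev/disk/by-path"):

  # /etc/zfs/vdev_id.conf -- map physical slots to stable alias names
  alias FrontRow1Column1Near /dev/disk/by-path/pci-0000:03:00.0-sas-phy0-lun-0
  alias FrontRow1Column1Far  /dev/disk/by-path/pci-0000:03:00.0-sas-phy1-lun-0
  alias JBOD1Disk1           /dev/disk/by-path/pci-0000:81:00.0-sas-phy0-lun-0

  # after editing, regenerate the /dev/disk/by-vdev/* links
  udevadm trigger

The pool is then built against /dev/disk/by-vdev/<alias>, so zpool status
reports the physical location names rather than raw sdX device names.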

Currently all our disks are in 6-disk raidz2 sets; they were added one set at
a time.

When we got to 66 disks, we had some weirdness and I thought I might have to
rebuild the pool. Since we can't remove drives from the array (except redundant
disks), I decided to create a new storage pool instead. This also means that if
we do have a catastrophic failure (e.g. lose 3 disks from the same raidz2), we
still have some data. Then, when we got the JBOD, there was talk of moving it
around, so I put it in a separate pool too.

How does reliability scale as I add more redundancy?
For example, if I have 36 disks (the lowest common multiple), what is the
reliability of:

6 stripes of 6 disks as raidz2
4 stripes of 9 disks as raidz3
3 stripes of 12 disks as raidz3

Jort

On 07/03/16 18:06, Fred Liu wrote:




2016-03-06 22:49 GMT+08:00 Richard Elling <richard.ell...@richardelling.com>:

On Mar 3, 2016, at 8:35 PM, Fred Liu 
<fred_...@issi.com> wrote:

Hi,

Today when I was reading the introduction to Jeff's new nuclear weapon, DSSD
D5's CUBIC RAID, the interesting survey question -- the zpool with the most
disks you have ever built -- popped into my mind.

We test to 2,000 drives. Beyond 2,000 there are some scalability issues that 
impact failover times.
We’ve identified these and know what to fix, but need a real customer at this 
scale to bump it to
the top of the priority queue.

[Fred]: Wow! 2000 drives almost need 4~5 whole racks!

Since ZFS doesn't support nested vdevs, the maximum fault tolerance should be
three (from raidz3).

Pedantically, it is N, because you can have N-way mirroring.

[Fred]: Yeah. That is just pedantic. N-way mirroring of every disk works in 
theory and rarely happens in reality.

That is a constraint if you want to build a very large pool.

Scaling redundancy by increasing parity improves data loss protection by about 
3 orders of
magnitude. Adding capacity by striping reduces data loss protection by 1/N. 
This is why there is
not much need to go beyond raidz3. However, if you do want to go there, adding 
raidz4+ is
relatively easy.

[Fred]: I assume you used striped raidz3 vdevs in your storage mesh of 2000
drives. If that is true, the probability of a four-disk failure among 2000
drives will not be so low. Plus, resilvering takes longer as single-disk
capacity grows. Furthermore, over-provisioning spare disks vs. raidz4+ would
be a worthwhile trade-off when the storage mesh is at the scale of 2000 drives.

Thanks.

Fred


--

richard.ell...@richardelling.com
+1-760-896-4422






[developer] incremental replication stream of a fs tree with lots of snapshots trips assert in zfs recv

2016-03-07 Thread Lauri Tirkkonen
Hi, I just reported the following bug:
https://www.illumos.org/issues/6729

Dan McDonald asked me to ping the lists in addition to creating the bug,
so here you go.

-- 
Lauri Tirkkonen | lotheac @ IRCnet




Re: [zfs] [developer] Re: [smartos-discuss] an interesting survey -- the zpool with most disks you have ever built

2016-03-07 Thread Richard Elling

> On Mar 6, 2016, at 9:06 PM, Fred Liu  wrote:
> 
> 
> 
> 2016-03-06 22:49 GMT+08:00 Richard Elling:
> 
>> On Mar 3, 2016, at 8:35 PM, Fred Liu wrote:
>> 
>> Hi,
>> 
>> Today when I was reading the introduction to Jeff's new nuclear weapon,
>> DSSD D5's CUBIC RAID, the interesting survey question -- the zpool with the
>> most disks you have ever built -- popped into my mind.
> 
> We test to 2,000 drives. Beyond 2,000 there are some scalability issues that 
> impact failover times.
> We’ve identified these and know what to fix, but need a real customer at this 
> scale to bump it to
> the top of the priority queue.
> 
> [Fred]: Wow! 2000 drives almost need 4~5 whole racks! 
>> 
>> Since ZFS doesn't support nested vdevs, the maximum fault tolerance should be
>> three (from raidz3).
> 
> Pedantically, it is N, because you can have N-way mirroring.
>  
> [Fred]: Yeah. That is just pedantic. N-way mirroring of every disk works in 
> theory and rarely happens in reality.
> 
>> That is a constraint if you want to build a very large pool.
> 
> Scaling redundancy by increasing parity improves data loss protection by 
> about 3 orders of 
> magnitude. Adding capacity by striping reduces data loss protection by 1/N. 
> This is why there is
> not much need to go beyond raidz3. However, if you do want to go there, 
> adding raidz4+ is 
> relatively easy.
> 
> [Fred]: I assume you used striped raidz3 vdevs in your storage mesh of 2000
> drives. If that is true, the probability of a four-disk failure among 2000
> drives will not be so low. Plus, resilvering takes longer as single-disk
> capacity grows. Furthermore, over-provisioning spare disks vs. raidz4+ would
> be a worthwhile trade-off when the storage mesh is at the scale of 2000
> drives.

Please don't assume, you'll just hurt yourself :-)
For example, do not assume the only option is striping across raidz3 vdevs. 
Clearly, there are many
different options.
 -- richard
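
A back-of-envelope reading of the parity-vs-striping rule of thumb above (not
from the thread; a standard MTTDL-style approximation that assumes independent
disk failures, with n disks per vdev, parity level p, and N top-level vdevs):

\[
\mathrm{MTTDL}_{\mathrm{vdev}} \approx
\frac{\mathrm{MTTF}^{\,p+1}}{n(n-1)\cdots(n-p)\,\mathrm{MTTR}^{\,p}},
\qquad
\mathrm{MTTDL}_{\mathrm{pool}} \approx \frac{\mathrm{MTTDL}_{\mathrm{vdev}}}{N}
\]

Each additional level of parity multiplies MTTDL by roughly MTTF/MTTR, which is
on the order of 10^3 for, say, a 100,000-hour drive MTTF and a resilver time of
around 100 hours (the "3 orders of magnitude"); striping N vdevs divides it by
N (the "1/N").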







[developer] Re: [zfs] an interesting survey -- the zpool with most disks you have ever built

2016-03-07 Thread Julian Elischer

On 6/03/2016 9:30 PM, Fred Liu wrote:

2016-03-05 0:01 GMT+08:00 Freddie Cash :


On Mar 4, 2016 2:05 AM, "Fred Liu"  wrote:

2016-03-04 13:47 GMT+08:00 Freddie Cash :

Currently, I just use a simple coordinate system. Columns are letters,

rows are numbers.

"smartos-disc...@lists.smartos.org" 
、

developer 、

illumos-developer 、

omnios-discuss 、

Discussion list for OpenIndiana 、

illumos-zfs 、

"zfs-disc...@list.zfsonlinux.org" 、

"freebsd...@freebsd.org" 、

"zfs-de...@freebsd.org" 


Each disk is partitioned using GPT with the first (only) partition

starting at 1 MB and covering the whole disk, and labelled with the
column/row where it is located (disk-a1, disk-g6, disk-p3, etc).

[Fred]: So you manually pull off all the drives one by one to locate

them?

When putting the system together for the first time, I insert each disk
one at a time, wait for it to be detected, partition it, then label it
based on physical location.  Then do the next one.  It's just part of the
normal server build process, whether it has 2 drives, 20 drives, or 200
drives.

We build all our own servers from off-the-shelf parts; we don't buy
anything pre-built from any of the large OEMs.


[Fred]: Gotcha!



The pool is created using the GPT labels, so the label shows in "zpool

list" output.

[Fred]:  What will the output look like?

​From our smaller backups server, with just 24 drive bays:

$ zpool status storage
  pool: storage
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not
        support the features. See zpool-features(7) for details.
  scan: scrub canceled on Wed Feb 17 12:02:20 2016
config:

        NAME             STATE     READ WRITE CKSUM
        storage          ONLINE       0     0     0
          raidz2-0       ONLINE       0     0     0
            gpt/disk-a1  ONLINE       0     0     0
            gpt/disk-a2  ONLINE       0     0     0
            gpt/disk-a3  ONLINE       0     0     0
            gpt/disk-a4  ONLINE       0     0     0
            gpt/disk-a5  ONLINE       0     0     0
            gpt/disk-a6  ONLINE       0     0     0
          raidz2-1       ONLINE       0     0     0
            gpt/disk-b1  ONLINE       0     0     0
            gpt/disk-b2  ONLINE       0     0     0
            gpt/disk-b3  ONLINE       0     0     0
            gpt/disk-b4  ONLINE       0     0     0
            gpt/disk-b5  ONLINE       0     0     0
            gpt/disk-b6  ONLINE       0     0     0
          raidz2-2       ONLINE       0     0     0
            gpt/disk-c1  ONLINE       0     0     0
            gpt/disk-c2  ONLINE       0     0     0
            gpt/disk-c3  ONLINE       0     0     0
            gpt/disk-c4  ONLINE       0     0     0
            gpt/disk-c5  ONLINE       0     0     0
            gpt/disk-c6  ONLINE       0     0     0
          raidz2-3       ONLINE       0     0     0
            gpt/disk-d1  ONLINE       0     0     0
            gpt/disk-d2  ONLINE       0     0     0
            gpt/disk-d3  ONLINE       0     0     0
            gpt/disk-d4  ONLINE       0     0     0
            gpt/disk-d5  ONLINE       0     0     0
            gpt/disk-d6  ONLINE       0     0     0
        cache
          gpt/cache0     ONLINE       0     0     0
          gpt/cache1     ONLINE       0     0     0

errors: No known data errors

The 90-bay systems look the same, just that the letters go all the way to
p (so disk-p1 through disk-p6).  And there's one vdev that uses 3 drives
from each chassis (7x 6-disk vdev only uses 42 drives of the 45-bay
chassis, so there's lots of spares if using a single chassis; using two
chassis, there's enough drives to add an extra 6-disk vdev).


[Fred]: It looks like the gpt label shown in "zpool status" only works in
FreeBSD/FreeNAS. Are you using FreeBSD/FreeNAS? I can't find a similar
possibility in illumos/Linux.


Ah, that's a trick... FreeBSD exports an actual
/dev/gpt/{your-label-goes-here} for each labeled partition it finds.
So it's not ZFS doing anything special... it's what FreeBSD is calling
the partition.


Thanks,

Fred
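
On the illumos/Linux side of Fred's question: Linux gets a comparable effect
through udev, which exposes named GPT partitions under /dev/disk/by-partlabel
(a hedged sketch; /dev/sdb and the label are placeholders). ZFS on Linux users
more often use /etc/zfs/vdev_id.conf aliases for the same purpose, as Jort
describes elsewhere in this digest.

  # name the GPT partition after its physical slot (sgdisk is from gdisk)
  sgdisk -n 1:0:0 -t 1:BF01 -c 1:disk-a1 /dev/sdb

  # udev then creates /dev/disk/by-partlabel/disk-a1, which can be used
  # when creating or importing the pool
  zpool create storage mirror /dev/disk/by-partlabel/disk-a1 \
      /dev/disk/by-partlabel/disk-a2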

