Re: [zfs-discuss] zpool split how it works?

2010-11-10 Thread Darren J Moffat

On 10/11/2010 11:18, sridhar surampudi wrote:

I was wondering how zpool split works, or how it is implemented.

If a pool pool1 is on a mirror of two devices dev1 and dev2, then using
zpool split I can split off dev2 into a new pool with a new name, say pool-mirror.

How can split change the metadata on dev2 and rename/re-associate it with the
new name, i.e. pool-mirror??


Exactly what isn't clear from the description in the man page?

     zpool split [-R altroot] [-n] [-o mntopts] [-o property=value]
         pool newpool [device ...]

     Splits off one disk from each mirrored top-level vdev in a
     pool and creates a new pool from the split-off disks. The
     original pool must be made up of one or more mirrors and must
     not be in the process of resilvering. The split subcommand
     chooses the last device in each mirror vdev unless overridden
     by a device specification on the command line.

     When using a device argument, split includes the specified
     device(s) in a new pool and, should any devices remain
     unspecified, assigns the last device in each mirror vdev to
     that pool, as it does normally. If you are uncertain about the
     outcome of a split command, use the -n (dry-run) option to
     ensure your command will have the effect you intend.
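
For example, with the two-device mirror you describe (a sketch; dev1 and
dev2 stand in for real /dev/dsk paths):

     # zpool split -n pool1 pool-mirror dev2
     # zpool split pool1 pool-mirror dev2

The first command is a dry run that prints the configuration the new pool
would get; the second actually detaches dev2 and rewrites its on-disk
metadata so that it belongs to pool-mirror.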

Or are you really asking about the implementation details? If you want 
to know how it is implemented, then you need to read the source code.


Here would be a good starting point:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libzfs/common/libzfs_pool.c#zpool_vdev_split

Which ends up in the kernel here:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zfs_ioctl.c#zfs_ioc_vdev_split


--
Darren J Moffat


Re: [zfs-discuss] zpool split how it works?

2010-11-10 Thread Mark J Musante

On Wed, 10 Nov 2010, Darren J Moffat wrote:


On 10/11/2010 11:18, sridhar surampudi wrote:

I was wondering how zpool split works, or how it is implemented.

Or are you really asking about the implementation details? If you want 
to know how it is implemented, then you need to read the source code.


Alternatively, you can read the blog entry I wrote up after it was putback:

http://blogs.sun.com/mmusante/entry/seven_years_of_good_luck


[zfs-discuss] zpool split how it works?

2010-11-10 Thread sridhar surampudi
Hi,

I was wondering how zpool split works, or how it is implemented.

If a pool pool1 is on a mirror of two devices dev1 and dev2, then using
zpool split I can split off dev2 into a new pool with a new name, say pool-mirror.

How can split change the metadata on dev2 and rename/re-associate it with the
new name, i.e. pool-mirror??

Could you please let me know more about it?

Thanks & Regards,
sridhar.


Re: [zfs-discuss] zpool split how it works?

2010-11-10 Thread sridhar surampudi
Hi Darren,

Thanks for your info. 

Sorry, the below might be lengthy:

Yes, I am looking for the actual implementation rather than how to use zpool split.

My requirement is not at the ZFS file system level, and it does not involve ZFS snapshots.


As I understand it, if my zpool, say mypool, is created using

     zpool create mypool mirror device1 device2

then after running

     zpool split mypool newpool device2

I can access device2 through newpool.

The same data is available on newpool as on mypool, as long as there are no
writes/modifications to newpool.
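
A quick way to check that, by the way (a sketch; if I remember right, the
split-off pool starts out exported unless you give -R at split time):

     # zpool import newpool
     # zfs list -r newpool

The datasets listed should match what mypool had at the moment of the split.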

What I am looking for is this:

If my devices (say the zpool is created with only one device, device1) are
from an array and I take an array snapshot (zfs/zpool doesn't come into the
picture, as I take a hardware snapshot), I get a snapshot device, say device2.

I am looking for a way to use the snapshot device device2 by recreating the
zpool and ZFS stack under an alternate name.

zpool split must be making some changes to the metadata of device2 to
associate it with the new name, i.e. newpool.

I want to do the same for a snapshot device created using an array/hardware
snapshot.
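
(I suppose I could compare the vdev labels on device2 before and after a
split by dumping them with zdb — a sketch, the device path is just an
example:

     # zdb -l /dev/dsk/c1t1d0s0

Among other things the label carries the pool name and pool_guid, which is
where the new association should show up.)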

Thanks & Regards,
sridhar.


Re: [zfs-discuss] Maximum zfs send/receive throughput

2010-11-10 Thread Karsten Weiss
 I'm not very familiar with mdb. I've tried this:
 
Ah, this looks much better:

root   641  0.0  0.0 7660 2624 ?  S  Nov 08  2:16 /sbin/zfs receive -dF datapool/share/ (...)

# echo "0t641::pid2proc | ::walk thread | ::findstack -v" | mdb -k
stack pointer for thread ff09236198e0: ff003d9b5670
[ ff003d9b5670 _resume_from_idle+0xf1() ]
  ff003d9b56a0 swtch+0x147()
  ff003d9b56d0 cv_wait+0x61(ff0a4fbd4228, ff0a4fbd40e8)
  ff003d9b5710 dmu_tx_wait+0x80(ff0948aa4600)
  ff003d9b5750 dmu_tx_assign+0x4b(ff0948aa4600, 1)
  ff003d9b57e0 dmu_free_long_range_impl+0x12a(ff0911456d60, ff0a4fbd4028, 0, , 0)
  ff003d9b5840 dmu_free_long_range+0x5b(ff0911456d60, 53e34, 0, )
  ff003d9b58d0 dmu_object_reclaim+0x112(ff0911456d60, 53e34, 13, 1e00, 11, 108)
  ff003d9b5930 restore_object+0xff(ff003d9b5950, ff0911456d60, ff003d9b59c0)
  ff003d9b5a90 dmu_recv_stream+0x48d(ff003d9b5be0, ff094d089440, ff003d9b5ad8)
  ff003d9b5c40 zfs_ioc_recv+0x2c0(ff092492b000)
  ff003d9b5cc0 zfsdev_ioctl+0x10b(b6, 5a1c, 8044e50, 13, ff0948b60e50, ff003d9b5de4)
  ff003d9b5d00 cdev_ioctl+0x45(b6, 5a1c, 8044e50, 13, ff0948b60e50, ff003d9b5de4)
  ff003d9b5d40 spec_ioctl+0x83(ff0921e54640, 5a1c, 8044e50, 13, ff0948b60e50, ff003d9b5de4, 0)
  ff003d9b5dc0 fop_ioctl+0x7b(ff0921e54640, 5a1c, 8044e50, 13, ff0948b60e50, ff003d9b5de4, 0)
  ff003d9b5ec0 ioctl+0x18e(3, 5a1c, 8044e50)
  ff003d9b5f10 sys_syscall32+0x101()

Does this ring a bell with anyone?


Re: [zfs-discuss] How to grow root vdevs?

2010-11-10 Thread Peter Taps
Thank you for your help.

Regards,
Peter


Re: [zfs-discuss] How to create a checkpoint?

2010-11-10 Thread Richard Elling
On Nov 9, 2010, at 11:24 AM, Peter Taps wrote:
 Thank you all for your help. Looks like beadm is the utility I was looking 
 for.

On NexentaStor, the NMC command is setup appliance checkpoint :-)
There is also a GUI form for managing the checkpoints. This works similarly
to beadm, but is easier.
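
If you do go the beadm route, a minimal sketch (the BE name is made up):

     # beadm create before-upgrade     (snapshot the current boot environment)
     # beadm activate before-upgrade   (make it the default for the next boot)
     # beadm list                      (show all boot environments)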
 -- richard



Re: [zfs-discuss] is opensolaris support ended?

2010-11-10 Thread Orvar Korvar
You can upgrade with Update Manager to b134, which is the last build from Sun.

You can also upgrade to b147 if you switch to OpenIndiana. Read more on the
OpenIndiana web site.
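
From memory, the command-line version of the switch is roughly the sketch
below, but check the OpenIndiana site for the current publisher URL before
relying on it:

     # pkg set-publisher -g http://pkg.openindiana.org/dev openindiana.org
     # pkg image-update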


Re: [zfs-discuss] zfs record size implications

2010-11-10 Thread Rob Cohen
Thanks, Richard.  Your answers were very helpful.


Re: [zfs-discuss] HP ProLiant N36L

2010-11-10 Thread Krist van Besien
I just bought one. :-)

My impressions:

- Installed NexentaStor Community Edition on it. All hardware was recognized
and works; no problem there. I am, however, rather underwhelmed by the
NexentaStor system and will probably just install OpenSolaris (b134) on it
this evening. I want to use the box as a NAS, serving CIFS to clients (a
mixture of Mac and Linux machines), but as I don't have that much
administration to do on it, I'll just do it on the command line and forgo
fancy broken GUIs...
- The system is well built. Quality is good. I could get the whole
motherboard tray out without needing tools. It comes with 1 GB of RAM that I
plan to upgrade.
- The system does come with four HD trays and all the screws you need. I
plunked in four 2 TB disks, and a small SSD for the OS.
- The motherboard has a mini-SAS connector, which is connected to the
backplane, and a separate SATA connector that is intended for an optical
drive. I used that to connect an SSD which lives in the optical drive bay.
There is also an internal USB connector you could just put a USB stick in.
- Performance under NexentaStor appears OK. I have to do some real tests,
though.
- It is very quiet. I can certainly live with it in my office. (But I will
move it into the basement anyway.)
- A nice touch is the eSATA connector on the back. It does have a VGA
connector, but no keyboard/mouse ports. This is completely legacy-free...

All in all this is an excellent platform to build a NAS on.


Re: [zfs-discuss] is opensolaris support ended?

2010-11-10 Thread sridhar surampudi
Thanks for your help.

I will check this out.

Regards,
sridhar.