Re: [zfs-discuss] zfs reliability under xen

2009-05-22 Thread John Levon
On Sun, May 17, 2009 at 02:16:01PM +0300, Ahmed Kamal wrote:

> I am wondering whether the reliability of solaris/zfs is still guaranteed
> if I run zfs not directly on real hardware, but under Xen virtualization.
> The plan is to give the xen guest raw access to the physical disks. I
> remember zfs having problems with hardware that lies about disk write
> ordering; I wonder how that is handled under Xen, or whether that issue
> has been completely resolved.

You can read the frontend sources here:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/xen/io/xdf.c

If both dom0 and domU are Solaris, then any disk cache flushes are
passed through via FLUSH_DISKCACHE. If the dom0 is Linux, then we
attempt to emulate a flush by using WRITE_BARRIER (annoyingly, this
requires us to write a block as well, so in this case, we cache one).
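
To make the shape of that concrete, here is a rough sketch of the decision
(the helper and field names below are invented for illustration; only the
BLKIF_OP_* operations correspond to the real blkif protocol, and xdf.c
itself is the authoritative version):

static int
xdf_flush_sketch(xdf_t *vdp)
{
	if (vdp->xdf_feat_flush) {
		/* Solaris backend: pass the flush straight through. */
		return (xdf_submit_op(vdp, BLKIF_OP_FLUSH_DISKCACHE,
		    NULL, 0));
	}

	if (vdp->xdf_feat_barrier) {
		/*
		 * Linux dom0 without FLUSH_DISKCACHE support: emulate the
		 * flush with a write barrier.  A barrier request must carry
		 * data, so re-write the block we cached earlier for exactly
		 * this purpose.
		 */
		return (xdf_submit_op(vdp, BLKIF_OP_WRITE_BARRIER,
		    vdp->xdf_cached_blk, DEV_BSIZE));
	}

	return (ENOTSUP);	/* no reliable way to flush */
}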

In the backend (xdb or xpvtap), we pass along the flush request, either
via an in-kernel flush:

(void) ldi_ioctl(vdp->xs_ldi_hdl, DKIOCFLUSHWRITECACHE, NULL,
    FKIOCTL, kcred, NULL);

or via VDFlush() in the VirtualBox code we use (which essentially ends
up as an fsync()).

Thus as long as the ioctl and/or fsync are obeyed, things should be
good. Hope that's clearer.
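
(If you want to double-check that the device at the bottom of the stack
accepts flushes at all, a quick userland test along the following lines
should do; this is just a sketch, not part of xdb:)

#include <sys/types.h>
#include <sys/dkio.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	int fd;

	if (argc != 2) {
		(void) fprintf(stderr, "usage: %s /dev/rdsk/...\n", argv[0]);
		return (2);
	}

	/* Open the raw device and ask it to flush its write cache. */
	if ((fd = open(argv[1], O_RDWR)) == -1 ||
	    ioctl(fd, DKIOCFLUSHWRITECACHE, NULL) == -1) {
		perror(argv[1]);
		return (1);
	}
	(void) printf("%s: flush accepted\n", argv[1]);
	(void) close(fd);
	return (0);
}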

regards
john


[zfs-discuss] Dreadful lofi read performance on snv_111

2009-04-06 Thread John Levon

On snv_111 OpenSolaris; heaped (the NFS server used below) is snv_101b; both are ZFS:

# mount -F hsfs /rpool/dc/media/OpenSolaris.iso /mnt
# ptime cp /mnt/boot/boot_archive /var/tmp

real     3:31.453461873
user        0.003283729
sys         0.376784567
# mount -F hsfs /net/heaped/export/netimage/opensolaris/vnc-fix.iso /mnt2
# ptime cp /mnt2/boot/boot_archive /var/tmp

real        1.442180764
user        0.004013447
sys         0.442550604
# mount -F hsfs /net/localhost/rpool/dc/media/OpenSolaris.iso /mnt3
# ptime cp /mnt3/boot/boot_archive /var/tmp

real     3:41.182920499
user        0.004244172
sys         0.430159730

I see a couple of bugs about lofi performance, like 6382683, but I'm not sure
if this is related; it seems to be a newer issue.
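
(One way to take hsfs out of the picture and see whether lofi itself is the
slow part, purely as a suggestion; the lofi device name will vary:)

# lofiadm -a /rpool/dc/media/OpenSolaris.iso
/dev/lofi/1
# ptime dd if=/dev/lofi/1 of=/dev/null bs=128k count=1000
# ptime dd if=/rpool/dc/media/OpenSolaris.iso of=/dev/null bs=128k count=1000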

Any ideas?

regards
john


Re: [zfs-discuss] Dreadful lofi read performance on snv_111

2009-04-06 Thread John Levon
On Mon, Apr 06, 2009 at 04:46:12PM +0700, Fajar A. Nugraha wrote:

> On Mon, Apr 6, 2009 at 4:41 PM, John Levon <john.le...@sun.com> wrote:
>> I see a couple of bugs about lofi performance, like 6382683, but I'm not
>> sure if this is related; it seems to be a newer issue.
>
> Isn't it 6806627?
>
> http://opensolaris.org/jive/thread.jspa?threadID=98043&tstart=0

Ah, I thought that made 111, but it sounds like it's going into the respin
instead - I should have checked.

thanks
john


Re: [zfs-discuss] xVM GRUB entry incorrect with ZFS root

2008-08-28 Thread John Levon
On Thu, Aug 28, 2008 at 09:25:14AM -0700, Trevor Watson wrote:

> Looking at the GRUB menu, it appears as though the flag -B $ZFS-BOOTFS
> needs to be passed to the kernel. Is this something I can add to the
> kernel$ /boot/$ISADIR/xen.gz line, or is there some other mechanism
> required for booting Solaris xVM from ZFS?

You need to add it to the next line (the module$ ... line), since -B options
are arguments to the Solaris kernel, not to xen.gz. This was a bug that's now
fixed in the latest LU.
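
Something along these lines, with the paths and pool name purely
illustrative; the point is that -B $ZFS-BOOTFS belongs on the unix module$
line rather than on the xen.gz kernel$ line:

title Solaris xVM
findroot (pool_rpool,0,a)
kernel$ /boot/$ISADIR/xen.gz
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive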

regards
john


Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-06-30 Thread John Levon
On Mon, Jun 30, 2008 at 04:19:15PM -0700, Jeff Bonwick wrote:

> (2) In a virtualized environment, a better way to get a crash dump
>     would be to snapshot the VM.  This would require a little bit
>     of host/guest cooperation, in that the installer (or dumpadm)
>     would have to know that it's operating in a VM, and the kernel
>     would need some way to notify the VM that it just panicked.
>     Both of these ought to be doable.

This is trivial with xVM, btw: just make the panic routine call
HYPERVISOR_shutdown(SHUTDOWN_crash); and dom0 will automatically create a
full crash dump for the domain, which is readable directly in MDB.

As a refinement, you might want to do this only if a (suitable) place to
take a crash dump isn't available.
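
(A very rough sketch of the shape of it, with that dump-device check folded
in; dump_device_usable() is a made-up placeholder, and this is not the
actual Solaris panic code:)

static void
panic_crash_to_dom0(void)
{
	if (dump_device_usable())
		return;		/* prefer a normal in-guest crash dump */

	/*
	 * Tell the hypervisor this domain has crashed; dom0 then writes
	 * out a full core of the domain, which mdb can open directly.
	 */
	HYPERVISOR_shutdown(SHUTDOWN_crash);
}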

regards
john