Re: [zfs-discuss] NFS4-sharing-ZFS issues

2008-05-22 Thread Thommy M. Malmström
Bob Friesenhahn wrote:
 On Wed, 21 May 2008, Will Murnane wrote:
 So, my questions are:
 * Are there options I can set server- or client-side to make Solaris
 child mounts happen automatically (i.e., match the Linux behavior)?
 * Will this behave with automounts?  What I'd like to do is list
 /export/home in the automount master file, but not all the child
 filesystems.
 
 Here is the answer you were looking for:
 
 In /etc/auto_home:
 # Home directory map for automounter
 #
 *   server:/home/
 
 This works on Solaris 9, Solaris 10, and OS-X Leopard.

Shouldn't that be

*   server:/export/home/
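
For reference, a minimal sketch of the two maps involved (the auto_master line is an assumption, since the thread only shows auto_home; "server" is a placeholder):

# /etc/auto_master -- point the automounter at the indirect map
/home   auto_home

# /etc/auto_home -- wildcard entry; the & expands to the matched key
# (the user name), so /home/alice mounts server:/export/home/alice
*   server:/export/home/&

The trailing-slash form shown in the thread may simply have lost a trailing & in transit; check automount(1M) for the exact wildcard semantics on your release.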


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] List of supported multipath drivers

2008-05-22 Thread Raja Subramanian
I have a Hitachi SMS100 iSCSI array attached to my Nexenta 1.0 box. The
storage array has dual ports/controllers and I have dual bge NICs on my server.
I'm not using an iSCSI HBA in my setup.

I'm unable to get MPxIO working, and the format command always shows my disk 
targets multiple times.  Can you please tell me what I should put in 
scsi_vhci.conf?  My config is below.


# iscsiadm list target
Target: iqn.1994-04.jp.co.hitachi:rsd.d8a.t.11089.1a001
Alias: -
TPGT: 1
ISID: 402a
Connections: 1
Target: iqn.1994-04.jp.co.hitachi:rsd.d8a.t.11089.0a001
Alias: -
TPGT: 1
ISID: 402a
Connections: 1


# format
...
AVAILABLE DISK SELECTIONS:
...
   2. c2t2d0 DEFAULT cyl 2558 alt 2 hd 128 sec 32
  /iscsi/[EMAIL PROTECTED],1
   3. c2t3d0 DEFAULT cyl 2558 alt 2 hd 128 sec 32
  /iscsi/[EMAIL PROTECTED],1
...
Specify disk (enter its number): 2
selecting c2t2d0
...
format> inquiry
Vendor:   HITACHI
Product:  DF600F
Revision: 



cat /kernel/drv/iscsi.conf
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License, Version 1.0 only
# (the License).  You may not use this file except in compliance
# with the License.
#
# You can obtain a copy of the license at src/sun_nws/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at src/sun_nws/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets [] replaced with your own identifying
# information: Portions Copyright [] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright 2006 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
#ident  @(#)iscsi.conf 1.4 06/03/22 SMI

name=iscsi parent=/ instance=0;

#
# I/O multipathing feature (MPxIO) can be enabled or disabled using
# mpxio-disable property. Setting mpxio-disable=no will activate
# I/O multipathing; setting mpxio-disable=yes disables the feature.
#
# Global mpxio-disable property:
#
# To globally enable MPxIO on all iscsi ports set:
# mpxio-disable=no;
#
# To globally disable MPxIO on all iscsi ports set:
# mpxio-disable=yes;
#
mpxio-disable=no;


cat /kernel/drv/scsi_vhci.conf
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the License).
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets [] replaced with your own identifying
# information: Portions Copyright [] [name of copyright owner]
#
# CDDL HEADER END
#
# Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
#pragma ident   @(#)scsi_vhci.conf 1.10    07/08/10 SMI
#
name=scsi_vhci class=root;

#
# Load balancing global configuration: setting load-balance=none will cause
# all I/O to a given device (which supports multipath I/O) to occur via one
# path.  Setting load-balance=round-robin will cause each path to the device
# to be used in turn.
#
#load-balance=round-robin;
load-balance=none;

#
# Automatic failback configuration
# possible values are auto-failback=enable or auto-failback=disable
auto-failback=enable;
#BEGIN: FAILOVER_MODULE_BLOCK (DO NOT MOVE OR DELETE)
#
# Declare scsi_vhci failover module paths with 'ddi-forceload' so that
# they get loaded early enough to be available for scsi_vhci root use.
#
# NOTE: Correct operation depends on the value of 'ddi-forceload', this
# value should not be changed. The ordering of entries is from
# most-specific failover modules (with a probe implementation that is
# completely VID/PID table based), to most generic (failover modules that
# are based on T10 standards like TPGS). By convention the last part of a
# failover module path, after /scsi_vhci_, is called the
# failover-module-name, which begins with f_ (like f_asym_sun). The
# failover-module-name is also used in the override mechanism below.
ddi-forceload =
misc/scsi_vhci/scsi_vhci_f_asym_sun,
misc/scsi_vhci/scsi_vhci_f_asym_lsi,
misc/scsi_vhci/scsi_vhci_f_asym_emc,
misc/scsi_vhci/scsi_vhci_f_sym_emc,
misc/scsi_vhci/scsi_vhci_f_sym,
misc/scsi_vhci/scsi_vhci_f_tpgs;

#
# For a device that has a GUID, discovered on a pHCI 
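
(An editorial aside, not from the original thread: on Solaris 10-derived systems, third-party symmetric arrays often need an explicit entry in scsi_vhci.conf before MPxIO will claim them. The VID/PID strings below match the inquiry output shown above, with the vendor field padded to 8 characters; verify the exact syntax against scsi_vhci.conf(4) on your release before using it.)

# /kernel/drv/scsi_vhci.conf -- candidate entry for the DF600F
# (note: every directive ends with a semicolon)
device-type-scsi-options-list =
    "HITACHI DF600F", "symmetric-option";
symmetric-option = 0x1000000;

# After editing, do a reconfiguration reboot and verify:
#   reboot -- -r
#   mpathadm list lu    (both paths should collapse into one logical unit;
#                        format should then show a single target)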

Re: [zfs-discuss] Cannot copy...The operation completed successfully / OpenSolaris 2008

2008-05-22 Thread Dave Koelmeyer
I've got a screen grab of this here:

http://web.mac.com/davekoelmeyer/Dave_Koelmeyer/Dave_Koelmeyer_-_OpenSolaris_2008.05_CIFS_File_Server_OS_Win_XP_Copy_Prob_1.html

Also, I'm seeing the behaviour described in that Nexenta forum link, where the 
file copy seemingly gets right to the very end, then craps out.
 
 


Re: [zfs-discuss] Weirdly inflated files sizes in Mac OS 10.5.2 Finder / Opensolaris 2008

2008-05-22 Thread Dave Koelmeyer
This is what my colleague is seeing:

http://web.mac.com/davekoelmeyer/Dave_Koelmeyer/Dave_Koelmeyer_-_OpenSolaris_2008.05_CIFS_File_Server_OS_10.5.2_Prob_1.html

Can any experts help with leads on this one?
 
 


Re: [zfs-discuss] NFS4-sharing-ZFS issues

2008-05-22 Thread Joerg Schilling
Richard Elling [EMAIL PROTECTED] wrote:

  In /etc/auto_home:
  # Home directory map for automounter
  #
  *   server:/home/
 
  This works on Solaris 9, Solaris 10, and OS-X Leopard.
  
  And Linux, too!  Thank you for the answer!  This makes my life much easier.
 

 <geezer alert>
 This solution is so old, almost 20 years old, that many of us
 forget that it isn't the default... :-)
 </geezer_alert>

It is easier to understand when you know that BSD and Linux adopted a 
clone of the old Sun automounter after Sun replaced it with something better ;-)

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] Pause Solaris with ZFS compression busy by doing a cp?

2008-05-22 Thread Neil Perrin

 I also noticed (perhaps by design) that a copy with compression off almost
 instantly returns, but the writes continue LONG after the cp process claims
 to be done. Is this normal?

Yes this is normal. Unless the application is doing synchronous writes
(eg DB) the file will be written to disk at the convenience of the FS.
Most fs operate this way. It's too expensive to synchronously write
out data, so it's batched up and written asynchronously.

 Wouldn't closing the file ensure it was written to disk?

No.

 Is that tunable somewhere?

No. For ZFS you can use sync(1M) which will force out all transactions
for all files in the pool. That is expensive though. 

Neil.


Re: [zfs-discuss] Pause Solaris with ZFS compression busy by doing a cp?

2008-05-22 Thread Bart Smaalders
Neil Perrin wrote:
 I also noticed (perhaps by design) that a copy with compression off almost
 instantly returns, but the writes continue LONG after the cp process claims
 to be done. Is this normal?
 
 Yes this is normal. Unless the application is doing synchronous writes
 (eg DB) the file will be written to disk at the convenience of the FS.
 Most fs operate this way. It's too expensive to synchronously write
 out data, so it's batched up and written asynchronously.
 
 Wouldn't closing the file ensure it was written to disk?
 
 No.
 
 Is that tunable somewhere?
 
 No. For ZFS you can use sync(1M) which will force out all transactions
 for all files in the pool. That is expensive though. 
 
 Neil.

Your application can call f[d]sync when it's done writing the file
and before it does the close if it wants all the data on disk.
This has been standard operating procedure for many, many
years.

 From TFMP:

DESCRIPTION
  The fsync() function moves all modified data and  attributes
  of  the  file  descriptor  fildes  to a storage device. When
  fsync() returns, all in-memory modified  copies  of  buffers
  associated  with  fildes  have  been written to the physical
  medium. The fsync() function is different from sync(), which
  schedules disk I/O for all files  but returns before the I/O
  completes. The fsync() function forces all outstanding  data
  operations  to  synchronized  file integrity completion (see
  fcntl.h(3HEAD) definition of O_SYNC.)

...

USAGE
  The fsync() function should be  used  by  applications  that
  require  that  a  file  be in a known state. For example, an
  application that  contains  a  simple  transaction  facility
  might  use   fsync() to ensure that all changes to a file or
  files caused by a  given  transaction  were  recorded  on  a
  storage medium.
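
A minimal sketch of the pattern Bart describes (not code from the thread; path and function names are made up): write the file, then fsync() before close() so the data is on stable storage when the function returns.

```c
/*
 * Write buf to path and force it to stable storage before returning.
 * Sketch only: error reporting is reduced to a -1 return code.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

/* Returns 0 once the data has reached the storage device, -1 on error. */
int write_durably(const char *path, const char *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        return -1;
    }

    /* close() alone does NOT flush data to disk; the fsync() above did. */
    return close(fd);
}
```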

- Bart

-- 
Bart Smaalders  Solaris Kernel Performance
[EMAIL PROTECTED]   http://blogs.sun.com/barts
You will contribute more with mercurial than with thunderbird.


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-22 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Robin Guo wrote:
|   At least, s10u6 will contain L2ARC cache, ZFS as root filesystem, etc..

Any details on this L2ARC thing? I see some references on Google (a
cache device) but no in-depth description.

- --
Jesus Cea Avion _/_/  _/_/_/_/_/_/
[EMAIL PROTECTED] - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:[EMAIL PROTECTED] _/_/_/_/  _/_/_/_/_/
~   _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
El amor es poner tu felicidad en la felicidad de otro - Leibniz
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.8 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iQCVAwUBSDXu85lgi5GaxT1NAQJRNQP+LauaUCQ+rdV6AYTe1ZK/Y9LpPEfCa+U8
hkuCnUdqJiqFLDM/TDMRLNkK/CmzhmjTRyF3cu054MNJpiw8MqRc3/pUQUgV/NVX
ot2J90Qwwrsz7lAOItBnGLMnM/yShOovpb5joZjPT/A14OZXYNFmlzDrMBHjyRSG
jjXhmLbrJD4=
=DiFU
-END PGP SIGNATURE-


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-22 Thread Richard Elling
Jesus Cea wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Robin Guo wrote:
 |   At least, s10u6 will contain L2ARC cache, ZFS as root filesystem, etc..

 Any details on this L2ARC thing? I see some references on Google (a
 cache device) but no in-depth description.

   
Sure.  The concept is quite simple, really. We observe that
solid state memories can be very fast, when compared to
spinning rust (disk) drives. The Adaptive Replacement Cache
(ARC) uses main memory as a read cache.  But sometimes
people want high performance, but don't want to spend money
on main memory. So, the Level-2 ARC can be placed on a
block device, such as a fast [solid state] disk which may even
be volatile.  This may be very useful for those cases where the
actual drive is located far away in time (eg across the internet)
but near-by, fast disks are readily available.  Since the L2ARC
is only a read cache, it doesn't have to be nonvolatile.  This
opens up some interesting possibilities for some applications
which have large data stored (> RAM) where you might get
some significant performance improvements with local, fast
devices.

The PSARC case materials go into some detail:
http://opensolaris.org/os/community/arc/caselog/2007/618/
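
In practice (a hedged sketch; pool and device names are hypothetical), an L2ARC device is attached to a pool as a cache vdev:

# Add a fast (e.g. solid state) device to the pool as a read cache:
zpool add tank cache c3t0d0

# It then appears under a "cache" section in the pool layout:
zpool status tank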

 -- richard
