Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-22 Thread Brandon High
On Wed, Apr 21, 2010 at 10:13 PM, Richard Elling
richard.ell...@gmail.com wrote:
 Repeating my previous question in another way...
 So how do they handle mv home/joeuser home/moeuser ?
 Does that mv delete all snapshots below home/joeuser?

If you wanted to go into home/joeuser/.snapshot, I think you'd have
to look at home/.snapshot/joeuser.

I think the way the .snapshot dir works is like this: if the user
looks at $VOL_ROOT/home/user1/files/.snapshot, the directory is
magically redirected to $VOL_ROOT/.snapshot/home/user1/files.
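(A hypothetical illustration of that redirect; the volume layout and snapshot
name here are made up:)

  ls $VOL_ROOT/home/user1/files/.snapshot/nightly.0
  ls $VOL_ROOT/.snapshot/nightly.0/home/user1/files   # same contents, entered from the volume root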

 To make this work in ZFS, does this require that the mv(1)
 command only work when the user has snapshot delete privilege?

No, because the snapshots don't exist for each directory. The path is
internally redirected to the volume's snapshot list, but starting at
the current directory.

 I fear that nothing in this thread is moving the problem closer to
 RFE status :-(

That's a shame; the secret .snapshot directories are really nice to
have. If FUSE were ready, it'd be trivial to work up a POC on top of
ZFS.

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] build a zfs continuous data replication in opensolaris

2010-04-22 Thread Andreas Grüninger
You may have a look at the whitepaper from Torsten Frueauf; see here:
http://sun.systemnews.com/articles/137/4/OpenSolaris/22016

This should give you the functionality of a DRBD-Cluster.

Andreas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-22 Thread Darren J Moffat

On 22/04/2010 00:14, Jason King wrote:

It still has the issue that the end user has to know where the root of
the filesystem is in the tree (assuming it's even accessible on the
system -- might not be for an NFS mount).


For CIFS, ZFS provides the Volume Shadow Service (Previous Versions in 
Windows Explorer).


See this blog entry for pictures of how this looks to a Windows user:

http://blogs.sun.com/amw/entry/using_the_previous_versions_tab

For (local) OpenSolaris clients, the Nautilus file browser works as if 
the snapshots were visible in each directory: click the little clock 
icon between Refresh and Home.  This only works locally, though, not 
over NFS.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-22 Thread Edward Ned Harvey
 From: Richard Elling [mailto:richard.ell...@gmail.com]
 
 Repeating my previous question in another way...
 So how do they handle mv home/joeuser home/moeuser ?
 Does that mv delete all snapshots below home/joeuser?
 To make this work in ZFS, does this require that the mv(1)
 command only work when the user has snapshot delete privilege?
 
 I fear that nothing in this thread is moving the problem closer to
 RFE status :-(

It's not a real directory.  Just like the .zfs directory, it is magically
accessible in every directory or subdirectory, without any need to mkdir or
anything.  Whenever you mv some directory to a new name, there's still a
magical .snapshot directory inside of it, but all the contents are magically
generated upon access, so the new .snapshot will reference the new directory
name.

It's all just a frontend in the filesystem.  You do something like "cd
.zfs" or "cd .snapshot" (or ls, or cp, or whatever) and the filesystem
responds as if that were a real directory.  But there are no *actual*
contents of any actual directory of that name.  The filesystem just
generates a response for you that looks like subdirectories, really
links to the appropriate snapshot data, and makes it simply look as if
they were normal directories.
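(For comparison, a rough sketch of how the existing .zfs control directory
behaves at a ZFS dataset root today; the paths and snapshot names are
hypothetical:)

  $ cd /tank/home/joeuser                 # a dataset root
  $ ls .zfs/snapshot
  daily-2010-04-21  daily-2010-04-22
  $ ls .zfs/snapshot/daily-2010-04-21     # the dataset's tree as of that snapshot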

To move closer to RFE status ... I think the description would have to be
written in verbiage pertaining to ZFS, which is more than I know.  I can
describe how they each work, but I can't make it technical enough to be an
RFE for ZFS.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Snapshots and Data Loss

2010-04-22 Thread Ross Walker

On Apr 20, 2010, at 4:44 PM, Geoff Nordli geo...@grokworx.com wrote:


From: matthew patton [mailto:patto...@yahoo.com]
Sent: Tuesday, April 20, 2010 12:54 PM

Geoff Nordli geo...@grokworx.com wrote:


With our particular use case we are going to do a save state on their
virtual machines, which is going to write 100-400 MB per VM via CIFS or
NFS, then we take a snapshot of the volume, which guarantees we get a
consistent copy of their VM.


maybe you left out a detail or two, but I can't see how your ZFS snapshot
is going to be consistent UNLESS every VM on that ZFS volume is
prevented from doing any and all I/O between the time it finishes save
state and the time you take your ZFS snapshot.

If by save state you mean something akin to VMware's disk snapshot,
why would you even bother with a ZFS snapshot in addition?



We are using VirtualBox as our hypervisor.  When it does a save state it
generates a memory file.  The memory file plus the volume snapshot creates
a consistent state.

In our platform each student's VM points to a unique backend volume via
iSCSI, using VBox's built-in iSCSI initiator.  So there is a one-to-one
relationship between VM and volume.  Just for clarity, a single VM could
have multiple disks attached to it.  In that scenario, a VM would have
multiple volumes.



... end we could have maybe 20-30 VMs getting saved at the same time,
which could mean several GB of data would need to get written in a short
time frame and would need to get committed to disk.

So it seems the best case would be to get those save state writes as sync
and get them into a ZIL.


That I/O pattern is vastly larger than 32kb and so will hit the 'rust' ZIL
(which ALWAYS exists), and if you were thinking an SSD would help you, I
don't see any/much evidence it will buy you anything.




If I set the logbias (b122) to latency, then it will direct all sync IO to
the log device, even if it exceeds the zfs_immediate_write_sz threshold.
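(A minimal sketch of setting and checking the property in question; the
dataset name is hypothetical:)

  zfs set logbias=latency tank/vm-volumes
  zfs get logbias tank/vm-volumes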


If you combine the hypervisor and storage server and have students
connect to the VMs via RDP or VNC or XDM, then you will have the
performance of local storage, and you can even script VirtualBox to take
a snapshot right after a save state.

It is also a lot less difficult to configure on the client side, and
allows you to deploy thin clients instead of full desktops where you can
get away with it.


It also allows you to abstract the hypervisor from the client.

You need a bigger storage server with lots of memory, CPU and storage,
though.

Later, if need be, you can break out the disks to a storage appliance
with an 8Gb FC or 10GbE iSCSI interconnect.


-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] build a zfs continuous data replication in opensolaris

2010-04-22 Thread tranceash
Hi Richard

What do you mean by "a mirror would be simple"? Do you mean to use zfs send and 
receive? Also, is the auto-cdp plugin free with the NexentaStor developer edition?
Is there a detailed explanation of AVS where they explain all the components 
involved, like what the bitmap is for, etc.? If AVS has been around for some time, 
where are the books explaining this technology? Does anyone use it in production? 
It seems to me it has a lot of problems in terms of performance.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] build a zfs continuous data replication in opensolaris

2010-04-22 Thread tranceash
Hi Andreas ,
The paper looks good. Are there any basic examples or guides on AVS or Open HA 
Cluster that explain the components thoroughly? What books or resources do you 
recommend so I can get more information about this? I can't find any books.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Find out which of many FS from a zpool is busy?

2010-04-22 Thread Carsten Aulbert
Hi all,

sorry if this is in any FAQ - then I've clearly missed it.

Is there an easy, or at least straightforward, way to determine which of n ZFS 
filesystems is currently under heavy NFS load?

Once upon a time, when one had old-style file systems and exported these as a 
whole, iostat -x came in handy; however, with zpools, this is not the case 
anymore, right?

Imagine

zpool create tank ... (many devices here)
zfs set sharenfs=on tank
zfs create tank/a
zfs create tank/b
zfs create tank/c
[...]
zfs create tank/z

Now you have this lovely number of ZFS filesystems, but how do you find out 
which user is currently (ab)using the system most?

Cheers
Carsten
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Find out which of many FS from a zpool is busy?

2010-04-22 Thread Peter Tribble
On Thu, Apr 22, 2010 at 3:30 PM, Carsten Aulbert
carsten.aulb...@aei.mpg.de wrote:
 Hi all,

 sorry if this is in any FAQ - then I've clearly missed it.

 Is there an easy or at least straight forward way to determine which of n ZFS
 is currently under heavy NFS load?

 Once upon a time, when one had old style file systems and exported these as a
 whole iostat -x came in handy, however, with zpools, this is not the case
 anymore, right?

fsstat?

Typically along the lines of

fsstat /tank/* 1
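(For example, with explicit mount points and a sample count; the paths are
hypothetical:)

fsstat /tank/a /tank/b /tank/c 1 10    # one line per filesystem per second, 10 samples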

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Find out which of many FS from a zpool is busy?

2010-04-22 Thread Darren J Moffat

On 22/04/2010 15:30, Carsten Aulbert wrote:

sorry if this is in any FAQ - then I've clearly missed it.

Is there an easy or at least straight forward way to determine which of n ZFS
is currently under heavy NFS load?


DTrace Analytics in the SS7000 appliance would be perfect for this.


Once upon a time, when one had old style file systems and exported these as a
whole iostat -x came in handy, however, with zpools, this is not the case
anymore, right?




Now, you have these lovely number of ZFS but how to find out which user is
currently (ab)using the system most?


fsstat might help, but ultimately the question you are asking is one that 
the DTrace Analytics in the SS7000 appliance are perfect for.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Find out which of many FS from a zpool is busy?

2010-04-22 Thread Carsten Aulbert
Hi

On Thursday 22 April 2010 16:33:51 Peter Tribble wrote:
 fsstat?
 
 Typically along the lines of
 
 fsstat /tank/* 1
 

Sh**, I knew about fsstat but never ever even tried to run it on many file 
systems at once. D'oh.

*sigh* well, at least a good one for the archives...

Thanks a lot!

Carsten
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-22 Thread Bob Friesenhahn

On Thu, 22 Apr 2010, Edward Ned Harvey wrote:


To move closer to RFE status ... I think the description would have to be
written in verbage pertaining to zfs which is more than I know.  I can
describe how they each work, but I can't make it technical enough to be an
RFE for zfs.


Someone would also need to verify that this feature is not protected 
by a patent.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] build a zfs continuous data replication in opensolaris

2010-04-22 Thread Andreas Grüninger
If you read this
http://hub.opensolaris.org/bin/download/Project+colorado/files/Whitepaper-OpenHAClusterOnOpenSolaris-external.pdf
and especially the part starting at page 25, you will find a detailed explanation 
of how to implement a storage cluster with shared storage based on COMSTAR and iSCSI.
If you want to install on physical hardware, just ignore the installation and 
configuration of VirtualBox.
IMHO this is simpler than AVS.

Regards

Andreas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-22 Thread Richard Elling
On Apr 22, 2010, at 4:50 AM, Edward Ned Harvey wrote:
 From: Richard Elling [mailto:richard.ell...@gmail.com]
 
 Repeating my previous question in another way...
 So how do they handle mv home/joeuser home/moeuser ?
 Does that mv delete all snapshots below home/joeuser?
 To make this work in ZFS, does this require that the mv(1)
 command only work when the user has snapshot delete privilege?
 
 I fear that nothing in this thread is moving the problem closer to
 RFE status :-(
 
 It's not a real directory.  Just like the .zfs directory, it is magically
 accessible in every directory or subdirectory, without any need to mkdir or
 anything.  Whenever you mv some directory to a new name, there's still a
 magical .snapshot directory inside of it, but all the contents are magically
 generated upon access, so the new .snapshot will reference the new directory
 name.
 
 It's all just a frontend in the filesystem.  You do something like cd
 .zfs or cd .snapshot (or ls, or cp, or whatever) and the filesystem
 responds as if that were a real directory.  But there are no *actual*
 contents of any actual directory of that name.  The filesystem just
 generates a response for you which looks like subdirectories, but are really
 links to the appropriate stuff, and makes it simply look as if it were
 normal directories.

One last try. If you change the real directory structure, how are those
changes reflected in the snapshot directory structure?

Consider:
echo whee > /a/b/c/d.txt
[snapshot]
mv /a/b /a/B

What does /a/B/c/.snapshot point to?  If the answer is nothing, then I see
significantly less value in the feature.
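(For comparison, a minimal sketch of where the data stays reachable in today's
ZFS model, assuming the dataset is named a, mounted at /a, and the snapshot is
called snap1:)

  zfs snapshot a@snap1
  mv /a/b /a/B
  ls /a/.zfs/snapshot/snap1/b/c/d.txt    # still reachable, but only under the old name b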

IIRC, POSIX does not permit hard links to directories. Moving or renaming
the directory structure gets disconnected from the original because these
are relative relationships. Clearly, NetApp achieves this in some manner
which is not constrained by POSIX -- a manner which appears to be beyond 
your ability to describe.

 To move closer to RFE status ... I think the description would have to be
 written in verbage pertaining to zfs which is more than I know.  I can
 describe how they each work, but I can't make it technical enough to be an
 RFE for zfs.

I'm not disputing the value here, but the RFE may need something other than
ZPL.  Today, NetApp might enjoy a temporary advantage because their file
system does not have to be POSIX compliant and they limit the total number of
snapshots. OTOH, the only barrier to someone writing a non-POSIX file system 
layer on ZFS is resources and enthusiasm (unlimited snapshots are already
available on ZFS).
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-22 Thread Joerg Schilling
Richard Elling richard.ell...@gmail.com wrote:

 IIRC, POSIX does not permit hard links to directories. Moving or renaming
 the directory structure gets disconnected from the original because these
 are relative relationships. Clearly, NetApp achieves this in some manner
 which is not constrained by POSIX -- a manner which appears to be beyond 
 your ability to describe.

I have recently seen the name POSIX used frequently, and I am not sure whether 
the people who used the name know what they are talking about. POSIX, for 
example, of course permits hard links to directories.

If you would like a fact-based discussion, I would in general like to see a 
verification of the related claim, as this allows one to prove or disprove it.


There has, for example, never been an explanation of why a special directory 
like .zfs or whatever should not be allowed by POSIX.

Let me give an example.

In summer 2001, a person from Microsoft asked for the Microsoft notation of named 
streams (filename:streamname) to be introduced into POSIX. It was easy to 
explain to him that this would not be POSIX compliant, as it would introduce 
another forbidden character (':') for filenames. Currently only '/' and '\0' 
are disallowed in filenames.

He later asked to introduce a special directory '...' that could hold the named 
streams for files. This is also not POSIX compliant, as it would require 
modifying all implementations of find(1), ls(1), tar(1) and similar tools to 
know about the specific meaning of '...'.

As a result, Sun introduced the attribute directory, runat(1), openat(2)
and similar around August 2001 in order to prove that there is a POSIX-compliant 
way to implement named streams in files.

If we could have a discussion at a similar level, I would be happy to help with 
the discussion.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] In iSCSI hell...

2010-04-22 Thread Maurice Volaski
This sounds like 
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6894775. 
It seems this can be avoided by switching to an LSI card that uses 
mpt_sas. For example, the 9211.


However, certain drives, such as the Western Digital 
WD2002FYPS-01U1B0, can also trigger this behavior.


Apr 21 19:58:43 storage scsi: [ID 107833 kern.warning] WARNING: 
/p...@0,0/pci8086,3...@6/pci8086,3...@0/pci1028,1...@8 (mpt1):

Apr 21 19:58:43 storage Disconnected command timeout for Target 27
Apr 21 19:58:45 storage scsi: [ID 365881 kern.info] 
/p...@0,0/pci8086,3...@6/pci8086,3...@0/pci1028,1...@8 (mpt1):

Apr 21 19:58:45 storage Log info 0x3114 received for target 27.
Apr 21 19:58:45 storage scsi_status=0x0, ioc_status=0x8048, 
scsi_state=0xc
Apr 21 19:58:45 storage scsi: [ID 365881 kern.info] 
/p...@0,0/pci8086,3...@6/pci8086,3...@0/pci1028,1...@8 (mpt1):

Apr 21 19:58:45 storage Log info 0x3114 received for target 27.
Apr 21 19:58:45 storage scsi_status=0x0, ioc_status=0x8048, 
scsi_state=0xc
Apr 21 19:58:45 storage scsi: [ID 365881 kern.info] 
/p...@0,0/pci8086,3...@6/pci8086,3...@0/pci1028,1...@8 (mpt1):

Apr 21 19:58:45 storage Log info 0x3114 received for target 27.
Apr 21 19:58:45 storage scsi_status=0x0, ioc_status=0x8048, 
scsi_state=0xc


--

Maurice Volaski, maurice.vola...@einstein.yu.edu
Computing Support, Rose F. Kennedy Center
Albert Einstein College of Medicine of Yeshiva University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Severe Problems on ZFS server

2010-04-22 Thread Andreas Höschler

Hi all

we are encountering severe problems on our X4240 (64GB, 16 disks) 
running Solaris 10 and ZFS. From time to time (5-6 times a day)


• FrontBase hangs or crashes
• VBox virtual machines hang
• Other applications show a rubber-band effect (white screen) while moving 
windows


I have been tearing my hair out wondering where this comes from. Could be 
software bugs, but in all these applications from different vendors? 
Could it be a Solaris bug or bad memory!? Rather unlikely. I was just hit 
by a thought. On another machine with 6GB RAM I fired up a second 
virtual machine (vbox). This drove the machine almost to a halt. The 
second vbox instance never came up. I finally saw a panel raised by the 
first vbox instance saying that there was not enough memory available (a 
non-severe vbox error) and that the virtual machine was halted!! After killing 
the process of the second vbox I could simply press Resume and the 
first vbox machine continued to work properly.


OK, now this starts to make sense. My idea is that ZFS is 
blocking/allocating all of the available system memory. When an app 
(FrontBase, VBox, ...) is started and suddenly requests larger chunks of 
memory from the system, the malloc calls fail, either because ZFS has allocated 
all the memory or because the system cannot release the memory quickly 
enough and make it available for the requesting apps. So the malloc 
fails or times out or whatever, which is not caught in the apps and 
makes them hang or crash or stall for minutes. Does this make any 
sense? Any similar experiences?


What can I do about that?

Thanks a lot,

 Andreas

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HELP! zpool corrupted data

2010-04-22 Thread Cindy Swearingen

Hi Clint,

Your symptoms point to disk label problems, dangling device links,
or overlapping partitions. All could be related to the power failure.

The OpenSolaris error message (b134, I think you mean) brings up these
bugs:

6912251 describes the dangling links problem, which you might be able
to clear up with devfsadm -C.
6904358 describes this error due to overlapping partitions and points to 
this CR:


http://defect.opensolaris.org/bz/show_bug.cgi?id=13331

If none of the above issues match, review what ZFS thinks the disk
labels are and what the disk labels actually are post power failure.

On the OpenSolaris side, you can use the zdb command to review the disk
label information. For example, I have a pool on c5t1d0, so to review the 
pool's idea of the disk labels and check whether they are coherent, I use 
this command:

# zdb -l /dev/rdsk/c5t1d0s0

Review the above bug info, and if it turns out that the disk labels
need to be recreated, maybe someone who has done this task can help.

Thanks,

Cindy

On 04/21/10 16:23, Clint wrote:

Hello,

Due to a power outage our file server running FreeBSD 8.0p2 will no longer come 
up due to zpool corruption.  I get the following output when trying to import 
the ZFS pool using either a FreeBSD 8.0p2 cd or the latest OpenSolaris snv_143 
cd:

FreeBSD mfsbsd 8.0-RELEASE-p2.vx.sk:/usr/obj/usr/src/sys/GENERIC  amd64
mfsbsd# zpool import
  pool: tank
id: 1998957762692994918
 state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        tank                                            FAULTED  corrupted data
          raidz1                                        ONLINE
            gptid/e895b5d6-4bab-11df-8a83-0019d159e82b  ONLINE
            gptid/e96cf4a2-4bab-11df-8a83-0019d159e82b  ONLINE
            gptid/ea4a127c-4bab-11df-8a83-0019d159e82b  ONLINE
            gptid/eb3160a6-4bab-11df-8a83-0019d159e82b  ONLINE
            gptid/ec02f050-4bab-11df-8a83-0019d159e82b  ONLINE
            gptid/ecdb408b-4bab-11df-8a83-0019d159e82b  ONLINE

mfsbsd# zpool import -f tank
internal error: Illegal byte sequence
Abort (core dumped)


SunOS opensolaris 5.11 snv_134 i86pc i386 i86pc Solaris

r...@opensolaris:/# zpool import -nfFX -R /mnt tank
Assertion failed: rn->rn_nozpool == B_FALSE, file ../common/libzfs_import.c, 
line 1078, function zpool_open_func
Abort (core dumped)


I don't really need to get the server bootable again, but I do need to get the 
data off of one of the file systems. Any help would be greatly appreciated.

Thanks,
Clint

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Shawn Ferry

On Apr 22, 2010, at 1:26 PM, Rich Teer wrote:

 Hi all,
 
 I have a server running SXCE b130 and I use ZFS for all file systems.  I
 also have a couple of workstations running the same OS, and all is well.
 But I also have a MacBook Pro laptop running Snow Leopard (OS X 10.6.3),
 and I have troubles creating files on exported ZFS file systems.
 
 From the laptop, I can read and write existing files on the exported ZFS
 file systems just fine, but I can't create new ones.  My understanding is
 that Mac OS makes extensive use of file attributes so I was wondering if
 this might be the cause of the problem (I know ZFS supports file attributes,
 but I wonder if I have to utter some magic incantation to get them working
 properly with Mac OS).
 
 At the moment I have a workaround: I use sftp to copy the files from the
 laptop to the server.  But this is a pain in the ass and I'm sure there's
 a way to make this just work properly!

I haven't seen this behavior. However, all of my file systems used by my
Mac are pool version 8 fs ver 2. I don't know if that could be part of your
problem or not.

I am attaching in two ways: direct attach, and an iSCSI zvol with the pool and 
FS created locally.

Shawn 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Severe Problems on ZFS server

2010-04-22 Thread Andreas Höschler

Hi all,

we are encountering severe problems on our X4240 (64GB, 16 disks) 
running Solaris 10 and ZFS. From time to time (5-6 times a day)


• FrontBase hangs or crashes
• VBox virtual machine do hang
• Other applications show rubber effect (white screen) while moving 
the windows


I have been tearing my hair off where this comes from. Could be 
software bugs, but in all these applications from different vendors? 
Could be a Solaris bug or bad memory!? Rather unlikely. I just was hit 
by a thought. On another machine with 6GB RAM I fired up a second 
virtual machine (vbox). This drove the machine almost to a halt. The 
second vbox instance never came up. I finally saw a panel raised by 
the first vbox instance that there was not enough memory available 
(non severe vbox error) and the virtual machine was halted!! After 
killing the process of the second vbox I could simply press resume and 
the first vbox machine continued to work properly.


OK, now this starts to make sense. My idea is that ZFS is 
blocking/allocating all of the available system memory. When an app 
(FrontBase, VBox,...) is started and suddenly requests larger chunks 
of memory from the system, the malloc calls fail because ZFS has 
allocated all the memory or because the system cannot release the 
memory quickly enough and make it available fo rthe requesting apps, 
so the malloc fails or times out or whatever which is not catched in 
the apps and makes them hang or crash or stall for minutes. Does this 
make any sense? Any similar experiences?




Follow-up to my own message. On the X4240 I have

set zfs:zfs_arc_max = 0x78000

in /etc/system. Would it be a good idea to reduce that to, say,

set zfs:zfs_arc_max = 0x28000

? Hints greatly appreciated!
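(For reference, the ARC's current size and cap can be checked at runtime via
the arcstats kstat; a sketch:)

  kstat -p zfs:0:arcstats:size     # bytes currently used by the ARC
  kstat -p zfs:0:arcstats:c_max    # the current ARC size limit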

Thanks,

 Andreas

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Rich Teer
On Thu, 22 Apr 2010, Shawn Ferry wrote:

 I haven't seen this behavior. However, all of my file systems used by my
 Mac are pool version 8 fs ver 2. I don't know if that could be part of your
 problem or not.

Thanks for the info.  I should have said that all the file systems I'm using
were created on the Solaris server, rather than the Mac.  It's creating files on
that Solaris-created ZFS file system that I'm having problems with.

Of course, it probably doesn't help that Apple, in their infinite wisdom, canned
native support for ZFS in Snow Leopard (idiots).

 I am attaching two ways direct attach and iSCSI zvol with pool and FS created
 locally.

Ah.  The file systems I'm trying to use are locally attached to the server, and
shared via NFS.

Does anyone else have any ideas?

-- 
Rich Teer, Publisher
Vinylphile Magazine

www.vinylphilemag.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Tomas Ögren
On 22 April, 2010 - Rich Teer sent me these 1,1K bytes:

 Hi all,
 
 I have a server running SXCE b130 and I use ZFS for all file systems.  I
 also have a couple of workstations running the same OS, and all is well.
 But I also have a MacBook Pro laptop running Snow Leopard (OS X 10.6.3),
 and I have troubles creating files on exported ZFS file systems.
 
 From the laptop, I can read and write existing files on the exported ZFS
 file systems just fine, but I can't create new ones.  My understanding is
 that Mac OS makes extensive use of file attributes so I was wondering if
 this might be the cause of the problem (I know ZFS supports file attributes,
 but I wonder if I have to utter some magic incantation to get them working
 properly with Mac OS).

I've noticed some issues with copying files to an SMB share from Mac OS X
clients over the last week or so. I haven't had time to investigate it fully,
but it sure seems EA-related.
Copying a file from SMB to SMB (via the client) works as long as the
file hasn't got any EAs yet. If I, for instance, set "hide file
extension", then it's not working anymore. Adding an EA to an existing file
works, but creating a file with EAs doesn't. So it seems like a Finder bug.

Copying via the Terminal (and cp) works.

 At the moment I have a workaround: I use sftp to copy the files from the
 laptop to the server.  But this is a pain in the ass and I'm sure there's
 a way to make this just work properly!

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Is it safe to disable the swap partition?

2010-04-22 Thread Karl Dalen
If I want to reduce the I/O accesses, for example to SSD media on a laptop,
and I don't plan to run any big applications, is it safe to delete the swap 
device?

How do I configure OpenSolaris to run without swap?
I've tried 'swap -d /dev/zvol/dsk/rpool/swap'
but 'swap -s' still shows the same amount of memory
allocated.
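(For reference, a minimal sketch of the usual steps, assuming the default
rpool/swap zvol; names may differ on your system:)

  swap -l                                # lists the actual swap devices
  swap -d /dev/zvol/dsk/rpool/swap       # removes the zvol from the running system
  # To make it stick across reboots, also remove or comment out the swap line
  # in /etc/vfstab, and optionally reclaim the space with: zfs destroy rpool/swap
  # Note that 'swap -s' reports virtual swap, which includes physical memory,
  # so it will not drop to zero even when no swap devices are configured.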

What happens with the /tmp file system when there is no swap device?
I suppose it could fill up the RAM and cause a crash if not limited.
Is there any other potential problem in running without swap?

Any suggestion would be appreciated
Thanks,
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Alex Blewitt
Rich, Shawn,

 Of course, it probably doesn't help that Apple, in their infinite wisdom, 
 canned
 native suport for ZFS in Snow Leopard (idiots).

For your information, the ZFS project lives (well, limps really) on at 
http://code.google.com/p/mac-zfs. You can get ZFS for Snow Leopard from there 
and we're working on moving forwards from the ancient pool support to something 
more recent. I've relatively recently merged in the onnv-gate repository (at 
build 72) which should make things easier to track in the future.

 Ah.  The file systems I'm trying to use are locally attached to the server, 
 and
 shared via NFS.

What are the problems? I read and write files over a (Mac-exported) ZFS share 
via NFS from Mac clients, and that has no problem at all. It's possible that it 
could be permissions related, especially if you're using NFSv4 - AFAIK the Mac 
client for that is at an alpha stage on Snow Leopard. 

You could try listing the files (from OSX) with ls -...@e which should show you 
all the extended attributes and ACLs to see if that's causing a problem.

Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Rich Teer
On Thu, 22 Apr 2010, Tomas Ögren wrote:

 Copying via terminal (and cp) works.

Interesting: if I copy a file *which has no extended attributes* using cp in
a terminal, it works fine.  If I try to cp a file that has EAs (to the same
destination), it hangs, and I get this error message after a few seconds:

cp file_without_EA /net/zen/export/home/rich
cp file_with_EA /net/zen/export/home/rich
nfs server zen:/export/home: lockd not responding

Note that the first cp is successful.  So, is there some server-side magic I
need to configure?  And if so, what is it?

-- 
Rich Teer, Publisher
Vinylphile Magazine

www.vinylphilemag.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Rich Teer
On Thu, 22 Apr 2010, Alex Blewitt wrote:

Hi Alex,

 For your information, the ZFS project lives (well, limps really) on
 at http://code.google.com/p/mac-zfs. You can get ZFS for Snow Leopard
 from there and we're working on moving forwards from the ancient pool
 support to something more recent. I've relatively recently merged in
 the onnv-gate repository (at build 72) which should make things easier
 to track in the future.

That's good to hear!  I thought Apple yanking ZFS support from Mac OS was
a really dumb idea.  Do you work for Apple?

 What are the problems? I have read-write files over a (Mac-exported)
 ZFS share via NFS to Mac clients, and that has no problem at all. It's
 possible that it could be permissions related, especially if you're
 using NFSv4 - AFAIK the Mac client is an alpha stage of that on Snow
 Leopard.

See my other messages for a description of the problem--but I am using
NFS v4 on the server, so that might be the cause of breakage...

 You could try listing the files (from OSX) with ls -...@e which should
 show you all the extended attributes and ACLs to see if that's causing
 a problem.

That ls command works (it shows files on the server), but of course, none
of the files shown actually have extended attributes...

-- 
Rich Teer, Publisher
Vinylphile Magazine

www.vinylphilemag.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Mike Mackovitch
On Thu, Apr 22, 2010 at 12:40:37PM -0700, Rich Teer wrote:
 On Thu, 22 Apr 2010, Tomas Ögren wrote:
 
  Copying via terminal (and cp) works.
 
 Interesting: if I copy a file *which has no extended attributes* using cp in
 a terminal, it works fine.  If I try to cp a file that has EA (to the same
 destination), it hangs.  But I get this error message after a few seconds:
 
 cp file_without_EA /net/zen/export/home/rich
 cp file_with_EA /net/zen/export/home/rich
 nfs server zen:/export/home: lockd not responding

So, it looks like you need to investigate why the client isn't
getting responses from the server's lockd.

This is usually caused by a firewall or NAT getting in the way.

HTH
--macko
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Severe Problems on ZFS server

2010-04-22 Thread Bob Friesenhahn

On Thu, 22 Apr 2010, Andreas Höschler wrote:

we are encountering severe problems on our X4240 (64GB, 16 disks) running 
Solaris 10 and ZFS. From time to time (5-6 times a day)


• FrontBase hangs or crashes
• VBox virtual machine do hang
• Other applications show rubber effect (white screen) while moving the 
windows


I have been tearing my hair off where this comes from. Could be software 
bugs, but in all these applications from different vendors? Could be a 
Solaris bug or bad memory!? Rather unlikely. I just was hit by a thought. On


I see that no one has responded yet.  You are jumping to the conclusion 
that ZFS and its memory usage are somehow responsible for the problem 
you are seeing.


The problem could be due to a faulty/failing disk, a poor connection 
with a disk, or some other hardware issue.  A failing disk can easily 
make the system pause temporarily like that.


As root you can run '/usr/sbin/fmdump -ef' to see all the fault events 
as they are reported.  Be sure to execute '/usr/sbin/fmadm faulty' to 
see if a fault has already been identified on your system.  Also 
execute '/usr/bin/iostat -xe' to see if there are errors reported 
against some of your disks, or if some are reported as being 
abnormally slow.
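(Collected in one place as a quick sketch; run as root, and note the iostat
interval is an addition here:)

  /usr/sbin/fmadm faulty        # anything already diagnosed as faulted?
  /usr/sbin/fmdump -ef          # watch fault/error events as they are reported
  /usr/bin/iostat -xe 5         # per-device errors and service times, every 5 seconds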


You might also want to verify that your Solaris 10 is current.  I 
notice that you did not identify what Solaris 10 you are using.


another machine with 6GB RAM I fired up a second virtual machine (vbox). This 
drove the machine almost to a halt. The second vbox instance never came up. I 
finally saw a panel raised by the first vbox instance that there was not 
enough memory available (non severe vbox error) and the virtual machine was 
halted!! After killing the process of the second vbox I could simply press 
resume and the first vbox machine continued to work properly.


Maybe you should read the VirtualBox documentation.  There is a note 
about Solaris 10 and about how VirtualBox may fail if it can't get 
enough contiguous memory space.


Maybe I am lucky since I have run three VirtualBox instances at a time 
(2GB allocation each) on my system with no problem at all.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Rich Teer
On Thu, 22 Apr 2010, Mike Mackovitch wrote:

Hi Mike,

 So, it looks like you need to investigate why the client isn't
 getting responses from the server's lockd.
 
 This is usually caused by a firewall or NAT getting in the way.

Great idea--I was indeed connected to my network using the AirPort interface,
through a Wi-Fi router.  So as an experiment, I tried using a hard-wired,
manually set up Ethernet connection.  Same result: no dice.  :-(

I checked the firewall settings on my laptop, and the firewall is turned off.

Do you have any other ideas?  It'd be really nice to get this working!

-- 
Rich Teer, Publisher
Vinylphile Magazine

www.vinylphilemag.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Mike Mackovitch
On Thu, Apr 22, 2010 at 01:54:26PM -0700, Rich Teer wrote:
 On Thu, 22 Apr 2010, Mike Mackovitch wrote:
 
 Hi Mike,
 
  So, it looks like you need to investigate why the client isn't
  getting responses from the server's lockd.
  
  This is usually caused by a firewall or NAT getting in the way.
 
 Great idea--I was indeed connected to my network using the AirPort interface,
 thorugh a Wifi router.  So as an experiment, I tried using a hard-wired,
 manually set up Ethernet connection.  Same result: no dice.  :-(
 
 I checked the firewall settings on my laptop, and the firewall is turned off.
 
 Do you have any other ideas?  It'd be really nice to get this working!

I would also check /var/log/system.log and /var/log/kernel.log on the Mac to
see if any other useful messages are getting logged.

Then I'd grab packet traces with wireshark/tcpdump/snoop *simultaneously* on
the client and the server, reproduce the problem, and then determine which
packets are being sent and which packets are being received.

HTH
--macko
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-22 Thread A Darren Dunham
On Wed, Apr 21, 2010 at 10:10:09PM -0400, Edward Ned Harvey wrote:
  From: Nicolas Williams [mailto:nicolas.willi...@oracle.com]
  
  POSIX doesn't allow us to have special dot files/directories outside
  filesystem root directories.
 
 So?  Tell it to Netapp.  They don't seem to have any problem with it.

And while it's on by default, there is certainly an option to remove it.

-- 
Darren
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Rich Teer
On Thu, 22 Apr 2010, Mike Mackovitch wrote:

 I would also check /var/log/system.log and /var/log/kernel.log on the Mac to
 see if any other useful messages are getting logged.

Ah, we're getting closer.  The latter shows nothing interesting, but system.log
has this line appended the minute I try the copy:

sandboxd[78312]: portmap(78311) deny network-outbound 
/private/var/tmp/launchd/sock

Then, when the attempt times out, these appear:

KernelEventAgent[36]: tid  received event(s) VQ_NOTRESP (1)
KernelEventAgent[36]: tid  type 'nfs', mounted on 
'/net/zen/export/home' from 'zen:/export/home', not responding
KernelEventAgent[36]: tid  found 1 filesystem(s) with problem(s)

Does that shed any more light on this?

-- 
Rich Teer, Publisher
Vinylphile Magazine

www.vinylphilemag.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Severe Problems on ZFS server

2010-04-22 Thread Andreas Höschler

Hi Bob,

The problem could be due to a faulty/failing disk, a poor connection 
with a disk, or some other hardware issue.  A failing disk can easily 
make the system pause temporarily like that.


As root you can run '/usr/sbin/fmdump -ef' to see all the fault events 
as they are reported.  Be sure to execute '/usr/sbin/fmadm faulty' to 
see if a fault has already been identified on your system.  Also 
execute '/usr/bin/iostat -xe' to see if there are errors reported 
against some of your disks, or if some are reported as being 
abnormally slow.


You might also want to verify that your Solaris 10 is current.  I 
notice that you did not identify what Solaris 10 you are using.


Thanks a lot for these hints. I checked all this. On my mirror server I 
found a faulty DIMM with these commands. But on the main server 
exhibiting the described problem everything seems fine.


another machine with 6GB RAM I fired up a second virtual machine 
(vbox). This drove the machine almost to a halt. The second vbox 
instance never came up. I finally saw a panel raised by the first 
vbox instance that there was not enough memory available (non severe 
vbox error) and the virtual machine was halted!! After killing the 
process of the second vbox I could simply press resume and the first 
vbox machine continued to work properly.


Maybe you should read the VirtualBox documentation.  There is a note 
about Solaris 10 and about how VirtualBox may fail if it can't get 
enough contiguous memory space.


Maybe I am lucky since I have run three VirtualBox instances at a time 
(2GB allocation each) on my system with no problem at all.


I have inserted

set zfs:zfs_arc_max = 0x2

in /etc/system and rebooted the machine having 64GB of memory. Tomorrow 
will show whether this did the trick!


Thanks a lot,

 Andreas

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Snapshots and Data Loss

2010-04-22 Thread Geoff Nordli
From: Ross Walker [mailto:rswwal...@gmail.com]
Sent: Thursday, April 22, 2010 6:34 AM

On Apr 20, 2010, at 4:44 PM, Geoff Nordli geo...@grokworx.com wrote:


If you combine the hypervisor and storage server and have students
connect to the VMs via RDP or VNC or XDM then you will have the
performance of local storage and even script VirtualBox to take a
snapshot right after a save state.

A lot less difficult to configure on the client side, and allows you
to deploy thin clients instead of full desktops where you can get away
with it.

It also allows you to abstract the hypervisor from the client.

Need a bigger storage server with lots of memory, CPU and storage
though.

Later, if need be, you can break out the disks to a storage appliance
with an 8GB FC or 10Gbe iSCSI interconnect.


Right, I am in the process now of trying to figure out what the load looks
like with a central storage box and how ZFS needs to be configured to
support that load.  So far what I am seeing is very exciting :)   

We are currently porting over our existing Learning Lab Infrastructure
platform from MS Virtual Server to VBox + ZFS.  When students connect into
their lab environment it dynamically creates their VMs and load balances
them across physical servers.  

Geoff 



  




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-22 Thread A Darren Dunham
On Wed, Apr 21, 2010 at 04:49:30PM +0100, Darren J Moffat wrote:
 /foo is the filesystem
 /foo/bar is a directory in the filesystem
 
 cd /foo/bar/
 touch stuff
 
 [ you wait, time passes; a snapshot is taken ]
 
 At this point /foo/bar/.snapshot/.../stuff exists
 
 Now do this:
 
 rm -rf /foo/bar
 
 There is a snapshot of /foo/bar/stuff in the ZFS model to get to it
 you go to /foo/.zfs/snapshot/name/bar  and in there you will find
 the file called stuff.

Same thing on a netapp except for the name of the virtual directory.

 How do you find what was /foo/bar/stuff in the model where the
 .snapshot directory exists at every subdir rather than just at the
 filesystem root when the subdirs have been removed ?

The .snapshot directory still exists at the filesystem root.  It's not a
replacement for that.

Asking for the contents of /a/b/c/d/e/.snapshot gives you a view as if
you had asked for /a/.snapshot/b/c/d/e (assuming /a is the filesystem).

The benefits arise when the filesystem root is not mounted, or you don't
have access to it, or you don't know where it is, and the hierarchy
isn't under constant change (which is true any place that my users care
about). 

 What does it look like when the directory hierarchy is really deep ?

Same thing that .zfs/snapshot/a/b/c/d/e/f/g/h looks like, but you can
enter the snapshot tree at any directory that exists in the live
filesystem as well as from the top.

-- 
Darren
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Severe Problems on ZFS server

2010-04-22 Thread Bob Friesenhahn

On Fri, 23 Apr 2010, Andreas Höschler wrote:


Maybe I am lucky since I have run three VirtualBox instances at a time (2GB 
allocation each) on my system with no problem at all.


I have inserted

set zfs:zfs_arc_max = 0x2

in /etc/system and rebooted the machine having 64GB of memory. Tomorrow will 
show whether this did the trick!


This *could* help if your server runs a rather strange and 
intermittent program which suddenly requests a huge amount of memory, 
accesses all that memory, and then releases the memory.  ZFS actually 
gives memory back to the kernel when requested, but of course it needs 
to determine which memory should be returned.  It seems unlikely that 
this would cause other applications to freeze unless there is a common 
dependency.  I do limit the size of the ARC on my system because I do 
run programs which request a lot of memory and then quit.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] build a zfs continuous data replication in opensolaris

2010-04-22 Thread tranceash
Hi Andreas 
I will explain what I need. You say it is, IMHO, simpler than AVS; that's good.
I have set up two NexentaCore boxes with ZFS pools and NFS on the first node. Now 
I need to install the Open HA Cluster software with non-shared disks, 
and then make ZFS with NFS highly available. I understand the concepts of HA, 
where one needs heartbeat, disks and failover configs created. 
Are there examples of this setup I can work with? Don't get me wrong, COMSTAR is 
good, but not as fast as NFS.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-22 Thread BM
On Tue, Apr 20, 2010 at 2:18 PM, Ken Gunderson kgund...@teamcool.net wrote:
 Greetings All:

 Granted there has been much fear, uncertainty, and doubt following
 Oracle's take over of Sun, but I ran across this on a FreeBSD mailing
 list post dated 4/20/2010

 ...Seems that Oracle won't offer support for ZFS on opensolaris

 Link here to full post here:

 http://lists.freebsd.org/pipermail/freebsd-questions/2010-April/215269.html

I am not surprised it comes from a FreeBSD mailing list. :) I am amazed at
their BSD conferences, where they present all this *BSD stuff using
Apple Macs (they claim it is FreeBSD, just a very bad version of it),
Ubuntu Linux (not yet BSD) or GNU/Microsoft Windows (oh, everybody
commits that sin, right?) with PowerPoint running on it (sure, who
wants ugly OpenOffice when there aren't brains enough to use LaTeX).

As a starter, please would somebody read this:
http://developers.sun.ru/techdays2010/reports/OracleSolarisTrack/TD_STP_OracleSolarisFuture_Roberts.pdf
I suggest people refrain from broadcasting complete garbage from 
trash-dump places and spreading this kind of FUD to the public; it just 
shakes the air with no meaning behind it.

Take care.

-- 
Kind regards, BM

Things, that are stupid at the beginning, rarely ends up wisely.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD best practices

2010-04-22 Thread thomas
Someone on this list threw out the idea a year or so ago to just set up two 
ramdisk servers, export a ramdisk from each, and create a mirrored slog from them.

Assuming newer-version zpools, this sounds like it could be even safer, since 
there is (supposedly) less of a chance of catastrophic failure if your ramdisk 
setup fails. Use just one remote ramdisk, or two with battery backup... whatever 
meets your paranoia level.

It's not SSD-cheap, but I'm sure you could dream up several options that cost 
less than STEC prices. You could also probably use these machines for multiple 
pools if you've got them. I know, it still probably sounds a bit too cowboy for 
most on this list, though.
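(A rough sketch of the idea, assuming the two remote ramdisks have already been
exported over iSCSI and show up locally under hypothetical device names c2t0d0
and c3t0d0:)

  # on each ramdisk server, create a RAM-backed device and export it via COMSTAR/iSCSI, e.g.:
  ramdiskadm -a slog0 4g
  # on the pool host, once both remote LUNs are visible, attach them as a mirrored slog:
  zpool add tank log mirror c2t0d0 c3t0d0
  zpool status tank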
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-22 Thread Haudy Kazemi

Ian Collins wrote:

On 04/20/10 04:13 PM, Sunil wrote:

Hi,

I have a strange requirement. My pool consists of 2 500GB disks in 
stripe which I am trying to convert into a RAIDZ setup without data 
loss but I have only two additional disks: 750GB and 1TB. So, here is 
what I thought:


1. Carve a 500GB slice (A) in 750GB and 2 500GB slices (B,C) in 1TB.
2. Create a RAIDZ pool out of these 3 slices. Performance will be bad 
because of seeks in the same disk for B and C but its just temporary.
   


If the 1TB drive fails, you're buggered.  So there's not a lot of 
point setting up a raidz.
It is possible to survive failures of a single drive with multiple 
slices on it that are in the same pool.  It requires using a RAIDZ level 
equal to or greater than the number of slices on that drive.  RAIDZ2 on a 1 
TB drive with two slices will survive the same as RAIDZ1 with one slice.


(I'm focusing on addressing data survival here.  Performance will be 
worse than usual, but even this impact may be mitigated by using a 
dedicated ZIL.  Remote and cloud-based data storage using remote iSCSI 
devices and local ZIL devices has been shown to have much better 
performance characteristics than would otherwise have been expected from 
a cloud-based system.  See 
http://blogs.sun.com/jkshah/entry/zfs_with_cloud_storage_and )


With RAIDZ3, you can survive the loss of one drive with 3 slices on it 
that are all in one pool.  (Of course, at that point you can't handle any 
further failures.)  Reliability with this kind of configuration is at 
worst equal to RAIDZ1, but likely better on average, because you can 
tolerate some specific multiple-drive failure combinations that RAIDZ1 
cannot handle.  A similar comparison might be made between the 
reliability of a 4-drive RAIDZ2 pool vs. 4 drives in a stripe-mirror 
arrangement: you get similar usable space, but in one case you can lose 
any 2 drives, while in the other case you can lose any 1 drive and only 
some combinations of 2 drives.


I shared a variation of this idea a while ago in a comment here:
http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z

A how to is below:


You may as well create a pool on the 1TB drive and copy to that.


3. zfs send | recv my current pool data into the new pool.
4. Destroy the current pool.
5. In the new pool, replace B with the 500GB disk freed by the 
destruction of the current pool.
6. Optionally, replace C with second 500GB to free up the 750GB 
completely.


   

Or use the two 500GB and the 750 GB drive for the raidz.


Option to get all drives included:
1.) move all data to 1 TB drive
2.) create RAIDZ1/RAIDZ2 pool using 2* 500 GB drives, 750 GB drive, and 
a sparse file that you delete right after the pool is created.  Your 
pool will be degraded by deleting the sparse file but will still work 
(because it is a RAIDZ).  Use RAIDZ2 if you want ZFS's protections to be 
active immediately (as you'll have 3 out of 4 devices available).

3.) move all data from 1 TB drive to RAIDZ pool
4.) replace sparse file device with 1 TB drive (or 500 GB slice of 1 TB 
drive)

5.) resilver pool

A variation on this is to create a RAIDZ2 using 2* 500 GB drives, 750 GB 
drive, and 2 sparse files.  After the data is moved from the 1 TB drive 
to the RAIDZ2, two 500 GB slices are created on the 1 TB drive.  These 2 
slices in turn are used to replace the 2 sparse files.  You'll end up 
with 3*500GB of usable space and protection from at least 1 drive 
failure (the 1 TB drive) up to 2 drive failures (any of the other 
drives).  Performance caveats of 2 slices on one drive apply.
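(A minimal sketch of the sparse-file steps described above; device names and
sizes are hypothetical, and zpool may need -f to accept a raidz that mixes
disks and a file:)

  mkfile -n 500g /var/tmp/fake500g                       # sparse file, no blocks allocated yet
  zpool create -f newtank raidz2 c1t0d0 c1t1d0 c2t0d0 /var/tmp/fake500g
  rm /var/tmp/fake500g                                   # pool now runs degraded, as intended
  # ... copy the data over from the 1 TB drive ...
  zpool replace newtank /var/tmp/fake500g c3t0d0         # swap in the real disk and resilver
  zpool status newtank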


If you like, you can later add a fifth drive relatively easily by 
replacing one of the slices with a whole drive.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD best practices

2010-04-22 Thread Daniel Carosone
On Thu, Apr 22, 2010 at 09:58:12PM -0700, thomas wrote:
 Assuming newer version zpools, this sounds like it could be even
 safer since there is (supposedly) less of a chance of catastrophic
 failure if your ramdisk setup fails. Use just one remote ramdisk or
 two with battery backup.. whatever meets your paranoia level.   

If the iscsi initiator worked for me at all, I would be trying this.
I liked the idea, but it's just not accessible now.

--
Dan.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss