[zfs-discuss] ZFS vs FAT

2008-08-03 Thread Rahul
Can you cite the differences between ZFS and FAT filesystems? This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS vs FAT

2008-08-03 Thread Ian Collins
Rahul wrote: Can you cite the differences between ZFS and FAT filesystems? You are joking, aren't you? Have you read any of the ZFS documentation? Ian

Re: [zfs-discuss] are these errors dangerous

2008-08-03 Thread Ross
Hi, First of all, I really should warn you that I'm very new to Solaris, I'll happily share my thoughts but be aware that there's not a lot of experience backing them up. From what you've said, and the logs you've posted I suspect you're hitting recoverable read errors. ZFS wouldn't flag

Re: [zfs-discuss] ZFS vs FAT

2008-08-03 Thread Johan Hartzenberg
On Sun, Aug 3, 2008 at 9:31 AM, Rahul [EMAIL PROTECTED] wrote: Can you cite the differences between ZFS and FAT filesystems? Assuming you are serious, the technical bits can be found here: http://en.wikipedia.org/wiki/Comparison_of_file_systems But there is a bigger, fundamental difference

Re: [zfs-discuss] ZFS boot mirror

2008-08-03 Thread andrew
The second disk doesn't have the root pool on slice 2 - it is on slice 0 as with the first disk. All I did differently was to create a slice 2 covering the whole Solaris FDISK primary partition. If you then issue this command as before: installgrub /boot/grub/stage1 /boot/grub/stage2
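The installgrub step described above can be sketched as follows; the device name is a placeholder and will differ per system.

```shell
# Install the GRUB boot blocks on slice 0 of the second mirror disk
# (c1t1d0s0 is a hypothetical device name; substitute your own).
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
```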

[zfs-discuss] cron and roles (zfs-auto-snapshot 0.11 work)

2008-08-03 Thread Nils Goroll
My previous reply via email did not get linked to this post, so let me resend it: can roles run cron jobs?), No. You need a user who can take on the role. Darn, back to the drawing board. I don't have all the context on this but Solaris RBAC roles *can* run cron jobs. Roles don't have to
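Nils's point can be illustrated: roles get crontab files just like ordinary users, so a role can own a cron job directly. A minimal sketch, assuming a role named `zfssnap` already exists (run as root; the role name and job are hypothetical):

```shell
# List and edit the crontab belonging to the role, as root:
crontab -l zfssnap    # show the role's current crontab
crontab -e zfssnap    # edit it; add a line such as:
# 0 * * * * /usr/sbin/zfs snapshot tank/home@hourly
```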

[zfs-discuss] zfs-auto-snapshot: Use at ? SMF prop caching?

2008-08-03 Thread Nils Goroll
Hi Tim, So, I've got a pretty basic solution: Every time the service starts, we check for the existence of a snapshot [...] - if one doesn't exist, then we take a snapshot under the policy set down by that instance. This does sound like a valid alternative solution for this requirement if
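The check-then-snapshot logic Tim describes might look roughly like this; the filesystem and snapshot names are hypothetical examples:

```shell
FS=tank/home
SNAP="${FS}@zfs-auto-snap:daily"

# On service start: take a snapshot under this instance's policy
# only if one does not already exist.
if ! zfs list -t snapshot -o name | grep -q "^${SNAP}\$"; then
    zfs snapshot "${SNAP}"
fi
```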

Re: [zfs-discuss] ZFS boot mirror

2008-08-03 Thread Malachi de Ælfweald
I have to say, looking at that confuses me a little. How can the two disks be mirrored when the partition tables don't match? On Sun, Aug 3, 2008 at 6:00 AM, andrew [EMAIL PROTECTED] wrote: OK, I've put up some screenshots and a copy of my menu.lst to clarify my setup:

Re: [zfs-discuss] are these errors dangerous

2008-08-03 Thread Matt Harrison
Ross wrote: Hi, First of all, I really should warn you that I'm very new to Solaris, I'll happily share my thoughts but be aware that there's not a lot of experience backing them up. From what you've said, and the logs you've posted I suspect you're hitting recoverable read errors. ZFS

Re: [zfs-discuss] Terrible zfs performance under NFS load

2008-08-03 Thread Bob Friesenhahn
On Thu, 31 Jul 2008, Paul Fisher wrote: Syslog is funny in that it does a lot of open/write/close cycles so that rotate can work trivially. Those are meta-data updates and on NFS each implies a COMMIT. This leads us back to the old solaris nfs over zfs is slow discussion, where we talk

Re: [zfs-discuss] are these errors dangerous

2008-08-03 Thread Miles Nordin
mh == Matt Harrison [EMAIL PROTECTED] writes: mh I'm worried about is if the entire batch is failing slowly mh and will all die at the same time. If you can download smartctl, you can use the approach described here: http://web.Ivy.NET/~carton/rant/ml/raid-findingBadDisks-0.html
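The smartctl approach Miles points to boils down to querying each drive's SMART error counters and running self-tests; a sketch (the `-d scsi` flag and device path are examples and depend on the controller):

```shell
# Dump SMART health, error counters, and self-test log for one disk:
smartctl -a -d scsi /dev/rdsk/c1t0d0

# Kick off a long surface self-test in the background:
smartctl -t long -d scsi /dev/rdsk/c1t0d0
```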

Re: [zfs-discuss] Can I trust ZFS?

2008-08-03 Thread Bob Friesenhahn
According to the hard disk drive guide at http://www.storagereview.com/guide2000/ref/hdd/index.html, a whopping 36% of data loss is due to human error. 49% of data loss was due to hardware or system malfunction. With proper pool design, zfs addresses most of the 49% of data loss due to

Re: [zfs-discuss] Can I trust ZFS?

2008-08-03 Thread Bill Sommerfeld
On Sun, 2008-08-03 at 11:42 -0500, Bob Friesenhahn wrote: Zfs makes human error really easy. For example $ zpool destroy mypool Note that zpool destroy can be undone by zpool import -D (if you get to it before the disks are overwritten).
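For example, the recovery Bill describes (pool name is Bob's example):

```shell
zpool destroy mypool     # the "human error"
zpool import -D          # list destroyed pools that are still recoverable
zpool import -D mypool   # undo the destroy, provided the disks are untouched
```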

[zfs-discuss] Setting up a zpool that can attached to a failover zone

2008-08-03 Thread Chris
Hi, I have 2 servers with zones. I have LUNs on a SAN that will contain application data which will be switched from zone A on Server 1 to zone A-failover on Server 2. What is the best way to set this up? I think that it should work if I create a zpool and use legacy mountpoints. I have to
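A minimal sketch of the legacy-mountpoint approach Chris proposes; pool, device, and zone path names are all hypothetical:

```shell
# On Server 1: create the pool on the shared LUNs and mark it legacy,
# so ZFS leaves mounting to the administrator / zone configuration.
zpool create appool c2t0d0 c2t1d0
zfs set mountpoint=legacy appool
mount -F zfs appool /zones/zoneA/root/appdata

# Failover: release the pool on Server 1, then adopt it on Server 2.
umount /zones/zoneA/root/appdata
zpool export appool      # run on Server 1
zpool import appool      # run on Server 2
```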

Re: [zfs-discuss] are these errors dangerous

2008-08-03 Thread Ross Smith
Hi Matt, If it's all 3 disks, I wouldn't have thought it likely to be disk errors, and I don't think it's a ZFS fault as such. You might be better posting the question in the storage or help forums to see if anybody there can shed more light on this. Ross Date: Sun, 3 Aug 2008 16:48:03

Re: [zfs-discuss] are these errors dangerous

2008-08-03 Thread Matt Harrison
Miles Nordin wrote: mh == Matt Harrison [EMAIL PROTECTED] writes: mh I'm worried about is if the entire batch is failing slowly mh and will all die at the same time. If you can download smartctl, you can use the approach described here:

[zfs-discuss] Scrubbing only checks used data?

2008-08-03 Thread Jens
Hi there, I am currently evaluating OpenSolaris as a replacement for my linux installations. I installed it as a xen domU, so there is a remote chance that my observations are caused by xen. First, my understanding of zpool scrub is: Ok, go ahead, and rewrite each block of each
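As the subject suggests, a scrub reads and checksum-verifies only allocated blocks (free space is skipped), and it rewrites a block only when a checksum error is found and redundancy allows repair. The commands involved, with a hypothetical pool name:

```shell
zpool scrub tank       # verify checksums of all *allocated* blocks
zpool status -v tank   # shows scrub progress and any repaired errors
```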

Re: [zfs-discuss] are these errors dangerous

2008-08-03 Thread Johan Hartzenberg
On Sun, Aug 3, 2008 at 8:48 PM, Matt Harrison [EMAIL PROTECTED] wrote: Miles Nordin wrote: mh == Matt Harrison [EMAIL PROTECTED] writes: mh I'm worried about is if the entire batch is failing slowly mh and will all die at the same time. Matt, can you please post the output

Re: [zfs-discuss] Disabling disks' write-cache in J4200 with ZFS?

2008-08-03 Thread Richard Elling
Todd E. Moore wrote: I'm working with a group that wants to commit all the way to disk every single write - flushing or bypassing all the caches each time. The fsync() call will flush the ZIL. As for the disk's cache, if given the entire disk, ZFS enables its cache by default. Rather
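For the disk-cache side of Richard's answer, the volatile write cache can be inspected (or disabled) per drive with format in expert mode; the menu path is indicative and may vary by firmware:

```shell
# Interactive: inspect or toggle a disk's volatile write cache.
format -e
#   -> select the disk
#   -> cache -> write_cache -> display   (show current state)
#   -> cache -> write_cache -> disable   (force every write to media)
```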

Re: [zfs-discuss] are these errors dangerous

2008-08-03 Thread Richard Elling
Matt Harrison wrote: Hi everyone, I've been running a zfs fileserver for about a month now (on snv_91) and it's all working really well. I'm scrubbing once a week and nothing has come up as a problem yet. I'm a little worried as I've just noticed these messages in /var/adm/message and I

Re: [zfs-discuss] are these errors dangerous

2008-08-03 Thread Matt Harrison
Johan Hartzenberg wrote: On Sun, Aug 3, 2008 at 8:48 PM, Matt Harrison [EMAIL PROTECTED] wrote: Miles Nordin wrote: mh == Matt Harrison [EMAIL PROTECTED] writes: mh I'm worried about is if the entire batch is failing slowly mh and will all die at the same time. Matt, can you

Re: [zfs-discuss] are these errors dangerous

2008-08-03 Thread Matt Harrison
Richard Elling wrote: Matt Harrison wrote: Aug 2 14:46:06 exodus Error for Command: read_defect_data Error Level: Informational key here: Informational Aug 2 14:46:06 exodus scsi: [ID 107833 kern.notice]Requested Block: 0 Error Block: 0 Aug 2

Re: [zfs-discuss] Scrubbing only checks used data?

2008-08-03 Thread Richard Elling
Jens wrote: Hi there, I am currently evaluating OpenSolaris as a replacement for my linux installations. I installed it as a xen domU, so there is a remote chance that my observations are caused by xen. First, my understanding of zpool scrub is: Ok, go ahead, and rewrite each

Re: [zfs-discuss] ZFS boot mirror

2008-08-03 Thread Richard Elling
Malachi de Ælfweald wrote: I have to say, looking at that confuses me a little. How can the two disks be mirrored when the partition tables don't match? Welcome to ZFS! In traditional disk mirrors, disk A block 0 == disk B block 0 disk A block 1 == disk B block 1 ... disk A

[zfs-discuss] how to make two disks of one pool mirror both readable separately.

2008-08-03 Thread wan_jm
there are two disks in one ZFS pool used as a mirror, so we all know that the same data is on both disks. I want to know how I can migrate them into two separate pools, so I can later read and write them separately (just as with a UFS mirror, where we can mount each side separately). thanks. This
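On ZFS releases later than this thread, the `zpool split` subcommand does exactly this: it detaches one side of each mirror into a new, independently importable pool. A sketch with hypothetical pool names:

```shell
# Split one half of the mirrored pool "tank" off into a new pool "tank2":
zpool split tank tank2
zpool import tank2    # tank2 now mounts and writes independently of tank
```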

[zfs-discuss] help me....

2008-08-03 Thread Rahul
hi, can you give some disadvantages of the ZFS file system? please, it's urgent... help me. This message posted from opensolaris.org

Re: [zfs-discuss] help me....

2008-08-03 Thread Bob Netherton
On Sun, 2008-08-03 at 20:46 -0700, Rahul wrote: hi can you give some disadvantages of the ZFS file system?? In what context? Relative to what? Bob

Re: [zfs-discuss] help me....

2008-08-03 Thread Bob Friesenhahn
On Sun, 3 Aug 2008, Rahul wrote: hi can you give some disadvantages of the ZFS file system?? Yes. It provides very poor performance for large-file random-access read/write of 128 bytes at a time. Is that enough info? ZFS is great but it is not perfect in every regard. It is easy to build
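Bob's example is the pathological case for the default 128 KB recordsize: each tiny random read or write touches a full record. For small-record random workloads (databases and the like), the usual mitigation is to match the recordsize to the application's I/O size before the data is written; dataset name and value are hypothetical:

```shell
zfs create -o recordsize=8K tank/db   # set before the files are written
zfs get recordsize tank/db            # confirm the property took effect
```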

Re: [zfs-discuss] help me....

2008-08-03 Thread Neal Pollack
Rahul wrote: hi can you give some disadvantages of the ZFS file system?? Yes, it's too easy to administer. This makes it rough to charge a lot as a sysadmin. All the problems, manual decisions during fsck and data recovery, headaches after a power failure or getting disk drives mixed up