Can you cite the differences between ZFS and FAT filesystems?
Rahul wrote:
Can you cite the differences between ZFS and FAT filesystems?
You are joking, aren't you?
Have you read any of the ZFS documentation?
Ian
Hi,
First of all, I really should warn you that I'm very new to Solaris; I'll
happily share my thoughts, but be aware that there's not a lot of experience
backing them up.
From what you've said and the logs you've posted, I suspect you're hitting
recoverable read errors. ZFS wouldn't flag
On Sun, Aug 3, 2008 at 9:31 AM, Rahul [EMAIL PROTECTED] wrote:
Can you cite the differences between ZFS and FAT filesystems?
Assuming you are serious, the technical bits can be found here:
http://en.wikipedia.org/wiki/Comparison_of_file_systems
But there is a bigger, fundamental difference
The second disk doesn't have the root pool on slice 2 - it is on slice 0 as
with the first disk. All I did differently was to create a slice 2 covering the
whole Solaris FDISK primary partition. If you then issue this command as before:
installgrub /boot/grub/stage1 /boot/grub/stage2
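For reference, installgrub(1M) also takes the raw device of the target boot
slice as its final argument. A minimal sketch, assuming the second disk is
c1t1d0 and the root pool lives on slice 0 (the device name here is only an
example):

# write the GRUB stage1/stage2 onto slice 0 of the second mirror disk
$ installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0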
My previous reply via email did not get linked to this post, so let me resend
it:
can roles run cron jobs ?),
No. You need a user who can take on the role.
Darn, back to the drawing board.
I don't have all the context on this, but Solaris RBAC roles *can* run cron
jobs. Roles don't have to
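A minimal sketch of one way to set this up; the role and user names are
hypothetical, and the profile/authorization assignments for the role are
omitted here:

# create a role, assign it to an existing user, and give the role a crontab
$ roleadd -m -d /export/home/backupop backupop
$ passwd backupop
$ usermod -R backupop alice
# as root, edit the role's crontab so its jobs run under the role account
$ crontab -e backupop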
Hi Tim,
So, I've got a pretty basic solution:
Every time the service starts, we check for the existence of a snapshot
[...] - if one doesn't exist, then we take a snapshot under the policy set
down by that instance.
This does sound like a valid alternative solution for this requirement if
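A minimal sketch of the check-then-snapshot step the service could run at
startup, assuming a hypothetical dataset and snapshot name:

#!/bin/sh
# take the baseline snapshot only if it does not already exist
SNAP=tank/data@baseline
zfs list -t snapshot "$SNAP" > /dev/null 2>&1 || zfs snapshot "$SNAP"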
I have to say, looking at that confuses me a little. How can the two disks
be mirrored when the partition tables don't match?
On Sun, Aug 3, 2008 at 6:00 AM, andrew [EMAIL PROTECTED] wrote:
OK, I've put up some screenshots and a copy of my menu.lst to clarify my
setup:
Ross wrote:
Hi,
First of all, I really should warn you that I'm very new to Solaris; I'll
happily share my thoughts, but be aware that there's not a lot of experience
backing them up.
From what you've said and the logs you've posted, I suspect you're hitting
recoverable read errors. ZFS
On Thu, 31 Jul 2008, Paul Fisher wrote:
Syslog is funny in that it does a lot of open/write/close cycles so that
log rotation can work trivially. Those are metadata updates, and on NFS each
implies a COMMIT. This leads us back to the old "Solaris NFS over ZFS
is slow" discussion, where we talk
mh == Matt Harrison [EMAIL PROTECTED] writes:
mh I'm worried about is if the entire batch is failing slowly
mh and will all die at the same time.
If you can download smartctl, you can use the approach described here:
http://web.Ivy.NET/~carton/rant/ml/raid-findingBadDisks-0.html
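A minimal sketch of what that boils down to, assuming smartmontools is
installed; the device name is only an example, and the -d scsi option may or
may not be needed depending on the controller:

# overall health plus the drive's logged errors
$ smartctl -H -l error -d scsi /dev/rdsk/c1t0d0s0
# kick off a long self-test, then check the results later with -l selftest
$ smartctl -t long -d scsi /dev/rdsk/c1t0d0s0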
According to the hard disk drive guide at
http://www.storagereview.com/guide2000/ref/hdd/index.html, a whopping
36% of data loss is due to human error, and 49% is due to
hardware or system malfunction. With proper pool design, ZFS
addresses most of the 49% of data loss due to
On Sun, 2008-08-03 at 11:42 -0500, Bob Friesenhahn wrote:
ZFS makes human error really easy. For example:
$ zpool destroy mypool
Note that zpool destroy can be undone by zpool import -D (if you get
to it before the disks are overwritten).
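For example, with the pool name from the example above:

# list destroyed pools that are still recoverable, then re-import one
$ zpool import -D
$ zpool import -D mypool    # add -f if the import is refused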
Hi,
I have two servers with zones. I have LUNs on a SAN that will contain application
data which will be switched from zone A on Server 1 to zone A-failover on
Server 2.
What is the best way to set this up?
I think that it should work if I create a zpool and use legacy mountpoints. I
have to
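A minimal sketch of the handoff, assuming a hypothetical pool name and LUN
device name; in practice the cluster/failover software would drive the
export/import:

# on Server 1: create the pool on the shared LUN, keep the mountpoint legacy
$ zpool create appdata c3t600A0B800012345Ad0
$ zfs set mountpoint=legacy appdata
# at failover time: release the pool on Server 1, take it over on Server 2
$ zpool export appdata      # Server 1
$ zpool import appdata      # Server 2 (use -f only if Server 1 is really down)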
Hi Matt,
If it's all 3 disks, I wouldn't have thought it likely to be disk errors, and I
don't think it's a ZFS fault as such. You might be better off posting the question
in the storage or help forums to see if anybody there can shed more light on
this.
Ross
Miles Nordin wrote:
mh == Matt Harrison [EMAIL PROTECTED] writes:
mh I'm worried about is if the entire batch is failing slowly
mh and will all die at the same time.
If you can download smartctl, you can use the approach described here:
Hi there,
I am currently evaluating OpenSolaris as a replacement for my Linux
installations. I installed it as a Xen domU, so there is a remote chance that
my observations are caused by Xen.
First, my understanding of zpool scrub is: OK, go ahead, and rewrite
each block of each
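For what it's worth, a scrub reads and verifies every block against its
checksum and repairs from redundancy only when verification fails; it does not
blindly rewrite everything. A minimal example, with a hypothetical pool name:

# start a scrub, then watch progress and any errors it finds or repairs
$ zpool scrub tank
$ zpool status -v tank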
On Sun, Aug 3, 2008 at 8:48 PM, Matt Harrison
[EMAIL PROTECTED] wrote:
Miles Nordin wrote:
mh == Matt Harrison [EMAIL PROTECTED] writes:
mh I'm worried about is if the entire batch is failing slowly
mh and will all die at the same time.
Matt, can you please post the output
Todd E. Moore wrote:
I'm working with a group that wants to commit every single write all the
way to disk - flushing or bypassing all the caches each time.
The fsync() call will flush the ZIL. As for the disk's cache: if given the
entire disk, ZFS enables the disk's write cache by default. Rather
Matt Harrison wrote:
Hi everyone,
I've been running a ZFS fileserver for about a month now (on snv_91) and
it's all working really well. I'm scrubbing once a week and nothing has
come up as a problem yet.
I'm a little worried, as I've just noticed these messages in
/var/adm/messages and I
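For reference, "scrubbing once a week" is typically just a root crontab entry
along these lines; the pool name and schedule here are only examples:

# run a scrub every Sunday at 03:00
0 3 * * 0 /usr/sbin/zpool scrub tank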
Johan Hartzenberg wrote:
On Sun, Aug 3, 2008 at 8:48 PM, Matt Harrison
[EMAIL PROTECTED] wrote:
Miles Nordin wrote:
mh == Matt Harrison [EMAIL PROTECTED] writes:
mh I'm worried about is if the entire batch is failing slowly
mh and will all die at the same time.
Matt, can you
Richard Elling wrote:
Matt Harrison wrote:
Aug 2 14:46:06 exodus Error for Command: read_defect_data
Error Level: Informational
key here: Informational
Aug 2 14:46:06 exodus scsi: [ID 107833 kern.notice] Requested
Block: 0 Error Block: 0
Aug 2
Jens wrote:
Hi there,
I am currently evaluating OpenSolaris as a replacement for my Linux
installations. I installed it as a Xen domU, so there is a remote chance
that my observations are caused by Xen.
First, my understanding of zpool scrub is: OK, go ahead, and rewrite
each
Malachi de Ælfweald wrote:
I have to say, looking at that confuses me a little. How can the two
disks be mirrored when the partition tables don't match?
Welcome to ZFS! In traditional disk mirrors,
disk A block 0 == disk B block 0
disk A block 1 == disk B block 1
...
disk A
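A minimal illustration of the point, with hypothetical device names: the two
sides of a ZFS mirror only need to be large enough, so the slices backing them
do not have to come from matching partition tables:

# the mirror is sized to the smaller side; the slice layouts need not match
$ zpool create tank mirror c0t0d0s0 c1t0d0s3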
There are two disks in one ZFS pool used as a mirror, so we know that the same
data is on both disks. I want to know how I can migrate them into two separate
pools, so that I can later read and write them separately (just as with a UFS
mirror, where we can mount each side separately).
Thanks.
This
Hi,
Can you give some disadvantages of the ZFS file system?
Please, it's urgent...
Help me.
On Sun, 2008-08-03 at 20:46 -0700, Rahul wrote:
Hi,
Can you give some disadvantages of the ZFS file system?
In what context? Relative to what?
Bob
On Sun, 3 Aug 2008, Rahul wrote:
Hi,
Can you give some disadvantages of the ZFS file system?
Yes. It provides very poor performance for large-file random-access
read/write of 128 bytes at a time. Is that enough info?
ZFS is great but it is not perfect in every regard.
It is easy to build
Rahul wrote:
Hi,
Can you give some disadvantages of the ZFS file system?
Yes, it's too easy to administer.
This makes it rough to charge a lot as a sysadmin.
All the problems, manual decisions during fsck and data recovery,
headaches after a power failure or getting disk drives mixed up