So that leaves us with a Samba vs NFS issue (not related to ZFS). We know
that NFS can create files at a rate of _at most_ one file per server I/O
latency. Samba appears better, and this is what we need to investigate. It
might be better in a way that NFS can borrow (maybe through some better
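One way to compare the two protocols is simply to time file creates on each mounted path. A minimal sketch (run against a local temp directory here; pointing `dirpath` at an NFS or Samba mount would give the per-protocol numbers the message refers to — the function name and parameters are made up for illustration):

```python
import os
import tempfile
import time

def creates_per_second(dirpath, n=200):
    """Create n empty files in dirpath and return the creation rate."""
    start = time.perf_counter()
    for i in range(n):
        # Each create is a synchronous round trip on NFS, which is
        # what bounds the rate to one file per server I/O latency.
        with open(os.path.join(dirpath, f"f{i}"), "w"):
            pass
    return n / (time.perf_counter() - start)

with tempfile.TemporaryDirectory() as d:
    rate = creates_per_second(d)
    print(f"{rate:.0f} creates/sec")
```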
On August 10, 2007 2:20:30 PM +0300 Tuomas Leikola
[EMAIL PROTECTED] wrote:
We call that a mirror :-)
Mirror and raidz suffer from the classic blockdevice abstraction
problem in that they need disks of equal size.
Not that I'm aware of. Mirror and raid-z will simply use the smallest
size of your available disks.
Exactly. The rest is not usable.
Alec Muffett wrote:
Does anyone on this list have experience with a recent board with 6 or more
SATA ports that they know is supported?
Well so far I have only populated 5 of the ports I have available, but my
writeup with my 9-port SATA ASUS mobo is at:
http://www.crypticide.com/dropsafe/article/2091
On 8/10/07, Darren Dunham [EMAIL PROTECTED] wrote:
For instance, it might be nice to create a mirror with a 100G disk and
two 50G disks. Right now someone has to create slices on the big disk
manually and feed them to zpool. Letting ZFS handle everything itself
might be a win for some cases.
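The manual workaround described above can be sketched as follows. All device names are placeholders, and the slice layout on the big disk would be set up first with format(1M):

```shell
# Assume c1t1d0 is the 100G disk, pre-sliced into two ~50G slices
# (s0 and s1); c1t2d0 and c1t3d0 are the 50G disks.
# Pair each slice of the big disk with a whole 50G disk,
# giving a pool of two mirrored vdevs:
zpool create tank \
    mirror c1t1d0s0 c1t2d0 \
    mirror c1t1d0s1 c1t3d0
```

The wish in the message is that ZFS would carve up the big disk like this on its own, instead of the administrator doing it by hand.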
On 8/10/07, Moore, Joe [EMAIL PROTECTED] wrote:
Wishlist: It would be nice to put the whole redundancy definitions into
the zfs filesystem layer (rather than the pool layer): Imagine being
able to set copies=5+2 for a filesystem... (requires a 7-VDEV pool,
and stripes via RAIDz2, otherwise
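For comparison, the per-filesystem redundancy that exists today is the copies property (ditto blocks); the 5+2 parity notation above is purely hypothetical syntax for the wished-for feature:

```shell
# What ZFS offers today: up to three ditto copies per filesystem,
# spread across vdevs when possible (but not guaranteed to land
# on separate disks):
zfs set copies=2 tank/important

# The wishlist item would extend this to parity-style redundancy
# at the filesystem layer, e.g. something like (NOT real syntax):
#   zfs set copies=5+2 tank/important
```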
On Fri, Aug 10, 2007 at 10:23:49AM -0700, Neal Pollack wrote:
Server class: ESB-2 southbridge chipset
Desktop class: ICH-8 and ICH-9 chipsets, on motherboards known as
i965 and Intel P35 chipset boards
Are the i975 chipset boards any less
On 8/9/07, Richard Elling [EMAIL PROTECTED] wrote:
What I'm looking for is a disk full error if ditto cannot be written
to different disks. This would guarantee that a mirror is written on a
separate disk - and the entire filesystem can be salvaged from a full
disk failure.
We call that
On 8/9/07, Mario Goebbels [EMAIL PROTECTED] wrote:
If you're that bent on having maximum redundancy, I think you should
consider implementing real redundancy. I'm also biting the bullet and
going with mirrors (cheaper than RAID-Z for home, and fewer disks needed
to start with).
Currently I am, and as
On August 10, 2007 12:34:23 PM +0300 Tuomas Leikola
[EMAIL PROTECTED] wrote:
On 8/9/07, Richard Elling [EMAIL PROTECTED] wrote:
What I'm looking for is a disk full error if ditto cannot be written
to different disks. This would guarantee that a mirror is written on a
separate disk - and
This is practically the holy grail of dynamic raid - the ability to
dynamically use different redundancy settings on a per-directory
level, and to use a mix of different sized devices and add/remove them
at will.
Well I suspect that arbitrary redundancy configuration is not
something we'll
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Frank Cusack
Sent: Friday, August 10, 2007 7:26 AM
To: Tuomas Leikola
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Force ditto block on different vdev?
On August 10, 2007 2:20:30 PM +0300 Tuomas Leikola
[EMAIL
Hi,
I am wondering what the readers of this list are using to control their
ZFS RAID-Z arrays.
A quote from an under-answered comment on the OpenSolaris device driver
forum (http://www.opensolaris.org/jive/thread.jspa?threadID=32610&tstart=0):
*I'm having a hard time finding any decent
Well I don't understand how you
Is it possible/recommended to create a zpool and zfs setup such that the
OS itself (in root /) is in its own zpool?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi,
I want to send pieces of a ZFS filesystem to another system. Can zfs
send pieces of a snapshot? Say I only want to send over /[EMAIL PROTECTED] and
not include /app/conf data while /app/conf is still a part of the
/[EMAIL PROTECTED] snapshot? I say app/conf as an example; it could be
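As far as I know, zfs send operates on a whole snapshot of a single dataset, so the usual approach is to make app/conf its own dataset; snapshots of the parent then exclude it, and each dataset can be sent independently. A sketch, with made-up pool, dataset, and host names:

```shell
# Make app/conf a separate dataset (after migrating its data into it),
# so it is no longer part of tank/app's snapshots:
zfs create tank/app/conf

# A non-recursive snapshot covers tank/app only, not child datasets:
zfs snapshot tank/app@backup

# Send just that piece to another system:
zfs send tank/app@backup | ssh otherhost zfs receive backup/app
```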
Thanks Cindy and Erik,
The link to the boot page exactly answers my question.
Russ
Cindy Swearingen wrote:
Hi Russ,
If you are asking whether you can create a ZFS file system for the root
file system and boot from it, it is possible on an x86 system running the
Nevada release. Not