Bryna,
Your timing is excellent! We've been working on this for a while now and
hopefully within the next day I'll be adding support for separate log
devices into Nevada.
I'll send out more details soon...
Neil.
Bryan Wagoner wrote:
Quick question,
Are there any tunables, or is there any
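(A hedged sketch of what the separate-log-device support Neil mentions might look like once it lands; the pool and device names here are placeholders, not from this thread:)

   # create a pool with a dedicated intent-log device
   zpool create tank mirror c1t0d0 c1t1d0 log c2t0d0

   # or add a (mirrored) log device to an existing pool
   zpool add tank log mirror c2t0d0 c2t1d0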
Good day ZFS-Discuss,
I am planning to build an array of 30 drives in a RaidZ2 configuration
with two hot spares. However I read on the internet that this was not
ideal.
So I ask those who are more experienced than me: what configuration
would you recommend with ZFS? I would like to have some
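(Purely illustrative, not advice from this thread: the "not ideal" warnings usually refer to one very wide raidz2 vdev, so a 30-drive pool is often split into several narrower raidz2 groups instead. A hypothetical layout, with placeholder device names:)

   # four 7-disk raidz2 vdevs plus two hot spares (28 + 2 = 30 drives)
   zpool create tank \
     raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
     raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
     raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 \
     raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 \
     spare c5t0d0 c5t1d0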
This feature is implemented as part of PSARC 2007/171 and will be
putback shortly.
- Eric
On Thu, Jun 21, 2007 at 03:25:30PM -0700, Bryan Wagoner wrote:
Quick question,
Are there any tunables, or is there any way to specify devices in a
pool to use for the ZIL specifically? I've been
On Jun 21, 2007, at 3:25 PM, Bryan Wagoner wrote:
Quick question,
Are there any tunables, or is there any way to specify devices in a
pool to use for the ZIL specifically? I've been thinking through
architectures to mitigate performance problems on SAN and various
other storage
Richard,
Joubert Nel wrote:
If the device was actually in use on another system, I would expect
that libdiskmgmt would have warned you about this when you ran zpool
create.
AFAIK, libdiskmgmt is not multi-node aware. It does know about local
uses of the disk. Remote uses of the
On Thu, Jun 21, 2007 at 11:03:39AM -0700, Joubert Nel wrote:
When I ran zpool create, the pool got created without a warning.
zpool(1M) will disallow creation if the disk contains data in active
use (mounted fs, zfs pool, dump device, swap, etc). It will warn if it
contains a
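(A small illustration of the behaviour described above, with a placeholder disk name; -f overrides the warning cases but not devices that are actively in use:)

   zpool create tank c1t0d0      # refused if c1t0d0 is actively in use
   zpool create -f tank c1t0d0   # forces creation over inactive data (an old
                                 # ufs, an exported pool, ...) and overwrites
                                 # the existing labels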
Dan Saul wrote:
Good day ZFS-Discuss,
I am planning to build an array of 30 drives in a RaidZ2 configuration
with two hot spares. However I read on the internet that this was not
ideal.
So I ask those who are more experienced than me: what configuration
would you recommend with ZFS? I would
Customer has this issue:
Sun Fire T2000 solaris 10 11/06
This is a new install and ZFS has not worked at all inside of a Logical
Domain. Unfortunately, nothing shows up in the messages file and I
receive no errors when trying to boot the zone. It appears to just hang
when trying to import the
Not specifically a ZFS question, but is anyone monitoring disk space of
their ZFS filesystems via the Solaris 10 snmpd? I can't find any
64-bit counters in the MIB for disk space, so the normal tools I use
get completely wrong numbers for my 1-terabyte pool.
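(For what it's worth, the sizes in question come from the hrStorage table of HOST-RESOURCES-MIB, where hrStorageSize and hrStorageUsed are 32-bit integers counted in hrStorageAllocationUnits, which is where very large filesystems run into trouble. Hostname and community below are placeholders:)

   snmpwalk -v2c -c public myhost HOST-RESOURCES-MIB::hrStorageTable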
Al,
Has there been any resolution to this problem? I get it repeatedly on my
5 x 500 GB RAID-Z configuration. I sometimes get port drop/reconnect errors when
this occurs.
Gary
Joubert Nel wrote:
What I meant is that when I do zpool create on a disk, the entire
contents of the disk don't seem to be overwritten/destroyed. I.e., I
suspect that if I didn't copy any data to this disk, a large portion of
what was on it is potentially recoverable.
If so, is there a tool
I'm curious if there has been any discussion of or work done toward
implementing storage classing within zpools (this would be similar to the
Storage Foundation QoSS feature).
I've searched the forum and inspected the documentation looking for a means to
do this, and haven't found anything, so
What I meant is that when I do zpool create on a disk, the entire
contents of the disk don't seem to be overwritten/destroyed. I.e., I
suspect that if I didn't copy any data to this disk, a large portion
of what was on it is potentially recoverable.
Presumably a scavenger program could try
On Thu, Jun 21, 2007 at 07:34:13PM -0700, Joubert Nel wrote:
OK, so if I didn't copy any data to this disk, presumably a large
portion of what was on the disk previously is theoretically
recoverable. There is really one file in particular that I'd like to
recover (it is a cpio backup).
Is
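(A very rough sketch of the "scavenger program" idea, assuming GNU strings/grep are available and the disk hasn't been reused: an ASCII cpio archive begins with the magic string 070707, so one can look for candidate offsets on the raw slice and then let cpio try to parse from one of them. The device name and OFFSET are placeholders:)

   # note byte offsets where the cpio magic shows up on the raw slice
   dd if=/dev/rdsk/c1t0d0s0 bs=128k | strings -a -t d | grep ' 070707' > /tmp/offsets

   # carve from a promising 512-byte-aligned offset and list what cpio can read
   dd if=/dev/rdsk/c1t0d0s0 bs=512 skip=$((OFFSET / 512)) | cpio -itv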
A data structure view of ZFS is now available:
http://www.opensolaris.org/os/community/zfs/structures/
We've only got one picture up right now (though it's a juicy one!),
but let us know what you're interested in seeing, and we'll try to
make that happen.
I see this as a nice supplement to
Gimme specific examples and I'll have a look at it.
We (net-snmp) are just about to release a new version (5.4.1) so I'd
like to fix it before it goes to production.
It may be a known bug in 5.0.9 that has since been fixed.
Not specifically a ZFS question, but is anyone monitoring disk space of
their
Apologies: I've just realised all this talk of "I've booted off of ZFS" is
totally bogus. What they've actually done is booted off Ext3FS, for example,
then jumped into loading the real root from the zpool. That'll teach me to
read things first. This is indeed a pretty ugly hack.
The only
On Fri, 22 Jun 2007, Erast Benson wrote:
New unstable ISO of NexentaCP (Core Platform) available.
http://www.gnusolaris.org/unstable-iso/ncp_beta1-test2-b67_i386.iso
Also available at:
http://www.genunix.org/distributions/gnusolaris/index.html
Changes:
* ON B67 based
* ZFS/Boot manual
I care more about data integrity than performance. Of course, if
performance is so bad that one would not be able to, say, stream a
video off of it, that wouldn't be acceptable.
On 6/22/07, Richard Elling [EMAIL PROTECTED] wrote:
Dan Saul wrote:
Good day ZFS-Discuss,
I am planning to build an
Dan Saul wrote:
I care more about data integrity than performance. Of course, if
performance is so bad that one would not be able to, say, stream a
video off of it, that wouldn't be acceptable.
The model I used in this blog deals with small, random reads, not
streaming workloads. In part this is
Dan Saul wrote:
I care more about data integrity than performance. Of course, if
performance is so bad that one would not be able to, say, stream a
video off of it, that wouldn't be acceptable.
Any config I could imagine would be able to stream several videos at once
(even 10Mbit/sec 1080p HD).
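(Rough arithmetic: 10 Mbit/s is about 1.25 MB/s per stream, so even ten
concurrent streams come to roughly 12.5 MB/s of largely sequential reads,
comfortably within what a single SATA disk delivers, never mind a 30-drive
pool.)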
mike wrote:
it's about time. this hopefully won't spark another license debate,
etc... ZFS may never get into linux officially, but there's no reason
a lot of the same features and ideologies can't make it into a
linux-approved-with-no-arguments filesystem...
Well, there's a dark horse here
On Thu, Jun 21, 2007 at 11:36:53AM +0200, Roch - PAE wrote:
code) or Samba might be better by being careless with data.
Well, it *is* trying to be a Microsoft replacement. Gotta get it
right, you know? ;)
-brian
--
Perl can be fast and elegant as much as J2EE can be fast and elegant.
In
On Wed, Jun 20, 2007 at 12:03:02PM -0400, Will Murnane wrote:
Yes. 2 disks means when one fails, you've still got an extra. In
raid 5 boxes, it's not uncommon with large arrays for one disk to die,
and when it's replaced, the stress on the other disks causes another
failure. Then the array
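(To put rough, illustrative numbers on that: with an unrecoverable read error
rate on the order of one bit in 10^14, rebuilding onto a replacement in a wide
array of 500 GB disks means reading several times 10^13 bits from the
survivors, so the chance of hitting at least one read error somewhere during
the rebuild is far from negligible, and that is exactly the second failure
that double parity is meant to absorb.)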