For ~100 people, I like Bob's answer. RAID 10 will get you lots of speed.
Perhaps RAID 50 would be just fine for you as well and give you more space, but
without measuring, you won't be sure. Don't forget a hot spare (or two)!
Your MySQL database - will that generate a lot of IO?
Also, to
To be honest, never. It's a cheap server sat at home, and I never got around
to writing a script to scrub it and report errors.
I'm going to write one now though! Look at how the resilver finished:
# zpool status
pool: zfspool
state: ONLINE
status: One or more devices has experienced an
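For what it's worth, the scrub-and-report script Ross mentions wanting can be quite short. A sketch, assuming a pool named zfspool and mailx for delivery (adjust both); run it from cron, e.g. weekly:

```shell
# Hedged sketch: scrub a pool, wait for completion, and mail the status
# only if `zpool status -x` reports trouble. Pool name, mail recipient,
# and the poll interval are all assumptions.
scrub_and_report() {
    pool=${1:-zfspool}
    mailto=${2:-root}

    # Kick off the scrub; the command returns immediately and the scrub
    # runs in the background.
    zpool scrub "$pool" || return 1

    # Poll until the scrub is no longer in progress.
    while zpool status "$pool" | grep -q 'in progress'; do
        sleep 300
    done

    # `zpool status -x` prints "...is healthy" when nothing is wrong;
    # mail the full verbose status only when it doesn't.
    if ! zpool status -x "$pool" | grep -q 'healthy'; then
        zpool status -v "$pool" | mailx -s "scrub problems on $pool" "$mailto"
    fi
}
```

Calling `scrub_and_report zfspool root` from a weekly cron entry covers the "scrub it and report errors" case described above.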
Hi,
Currently no ACL inheritance takes place when a new
file system is
created. Feel free to open an RFE for this.
Thank you for your reply ...
Good to know about it, but it's really simple to write a small shell script that
would create the home directory, change ownership, and set the ACL.
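Such a script might look something like this; the pool layout, the staff group, and the ACL entry itself are assumptions to adjust. The `full_set` permission set and `fd` (file/dir) inheritance flags are Solaris chmod(1) NFSv4 ACL syntax:

```shell
# Hedged sketch: create a per-user home filesystem, fix ownership, and
# grant the user a fully inheritable ACL entry. Dataset path and group
# are assumptions.
create_home() {
    user=$1
    fs=rpool/export/home/$user

    # Create a dedicated filesystem for the new home directory.
    zfs create "$fs"

    # Fix ownership, then add an inheritable full-access ACL entry.
    chown "$user":staff "/export/home/$user"
    chmod A+user:"$user":full_set:fd:allow "/export/home/$user"
}
```

Since no ACL inheritance takes place when a new filesystem is created, setting the ACL explicitly after `zfs create`, as above, is the workaround.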
Just a side note on the PERC labelled cards: they don't have a JBOD
mode so you _have_ to use hardware RAID. This may or may not be an
issue in your configuration but it does mean that moving disks between
controllers is no longer possible. The only way to do a pseudo JBOD is
to create
Andre van Eyssen wrote:
On Mon, 22 Jun 2009, Jacob Ritorto wrote:
Is there a card for OpenSolaris 2009.06 SPARC that will do SATA
correctly yet? Need it for a super cheapie, low expectations,
SunBlade 100 filer, so I think it has to be notched for 5v PCI slot,
iirc. I'm OK with slow --
2) disks that were attached once leave a stale /dev/dsk entry behind
that takes full 7 seconds to stat() with kernel running at 100%.
Such entries should go away with an invocation of devfsadm -vC.
If they don't, it's a bug IMHO.
Regards -- Volker
--
Volker A. Brandt wrote:
2) disks that were attached once leave a stale /dev/dsk entry behind
that takes full 7 seconds to stat() with kernel running at 100%.
Such entries should go away with an invocation of devfsadm -vC.
If they don't, it's a bug IMHO.
Yes, they go away. But the problem is
Hi,
I'd like to be able to select zfs filesystems, based on the value of properties.
Something like this:
zfs select mounted=yes
Is anyone aware if this feature might be available in the future?
If not, is there a clean way of achieving the same result?
Thanks, Mike.
--
This message posted from opensolaris.org
Mike Forey wrote:
zfs select mounted=yes
If not, is there a clean way of achieving the same result?
How about this:
zfs list -o name,mounted | awk '$2 == "yes" {print $1}'
Allan
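Allan's one-liner generalizes into a small helper if you want something closer to Mike's `zfs select` syntax. The function name and interface below are just a sketch, not a real zfs subcommand:

```shell
# Hedged sketch: filter filesystems by a single property=value pair,
# e.g. `zfs_select mounted=yes`. Only exact matches are handled.
zfs_select() {
    prop=${1%%=*}
    want=${1#*=}
    # -H suppresses the header line so awk sees only data rows.
    zfs list -H -o "name,$prop" | awk -v v="$want" '$2 == v {print $1}'
}
```

This also sidesteps the header-row problem, and swapping the property name in `-o` handles any property that `zfs list` can display.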
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
very tidy, thanks! :)
Mike Forey wrote:
Hi,
I'd like to be able to select zfs filesystems, based on the value of properties.
Something like this:
zfs select mounted=yes
What is the output of the above ?
Would you want to specify multiple properties ?
What about for properties that aren't index values (eg
On Tue, Jun 23, 2009 at 1:13 PM, Ross <no-re...@opensolaris.org> wrote:
Look at how the resilver finished:
c1t3d0 ONLINE 3 0 0 128K resilvered
c1t4d0 ONLINE 0 0 11 473K resilvered
c1t5d0 ONLINE 0 0 23 986K resilvered
On Mon, 22 Jun 2009, Ross wrote:
All seemed well, I replaced the faulty drive, imported the pool again, and
kicked off the repair with:
# zpool replace zfspool c1t1d0
What build are you running? Between builds 105 and 113 inclusive there's
a bug in the resilver code which causes it to miss
I thought I recalled reading somewhere that in the situation where you
have several zfs filesystems under one top level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
rpool/export/home/reader
you could do a snapshot encompassing everything below rpool instead of
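What Harry seems to be recalling is presumably `zfs snapshot -r`, which takes one atomic snapshot of a filesystem and every descendant. A minimal sketch (the snapshot label is arbitrary):

```shell
# Hedged sketch: recursively snapshot rpool and all descendants in one
# atomic operation; the label passed in is an assumption.
snap_everything() {
    label=$1
    zfs snapshot -r "rpool@$label"
}
```

After `snap_everything backup1`, `zfs list -t snapshot` should show an `@backup1` snapshot for rpool and each descendant, including rpool/export/home and below.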
Harry Putnam wrote:
I thought I recalled reading somewhere that in the situation where you
have several zfs filesystems under one top level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
rpool/export/home/reader
you could do a snapshot encompassing everything
Erik Trimble wrote:
All this discussion hasn't answered one thing for me: exactly _how_
does ZFS do resilvering? Both in the case of mirrors, and of RAIDZ[2] ?
I've seen some mention that it goes in chronological order (which to
me, means that the metadata must be read first) of file
We're definitely working on problems contributing to such 'picket
fencing'.
But beware of equating symptoms with root-caused issues. We already know
that picket fencing has multiple causes, and we're tracking the ones we know
about: there is something related to taskq CPU scheduling and
something
Erik Ableson wrote:
Just a side note on the PERC labelled cards: they don't have a JBOD
mode so you _have_ to use hardware RAID. This may or may not be an
issue in your configuration but it does mean that moving disks between
controllers is no longer possible. The only way to do a pseudo
On 23 Jun 2009, at 23:59 , Darren J Moffat wrote:
Harry Putnam wrote:
I thought I recalled reading somewhere that in the situation where
you
have several zfs filesystems under one top level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
rpool/export/home/reader
Darren J Moffat darr...@opensolaris.org writes:
Harry Putnam wrote:
I thought I recalled reading somewhere that in the situation where you
have several zfs filesystems under one top level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
Harry Putnam wrote:
Darren J Moffat darr...@opensolaris.org writes:
Harry Putnam wrote:
I thought I recalled reading somewhere that in the situation where you
have several zfs filesystems under one top level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
Mike
---
Michael Sullivan
michael.p.sulli...@me.com
http://www.kamiogi.net/
Japan Mobile: +81-80-3202-2599
US Phone: +1-561-283-2034
On 24 Jun 2009, at 01:01 , Harry Putnam wrote:
Darren J Moffat darr...@opensolaris.org writes:
Harry Putnam wrote:
I thought I recalled reading somewhere
Richard Elling wrote:
Erik Trimble wrote:
All this discussion hasn't answered one thing for me: exactly _how_
does ZFS do resilvering? Both in the case of mirrors, and of RAIDZ[2] ?
I've seen some mention that it goes in chronological order (which to
me, means that the metadata must be
ave == Andre van Eyssen an...@purplecow.org writes:
et == Erik Trimble erik.trim...@sun.com writes:
ea == Erik Ableson eable...@mac.com writes:
edm == Eric D. Mudama edmud...@bounceswoosh.org writes:
ave The LSI SAS controllers with SATA ports work nicely with
ave SPARC.
I think what
dc == Daniel Carosone no-re...@opensolaris.org writes:
dc I'm concerned that, despite clear recommendations and advice
dc against it, there seem to be a number of solutions appearing
dc (like automated backup to cloud, via the auto-snapshot hooks)
dc that use the stream format
The problem I had was with the single raid 0 volumes (miswrote RAID 1
on the original message)
This is not a straight to disk connection and you'll have problems if
you ever need to move disks around or move them to another controller.
I agree that the MD1000 with ZFS is a rocking,
I thought the LSI 1068 does not work with SPARC (mfi driver, x86 only).
I thought the 1078 is supposed to work with SPARC (mega_sas).
Hmmm
shelob:/home/volker,23204 uname -a
SunOS shelob 5.10 Generic_137111-02 sun4v sparc SUNW,Sun-Fire-T1000
shelob:/home/volker,23205 man mpt
Devices
Miles Nordin wrote:
ave == Andre van Eyssen an...@purplecow.org writes:
et == Erik Trimble erik.trim...@sun.com writes:
ea == Erik Ableson eable...@mac.com writes:
edm == Eric D. Mudama edmud...@bounceswoosh.org writes:
ave The LSI SAS controllers with SATA ports work nicely
Erik Ableson wrote:
The problem I had was with the single raid 0 volumes (miswrote RAID 1
on the original message)
This is not a straight to disk connection and you'll have problems if
you ever need to move disks around or move them to another controller.
Would you mind explaining exactly
On 23-Jun-09, at 1:58 PM, Erik Trimble wrote:
Richard Elling wrote:
Erik Trimble wrote:
All this discussion hasn't answered one thing for me: exactly
_how_ does ZFS do resilvering? Both in the case of mirrors, and
of RAIDZ[2] ?
I've seen some mention that it goes in chronological
Chookiex wrote:
Hi all.
Since the compression property reduces file size, file I/O should be reduced as well.
So, would enabling compression increase ZFS I/O throughput?
for example:
I turn on gzip-9 on a server with two quad-core Xeons and 8 GB RAM.
It could compress my files with
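Whether gzip-9 helps or hurts throughput depends on the data and the CPU, so measuring on a sample of your own files is the safest answer. A sketch, with the dataset name assumed:

```shell
# Hedged sketch: create a scratch dataset with gzip-9 enabled, then
# read back the achieved compression ratio. Dataset name is an
# assumption; copy a representative sample of files in before judging.
comp_ratio() {
    ds=$1
    zfs create "$ds"
    zfs set compression=gzip-9 "$ds"
    # ...copy a representative sample of files into the mountpoint,
    # then check the ratio...
    zfs get -H -o value compressratio "$ds"
}
```

A ratio near 1.00x means the CPU cost of gzip-9 buys little; already-compressed data (images, archives) typically falls in that bucket, while text and logs compress well.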
On Jun 23, 2009, at 11:50 AM, Richard Elling wrote:
(2) is there some reasonable way to read in multiples of these
blocks in a single IOP? Theoretically, if the blocks are in
chronological creation order, they should be (relatively)
sequential on the drive(s). Thus, ZFS should be able
scrub: resilver completed after 5h50m with 0 errors on Tue Jun 23 05:04:18 2009
Zero errors even though other parts of the message definitely show errors?
This is described here: http://docs.sun.com/app/docs/doc/819-5461/gbcve?a=view
Device errors do not guarantee pool errors when redundancy
On Mon, 22 Jun 2009 15:28:08 -0700
Carson Gaspar car...@taltos.org wrote:
James C. McPherson wrote:
Use raidctl(1m). For fwflash(1m), this is on the future project
list purely because we've got much higher priority projects on the
boil - if we couldn't use raidctl(1m) this would be
vab == Volker A Brandt v...@bb-c.de writes:
I thought the LSI 1068 do not work with SPARC (mfi driver, x86
only). I thought the 1078 are supposed to work with SPARC
(mega_sas).
vab shelob:/home/volker,23204 uname -a SunOS shelob 5.10
vab Generic_137111-02 sun4v sparc
It has been quite some time (about a year) since I did testing of
batch processing with my software (GraphicsMagick). In the intervening time,
ZFS added write-throttling. I am using Solaris 10 with kernel
141415-03.
Quite a while back I complained that ZFS was periodically stalling the
writing
Hi,
What does zio_assess do? Is it a stage of the pipeline? I see quite a few of
these stacks within a five-second window.
I tried searching src.opensolaris.org but did not find any reference. Thanks for
any help.
zfs`zio_assess+0x58
zfs`zio_execute+0x74
Is this a direct write to a ZFS filesystem, or is it some kind of zvol export?
anyway, sounds similar to this:
http://opensolaris.org/jive/thread.jspa?threadID=105702&tstart=0
On Tue, Jun 23, 2009 at 7:14 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
It has been quite some time (about a
[...]
Well yes, actually you aren't looking for the snapshots in the correct way.
[...]
No difference, and there is no rpool/dump
rpool/export
rpool/export/home
rpool/export/home/reader
under either snapshot... not to mention all the other