Hi all,
We would like to replace one of the 3.5 inch SATA drives in our Thumpers
with an SSD device (and put the ZIL on this device). We are currently
looking into this in a bit more detail and would like to ask for
input from people who already have experience with single- vs. multi-cell SSDs.
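For testing, attaching the SSD as a dedicated log device should be a
one-liner; a rough sketch, with placeholder pool and device names:

  # attach a separate intent log (slog) device to an existing pool
  zpool add tank log c2t0d0
  # confirm that the log device shows up
  zpool status tank

(As far as I know, current builds cannot remove a log device once it
has been added, so this is worth trying on a scratch pool first.)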
The drives that Sun sells will come with the correct bracket.
Ergo, there is no reason to sell the bracket as a separate
item unless the customer wishes to place non-Sun disks in
them. That represents a service liability for Sun, so they are
not inclined to do so. It is really basic business.
+--
| On 2009-02-02 09:46:49, casper@sun.com wrote:
|
| And think of all the money it costs to stock and distribute that
| separate part. (And our infrastructure is still expensive; too expensive
| for a $5 part)
Ok thanks for your help guys! :o)
One last question: how do I know when the spare sectors are running out? SMART
tools are not available for Solaris, right? Are there any warnings that pop up
in ZFS? Will scrubbing reveal that there are errors? How will I know?
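The only check I can think of myself is a manual scrub, something like:

  # 'tank' is a placeholder pool name
  zpool scrub tank
  # afterwards, look at the read/write/checksum error counters
  zpool status -v tank

but I don't know whether that says anything about the drive's internal
spare sectors.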
On Mon, Feb 2 at 5:48, Orvar Korvar wrote:
Ok thanks for your help guys! :o)
One last question: how do I know when the spare sectors are
running out? SMART tools are not available for Solaris, right? Are there any
warnings that pop up in ZFS? Will scrubbing reveal that there are
errors? How will I know?
Actually, the issue seems to be more than what I described below. Seemingly, I
cannot issue any zfs or zpool commands apart from 'zpool status -x', which
reports a 'healthy' status. If I do 'zpool status', I get the following:
r...@ec1-nas1# zpool status
pool: nasPool
state: ONLINE
scrub: none requested
The time has come to review the current Contributor and Core Contributor
grants for ZFS. Since all of the ZFS Core Contributor grants are set
to expire on 02-24-2009, we need to renew the members that are still
contributing at Core Contributor levels. We should also add some new
members to
Could someone help me answer the following question: what is the
recommended value for these two Oracle parameters when working
with ZFS?
disk_asynch_io = true
filesystemio_options = setall
or
disk_asynch_io = false
filesystemio_options = none
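Whichever pair is correct, I assume they would be set roughly like this
(a sketch; both are static parameters, so this needs an SPFILE and an
instance restart, and the values shown are just the first option above):

  sqlplus / as sysdba <<EOF
  -- values from the first option, purely as an example
  ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE = SPFILE;
  ALTER SYSTEM SET disk_asynch_io = TRUE SCOPE = SPFILE;
  EOF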
Thanks in advance.
MiK.
Orvar Korvar wrote:
Ok. Just to confirm: a modern disk already has some spare capacity which is
not normally utilized by ZFS, UFS, etc. If the spare capacity is exhausted,
then the disk should be replaced.
Also, if ZFS decides that a block is bad, it can leave it unused.
For example, if
fm == Fredrich Maney fredrichma...@gmail.com writes:
fm Oddly enough, that seems to be the path that was taken by
fm Sun quite some time ago with /usr/bin. Those tools are the
fm standard, default tools on Sun systems for a reason: they are
fm the ones that are maintained and updated
For example, ls recently got a -% option. This seems to work for
/usr/bin/ls, /usr/xpg4/bin/ls, and /usr/xpg6/bin/ls. So, that's good,
albeit a little surprising.
There's only one source file. So if you add an option you'll add it to
all of them.
But if /usr/xpg6/bin/ls came first in PATH,
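A quick way to see which ls a given PATH ordering selects, for example:

  # which ls wins with the current PATH?
  type ls
  # and with the xpg6 directory in front?
  (PATH=/usr/xpg6/bin:$PATH; type ls)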
gm == Greg Mason gma...@msu.edu writes:
g == Gary Mills mi...@cc.umanitoba.ca writes:
gm I know disabling the ZIL is an Extremely Bad Idea,
but maybe you don't care about trashed thunderbird databases. You
just don't want to lose the whole pool to ``status: The pool metadata
is corrupted''.
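For reference, the usual way to turn the ZIL off on current builds is
the zil_disable tunable - system-wide, and for testing only, as far as
I know:

  # in /etc/system; affects ALL pools, takes effect after a reboot
  set zfs:zil_disable = 1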
It definitely does. I made some tests today comparing b101 with b105 while
doing 'zfs send -R -I A B > /dev/null' with several dozen snapshots between A
and B. Well, b105 is almost 5x faster in my case - that's pretty good.
--
Robert Milkowski
http://milek.blogspot.com
On Mon, Feb 2, 2009 at 9:22 PM, Gary Mills mi...@cc.umanitoba.ca wrote:
On Sun, Feb 01, 2009 at 11:44:14PM -0500, Jim Dunham wrote:
If there are two (or more) instances of ZFS in the end-to-end data
path, each instance is responsible for its own redundancy and error
recovery. There is no
Orvar Korvar wrote:
Ok. Just to confirm: a modern disk already has some spare capacity which is
not normally utilized by ZFS, UFS, etc. If the spare capacity is exhausted,
then the disk should be replaced.
Yup, that is the case.
Hello Richard,
Monday, February 2, 2009, 5:39:34 PM, you wrote:
RE Orvar Korvar wrote:
Ok. Just to confirm: a modern disk already has some spare capacity which is
not normally utilized by ZFS, UFS, etc. If the spare capacity is exhausted,
then the disk should be replaced.
RE Also, if
Hello Miles,
Monday, February 2, 2009, 7:20:49 PM, you wrote:
gm == Greg Mason gma...@msu.edu writes:
g == Gary Mills mi...@cc.umanitoba.ca writes:
MN gm I know disabling the ZIL is an Extremely Bad Idea,
MN but maybe you don't care about trashed thunderbird databases. You
MN just don't
Snapshots are not on a per-pool basis but a
per-file-system basis. Thus, when you took a
snapshot of testpol, you didn't actually snapshot
the pool; rather, you took a snapshot of the top
level file system (which has an implicit name
matching that of the pool).
Thus, you haven't actually
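The difference is visible in the commands themselves; using the pool
name from this thread and a hypothetical snapshot name:

  # snapshots only the top-level file system (the implicit 'testpol' dataset)
  zfs snapshot testpol@now
  # snapshots testpol and every file system beneath it
  zfs snapshot -r testpol@now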
My system is OpenSolaris 2008.11, updated to dev build 105. I have two pools
constructed from iSCSI targets, with around 5600 file systems in each.
I was able to enable NFS sharing and CIFS/SMB sharing on both pools;
however, after a reboot the SMB shares come up but the NFS server
service does not, and
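After the reboot, SMF can usually explain why the service stayed down;
a first check, e.g.:

  # explain why the NFS server is not running
  svcs -xv svc:/network/nfs/server:default
  # try to bring it online along with its dependencies
  svcadm enable -r svc:/network/nfs/server:default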
On Mon, Feb 2 at 5:05, Orvar Korvar wrote:
Ok. Just to confirm: a modern disk already has some spare capacity
which is not normally utilized by ZFS, UFS, etc. If the spare
capacity is exhausted, then the disk should be replaced.
Actually, the device has spare sectors beyond the reported LBA
Ok. Just to confirm: a modern disk already has some spare capacity which is not
normally utilized by ZFS, UFS, etc. If the spare capacity is exhausted, then the
disk should be replaced.
Then what if I ever need to export the pool on the primary server and then
import it on the replicated server? Will ZFS know which drives should be part
of the stripe even though the device names across servers may not be the same?
BJ Quinn wrote:
Then what if I ever need to export the pool on the primary server
and then import it on the replicated server. Will ZFS know which
drives should be part of the stripe even though the device names
across servers may not be the same?
Yes, zpool import will figure it out.
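ZFS identifies pool members by the labels written on the disks
themselves, not by device path, so the move is just (placeholder pool
name):

  # on the primary server
  zpool export tank
  # on the replica; devices are matched by their on-disk ZFS labels
  zpool import tank
  # if the devices sit in a non-default location, point the search there
  zpool import -d /dev/dsk tank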
Just a brief addendum: something like this (or a fully DRAM-based device, if
available in a 3.5 inch form factor) might also be interesting to test:
http://www.platinumhdd.com/
Any thoughts?
Cheers
Carsten
+1, Thanks for the nomination, Cindy
Mark Shellenbaum wrote:
The time has come to review the current Contributor and Core Contributor
grants for ZFS. Since all of the ZFS Core Contributor grants are set
to expire on 02-24-2009, we need to renew the members that are still
contributing at
On Mon, Feb 02, 2009 at 09:53:15PM +0700, Fajar A. Nugraha wrote:
On Mon, Feb 2, 2009 at 9:22 PM, Gary Mills mi...@cc.umanitoba.ca wrote:
On Sun, Feb 01, 2009 at 11:44:14PM -0500, Jim Dunham wrote:
If there are two (or more) instances of ZFS in the end-to-end data
path, each instance is
If creation of a snapshot is allowed on a top-level file system, rollback of a
snapshot created on the top-level file system must take care not to disturb
other file systems that were created under it.
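A quick illustration of the intended behavior, with hypothetical names:

  # rolls back only the top-level file system; file systems created
  # under it (say, testpol/home) must keep their current contents
  zfs rollback testpol@before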
-Abishek
On Sun, Feb 01, 2009 at 11:44:14PM -0500, Jim Dunham wrote:
I wrote:
I realize that this configuration is not supported.
The configuration is supported, but not in the manner mentioned below.
If there are two (or more) instances of ZFS in the end-to-end data
path, each instance is
I am having a problem that I am hoping someone might have some insight
into. I am running an x4500 with Solaris 10 (SunOS 5.10) and a ZFS file system
named nasPool. I am also running NetBackup on the box as well... server and
client all in one. I have had this up and running for some time now, and recently
On Mon, Feb 02, 2009 at 08:22:13AM -0600, Gary Mills wrote:
On Sun, Feb 01, 2009 at 11:44:14PM -0500, Jim Dunham wrote:
I wrote:
I realize that this configuration is not supported.
The configuration is supported, but not in the manner mentioned below.
If there are two (or more)
Looks reasonable
+1
Neil.
On 02/02/09 08:55, Mark Shellenbaum wrote:
The time has come to review the current Contributor and Core Contributor
grants for ZFS. Since all of the ZFS Core Contributor grants are set
to expire on 02-24-2009, we need to renew the members that are still
+1.
I would like to nominate roch.bourbonn...@sun.com for his work on
improving the performance of ZFS over the last few years.
thanks,
-neel
On Feb 2, 2009, at 4:02 PM, Neil Perrin wrote:
Looks reasonable
+1
Neil.
On 02/02/09 08:55, Mark Shellenbaum wrote:
The time has come to review
I would like to nominate roch.bourbonn...@sun.com for his work on
improving the performance of ZFS over the last few years.
Absolutely.
Jeff
The Validated Execution project is investigating how to utilize ZFS
snapshots as the basis of a validated filesystem. Given that the
blocks of the dataset form a Merkle tree of hashes, it seemed
straightforward to validate the individual objects in the snapshot and
then sign the hash of the root
On January 30, 2009 2:26:36 PM -0800 Marcus Reid mar...@blazingdot.com
wrote:
I am investigating using ZFS as a possible replacement for SVM for
root disk mirroring.
...
Great. However, if I place the disks into a different
machine and try to boot, I get:
Executing last command: boot
Upgrading to b105 seems to improve zfs send/recv quite a bit. See this
thread:
http://www.opensolaris.org/jive/message.jspa?messageID=330988
--
Dave
Kok Fong Lau wrote:
I have been using ZFS send and receive for a while, and I noticed that when I
try to do a send on a ZFS file system of
Kok Fong Lau wrote:
I have been using ZFS send and receive for a while, and I noticed that when I
try to do a send on a ZFS file system of about 3 GB plus, it took only about
3 minutes max.
zfs send application/sam...@back > /backup/sample.zfs
However, when I tried to send a file system
On Mon, Feb 02, 2009 at 08:41:13PM -0800, Frank Cusack wrote:
On January 30, 2009 2:26:36 PM -0800 Marcus Reid mar...@blazingdot.com
wrote:
But what is probably best:
3) when it comes time to make your backup system act as the failed system,
first boot it from the network or from CD-ROM,
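Roughly (a sketch, assuming a ZFS root pool named rpool on SPARC):

  ok boot cdrom -s        # or: boot net -s
  # from the miniroot, force-import the pool that still thinks it
  # belongs to the failed machine
  zpool import -f rpool

The steps after that depend on why the boot failed in the first place.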