Oh, one more comment. If you don't mirror your ZIL, and your unmirrored SSD
goes bad, you lose your whole pool. Or at least suffer data corruption.
Hmmm, I thought that in that case ZFS reverts to the regular on disks ZIL?
With kind regards,
Jeroen
The write cache is _not_ being disabled. The write cache is being marked
as non-volatile.
Of course you're right :) Please filter my postings with a sed 's/write
cache/write cache flush/g' ;)
BTW, why is a Sun/Oracle branded product not properly respecting the NV
bit in the cache flush command?
Hi,
even though you didn't say so explicitly below (both the Comstar and the legacy
target services are inactive) I assume that you have been using Comstar, right?
In that case, the questions are:
- is there still a view on the targets? (check stmfadm)
- is there still a LU mapped? (check sbdadm)
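For example, something along these lines (a sketch; substitute your own LU name):
# stmfadm list-view -l <lu-name>
# sbdadm list-lu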
cheers,
I stand corrected. You don't lose your pool. You don't have a corrupted
filesystem. But you lose whatever writes were not yet completed, so if
those writes happen to be things like database transactions, you could have
corrupted databases or files, or missing files if you were creating them at the time.
Hi Adam,
Very interesting data. Your test is inherently
single-threaded so I'm not surprised that the
benefits aren't more impressive -- the flash modules
on the F20 card are optimized more for concurrent
IOPS than single-threaded latency.
Thanks for your reply. I'll probably test the
On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss
k.we...@science-computing.de wrote:
Hi Adam,
Very interesting data. Your test is inherently
single-threaded so I'm not surprised that the
benefits aren't more impressive -- the flash modules
on the F20 card are optimized more for concurrent
Brent Jones wrote:
I don't think you'll find the performance you paid for with ZFS and
Solaris at this time. I've been trying for more than a year, and
watching dozens, if not hundreds of threads.
Getting halfway decent performance from NFS and ZFS is impossible
unless you disable the ZIL.
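(For context, disabling the ZIL at the time meant a line like the following in
/etc/system, followed by a reboot -- a sketch, not a recommendation:
set zfs:zil_disable = 1
)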
Dear all,
I have a hardware-based storage array with a capacity of 192TB, sliced
into 64 LUNs of 3TB each.
What will be the best way to configure ZFS on this? Of course we are
not requiring the self-healing capability of ZFS. We just want the
capability of handling large files.
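Since redundancy lives on the array here, one plausible layout is simply to
stripe the LUNs into a single pool (a sketch; pool and device names are
hypothetical):
# zpool create tank c4t600A0B800012AE860001d0 c4t600A0B800012AE860002d0 ...
listing all 64 LUNs on the command line.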
Orvar's post over in opensol-discuss has me thinking:
After reading the paper and looking at design docs, I'm wondering if
there is some facility to allow for comparing data in the ARC to its
corresponding checksum. That is, if I've got the data I want in the
ARC, how can I be sure it's
Nobody knows any way for me to remove my unmirrored
log device. Nobody knows any way for me to add a mirror to it (until
Since snv_125 you can remove log devices. See
http://bugs.opensolaris.org/view_bug.do?bug_id=6574286
I've used this all the time during my testing and was able to remove them.
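For example (hypothetical pool and device names):
# zpool remove tank c3t2d0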
I'm not saying that ZFS should consider doing this - doing a validation
for in-memory data is non-trivially expensive in performance terms, and
there's only so much you can do and still expect your machine to
survive. I mean, I've used the old NonStop stuff, and yes, you can
shoot them with
On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss
Use something other than Open/Solaris with ZFS as an NFS server? :)
I don't think you'll find the performance you paid for with ZFS and
Solaris at this time. I've been trying for more than a year, and
watching dozens, if not hundreds of
Just to make sure you know ... if you disable the ZIL altogether, and you
have a power interruption, failed CPU, or kernel halt, then you're likely to
have a corrupt unusable zpool, or at least data corruption. If that is
indeed acceptable to
casper@sun.com wrote:
I'm not saying that ZFS should consider doing this - doing a validation
for in-memory data is non-trivially expensive in performance terms, and
there's only so much you can do and still expect your machine to
survive. I mean, I've used the old NonStop stuff, and
standard ZIL: 7m40s (ZFS default)
1x SSD ZIL: 4m07s (Flash Accelerator F20)
2x SSD ZIL: 2m42s (Flash Accelerator F20)
2x SSD mirrored ZIL: 3m59s (Flash Accelerator F20)
3x SSD ZIL: 2m47s (Flash Accelerator F20)
4x SSD ZIL:
Why do we still need the /etc/zfs/zpool.cache file???
(I could understand it was useful when zfs import was slow)
zpool import is now multi-threaded
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844191), hence a
lot faster, each disk contains the hostname
I have a pool (on an X4540 running S10U8) in which a disk failed, and the
hot spare kicked in. That's perfect. I'm happy.
Then a second disk fails.
Now, I've replaced the first failed disk, and it's resilvered and I have my
hot spare back.
But: why hasn't it used the spare to cover the other failed disk?
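(If it won't do it by itself, the spare can be pressed into service by hand --
a sketch with hypothetical device names, where c2t5d0 is the second failed disk
and c2t13d0 is the spare:
# zpool replace tank c2t5d0 c2t13d0
)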
On Tue, Mar 30, 2010 at 10:42 PM, Eric Schrock eric.schr...@oracle.com wrote:
On Mar 30, 2010, at 5:39 PM, Peter Tribble wrote:
I have a pool (on an X4540 running S10U8) in which a disk failed, and the
hot spare kicked in. That's perfect. I'm happy.
Then a second disk fails.
Now, I've
On Tue, Mar 30, 2010 at 10:42 PM, Eric Schrock eric.schr...@oracle.com wrote:
On Mar 30, 2010, at 5:39 PM, Peter Tribble wrote:
I have a pool (on an X4540 running S10U8) in which a disk failed, and the
hot spare kicked in. That's perfect. I'm happy.
Then a second disk fails.
Now,
On 03/31/10 10:54 PM, Peter Tribble wrote:
On Tue, Mar 30, 2010 at 10:42 PM, Eric Schrock eric.schr...@oracle.com wrote:
On Mar 30, 2010, at 5:39 PM, Peter Tribble wrote:
I have a pool (on an X4540 running S10U8) in which a disk failed, and the
hot spare kicked in. That's perfect.
ECC-enabled RAM should become very cheap quickly if the industry embraces it
in every computer. :-)
best regards,
hanzhu
On Wed, Mar 31, 2010 at 5:46 PM, Erik Trimble erik.trim...@oracle.com wrote:
casper@sun.com wrote:
I'm not saying that ZFS should consider doing this - doing a
Hi Jeroen, Adam!
link. Switched write caching off with the following
addition to the /kernel/drv/sd.conf file (Karsten: if
you didn't do this already, you _really_ want to :)
Okay, I bite! :) format-inquiry on the F20 FMods disks returns:
# Vendor: ATA
# Product: MARVELL SD88SA02
So I
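(The sd.conf addition itself is elided above; presumably it is something along
the lines of the sd-config-list syntax below -- a sketch, using the vendor and
product strings from the inquiry above; note the vendor field is padded to 8
characters:
sd-config-list = "ATA     MARVELL SD88SA02", "cache-nonvolatile:true";
)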
Use something other than Open/Solaris with ZFS as an NFS server? :)
I don't think you'll find the performance you paid for with ZFS and
Solaris at this time. I've been trying for more than a year, and
watching dozens, if not hundreds of threads.
Getting halfway decent performance from NFS
Hi Karsten,
But is this mode of operation *really* safe?
As far as I can tell it is.
- The F20 uses some form of power backup that should provide power to the
interface card long enough to get the cache onto solid state in case of power
failure.
- Recollecting from earlier threads here; in
Nobody knows any way for me to remove my unmirrored
log device. Nobody knows any way for me to add a mirror to it (until
Since snv_125 you can remove log devices. See
http://bugs.opensolaris.org/view_bug.do?bug_id=6574286
I've used this all the time during my testing and was able to
Hi Richard,
For this case, what is the average latency to the F20?
I'm not giving the average since I only performed a single run here (still need
to get autopilot set up :) ). However here is a graph of iostat IOPS/svc_t
sampled in 10sec intervals during a run of untarring an eclipse tarball
On Mar 30, 2010, at 5:39 PM, Peter Tribble wrote:
I have a pool (on an X4540 running S10U8) in which a disk failed, and the
hot spare kicked in. That's perfect. I'm happy.
Then a second disk fails.
Now, I've replaced the first failed disk, and it's resilvered and I have my
hot spare
On 03/30/10 20:00, Bob Friesenhahn wrote:
On Tue, 30 Mar 2010, Edward Ned Harvey wrote:
But the speedup of disabling the ZIL altogether is appealing (and would
probably be acceptable in this environment).
Just to make sure you know ... if you disable the ZIL altogether, and you
have a power
On 31/03/2010 10:27, Erik Trimble wrote:
Orvar's post over in opensol-discuss has me thinking:
After reading the paper and looking at design docs, I'm wondering if
there is some facility to allow for comparing data in the ARC to its
corresponding checksum. That is, if I've got the data I
We're getting the notorious "cannot destroy ... dataset already exists". I've
seen a number of reports of this, but none of the reports seem to get any
response. Fortunately this is a backup system, so I can recreate the pool, but
it's going to take me several days to get all the data back. Is
Incidentally, this is on Solaris 10, but I've seen identical reports from
Opensolaris.
On 31-3-2010 14:52, Charles Hedrick wrote:
Incidentally, this is on Solaris 10, but I've seen identical reports from
Opensolaris.
You probably need to delete any existing view over the LUN you want to
destroy.
Example:
stmfadm list-lu
LU Name: 600144F0B67340004BB31F060001
stmfadm
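Presumably continuing along these lines (a sketch; see stmfadm(1M) and
sbdadm(1M)):
# stmfadm remove-view -l 600144F0B67340004BB31F060001 -a
# sbdadm delete-lu 600144F0B67340004BB31F060001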
I had a drive fail and replaced it with a new drive. During the resilvering
process it showed too many errors, and the process failed.
Now the pool can come online, but it cannot accept any ZFS commands that
change the pool's state. I can list directories, but mv, cp, and rm -f don't work.
What can I do? I need that data.
On 31/03/2010 10:27, Erik Trimble wrote:
Orvar's post over in opensol-discuss has me thinking:
After reading the paper and looking at design docs, I'm wondering if
there is some facility to allow for comparing data in the ARC to it's
corresponding checksum. That is, if I've got the data I
On Wed, Mar 31, 2010 at 6:31 AM, Edward Ned Harvey
solar...@nedharvey.comwrote:
Nobody knows any way for me to remove my unmirrored
log device. Nobody knows any way for me to add a mirror to it (until
Since snv_125 you can remove log devices. See
On Wed, 31 Mar 2010, Tim Cook wrote:
http://www.opensolaris.com/learn/features/availability/
Full production level support
Both Standard and Premium support offerings are available for
deployment of Open HA Cluster 2009.06 with OpenSolaris 2009.06 with the
following configurations:
This
On Wed, 31 Mar 2010, Karsten Weiss wrote:
But frankly at the moment I care the most about the single-threaded case
because if we put e.g. user homes on this server I think they would be
severely disappointed if they had to wait 2m42s just to extract a rather
small 50 MB tarball. The
On Tue, March 30, 2010 22:40, Edward Ned Harvey wrote:
Here's a snippet from man zpool. (Latest version available today in
solaris)
zpool remove pool device ...
Removes the specified device from the pool. This command
currently only supports removing hot spares and cache devices.
On Wed, Mar 31, 2010 at 9:47 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Wed, 31 Mar 2010, Tim Cook wrote:
http://www.opensolaris.com/learn/features/availability/
Full production level support
Both Standard and Premium support offerings are available for deployment
of
On Wed, 31 Mar 2010, Robert Milkowski wrote:
or there might be an extra zpool level (or system wide) property to enable
checking checksums on every access from ARC - there will be a significant
performance impact but then it might be acceptable for really paranoid folks
especially with
Allow me to clarify a little further, why I care about this so much. I have
a Solaris file server, with all the company jewels on it. I had a pair of
Intel X25 SSD mirrored log devices. One of them failed. The replacement
device came with a newer version of firmware on it. Now, instead
On 03/31/10 03:50 AM, Damon Atkins wrote:
Why do we still need the /etc/zfs/zpool.cache file???
(I could understand it was useful when zfs import was slow)
zpool import is now multi-threaded
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844191), hence a
lot faster, each disk
Would your users be concerned if there was a possibility that, after
extracting a 50 MB tarball, files are incomplete, whole subdirectories
are missing, or file permissions are incorrect?
Correction: Would your users be concerned if there was a possibility that
after extracting a 50MB
On Wed, 31 Mar 2010, Tim Cook wrote:
If there is ever another OpenSolaris formal release, then the situation will be
different.
C'mon now, have a little faith. It hasn't even slipped past March
yet :) Of course it'd be way more fun if someone from Sun threw
caution to the wind and told us
On Wed, March 31, 2010 12:23, Bob Friesenhahn wrote:
Yesterday I noticed that the Sun Studio 12 compiler (used to build
OpenSolaris) now costs a minimum of $1,015/year. The Premium
service plan costs $200 more.
I feel a great disturbance in the force. It is as if a great multitude of
On Wed, Mar 31, 2010 at 11:23 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Wed, 31 Mar 2010, Tim Cook wrote:
If there is ever another OpenSolaris formal release, then the situation
will be different.
C'mon now, have a little faith. It hasn't even slipped past March yet :)
On 31 Mar 2010, at 17:23, Bob Friesenhahn wrote:
Yesterday I noticed that the Sun Studio 12 compiler (used to build
OpenSolaris) now costs a minimum of $1,015/year. The Premium service plan
costs $200 more.
The download still seems to be a free, full-license copy for SDN members; the
On Mar 31, 2010, at 2:50 AM, Damon Atkins wrote:
Why do we still need the /etc/zfs/zpool.cache file???
(I could understand it was useful when zfs import was slow)
Yes. Imagine the case where your server has access to hundreds of LUs.
If you must probe each one, then booting can take a long time.
On 03/31/10 12:21 PM, lori.alt wrote:
The problem with splitting a root pool goes beyond the issue of the
zpool.cache file. If you look at the comments for 6939334
http://monaco.sfbay.sun.com/detail.jsf?cr=6939334, you will see other
files whose content is not correct when a root pool is
On Wed, Mar 31, 2010 at 11:39 AM, Chris Ridd chrisr...@mac.com wrote:
On 31 Mar 2010, at 17:23, Bob Friesenhahn wrote:
Yesterday I noticed that the Sun Studio 12 compiler (used to build
OpenSolaris) now costs a minimum of $1,015/year. The Premium service plan
costs $200 more.
The
I did those test and here are results:
r...@sl-node01:~# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
mypool01            91.9G   136G    23K  /mypool01
mypool01/storage01  91.9G   136G  91.7G  /mypool01/storage01
On Mar 31, 2010, at 2:05 AM, Dedhi Sujatmiko wrote:
Dear all,
I have a hardware-based storage array with a capacity of 192TB, sliced
into 64 LUNs of 3TB each.
What will be the best way to configure ZFS on this? Of course we are not
requiring the self-healing capability of ZFS.
On Wed, 31 Mar 2010, Chris Ridd wrote:
Yesterday I noticed that the Sun Studio 12 compiler (used to build
OpenSolaris) now costs a minimum of $1,015/year. The Premium
service plan costs $200 more.
The download still seems to be a free, full-license copy for SDN
members; the $1015 you quote
On 31 Mar 2010, at 17:50, Bob Friesenhahn wrote:
On Wed, 31 Mar 2010, Chris Ridd wrote:
Yesterday I noticed that the Sun Studio 12 compiler (used to build
OpenSolaris) now costs a minimum of $1,015/year. The Premium service
plan costs $200 more.
The download still seems to be a free,
On 03/31/10 10:42 AM, Frank Middleton wrote:
On 03/31/10 12:21 PM, lori.alt wrote:
The problem with splitting a root pool goes beyond the issue of the
zpool.cache file. If you look at the comments for 6939334
http://monaco.sfbay.sun.com/detail.jsf?cr=6939334, you will see other
files whose
On 3/27/2010 3:14 AM, Svein Skogen wrote:
On 26.03.2010 23:55, Ian Collins wrote:
On 03/27/10 09:39 AM, Richard Elling wrote:
On Mar 26, 2010, at 2:34 AM, Bruno Sousa wrote:
Hi,
The jumbo-frames in my case give me a boost of around 2 MB/s, so it's
not that much.
That is
On 04/ 1/10 01:51 AM, Charles Hedrick wrote:
We're getting the notorious "cannot destroy ... dataset already exists". I've
seen a number of reports of this, but none of the reports seem to get any response.
Fortunately this is a backup system, so I can recreate the pool, but it's going to take
rm == Robert Milkowski mi...@task.gda.pl writes:
rm This is not true. If the ZIL device would die *while the pool is
rm imported*, then ZFS would start using the ZIL within the pool and
rm continue to operate.
what you do not say is that a pool with a dead ZIL cannot be
'import -f'd. So, for
rm == Robert Milkowski mi...@task.gda.pl writes:
rm the reason you get better performance out of the box on Linux
rm as an NFS server is that it actually behaves as if the ZIL were
rm disabled
careful.
Solaris people have been slinging mud at Linux for things unfsd did in
spite of the fact
Karsten Weiss wrote:
Knowing that 100s of users could do this in parallel with good performance
is nice but it does not improve the situation for the single user who only
cares about his own tar run. If there's anything else we can do/try to improve
the single-threaded case I'm all ears.
A
# zfs destroy -r OIRT_BAK/backup_bad
cannot destroy 'OIRT_BAK/backup_...@annex-2010-03-23-07:04:04-bad': dataset
already exists
No, there are no clones.
Hi Ned,
If you look at the examples on the page that you cite, they start
with single-parity RAIDZ examples and then move to a double-parity RAIDZ
example with supporting text, here:
http://docs.sun.com/app/docs/doc/819-5461/gcvjg?a=view
Can you restate the problem with this page?
Thanks,
Edward Ned Harvey solaris2 at nedharvey.com writes:
Allow me to clarify a little further, why I care about this so much. I have
a Solaris file server, with all the company jewels on it. I had a pair of
Intel X25 SSD mirrored log devices. One of them failed. The replacement
device came
Hi Cindy,
This whole issue started when I asked this list for opinions on how I should
create zpools. It seems that one of my initial ideas, creating a vdev
with 3 disks in a raidz configuration, is a nonsense configuration.
Somewhere along the way I defended my initial idea with the fact
On 31/03/2010 17:31, Bob Friesenhahn wrote:
On Wed, 31 Mar 2010, Edward Ned Harvey wrote:
Would your users be concerned if there was a possibility that, after
extracting a 50 MB tarball, files are incomplete, whole
subdirectories are missing, or file permissions are incorrect?
Correction:
On 31/03/2010 17:22, Edward Ned Harvey wrote:
The advice I would give is: Do zfs autosnapshots frequently (say ... every
5 minutes, keeping the most recent 2 hours of snaps) and then run with no
ZIL. If you have an ungraceful shutdown or reboot, rollback to the latest
snapshot ... and
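(In practice that rollback step would look something like this -- a sketch; the
dataset and snapshot names are hypothetical:
# zfs rollback -r tank/home@zfs-auto-snap:frequent-2010-03-31-17:00
)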
On 31/03/2010 16:44, Bob Friesenhahn wrote:
On Wed, 31 Mar 2010, Robert Milkowski wrote:
or there might be an extra zpool level (or system wide) property to
enable checking checksums on every access from ARC - there will be a
significant performance impact but then it might be acceptable for
On 31/03/2010 21:38, Miles Nordin wrote:
rm Which is an expected behavior when you break NFS requirements
rm as Linux does out of the box.
wrong. The default is 'sync' in /etc/exports. The default has
changed, but the default is 'sync', and the whole thing is
well-documented.
Hi Folks,
I'm in a shop that's very resistant to change. The management here are looking
for major justification of a move away from UFS to ZFS for root file systems.
Does anyone know if there are any whitepapers/blogs/discussions extolling the
benefits of zfsroot over ufsroot?
Regards in
On 2010/03/31 05:13, Darren J Moffat wrote:
On 31/03/2010 10:27, Erik Trimble wrote:
Orvar's post over in opensol-discuss has me thinking:
After reading the paper and looking at design docs, I'm wondering if
there is some facility to allow for
I assume the swap, dumpadm, and grub issues are because the pool has a
different name now, but is it still a problem if you take it to a *different
system*, boot off a CD, and change the name back to rpool? (Which is most
likely unsupported, i.e. no help to get it working.)
Over 10 years ago (way before flash archive
On Mar 31, 2010, at 19:41, Robert Milkowski wrote:
I double checked the documentation and you're right - the default
has changed to sync.
I haven't found in which RH version it happened but it doesn't
really matter.
From the SourceForge site:
Since version 1.0.1 of the NFS utilities
Brett wrote:
Hi Folks,
I'm in a shop that's very resistant to change. The management here are looking
for major justification of a move away from UFS to ZFS for root file systems.
Does anyone know if there are any whitepapers/blogs/discussions extolling the
benefits of zfsroot over ufsroot?
So we tried recreating the pool and sending the data again.
1) compression wasn't set on the copy, even though I did send -R, which is
supposed to send all properties
2) I tried killing the send | receive pipe. Receive couldn't be killed. It hung.
3) This is Solaris Cluster. We tried forcing a
On Thu, Apr 01, 2010 at 12:38:29AM +0100, Robert Milkowski wrote:
So I wasn't saying that it can work in all circumstances, but rather
I was trying to say that it probably shouldn't be dismissed on a
performance argument alone, as for some use cases
It would be of great
On Mar 31, 2010, at 5:39 AM, Robert Milkowski mi...@task.gda.pl wrote:
On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss
Use something other than Open/Solaris with ZFS as an NFS
server? :)
I don't think you'll find the performance you paid for with ZFS and
Solaris at this time. I've been
On Mar 31, 2010, at 7:11 PM, Ross Walker wrote:
On Mar 31, 2010, at 5:39 AM, Robert Milkowski mi...@task.gda.pl wrote:
On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss
Use something other than Open/Solaris with ZFS as an NFS server? :)
I don't think you'll find the performance you paid
On 04/ 1/10 02:01 PM, Charles Hedrick wrote:
So we tried recreating the pool and sending the data again.
1) compression wasn't set on the copy, even though I did send -R, which is
supposed to send all properties
2) I tried killing the send | receive pipe. Receive couldn't be killed. It hung.
On Mar 31, 2010, at 10:25 PM, Richard Elling
richard.ell...@gmail.com wrote:
On Mar 31, 2010, at 7:11 PM, Ross Walker wrote:
On Mar 31, 2010, at 5:39 AM, Robert Milkowski mi...@task.gda.pl
wrote:
On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss
Use something other than Open/Solaris
Ah, I hadn't thought about that. That may be what was happening. Thanks.
So that eliminates one of my concerns. However the other one is still an issue.
Presumably Solaris Cluster shouldn't import a pool that's still active on the
other system. We'll be looking more carefully into that.
On Wed, Mar 31, 2010 at 7:53 PM, Erik Trimble erik.trim...@oracle.com wrote:
Brett wrote:
Hi Folks,
I'm in a shop that's very resistant to change. The management here are
looking for major justification of a move away from UFS to ZFS for root file
systems. Does anyone know if there are any
On 04/ 1/10 02:01 PM, Charles Hedrick wrote:
So we tried recreating the pool and sending the data again.
1) compression wasn't set on the copy, even though I did send -R, which is
supposed to send all properties
Was compression explicitly set on the root filesystem of your set?
I don't
I see the source for some confusion. On the ZFS Best Practices page:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
It says:
Failure of the log device may cause the storage pool to be inaccessible if
you are running the Solaris Nevada release prior to build 96 and
A MegaRAID card with write-back cache? It should also be cheaper than
the F20.
I haven't posted results yet, but I just finished a few weeks of extensive
benchmarking various configurations. I can say this:
WriteBack cache is much faster than naked disks, but if you can buy an SSD
or two for
We ran into something similar with these drives in an X4170 that turned
out to be an issue of the preconfigured logical volumes on the drives.
Once we made sure all of our Sun PCI HBAs were running the exact same
version of firmware and recreated the volumes on new drives arriving from
On Mar 31, 2010, at 8:58 PM, Edward Ned Harvey wrote:
We ran into something similar with these drives in an X4170 that turned
out to be an issue of the preconfigured logical volumes on the drives.
Once we made sure all of our Sun PCI HBAs were running the exact same
version of firmware
On Mar 31, 2010, at 9:22 AM, Edward Ned Harvey wrote:
Would your users be concerned if there was a possibility that, after
extracting a 50 MB tarball, files are incomplete, whole
subdirectories are missing, or file permissions are incorrect?
Correction: Would your users be concerned if