Were you able to get more insight into this problem?
U7 did not encounter such problems.
Quote: cindys
3. Boot failure from a previous BE if either #1 or #2 failure occurs.
#1 or #2 were not relevant in my case. I just found I could not boot into the
old U7 boot environment. I am happy with the workaround shinsui points out, so
this is purely for your information.
Quote: renil82
U7 did not encounter
Hi,
Now I have tried to restart the resilvering by detaching c9t7d0 and then
attaching it again to the mirror. The resilvering starts, but now,
after almost 24 hours, it is still going.
From the iostat output, data is still flowing:
tank-nfs  446G  2,28T  112  8  13,5M  35,9K
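For reference, a minimal sketch of the detach/re-attach sequence described
above (the surviving half of the mirror, c9t6d0 here, is an assumed name):

  zpool detach tank-nfs c9t7d0
  zpool attach tank-nfs c9t6d0 c9t7d0   # re-attach; resilvering starts automatically
  zpool status tank-nfs                 # shows resilver progress and an estimate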
I have a ZFS/Xen server for my home network. The box itself has two
physical NICs. I want Dom0 to be on my management network and the
guest domains to be on the DMZ and private networks. The private
network is where all my home computers are, and I would like to export
iSCSI volumes directly
Kent Watsen wrote:
I have a ZFS/Xen server for my home network. The box itself has two
physical NICs. I want Dom0 to be on my management network and the
guest domains to be on the DMZ and private networks. The private
network is where all my home computers are, and I would like to export
Dear all,
I was interested in the performance difference between filesystem operations
inside a local zone and the global zone, so I used filebench and ran several
performance tests with the OLTP script for filebench. Here are some of my
results:
- In the global zone (filebench operates on
Very easy:
- make a directory
- mount it using lofs
- run filebench on both directories.
It seems that we need to make lofs faster.
Casper
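Following up on the suggestion above, a minimal sketch of the comparison, run
in the global zone (directory names are hypothetical):

  mkdir -p /tank/fbtest /mnt/fbtest-lofs
  mount -F lofs /tank/fbtest /mnt/fbtest-lofs
  # run the identical filebench workload once against /tank/fbtest (direct)
  # and once against /mnt/fbtest-lofs (through lofs), then compare the results

If the lofs-mounted run is markedly slower, the overhead is in lofs rather
than in the zone itself.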
I'm no expert, but if I were in the same situation, I would definitely keep the
integrity check on. Especially since you're only running RAID-5, the sooner you
know there is a problem the better. Even if ZFS cannot fix it for you, it can
still be a useful tool. Basically, a few errors may not be
I did that.
Isn't that sufficient proof?
Perhaps run both tests in the global zone?
Casper
On Mon, 19 Oct 2009, Espen Martinsen wrote:
Let's say I've chosen to live with a zpool without redundancy (SAN
disks, which actually have RAID-5 in the disk cabinet)
What benefit are you hoping zfs will provide in this situation? Examine
your situation carefully and determine what filesystem works
A word of caution: be sure not to read too much into the fact that the F20 is
included in the Exadata Machine.
From what I've heard, the flash_cache feature of Oracle 11.2.0 that was enabled
in the beta is not working in the production release for anyone except the
Exadata 2.
The question is, why
On Tue, Oct 20, 2009 at 10:23 AM, Robert Dupuy rdu...@umpublishing.org wrote:
A word of caution, be sure not to read a lot into the fact that the F20 is
included in the Exadata Machine.
From what I've heard the flash_cache feature of 11.2.0 Oracle that was
enabled in beta, is not working in
My post is a caution to test the performance, and get your own results.
http://www.storagesearch.com/ssd.html
Please see the entry for October 12th.
The result page you linked to shows that you can use an arbitrarily high
number of threads, spread evenly across a large number of SAS
Hi Stuart,
The reason used is larger than volsize is that we
aren't accounting for metadata, which is covered by this CR:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6429996
6429996 zvols don't reserve enough space for requisite meta data
Metadata is usually only a
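As a quick way to see the gap being discussed, something like the following
shows the configured volume size next to the space actually consumed (dataset
name is hypothetical):

  zfs get volsize,used,refreservation tank/myvol

Until CR 6429996 is addressed, used can exceed volsize by the few percent of
metadata the poster above observes.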
What benefit are you hoping zfs will provide in this situation? Examine
your situation carefully and determine what filesystem works best for you.
There are many reasons to use ZFS, but if your configuration isn't set up
to take advantage of those reasons, then there's a disconnect
Peter Wilk wrote:
tank/apps will be mounted as /apps -- it needs to be set to 10G.
tank/apps/data1 will need to be mounted as /apps/data1 and set to 20G on
its own.
The question is:
If refquota is being used to set the filesystem sizes on /apps and
/apps/data1, /apps/data1 will not be
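For reference, a minimal sketch of the layout being asked about (dataset names
are taken from the message; note that refquota limits only the space
referenced by the dataset itself, not by its descendants, so the 20G on
tank/apps/data1 is independent of the 10G on tank/apps):

  zfs create -o mountpoint=/apps tank/apps
  zfs create tank/apps/data1            # inherits the /apps/data1 mountpoint
  zfs set refquota=10G tank/apps
  zfs set refquota=20G tank/apps/data1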
On Tue, 20 Oct 2009, Robert Dupuy wrote:
My post is a caution to test the performance, and get your own results.
http://www.storagesearch.com/ssd.html
Please see the entry for October 12th.
I see an editorial based on no experience and little data.
The result page you linked to shows
Cindy,
Thanks for the pointer. Until this is resolved, is there some documentation
available that will let me calculate this by hand? I would like to know how
large the current 3-4% metadata overhead I am observing can potentially grow.
Thanks.
On Oct 20, 2009, at 8:57 AM, Cindy
I agree that assuming the F20 works well for your application because
it's included in the Exadata 2 probably isn't logical.
Equally, assuming it doesn't work isn't logical.
Yes, the X-25E is clearly a competitor. It was once part of the Pillar Data
Systems setup, and was disqualified
Alastair Neil wrote:
However, the user or group quota is applied when a clone or a
snapshot is created from a file system that has a user or group quota.
Applied to a clone, I understand what that means; applied to a
snapshot, it is not so clear. Does it mean enforced on the original dataset?
On Mon, Oct 19, 2009 at 09:02:20PM -0500, Albert Chin wrote:
On Mon, Oct 19, 2009 at 03:31:46PM -0700, Matthew Ahrens wrote:
Thanks for reporting this. I have fixed this bug (6822816) in build
127.
Thanks. I just installed OpenSolaris Preview based on 125 and will
attempt to apply the
The user/group used can be out of date by a few seconds, same as the used
and referenced properties. You can run sync(1M) to wait for these values
to be updated. However, that doesn't seem to be the problem you are
encountering here.
Can you send me the output of:
zfs list zpool1/sd01_mail
On Tue, 20 Oct 2009, Robert Dupuy wrote:
I'm not here to promote the X-25E; however, Sun does sell a rebadged
X-25E in their own servers, and my particular salesman spec'd both
an X-25E-based system and an F20-based system, so they were
clearly pitched against each other.
Sun salesmen
Alastair Neil wrote:
On Tue, Oct 20, 2009 at 12:12 PM, Matthew Ahrens matthew.ahr...@sun.com wrote:
Alastair Neil wrote:
However, the user or group quota is applied when a clone or a
snapshot is created from a file system that has a
Heya all,
I'm working on testing ZFS with NFS, and I could use some guidance - read
speeds are a bit less than I expected.
Over a gig-e line, we're seeing ~30 MB/s reads on average - it doesn't seem to
matter whether we're doing large numbers of small files or small numbers of
large files, the speed
cross-posting to nfs-discuss
On Oct 20, 2009, at 10:35 AM, Gary Gogick wrote:
Heya all,
I'm working on testing ZFS with NFS, and I could use some guidance -
read speeds are a bit less than I expected.
Over a gig-e line, we're seeing ~30 MB/s reads on average - doesn't
seem to matter if
People here dream of using it for the ZFS intent log but it is clear
that this was not Sun's initial focus for the product.
At the moment I'm considering using a Gigabyte iRAM as a ZIL device.
(see
http://cgi.ebay.com/Gigabyte-IRAM-I-Ram-GC-RAMDISK-SSD-4GB-PCI-card-SATA_W0Q
On 20 October, 2009 - Matthew Ahrens sent me these 2,2K bytes:
The user/group used can be out of date by a few seconds, same as the
used and referenced properties. You can run sync(1M) to wait for
these values to be updated. However, that doesn't seem to be the problem
you are
Hi,
at the moment I am running a pool consisting of 4 drives (Seagate Enterprise
SATA disks) assembled into 2 mirrors.
Now I want to add two more drives to extend the capacity to 1.5 times the
old capacity.
As these mirrors will be striped in the pool I want to know what will
happen to the
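For reference, a sketch of the operation being described (pool and device
names are hypothetical):

  zpool add tank mirror c1t4d0 c1t5d0
  zpool status tank    # the new pair appears as a third top-level mirror vdev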
mmusa...@east.sun.com said:
What benefit are you hoping zfs will provide in this situation? Examine
your situation carefully and determine what filesystem works best for you.
There are many reasons to use ZFS, but if your configuration isn't set up to
take advantage of those reasons, then
On Tue, 20 Oct 2009, Matthias Appel wrote:
As these mirrors will be striped in the pool, I want to know what will
happen to the existing data of the pool.
Will it stay where it is, with only new data written to the new
mirror, or will the existing data be spread over all 3 mirrors?
Tomas Ögren wrote:
On a related note, there is a way to still have quota used even after
all files are removed, S10u8/SPARC:
In this case there are two directories that have not actually been removed.
They have been removed from the namespace, but they are still open, e.g. due to
some
On 20 October, 2009 - Matthew Ahrens sent me these 0,7K bytes:
Tomas Ögren wrote:
On a related note, there is a way to still have quota used even after
all files are removed, S10u8/SPARC:
In this case there are two directories that have not actually been
removed. They have been removed
Is anyone else tired of seeing the word redundancy? (:-)
Only in a perfect world (tm) ;-)
IMHO there is no such thing as too much redundancy.
In the real world the possibilities of redundancy are only limited by money,
be it online redundancy (mirror/RAIDZx), offline redundancy (tape
On Tue, 20 Oct 2009, Matthias Appel wrote:
IMHO there is no such thing as too much redundancy.
In the real world the possibilities of redundancy are only limited by money,
Redundancy costs in terms of both time and money. Redundant hardware
which fails or feels upset requires time to
Hi Casper,
I did that.
1. I created a directory and mounted the device in the global zone - ran
filebench
unmounted the device
2. I created a directory and mounted the device in the local zone - ran
filebench
-- No difference
It seems the loopback driver (lofs) causes the performance degradation - but how
Tomas Ögren wrote:
On 20 October, 2009 - Matthew Ahrens sent me these 0,7K bytes:
Tomas Ögren wrote:
On a related note, there is a way to still have quota used even after
all files are removed, S10u8/SPARC:
In this case there are two directories that have not actually been
removed. They have
Redundancy costs in terms of both time and money. Redundant hardware
which fails or feels upset requires time to administer and repair.
This is why there is indeed such a thing as too much redundancy.
Yes, that's true, but all I wanted to say is: if there is an infinite amount
of money, there can be
You will see more IOPS/bandwidth, but if your existing disks are very
full, then more traffic may be sent to the new disks, which results in
less benefit.
OK, so that means that, over time, data will be distributed across all mirrors
(assuming all blocks are eventually rewritten)?
I think a useful
Hi,
Something like
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6855425 ?
Bruno
Matthias Appel wrote:
You will see more IOPS/bandwidth, but if your existing disks are very
full, then more traffic may be sent to the new disks, which results in
less benefit.
OK, that
On Oct 20, 2009, at 8:23 AM, Robert Dupuy wrote:
A word of caution, be sure not to read a lot into the fact that the
F20 is included in the Exadata Machine.
From what I've heard the flash_cache feature of 11.2.0 Oracle that
was enabled in beta, is not working in the production release, for
Once data resides within a pool, there should be an efficient method of moving
it from one ZFS file system to another. Think Link/Unlink vs. Copy/Remove.
Here's my scenario... When I originally created a 3TB pool, I didn't know the
best way to carve up the space, so I used a single, flat ZFS file
Gary
Were you measuring the Linux NFS write performance? It's well known
that Linux can use NFS in a very "unsafe" mode and report the write
complete when it is not all the way to safe storage. This is often
reported as Solaris having slow NFS write performance. This link does not
mention NFS v4
Richard Elling wrote:
I think where we stand today, the higher-level systems questions of
redundancy tend to work against builtin cards like the F20. These
sorts of cards have been available in one form or another for more
than 20 years, and yet they still have limited market share --
there is no consistent latency measurement in the industry
You bring up an important point, as did another poster earlier in the thread,
and certainly it's an issue that needs to be addressed.
I'd be surprised if anyone could answer such a question while simultaneously
being credible.
So, yes, SSD and HDD are different, but latency is still important.
But on an SSD, write performance is much more unpredictable than on an HDD.
If you want to write to an SSD, you will have to erase the used blocks (assuming
this is not a brand-new SSD) before you are able to write to them.
This takes
But this is concerning reads not writes.
-Ross
On Oct 20, 2009, at 4:43 PM, Trevor Pretty trevor_pre...@eagle.co.nz
wrote:
Gary
Were you measuring the Linux NFS write performance? It's well known
that Linux can use NFS in a very unsafe mode and report the write
complete when it is
No, it concerns the difference between reads and writes.
The write performance may be being overstated!
Ross Walker wrote:
But this is concerning reads not writes.
-Ross
On Oct 20, 2009, at 4:43 PM, Trevor Pretty trevor_pre...@eagle.co.nz
wrote:
On Tue, Oct 20, 2009 at 10:35 AM, Gary Gogick g...@workhabit.com wrote:
We're using NFS v4 via TCP, serving various Linux clients (the majority are
CentOS 5.3). Connectivity is presently provided by a single gigabit
ethernet link; entirely conventional configuration (no jumbo frames/etc).
Matthias Appel wrote:
But on an SSD, write performance is much more unpredictable than on an HDD.
If you want to write to an SSD, you will have to erase the used blocks (assuming
this is not a brand-new SSD) before you are able to write to them.
This takes much time, assuming the drive's firmware
From: Bruno Sousa [mailto:bso...@epinfante.com]
Sent: Tuesday, 20 October 2009 22:20
To: Matthias Appel
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Adding another mirror to storage pool
Hi,
Something like
On Oct 20, 2009, at 1:58 PM, Robert Dupuy wrote:
there is no consistent latency measurement in the industry
You bring up an important point, as did another poster earlier in
the thread, and certainly it's an issue that needs to be addressed.
I'd be surprised if anyone could answer such a
I wrote:
Is anyone else tired of seeing the word redundancy? (:-)
matthias.ap...@lanlabor.com said:
Only in a perfect world (tm) ;-)
IMHO there is no such thing as too much redundancy. In the real world the
possibilities of redundancy are only limited by money,
Sigh. I was just joking
I have an Intel X25-E 32G in the mail (actually the Kingston version), and
wanted to get a sanity check before I start.
System:
Dell 2950
16G RAM
16 1.5T SATA disks in a SAS chassis hanging off of an LSI 3801e, no extra drive
slots, a single zpool.
snv_124, but with my zpool still running at
On Tue, 20 Oct 2009, Matthias Appel wrote:
OK, that means, over time, data will be distributed across all mirrors?
(assuming all blocks will be written once)
Yes, but it is quite rare for all files to be re-written. If you have
reliable storage somewhere else, you could send your existing
On Tue, 20 Oct 2009, Scott Meilicke wrote:
A. Use all 32G for the ZIL
B. Use 8G for the ZIL, 24G for an L2ARC. Any issues with slicing up an SSD like
this?
C. Use 8G for the ZIL, 16G for an L2ARC, and reserve 8G to be used as a ZIL for
the future zpool.
Since my future zpool would just be
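For option B, a sketch of attaching the slices once they have been laid out
with format(1M) (pool, device, and slice names are hypothetical):

  zpool add tank log c2t1d0s0     # ~8G slice as a separate intent log (slog)
  zpool add tank cache c2t1d0s1   # ~24G slice as L2ARC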
On Oct 20, 2009, at 4:44 PM, Bob Friesenhahn wrote:
On Tue, 20 Oct 2009, Scott Meilicke wrote:
A. Use all 32G for the ZIL
B. Use 8G for the ZIL, 24G for an L2ARC. Any issues with slicing up
an SSD like this?
C. Use 8G for the ZIL, 16G for an L2ARC, and reserve 8G to be used
as a ZIL for
On Oct 20, 2009, at 5:28 PM, Trevor Pretty trevor_pre...@eagle.co.nz
wrote:
No, it concerns the difference between reads and writes.
The write performance may be being overstated!
The clients are Linux, the server is Solaris.
True, the mounts on the Linux clients were async, but so are
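One way to take client-side async behavior out of the picture would be to
remount a Linux client with synchronous writes and repeat the timing (server
path and mount point are hypothetical):

  mount -t nfs4 -o sync server:/tank/nfs /mnt/nfs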
On Tue, 20 Oct 2009, Richard Elling wrote:
The ZIL device will never require more space than RAM.
In other words, if you only have 16 GB of RAM, you won't need
more than that for the separate log.
Does the wasted storage space annoy you? :-)
What happens if the machine is upgraded to 32GB of
On Tue, Oct 20, 2009 at 3:58 PM, Robert Dupuy rdu...@umpublishing.org wrote:
there is no consistent latency measurement in the industry
You bring up an important point, as did another poster earlier in the
thread, and certainly it's an issue that needs to be addressed.
I'd be surprised if
Trevor/all,
We've been timing the copying of actual data (1GB of assorted files,
generally ~1MB each, with numerous larger files thrown in) in an attempt to
simulate real-world use. We've been copying different sets of data around
to try to avoid anything being cached anywhere.
I don't recall the
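A simple read-throughput check from one of the Linux clients, using a file
large enough to defeat client caching (path is hypothetical):

  time dd if=/mnt/nfs/testset/bigfile of=/dev/null bs=1M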
On Tue, 20 Oct 2009, Richard Elling wrote:
Intel: X-25E read latency 75 microseconds
... but they don't say where it was measured or how big it was...
Probably measured using a logic analyzer and measuring the time from
the last bit of the request going in, to the first bit of the
bjquinn - regarding the thread at
http://www.opensolaris.org/jive/thread.jspa?threadID=89567 I would like to
contact you.
I am new to ZFS and need exactly what you described your requirements were,
and you said you figured out a solution for it.
Would you like to share the solution step by step with me?
System:
Dell 2950
16G RAM
16 1.5T SATA disks in a SAS chassis hanging off of an LSI 3801e, no
extra drive slots, a single zpool.
snv_124, but with my zpool still running at the 2009.06 version (14).
My plan is to put the SSD into an open disk slot on the 2950, but I will
have to configure
The ZIL is a write-only log that is only read after a power failure. Several GB
is large enough for most workloads.
You can't use the Intel X25-E because it has a 32 or 64 MB volatile cache that
can neither be disabled nor flushed by ZFS.
Imagine your server has a power failure while writing