Looks like it was RAM. I ran memtest+ 4.00, and it found no problems. I removed
2 of the 3 sticks of RAM, ran a backup, and had no errors. I'm running more
extensive tests, but it looks like that was it. A new motherboard, CPU and ECC
RAM are on the way to me now.
Hello,
For desktop use, and presumably rapidly changing non-desktop uses, I
find the ARC cache pretty annoying in its behavior. For example this
morning I had to hit my launch-terminal key perhaps 50 times (roughly)
before it would start completing without disk I/O. There are plenty of
other
On 5 Apr 2010, at 04:35, Edward Ned Harvey wrote:
When running the card in copyback write cache mode, I got horrible
performance (with zfs), much worse than with copyback disabled
(which I believe should mean it does write-through), when tested
with filebench.
When I benchmark my disks, I
Hi all,
While setting up our X4140 I have, following suggestions, added two
SSDs as log devices as follows
zpool add tank log c1t6d0 c1t7d0
I currently have
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Marcus Wilhelmsson
I have a problem with my zfs system, it's getting slower and slower
over time. When the OpenSolaris machine is rebooted and just started I
get about 30-35MB/s in read and
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Marcus Wilhelmsson
I have a problem with my zfs system, it's getting slower and slower
over time. When the OpenSolaris machine is rebooted and just started I
get about 30-35MB/s in
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Andreas Höschler
I would like to remove the two SSDs as log devices from the pool and
instead add them as a separate pool for sole use by the database to
see how this enhances performance.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Marcus Wilhelmsson
pool: s1
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
s1 ONLINE 0 0 0
While testing a zpool with a different storage adapter using my blkdev
device, I did a test which made a disk unavailable -- all attempts to
read from it report EIO.
I expected my configuration (which is a 3 disk test, with 2 disks in a
RAIDZ and a hot spare) to work where the hot spare would
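For concreteness, a pool matching that description could be created like this; the pool and device names are hypothetical, not taken from the original test:

```shell
# Hypothetical 3-disk layout: a 2-disk RAIDZ vdev plus one hot spare.
# Device names (c0t0d0 etc.) are made up for illustration.
zpool create testpool raidz c0t0d0 c0t1d0 spare c0t2d0
```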
Alright, I've made the benchmarks and there isn't a difference worth mentioning,
except that I only get about 30MB/s (to my Mac, which has an SSD as system
disk). I've also tried copying to a RAM disk with slightly better results.
Well, now that I've restarted the server I probably won't see the
On 4/4/2010 11:04 PM, Edward Ned Harvey wrote:
Actually, it's my experience that Sun (and other vendors) do exactly
that for you when you buy their parts - at least for rotating drives; I
have no experience with SSDs.
The Sun disk label shipped on all the drives is setup to make the drive
Hi Edward,
thanks a lot for your detailed response!
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Andreas Höschler
• I would like to remove the two SSDs as log devices from the pool and
instead add them as a separate pool for sole use by
Not true. There are different ways that a storage array, and its
controllers, connect to the host-visible front-end ports, which might be
confusing the author, but I/O isn't duplicated as he suggests.
On 4/4/2010 9:55 PM, Brad wrote:
I had always thought that with mpxio, it load-balances IO
I would appreciate if somebody can clarify a few points.
I am doing some random WRITES (100% writes, 100% random) testing and observe
that the ARC grows way beyond the hard limit during the test. The hard limit is
set to 512 MB via /etc/system and I see the size going up to 1 GB - how come is it
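The poster mentions setting the hard limit via /etc/system; a sketch of what that entry typically looks like (the 512 MB value mirrors the post, in the standard Solaris tunable syntax):

```shell
# /etc/system fragment: cap the ZFS ARC at 512 MB (0x20000000 bytes).
# Takes effect after a reboot.
set zfs:zfs_arc_max = 0x20000000
```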
Response below...
2010/4/5 Andreas Höschler ahoe...@smartsoft.de
Hi Edward,
thanks a lot for your detailed response!
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Andreas Höschler
• I would like to remove the two SSDs as log
On Apr 5, 2010, at 11:43 AM, Garrett D'Amore wrote:
I see ereport.fs.zfs.io_failure, and ereport.fs.zfs.probe_failure. Also,
ereport.io.service.lost and ereport.io.device.inval_state. There is indeed a
fault.fs.zfs.device in the list as well.
The ereports are not interesting, only the
On Sun, 4 Apr 2010, Brad wrote:
I had always thought that with mpxio, it load-balances IO request
across your storage ports but this article
http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/
has got me thinking it's not true.
The available bandwidth is 2 or 4Gb/s
Hi Khyron,
No, he did *not* say that a mirrored SLOG has no benefit,
redundancy-wise.
He said that YOU do *not* have a mirrored SLOG. You have 2 SLOG
devices
which are striped. And if this machine is running Solaris 10, then
you cannot
remove a log device because those updates have not
On Mon, 5 Apr 2010, Peter Schuller wrote:
For desktop use, and presumably rapidly changing non-desktop uses, I
find the ARC cache pretty annoying in its behavior. For example this
morning I had to hit my launch-terminal key perhaps 50 times (roughly)
before it would start completing without
It sounds like you are complaining about how FreeBSD has implemented zfs in
the system rather than about zfs in general. These problems don't occur
under Solaris. Zfs and the kernel need to agree on how to allocate/free
memory, and it seems that Solaris is more advanced than FreeBSD in this
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, is
released on Genunix! This release marks the end of SXCE releases and Sun
Microsystems as we know it! It is dubbed the Sun-set release! Many thanks to Al
at Genunix.org for download hosting and serving the
I've seen the Nexenta and EON webpages, but I'm not looking to build my own.
Is there anything out there I can just buy?
-Kyle
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Install Nexenta on a Dell PowerEdge?
or one of these http://www.pogolinux.com/products/storage_director
On Mon, Apr 5, 2010 at 9:48 PM, Kyle McDonald kmcdon...@egenera.com wrote:
I've seen the Nexenta and EON webpages, but I'm not looking to build my
own.
Is there anything out there I can
Kyle McDonald writes:
I've seen the Nexenta and EON webpages, but I'm not looking to build my own.
Is there anything out there I can just buy?
In Germany, someone sells preconfigured hardware based on Nexenta:
On Mon, 5 Apr 2010, Peter Schuller wrote:
It may be FreeBSD specific, but note that I am not talking about the
amount of memory dedicated to the ARC and how it balances with free
memory on the system. I am talking about eviction policy. I could be
wrong but I didn't think ZFS port made
- Kyle McDonald kmcdon...@egenera.com wrote:
I've seen the Nexenta and EON webpages, but I'm not looking to build
my own.
Is there anything out there I can just buy?
I've set up a few systems with supermicro hardware - works well and doesn't cost
a whole lot
roy
--
Roy Sigurd
The ARC is designed to use as much memory as is available up to a limit. If
the kernel allocator needs memory and there is none available, then the
allocator requests memory back from the zfs ARC. Note that some systems have
multiple memory allocators. For example, there may be a memory
On Apr 5, 2010, at 2:23 PM, Peter Schuller wrote:
That's a very general statement. I am talking about specifics here.
For example, you can have mountains of evidence that shows that a
plain LRU is optimal (under some conditions). That doesn't change
the fact that if I want to avoid a
In simple terms, the ARC is divided into a MRU and MFU side.
target size (c) = target MRU size (p) + target MFU size (c-p)
On Solaris, to get from the MRU to the MFU side, the block must be
read at least once in 62.5 milliseconds. For pure read-once workloads,
the data won't make it to the
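On Solaris the current targets can be inspected through the arcstats kstat; a minimal sketch (statistic names assumed to follow the c/p naming above):

```shell
# Print ARC target size (c) and MRU target (p); MFU target is c - p.
# kstat accepts multiple module:instance:name:statistic operands.
kstat -p zfs:0:arcstats:c zfs:0:arcstats:p
```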
On 04/05/10 15:24, Peter Schuller wrote:
In the urxvt case, I am basing my claim on informal observations.
I.e., hit terminal launch key, wait for disks to rattle, get my
terminal. Repeat. Only by repeating it very many times in very rapid
succession am I able to coerce it to be cached such that
On Apr 5, 2010, at 3:24 PM, Peter Schuller wrote:
I will have to look into it in better detail to understand the
consequences. Is there a paper that describes the ARC as it is
implemented in ZFS (since it clearly diverges from the IBM ARC)?
There are various blogs, but perhaps the best
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Andreas Höschler
Thanks for the clarification! This is very annoying. My intent was to
create a log mirror. I used
zpool add tank log c1t6d0 c1t7d0
and this was obviously wrong.
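For the record, the mirrored variant of that command differs only by the mirror keyword (device names as in the original):

```shell
# Adds a single mirrored log device instead of two striped log devices.
zpool add tank log mirror c1t6d0 c1t7d0
```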
From: Kyle McDonald [mailto:kmcdon...@egenera.com]
So does your HBA have newer firmware now than it did when the first
disk
was connected?
Maybe it's the HBA that is handling the new disks differently now, than
it did when the first one was plugged in?
Can you down rev the HBA FW? Do you
On Apr 5, 2010, at 3:38 AM, Garrett D'Amore wrote:
Am I missing something here? Under what conditions can I expect hot spares
to be recruited?
Hot spares are activated by the zfs-retire agent in response to a list.suspect
event containing one of the following faults:
On 04/ 5/10 05:28 AM, Eric Schrock wrote:
On Apr 5, 2010, at 3:38 AM, Garrett D'Amore wrote:
Am I missing something here? Under what conditions can I expect hot spares to
be recruited?
Hot spares are activated by the zfs-retire agent in response to a list.suspect
event containing
On 04/05/10 11:43, Andreas Höschler wrote:
Hi Khyron,
No, he did *not* say that a mirrored SLOG has no benefit,
redundancy-wise.
He said that YOU do *not* have a mirrored SLOG. You have 2 SLOG devices
which are striped. And if this machine is running Solaris 10, then
you cannot
remove a
On Sun, Apr 04, 2010 at 11:46:16PM -0700, Willard Korfhage wrote:
Looks like it was RAM. I ran memtest+ 4.00, and it found no problems.
Then why do you suspect the ram?
Especially with 12 disks, another likely candidate could be an
overloaded power supply. While there may be problems showing
I'm wondering if the author is talking about cache mirroring where the cache
is mirrored between both controllers. If that is the case, is he saying that
for every write to the active controller, a second write is issued on the passive
controller to keep the cache mirrored?
Hi Folks:
I'm wondering what is the correct flow when both raid5 and de-dup are
enabled on a storage volume
I think we should do de-dup first and then raid5 ... is that
understanding correct?
Thanks!
On Mon, Apr 05, 2010 at 07:43:26AM -0400, Edward Ned Harvey wrote:
Is the database running locally on the machine? Or at the other end of
something like nfs? You should have better performance using your present
config than just about any other config ... By enabling the log devices,
such as
On Mon, Apr 05, 2010 at 06:32:13PM -0700, Learner Study wrote:
I'm wondering what is the correct flow when both raid5 and de-dup are
enabled on a storage volume
I think we should do de-dup first and then raid5 ... is that
understanding correct?
Not really. Strictly speaking, ZFS
Hi Jeff:
I'm a bit confused...did you say Correct to my orig email or the
reply from Daniel...Is there a doc that may explain it better?
Thanks!
On Mon, Apr 5, 2010 at 6:54 PM, jeff.bonw...@oracle.com wrote:
Correct.
Jeff
Sent from my iPhone
On Apr 5, 2010, at
The author mentions multipathing software in the blog entry. Kind of
hard to mix that up with cache mirroring if you ask me.
On 4/5/2010 9:16 PM, Brad wrote:
I'm wondering if the author is talking about cache mirroring where the cache
is mirrored between both controllers. If that is the
On Mon, Apr 5, 2010 at 8:16 PM, Brad bene...@yahoo.com wrote:
I'm wondering if the author is talking about cache mirroring where the
cache is mirrored between both controllers. If that is the case, is he
saying that for every write to the active controller, a second write is issued
on the
On Mon, Apr 05, 2010 at 06:58:57PM -0700, Learner Study wrote:
Hi Jeff:
I'm a bit confused...did you say Correct to my orig email or the
reply from Daniel...
Jeff is replying to your mail, not mine.
It looks like he's read your question a little differently. By that
reading, you are
It certainly has symptoms that match a marginal power supply, but I measured
the power consumption some time ago and found it comfortably within the power
supply's capacity. I've also wondered if the RAM is fine, but there is just
some kind of flaky interaction of the ram configuration I had
On Apr 5, 2010, at 6:32 PM, Learner Study wrote:
Hi Folks:
I'm wondering what is the correct flow when both raid5 and de-dup are
enabled on a storage volume
I think we should do de-dup first and then raid5 ... is that
understanding correct?
Yes. If you look at the (somewhat
On Mon, Apr 5, 2010 at 9:39 PM, Willard Korfhage opensola...@familyk.org wrote:
It certainly has symptoms that match a marginal power supply, but I
measured the power consumption some time ago and found it comfortably within
the power supply's capacity. I've also wondered if the RAM is fine,
On Mon, Apr 05, 2010 at 09:46:58PM -0500, Tim Cook wrote:
On Mon, Apr 5, 2010 at 9:39 PM, Willard Korfhage
opensola...@familyk.org wrote:
It certainly has symptoms that match a marginal power supply, but I
measured the power consumption some time ago and found it comfortably within
the
Memtest didn't show any errors, but between Frank, early in the thread, saying
that he had found memory errors that memtest didn't catch, and removal of DIMMs
apparently fixing the problem, I too soon jumped to the conclusion it was the
memory. Certainly there are other explanations.
I see
On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote:
By the way, I see that now one of the disks is listed as degraded - too many
errors. Is there a good way to identify exactly which of the disks it is?
It's hidden in iostat -E, of all places.
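A sketch of that, using the standard Solaris iostat flags; -E prints the per-device error counts and -n adds descriptive names, and the serial numbers in the output help map a cXtYdZ name to a physical drive:

```shell
# Per-device soft/hard/transport error counts plus vendor, model and
# serial number for every disk.
iostat -En
```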
--
Dan.
On Tue, Apr 6, 2010 at 12:24 AM, Daniel Carosone d...@geek.com.au wrote:
On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote:
By the way, I see that now one of the disks is listed as degraded - too
many errors. Is there a good way to identify exactly which of the disks it
is?
On Tue, Apr 06, 2010 at 12:29:35AM -0500, Tim Cook wrote:
On Tue, Apr 6, 2010 at 12:24 AM, Daniel Carosone d...@geek.com.au wrote:
On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote:
By the way, I see that now one of the disks is listed as degraded - too
many errors. Is