Frank wrote:
Have you dealt with RedHat Enterprise support? lol.
Have you dealt with Sun/Oracle support lately? lololol It's a disaster.
We've had a failed disk in a fully supported Sun system for over 3 weeks,
Explorer data turned in, and we've been given the runaround forever. The
7000 series
Edward wrote:
That is really weird. What are you calling failed? If you're getting
either a red blinking light or a checksum failure on a device in a
zpool... you should get your replacement with no trouble.
Yes, failed, with all the normal signs of failure: cfgadm not finding it,
FAULTED in
I would probably tune lotsfree down as well. With 72GB of RAM, it's
probably reserving around 1.1GB.
http://docs.sun.com/app/docs/doc/819-2724/6n50b07bk?a=view
Ethan
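The ~1.1GB figure is consistent with the documented default, where lotsfree is the greater of 1/64 of physical memory or 512 Kbytes (per the Solaris Tunable Parameters Reference Manual linked above). A quick sanity check, with the 72GB figure taken from the message:

```python
# Rough check of the default lotsfree reservation on Solaris:
# the greater of 1/64 of physical memory or 512 KB
# (see the Solaris Tunable Parameters Reference Manual).

def default_lotsfree_bytes(physmem_bytes):
    """Approximate the default lotsfree target in bytes."""
    return max(physmem_bytes // 64, 512 * 1024)

physmem = 72 * 1024**3  # 72 GB of RAM, as in the message
reserved = default_lotsfree_bytes(physmem)
print(reserved / 1024**3)  # 1.125 GB, matching the "around 1.1GB" estimate
```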
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org]
I have installed OpenSolaris, build 111. I also added some packages
from www.sunfreeware.com to my system, and other tools (compiled by me)
to /opt.
The problem is that all new data (added by me) gets lost after some days.
The disk looks as though (for example) the packages from sunfreeware were
never
http://opensolaris.org/jive/thread.jspa?threadID=105702&tstart=0
Yes, this does sound very similar. It looks to me like data from read
files is clogging the ARC, leaving no room for writes when ZFS
periodically goes to commit unwritten data.
I'm wondering if changing
the correct ratio of ARC to L2ARC?
from http://blogs.sun.com/brendan/entry/l2arc_screenshots
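The ratio matters because every block cached on the L2ARC device keeps a small header in main-memory ARC, so a large L2ARC eats into a small ARC. A back-of-envelope sketch; the ~200-byte header size and the record/device sizes below are illustrative assumptions, not measured values from this thread:

```python
# Back-of-envelope estimate of ARC memory consumed by L2ARC headers.
# Each block cached on the L2ARC device keeps a header in the ARC;
# the 200-byte header size and the sizes below are assumptions for
# illustration only.

def l2arc_header_overhead(l2arc_bytes, avg_record_bytes, header_bytes=200):
    """Estimate ARC bytes consumed indexing an L2ARC of the given size."""
    n_records = l2arc_bytes // avg_record_bytes
    return n_records * header_bytes

# 80 GB SSD cache holding 8 KB records (a plausible database record size):
overhead = l2arc_header_overhead(80 * 1024**3, 8 * 1024)
print(overhead / 1024**2)  # 2000.0 MB of ARC just to index the cache
```

With a deliberately small ARC cap, as on the MySQL server discussed in this thread, that indexing overhead can be a large fraction of the whole ARC.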
Thanks Rob. Hmm...that ratio isn't awesome.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Hi all,
Since we've started running 2009.06 on a few servers, we seem to be
hitting a problem with the L2ARC that causes it to stop receiving evicted
ARC pages. Has anyone else seen this kind of problem?
The filesystem contains about 130G of compressed (lzjb) data, and looks
like:
$ zpool status -v
This is a MySQL database server, so if you are wondering about the
smallish ARC size: it's being artificially limited by set
zfs:zfs_arc_max = 0x8000 in /etc/system, so that the majority of
RAM can be allocated to InnoDB.
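The hex value in the archived message looks truncated, so I won't guess what it was. For anyone writing their own /etc/system line, a quick way to convert a desired ARC cap to the hex literal (the 2 GiB figure here is an assumed example, not the poster's setting):

```python
# Convert a desired ARC cap to the hex literal used in /etc/system, e.g.
#   set zfs:zfs_arc_max = 0x80000000
# The 2 GiB example is an assumption for illustration; the value in the
# archived message appears truncated and is not reproduced here.

def arc_max_hex(gib):
    """Hex literal for an ARC cap of `gib` gibibytes."""
    return hex(gib * 1024**3)

print(arc_max_hex(2))  # 0x80000000
```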
I was told offline that it's likely because my ARC size has been
Ethan Erchinger wrote:
Here is a sample set of messages at that time. It looks like timeouts
on the SSD for various requested blocks. Maybe I need to talk with
Intel about this issue.
Keeping everyone up to date, for those who care: I've RMA'd the Intel
drive, and will retest when
Richard Elling wrote:
The answer may lie in the /var/adm/messages file which should report
if a reset was received or sent.
Here is a sample set of messages at that time. It looks like timeouts
on the SSD for various requested blocks. Maybe I need to talk with
Intel about this issue.
Ethan
Richard Elling wrote:
I've seen these symptoms when a large number of errors were reported
in a short period of time and memory was low. What does fmdump -eV
show?
fmdump -eV shows lots of messages like this, and yeah, I believe that to
be sd16, which is the SSD:
Dec 03 2008
Ross wrote:
I'm no expert, but the first thing I'd ask is whether you could repeat that
test without using compression? I'd be quite worried about how a system is
going to perform when it's basically running off a 50GB compressed file.
Yes, this does occur with compression off, but
Tim wrote:
Are you leaving ANY RAM for ZFS to do its thing? If you're consuming
ALL system memory for just this file/application, I would expect the
system to fall over and die.
Hmm. I believe that the kernel should manage that relationship for me.
If the system cannot manage swap or
Richard Elling wrote:
asc = 0x29
ascq = 0x0
ASC/ASCQ 29/00 is POWER ON, RESET, OR BUS DEVICE RESET OCCURRED
http://www.t10.org/lists/asc-num.htm#ASC_29
[This should be more descriptive, as the codes are more or less
standardized. I'll try to file an RFE, unless someone
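Until something friendlier exists, decoding the pair by hand is just a table lookup against the t10.org list cited above. A minimal sketch, covering only the code seen in this thread:

```python
# Minimal decode of the SCSI ASC/ASCQ pair seen in the fmdump output.
# Only the code from this thread is included; the authoritative table
# is at http://www.t10.org/lists/asc-num.htm

ASC_ASCQ = {
    (0x29, 0x00): "POWER ON, RESET, OR BUS DEVICE RESET OCCURRED",
}

def decode(asc, ascq):
    """Return the standard description for an ASC/ASCQ pair, if known."""
    return ASC_ASCQ.get((asc, ascq), "unknown (see t10.org ASC/ASCQ list)")

print(decode(0x29, 0x0))  # POWER ON, RESET, OR BUS DEVICE RESET OCCURRED
```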
Hi all,
First, I'll say my intent is not to spam a bunch of lists, but after
posting to opensolaris-discuss I had someone communicate with me offline
that these lists would possibly be a better place to start. So here we
are. For those on all three lists, sorry for the repetition.
Second,
Hello,
I've looked quickly through the archives and haven't found mention of
this issue. I'm running SXCE (snv_99), which I believe uses zfs version
13. I had an existing zpool:
Sorry for the first incomplete send, stupid Ctrl-Enter. :-)
Hello,
I've looked quickly through the archives and haven't found mention of
this issue. I'm running SXCE (snv_99), which uses zfs version 13. I
had an existing zpool:
William Bauer wrote:
I've done some more research, but would still greatly appreciate someone
helping me understand this.
It seems that only writes to the home directory of the person logged in at
the console suffer from degraded performance. If I write to a subdirectory
beneath my home,