On 21.10.09 23:23, Paul B. Henson wrote:
I've had a case open for a while (SR #66210171) regarding the inability to
import a pool whose log device failed while the pool was offline.
I was told this was CR #6343667,
CR 6343667 synopsis is scrub/resilver has to start over when a snapshot is
If you use an LSI controller, you could install the LSI Logic MPT Configuration
Utility.
Example of its usage:
lsiutil
LSI Logic MPT Configuration Utility, Version 1.61, September 18, 2008
1 MPT Port found
Port Name     Chip Vendor/Type/Rev     MPT Rev   Firmware Rev   IOC
 1.  mpt0
Hi all,
Recently I upgraded from snv_118 to snv_125, and suddenly I started to
see these messages in /var/adm/messages:
Oct 22 12:54:37 SAN02 scsi: [ID 243001 kern.warning] WARNING:
/p...@0,0/pci10de,3...@a/pci1000,3...@0 (mpt0):
Oct 22 12:54:37 SAN02 mpt_handle_event: IOCStatus=0x8000,
Replacing failed disks is easy when PERC is doing the RAID. Just remove
the failed drive and replace with a good one, and the PERC will rebuild
automatically.
Sorry, not correct. When you replace a failed drive, the PERC card doesn't
know for certain that the new drive you're adding is meant
The Intel specified random write IOPS are with the cache enabled and
without cache flushing. They also carefully only use a limited span
of the device, which fits most perfectly with how the device is built.
How do you know this? This sounds much more detailed than any average
person could
Actually, I think this is a case of crossed wires. This issue was reported a
while back on a news site for the X25-M G2. Somebody pointed out that these
devices have 8 GB of cache, which is exactly the dataset size they use for the
IOPS figures.
The X25-E datasheet however states that while
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations
says that the number of disks in a RAIDZ should be (N+P) with
N = {2,4,8} and P = {1,2}.
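(As a concrete illustration of that rule, sketched with hypothetical pool and
device names:
  # raidz1: N=4 data disks + P=1 parity disk, 5 disks total
  zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
  # raidz2: N=8 data disks + P=2 parity disks, 10 disks total
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0
)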
But if you go down the page just a little further to the Thumper
configuration
Thanks for your comments, Frank.
I will take a look at the inconsistencies.
Cindy
On 10/22/09 08:29, Frank Cusack wrote:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations
says that the number of disks in a RAIDZ
On Thu, 22 Oct 2009, Victor Latushkin wrote:
CR 6343667 synopsis is scrub/resilver has to start over when a snapshot is
taken:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6343667
so I do not see how it can be related to log removal.
Could you please check bug number in
On Thu, 22 Oct 2009, Marc Bevand wrote:
Bob Friesenhahn bfriesen at simple.dallas.tx.us writes:
For random write I/O, caching improves I/O latency not sustained I/O
throughput (which is what random write IOPS usually refer to). So Intel can't
cheat with caching. However they can cheat by
Hi Bruno,
I see some bugs associated with these messages (6694909) that point to
an LSI firmware upgrade that causes these harmless errors to be displayed.
According to the 6694909 comments, this issue is documented in the
release notes.
As they are harmless, I wouldn't worry about them.
Maybe
Interesting. We must have different setups with our PERCs. Mine have
always auto rebuilt.
--
Scott Meilicke
On Oct 22, 2009, at 6:14 AM, Edward Ned Harvey
sola...@nedharvey.com wrote:
Replacing failed disks is easy when PERC is doing the RAID. Just
remove
the failed drive and replace
jel+...@cs.uni-magdeburg.de said:
2nd) Never had a Sun STK RAID INT before. Actually my intention was to create
a zpool mirror of sd0 and sd1 for boot and logs, and a 2x2-way zpool mirror
with the 4 remaining disks. However, the controller seems not to support
JBODs :( - which is also bad,
anyone?
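For reference, the layout described above (a mirrored pair for boot and logs,
plus a 2x2-way mirror over the remaining four disks) would normally be built
with something like the following, assuming the controller could expose the
disks individually; the device names here are hypothetical:
  # attach a second disk to the root pool to mirror boot
  zpool attach rpool c0t0d0s0 c0t1d0s0
  # build a 2x2-way mirror from the remaining four disks
  zpool create tank mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0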
Hi,
::status
debugging live kernel (64-bit) on mk-archive-1
operating system: 5.11 snv_123 (i86pc)
::system
set noexec_user_stack_log=0x1 [0t1]
set noexec_user_stack=0x1 [0t1]
set snooping=0x1 [0t1]
set zfs:zfs_arc_max=0x280000000 [0t10737418240]
::memstat
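The set lines reported by ::system above are the tunables read from
/etc/system at boot; for example, capping the ZFS ARC at 10 GB is done with a
single line there (the value matches the one shown in the output above):
  * /etc/system: limit the ZFS ARC to 10 GB (0x280000000 = 10737418240 bytes)
  set zfs:zfs_arc_max=0x280000000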
Hi Jason,
Since spare replacement is an important process, I've rewritten this
section to provide 3 main examples, here:
http://docs.sun.com/app/docs/doc/817-2271/gcvcw?a=view
Scroll down to the section:
Activating and Deactivating Hot Spares in Your Storage Pool
Example 4–7 Manually Replacing
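For anyone who can't reach the doc, the commands it walks through look roughly
like this (pool and device names here are hypothetical):
  # add a disk as a hot spare, or remove it again
  zpool add tank spare c2t3d0
  zpool remove tank c2t3d0
  # manually activate the spare in place of a failed disk,
  # then detach it once the original has been replaced
  zpool replace tank c0t0d0 c2t3d0
  zpool detach tank c2t3d0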
Jens Elkner wrote:
Hmmm,
wondering about IMHO strange ZFS results ...
X4440: 4x6 2.8GHz cores (Opteron 8439 SE), 64 GB RAM
6x Sun STK RAID INT V1.0 (Hitachi H103012SCSUN146G SAS)
Nevada b124
Started with a simple test using zfs on c1t0d0s0: cd /var/tmp
(1) time sh -c 'mkfile
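(The test being described is just timing a large file creation on that
filesystem; a sketch of that kind of command, with an arbitrary 8 GB size and
file name:
  cd /var/tmp
  time sh -c 'mkfile 8g testfile; sync'
)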
Hello All
There is still time to register online. You will also be able to
register on-site.
Just to give you an idea of the presentations that will be
given:
* Presentation: Kerberos Authentication for Web Security
* Presentation: Protecting Oracle Applications with Built-In
Cindy: How can I view the bug report you referenced? Standard methods show me
the bug number is valid (6694909) but no content or notes. We are having
similar messages appear with snv_118 with a busy LSI controller, especially
during scrubbing, and I'd be interested to see what they mentioned
Adam Cheal wrote:
Cindy: How can I view the bug report you referenced? Standard methods
show me the bug number is valid (6694909) but no content or notes. We are
having similar messages appear with snv_118 with a busy LSI controller,
especially during scrubbing, and I'd be interested to see what
I have a new array of 4x1.5TB drives running fine. I also have the old array of
4x400GB drives in the box on a separate pool for testing. I was planning to
have the old drives just be a backup file store, so I could keep snapshots and
such over there for important files.
I was wondering if it
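For what it's worth, the usual way to push snapshots of important filesystems
onto a second pool looks something like this (pool and filesystem names are
hypothetical):
  zfs snapshot tank/data@2009-10-22
  zfs send tank/data@2009-10-22 | zfs recv backup/data
  # later, send only the changes since the previous snapshot
  zfs send -i tank/data@2009-10-22 tank/data@2009-10-23 | zfs recv backup/data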
James: We are running Phase 16 on our LSISAS3801E's, and have also tried the
recently released Phase 17 but it didn't help. All firmware NVRAM settings are
default. Basically, when we put the disks behind this controller under load
(e.g. scrubbing, recursive ls on large ZFS filesystem) we get
Adam Cheal wrote:
James: We are running Phase 16 on our LSISAS3801E's, and have also tried
the recently released Phase 17 but it didn't help. All firmware NVRAM
settings are default. Basically, when we put the disks behind this
controller under load (e.g. scrubbing, recursive ls on large ZFS
Hey folks!
We're using zfs-based file servers for our backups and we've been
having some issues as of late with certain situations causing zfs/
zpool commands to hang.
Currently, it appears that raid3155 is in this broken state:
r...@homiebackup10:~# ps auxwww | grep zfs
root 15873
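One common way to see where a wedged zfs/zpool process like that is stuck is
to pull its kernel thread stacks with mdb; a sketch, using the pid shown above:
  echo "0t15873::pid2proc | ::walk thread | ::findstack -v" | mdb -k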
I've filed the bug, but was unable to include the prtconf -v output as the
comments field only accepted 15000 chars total. Let me know if there is
anything else I can provide/do to help figure this problem out as it is
essentially preventing us from doing any kind of heavy IO to these pools,
Thank you for your follow-up. The doc looks great. Having good
examples goes a long way to helping others that have my problem.
Ideally, the replacement would all happen magically, and I would have
had everything marked as good, with one failed disk (like a certain
other storage vendor that has
I have a system whose rpool has gone defunct. The rpool is made of a
single disk which is a RAID-5EE volume made of all 8 146G disks on the box.
The RAID card is an Adaptec brand card. It was running nv_107, but it's
currently net booted to nv_121. I have already checked in the RAID card
BIOS, and it
On Oct 22, 2009, at 12:29 PM, Jason Frank wrote:
Thank you for your follow-up. The doc looks great. Having good
examples goes a long way to helping others that have my problem.
Ideally, the replacement would all happen magically, and I would have
had everything marked as good, with one
On 10/22/09 4:07 PM, James C. McPherson wrote:
Adam Cheal wrote:
It seems to be timing out accessing a disk, retrying, giving up and then
doing a bus reset?
...
ugh. New bug time - bugs.opensolaris.org, please select
Solaris / kernel / driver-mpt. In addition to the error
messages and
On 21/10/2009, at 7:39 AM, Mike Bo wrote:
Once data resides within a pool, there should be an efficient method
of moving it from one ZFS file system to another. Think Link/Unlink
vs. Copy/Remove.
I agree with this sentiment; it's certainly a surprise when you first
notice it.
Here's my