Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-26 Thread David Turnbull
I'm having similar issues with two Supermicro AOC-USAS-L8i (1068E) cards,
mpt2 and mpt3, running firmware 1.26.00.00 (IT mode).

Oddly, it only seems to affect one specific disk revision:

sd67  Soft Errors: 0 Hard Errors: 127 Transport Errors: 3416
Vendor: ATA  Product: WDC WD10EACS-00D Revision: 1A01 Serial No:
Size: 1000.20GB <1000204886016 bytes>

sd58  Soft Errors: 0 Hard Errors: 83 Transport Errors: 2087
Vendor: ATA  Product: WDC WD10EACS-00D Revision: 1A01 Serial No:
Size: 1000.20GB <1000204886016 bytes>

There are 8 other disks on the two controllers:
6xWDC WD10EACS-00Z Revision: 1B01 (no errors)
2xSAMSUNG HD103UJ  Revision: 1113 (no errors)

The two EACS-00D disks are in separate enclosures with new SAS->SATA
fanout cables.
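
The per-disk counters above look like the standard error-statistics report;
for anyone wanting to pull the same numbers on their own box, something like
this should do (no system-specific arguments needed):

    # Per-device soft/hard/transport error counters for every disk
    iostat -En

    # Error telemetry recorded by FMA, with full detail per event
    fmdump -eV | more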


Example error messages:

Oct 27 14:26:05 fleet scsi: [ID 107833 kern.warning] WARNING: /p...@0,0/pci1002,5...@2/pci15d9,a...@0 (mpt2):
Oct 27 14:26:05 fleet   wwn for target has changed

Oct 27 14:25:56 fleet scsi: [ID 107833 kern.warning] WARNING: /p...@0,0/pci1002,5...@3/pci15d9,a...@0 (mpt3):
Oct 27 14:25:56 fleet   wwn for target has changed

Oct 27 14:25:57 fleet scsi: [ID 243001 kern.warning] WARNING: /p...@0,0/pci1002,5...@2/pci15d9,a...@0 (mpt2):
Oct 27 14:25:57 fleet   mpt_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31110d00

Oct 27 14:25:48 fleet scsi: [ID 243001 kern.warning] WARNING: /p...@0,0/pci1002,5...@3/pci15d9,a...@0 (mpt3):
Oct 27 14:25:48 fleet   mpt_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31110d00

Oct 27 14:26:01 fleet scsi: [ID 365881 kern.info] /p...@0,0/pci1002,5...@2/pci15d9,a...@0 (mpt2):
Oct 27 14:26:01 fleet   Log info 0x31110d00 received for target 1.
Oct 27 14:26:01 fleet   scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc

Oct 27 14:25:51 fleet scsi: [ID 365881 kern.info] /p...@0,0/pci1002,5...@3/pci15d9,a...@0 (mpt3):
Oct 27 14:25:51 fleet   Log info 0x31120403 received for target 2.
Oct 27 14:25:51 fleet   scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc
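
If it helps anyone correlate, a rough way to see how often each controller
instance is logging these (the instance names mpt2/mpt3 are specific to this
box):

    # Total count of the "wwn for target has changed" warnings so far
    grep -c 'wwn for target has changed' /var/adm/messages

    # Which mpt instances appear in the warning lines, and how often
    grep 'kern.warning' /var/adm/messages | grep 'mpt' | \
        awk '{print $NF}' | sort | uniq -c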


On 22/10/2009, at 10:40 PM, Bruno Sousa wrote:


Hi all,

Recently I upgraded from snv_118 to snv_125, and suddenly I started
to see these messages in /var/adm/messages:


Oct 22 12:54:37 SAN02 scsi: [ID 243001 kern.warning] WARNING: /p...@0,0/pci10de,3...@a/pci1000,3...@0 (mpt0):
Oct 22 12:54:37 SAN02  mpt_handle_event: IOCStatus=0x8000, IOCLogInfo=0x3112011a
Oct 22 12:56:47 SAN02 scsi: [ID 243001 kern.warning] WARNING: /p...@0,0/pci10de,3...@a/pci1000,3...@0 (mpt0):
Oct 22 12:56:47 SAN02  mpt_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x3112011a
Oct 22 12:56:47 SAN02 scsi: [ID 243001 kern.warning] WARNING: /p...@0,0/pci10de,3...@a/pci1000,3...@0 (mpt0):
Oct 22 12:56:47 SAN02  mpt_handle_event: IOCStatus=0x8000, IOCLogInfo=0x3112011a
Oct 22 12:56:50 SAN02 scsi: [ID 243001 kern.warning] WARNING: /p...@0,0/pci10de,3...@a/pci1000,3...@0 (mpt0):
Oct 22 12:56:50 SAN02  mpt_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x3112011a
Oct 22 12:56:50 SAN02 scsi: [ID 243001 kern.warning] WARNING: /p...@0,0/pci10de,3...@a/pci1000,3...@0 (mpt0):
Oct 22 12:56:50 SAN02  mpt_handle_event: IOCStatus=0x8000, IOCLogInfo=0x3112011a



Is this a symptom of a disk error, or was some change made in the
driver so that I now get more information that didn't appear in the
past?


Thanks,
Bruno

I'm using an LSI Logic SAS1068E B3, and within lsiutil I see this
behaviour:



1 MPT Port found

     Port Name         Chip Vendor/Type/Rev    MPT Rev  Firmware Rev  IOC
 1.  mpt0              LSI Logic SAS1068E B3     105        011a       0


Select a device:  [1-1 or 0 to quit] 1

1.  Identify firmware, BIOS, and/or FCode
2.  Download firmware (update the FLASH)
4.  Download/erase BIOS and/or FCode (update the FLASH)
8.  Scan for devices
10.  Change IOC settings (interrupt coalescing)
13.  Change SAS IO Unit settings
16.  Display attached devices
20.  Diagnostics
21.  RAID actions
22.  Reset bus
23.  Reset target
42.  Display operating system names for devices
45.  Concatenate SAS firmware and NVDATA files
59.  Dump PCI config space
60.  Show non-default settings
61.  Restore default settings
66.  Show SAS discovery errors
69.  Show board manufacturing information
97.  Reset SAS link, HARD RESET
98.  Reset SAS link
99.  Reset port
e   Enable expert mode in menus
p   Enable paged mode
w   Enable logging

Main menu, select an option:  [1-99 or e/p/w or 0 to quit] 20

1.  Inquiry Test
2.  WriteBuffer/ReadBuffer/Compare Test
3.  Read Test
4.  Write/Read/Compare Test
8.  Read Capacity / Read Block Limits Test
12.  Display phy counters
13.  Clear phy counters
14.  SATA SMART Read Test
15.  SEP (SCSI Enclosure Processor) Test
18.  Report LUNs Test
19.  Drive firmware download
20.  Expander firmware download
21.  Read Logical Blocks
99.  Reset port
e   Enable expert mode in menus
p   Enable paged mode
w   Enable logging

Diagnostics menu, select an option:  [1-99 or e/p/w or 0 to quit] 12

Adapter Phy 0:  Link

Re: [zfs-discuss] zpool with very different sized vdevs?

2009-10-24 Thread David Turnbull

On 23/10/2009, at 9:39 AM, Travis Tabbal wrote:

I have a new array of 4x1.5TB drives running fine. I also have the  
old array of 4x400GB drives in the box on a separate pool for  
testing. I was planning to have the old drives just be a backup file  
store, so I could keep snapshots and such over there for important  
files.


I was wondering if it makes any sense to add the older drives to the
new pool. Reliability might be lower as they are older drives, so if
I were to lose two of them, things could get ugly. I'm just curious
whether it would make any sense to do something like this.


Makes sense to me. My current upgrade strategy is to add a group of 5
disks whenever space is needed, up until physical space is exhausted,
each time buying whatever disks currently have the best $/GB.
This does mean that, at times, a significant amount of data sits on
relatively few disks, which impacts performance.
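
As a sketch of what each expansion step looks like (pool and device names
below are made up -- adjust to taste):

    # Add another raidz group of five new disks to the existing pool
    zpool add tank raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0

    # New writes get spread across all vdevs, but existing data stays on
    # the older vdevs until it is rewritten -- hence the temporary imbalance
    zpool iostat -v tank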




Re: [zfs-discuss] moving files from one fs to another, splittin/merging

2009-10-22 Thread David Turnbull

On 21/10/2009, at 7:39 AM, Mike Bo wrote:

Once data resides within a pool, there should be an efficient method  
of moving it from one ZFS file system to another. Think Link/Unlink  
vs. Copy/Remove.


I agree with this sentiment; it's certainly a surprise when you first
notice it.


Here's my scenario... When I originally created a 3TB pool, I didn't
know the best way to carve up the space, so I used a single, flat ZFS
file system. Now that I'm more familiar with ZFS, managing the
subdirectories as separate file systems would have made a lot more
sense (separate policies, snapshots, etc.). The problem is that some
of these directories contain tens of thousands of files and many
hundreds of gigabytes. Copying this much data between file systems
within the same disk pool just seems wrong.
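
For the record, the copy I'm trying to avoid would look roughly like this
(dataset and path names are placeholders, and rsync could be tar/cpio
instead):

    # Move the existing directory aside, create a dataset in its place,
    # then copy the data in and remove the old copy
    mv /tank/media /tank/media-old
    zfs create tank/media
    rsync -a /tank/media-old/ /tank/media/
    rm -rf /tank/media-old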


I hope such a feature is possible and not too difficult to  
implement, because I'd like to see this capability in ZFS.


It doesn't seem unreasonable. Presumably the different properties
available on the datasets involved (recordsize, checksum, compression,
encryption, copies, version, utf8only, casesensitivity) would have to
match, or else it would fall back to a blind copy?
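
For reference, checking whether two datasets even agree on those properties
is easy enough; a sketch (dataset names are placeholders, and I've left
encryption out since it isn't in the builds I'm running):

    # Properties that would presumably have to match for a block-level move
    zfs get -o name,property,value \
        recordsize,checksum,compression,copies,version,utf8only,casesensitivity \
        tank/src tank/dst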




Regards,
mikebo




Re: [zfs-discuss] Borked zpool, missing slog/zil

2009-09-26 Thread David Turnbull

I believe this is relevant: http://github.com/pjjw/logfix -- it saved
my array last year, and it looks maintained.

On 27/09/2009, at 4:49 AM, Erik Ableson wrote:


Hmmm - this is an annoying one.

I'm currently running an OpenSolaris install (2008.11 upgraded to  
2009.06) :

SunOS shemhazai 5.11 snv_111b i86pc i386 i86pc Solaris

with a zpool made up of one raidz vdev and a small ramdisk-based
ZIL. I usually swap out the ZIL for a file-based copy when I need
to reboot (zpool replace /dev/ramdisk/slog /root/slog.tmp), but this
time I had a brain fart and forgot to.
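
(For reference, that swap routine looks roughly like this -- pool and file
names as above, the 1 GB size is just an assumption:)

    # Before a planned reboot: migrate the slog from the ramdisk to a file
    mkfile 1g /root/slog.tmp
    zpool replace siovale /dev/ramdisk/slog /root/slog.tmp

    # After the reboot: recreate the ramdisk and migrate the slog back
    ramdiskadm -a slog 1g
    zpool replace siovale /root/slog.tmp /dev/ramdisk/slog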


The server came back up and I could sort of work on the zpool, but it
was complaining, so I did my replace command and it happily
resilvered. Then I restarted one more time to test bringing
everything up cleanly, and this time it can't find the file-based
ZIL.


I try importing and it comes back with:

zpool import
  pool: siovale
    id: 13808783103733022257
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

        siovale     UNAVAIL  missing device
          raidz1    ONLINE
            c8d0    ONLINE
            c9d0    ONLINE
            c10d0   ONLINE
            c11d0   ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.

Now, the file still exists, so I don't know why it can't seem to find
it, and I thought the missing-ZIL issue was corrected in this version
(or did I miss something?).


I've looked around for solutions to bring it back online and ran
across this method:  but before I jump in on that one, I was hoping
there was a newer, cleaner approach that I missed somehow.


Ideas appreciated...

Erik


