[zfs-discuss] ZFS utility like Filefrag on linux to help analyzing the extents mapping

2011-02-16 Thread Jeff liu
Hello All,

I'd like to know if there is a utility like `Filefrag', shipped with e2fsprogs
on Linux, that can fetch the extent mapping info of a file (especially a
sparse file) located on ZFS.

I am working on efficient sparse file detection and backup through
lseek(SEEK_DATA/SEEK_HOLE) on ZFS, and I need to verify the result by
comparing the original sparse file and the copied file. If such a tool is
available, it can be used to analyze the start offset and length of each data
extent.
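
A quick way to sanity-check the copy in the meantime (the file names below are
just an example) is to compare the apparent size against the blocks actually
allocated:

 # ls -l /tank/fs/sparse.orig /tank/fs/sparse.copy   # apparent sizes
 # du -k /tank/fs/sparse.orig /tank/fs/sparse.copy   # kilobytes actually allocated

If the holes survived the copy, du should report far less than the size shown
by ls -l for both files.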


Thanks in advance!
-Jeff


[zfs-discuss] Best way/issues with large ZFS send?

2011-02-16 Thread Eff Norwood
I'm preparing to replicate about 200TB of data between two data centers using 
zfs send. We have ten 10TB zpools that are further broken down into zvols of 
various sizes in each data center. One DC is primary and the other will be the 
replication target and there is plenty of bandwidth between them (10 gig dark 
fiber).

Are there any gotchas that I should be aware of? Also, at what level should I 
be taking the snapshot to do the zfs send? At the primary pool level or at the 
zvol level? Since the targets are to be exact replicas, I presume at the 
primary pool level (e.g. tank) rather than for every zvol (e.g. 
tank/prod/vol1)?

This is all using Solaris 11 Express, snv_151a.

Thanks,

Eff


Re: [zfs-discuss] ZFS utility like Filefrag on linux to help analyzing the extents mapping

2011-02-16 Thread Fajar A. Nugraha
On Wed, Feb 16, 2011 at 8:53 PM, Jeff liu jeff@oracle.com wrote:
 Hello All,

 I'd like to know if there is a utility like `Filefrag', shipped with
 e2fsprogs on Linux, that can fetch the extent mapping info of a
 file (especially a sparse file) located on ZFS.

Something like zdb - maybe?
http://cuddletech.com/blog/?p=407
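
For a single file you can also dump its block pointers directly (the dataset
and object number below are placeholders; the object number is what ls -i
prints):

 # ls -i /tank/fs/sparsefile
 # zdb -ddddd tank/fs <object#>

With five -d's zdb prints the indirect and L0 block pointers with their
offsets, so holes should show up as gaps in the listed offsets.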

-- 
Fajar


 I am working on efficient sparse file detection and backup through
 lseek(SEEK_DATA/SEEK_HOLE) on ZFS, and I need to verify the result by
 comparing the original sparse file and the copied file. If such a tool is
 available, it can be used to analyze the start offset and length of each
 data extent.


Re: [zfs-discuss] ZFS utility like Filefrag on linux to help analyzing the extents mapping

2011-02-16 Thread Jeff liu
Hi Fajar,
 
Thanks for your quick response. After playing around with it for a while, I
find it very useful.

Have a nice day!
-Jeff

On 2011-2-16, at 10:16 PM, Fajar A. Nugraha wrote:

 On Wed, Feb 16, 2011 at 8:53 PM, Jeff liu jeff@oracle.com wrote:
 Hello All,
 
 I'd like to know if there is a utility like `Filefrag', shipped with
 e2fsprogs on Linux, that can fetch the extent mapping info of a
 file (especially a sparse file) located on ZFS.
 
 Something like zdb - maybe?
 http://cuddletech.com/blog/?p=407
 
 -- 
 Fajar
 
 
 I am working on efficient sparse file detection and backup through
 lseek(SEEK_DATA/SEEK_HOLE) on ZFS, and I need to verify the result by
 comparing the original sparse file and the copied file. If such a tool is
 available, it can be used to analyze the start offset and length of each
 data extent.



Re: [zfs-discuss] Very bad ZFS write performance. Ok Read.

2011-02-16 Thread Richard Elling
On Feb 15, 2011, at 11:26 PM, Khushil Dep wrote:
 Could you not also pin processes to cores? Preventing switching should help
 too. I've done this for performance reasons before on a 24-core Linux box.
 

Yes. More importantly, you could send interrupts to a processor set. There are 
many
ways to implement resource management in Solaris-based systems :-)
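
For example, to keep a workload's CPUs free of interrupt handling (the CPU ids
and the set id below are only placeholders):

 # psrset -c 2 3          # create a processor set from CPUs 2 and 3
 # psrset -b 1 <pid>      # bind the workload to the set id printed by -c
 # psradm -i 2 3          # keep those CPUs from servicing interrupts
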
 -- richard




Re: [zfs-discuss] Best way/issues with large ZFS send?

2011-02-16 Thread Richard Elling
On Feb 16, 2011, at 6:05 AM, Eff Norwood wrote:

 I'm preparing to replicate about 200TB of data between two data centers using 
 zfs send. We have ten 10TB zpools that are further broken down into zvols of 
 various sizes in each data center. One DC is primary and the other will be 
 the replication target and there is plenty of bandwidth between them (10 gig 
 dark fiber).
 
 Are there any gotchas that I should be aware of? Also, at what level should I 
 be taking the snapshot to do the zfs send? At the primary pool level or at 
 the zvol level? Since the targets are to be exact replicas, I presume at the 
 primary pool level (e.g. tank) rather than for every zvol (e.g. 
 tank/prod/vol1)?

There is no such thing as a pool snapshot. There are only dataset snapshots.

The trick to a successful snapshot+send strategy at this size is to start 
snapping
early and often. You don't want to send 200TB, you want to send 2TB, 100 times 
:-)

The performance tends to be bursty, so the fixed record size of the zvols can
work to your advantage for capacity planning. Also, a buffer of some sort can
help smooth out the utilization; see the threads on ZFS and mbuffer.
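
For example, something along these lines (host names, port, and buffer sizes
are only placeholders):

 receiver# mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/prod/vol1
 sender#   zfs send -i tank/prod/vol1@snap1 tank/prod/vol1@snap2 | mbuffer -s 128k -m 1G -O receiver:9090
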
 -- richard



Re: [zfs-discuss] Best way/issues with large ZFS send?

2011-02-16 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Eff Norwood
 
 Are there any gotchas that I should be aware of? Also, at what level
should I
 be taking the snapshot to do the zfs send? At the primary pool level or at
the
 zvol level? Since the targets are to be exact replicas, I presume at the
primary
 pool level (e.g. tank) rather than for every zvol (e.g. tank/prod/vol1)?

I don't think you have any choice.  There is such a thing as zfs send and
there is no such thing as zpool send.  You can use -R for a recursive
replication stream, which is kind of like sending the whole pool directly...
but not exactly.
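
For example, something along these lines (the snapshot name and remote host
are only placeholders):

 # zfs snapshot -r tank@rep-20110216
 # zfs send -R tank@rep-20110216 | ssh otherdc zfs receive -Fdu tank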

The only gotcha is to ensure your applications (whatever they are) are
resilient to power failure, because the zfs snapshot will literally produce
a block-level snapshot of the way the disks are right now, at this instant.
If you ever need to restore, it will be as if you sustained a power loss and
came back online.

The most notable such situation would probably be databases, if you have
any.  Be sure to use a database backup tool to export the database to a
backup file, and then send the filesystem.  Or else momentarily stop the
database services while you take the snapshot.



Re: [zfs-discuss] Best way/issues with large ZFS send?

2011-02-16 Thread Tomas Ögren
On 16 February, 2011 - Richard Elling sent me these 1,3K bytes:

 On Feb 16, 2011, at 6:05 AM, Eff Norwood wrote:
 
  I'm preparing to replicate about 200TB of data between two data centers 
  using zfs send. We have ten 10TB zpools that are further broken down into 
  zvols of various sizes in each data center. One DC is primary and the other 
  will be the replication target and there is plenty of bandwidth between 
  them (10 gig dark fiber).
  
  Are there any gotchas that I should be aware of? Also, at what level should 
  I be taking the snapshot to do the zfs send? At the primary pool level or 
  at the zvol level? Since the targets are to be exact replicas, I presume at 
  the primary pool level (e.g. tank) rather than for every zvol (e.g. 
  tank/prod/vol1)?
 
 There is no such thing as a pool snapshot. There are only dataset snapshots.

.. but you can make a single recursive snapshot call that affects all
datasets.
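
For example (the snapshot name is just a placeholder):

 # zfs snapshot -r tank@nightly-20110216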

 The trick to a successful snapshot+send strategy at this size is to start 
 snapping
 early and often. You don't want to send 200TB, you want to send 2TB, 100 
 times :-)
 
 The performance tends to be bursty, so the fixed record size of the zvols can
 work to your advantage for capacity planning. Also, a buffer of some sort can
 help smooth out the utilization; see the threads on ZFS and mbuffer.
  -- richard
 


/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


[zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread whitetr6
Hi, I have a very limited amount of bandwidth between main office and a  
colocated rack of servers in a managed datacenter. My hope is to be able to  
zfs send/recv small incremental changes on a nightly basis as a secondary  
offsite backup strategy. My question is about the initial seed of the  
data. Is it possible to use a portable drive to copy the initial zfs  
filesystem(s) to the remote location and then make the subsequent  
incrementals over the network? If so, what would I need to do to make sure  
it is an exact copy? Thank you,

Mark


[zfs-discuss] ZFS space_map

2011-02-16 Thread Charlie Paper
Hello all,
I am trying to understand how the allocation of the space map happens.
What I am trying to figure out is how the recursive part is handled. From what
I understand, a new allocation (say, appending to a file) will cause the space
map to change by appending more allocs, which will require extra space on disk
and as such will change the space map again.
I understand that the space map is treated as an object at the DMU level, but
it is not clear to me how, and by whom, the blocks for it are allocated.
Thanks in advance


[zfs-discuss] Recover data from disk with zfs

2011-02-16 Thread Sergey
Hello everybody! Please help me!

I have a Solaris 10 x86_64 server with five 40 GB HDDs.
The HDD with the root (/) and /usr (and other) partitions (UFS filesystem)
crashed and is dead.
The other 4 HDDs (ZFS) were each mounted as their own pool (zpool create disk1
c0t1d0, etc.).

I installed Solaris 10 x86_64 on a new disk and then mounted (zpool import)
the other 4 HDDs. 3 disks mounted successfully, but 1 doesn't mount (I can
create a new pool on this disk, but then it is empty).

How can I mount this disk or recover the data from it?
Sorry for my English.


Re: [zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread Richard Elling

On Feb 16, 2011, at 7:38 AM, white...@gmail.com wrote:

 Hi, I have a very limited amount of bandwidth between main office and a 
 colocated rack of servers in a managed datacenter. My hope is to be able to 
 zfs send/recv small incremental changes on a nightly basis as a secondary 
 offsite backup strategy. My question is about the initial seed of the data. 
 Is it possible to use a portable drive to copy the initial zfs filesystem(s) 
 to the remote location and then make the subsequent incrementals over the 
 network? If so, what would I need to do to make sure it is an exact copy? 
 Thank you,

Yes, and this is a good idea. Once you have replicated a snapshot, it will be
an exact replica -- it is an all-or-nothing operation.  You can then make more
replicas or incrementally add snapshots.
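
Once the seed has been received at the remote site, the nightly runs are just
ordinary incrementals, e.g. (dataset, snapshot, and host names are
placeholders):

 # zfs send -i tank/data@seed tank/data@nightly1 | ssh colo zfs receive tank/data
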
 -- richard



Re: [zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread a . smith

On Feb 16, 2011, at 7:38 AM, whitetr6 at gmail.com wrote:

My question is about the initial seed of the data. Is it possible  
to use a portable drive to copy the initial zfs filesystem(s) to the  
remote location and then make the subsequent incrementals over the  
network? If so, what would I need to do to make sure it is an exact  
copy? Thank you,


Yes, you can send the initial seed snapshot to a file on a portable
disk. For example:

 # zfs send tank/volume@seed > /myexternaldrive/zfssnap.data

If the volume of data is too much to fit on a single disk, you can
create a new pool spread across the number of disks you require and
receive a duplicate of the snapshot into your new pool. Then, from the new
pool, you can run a new zfs send when connected to your offsite server.


thanks Andy.





Re: [zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread Karl Wagner
From what I have read, this is not the best way to do it.

Your best bet is to create a ZFS pool using the external device (or even
better, devices) then zfs send | zfs receive. You can then do the same at
your remote location.

If you just send to a file, you may find it was a wasted trip (or postage,
if you send it that way) as a single error in the file will result in a
failure when you try to pull the data back.

If you have 2 external devices (USB or eSATA HDDs?) each capable of holding
all your data, my personal choice would be to use them both in a mirror,
transfer to that pool, then go to your remote site and do the same. If you
need more devices, try a raidz across 3-4 (or more) devices.
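
Something along these lines (device, pool, and snapshot names are only
placeholders):

 # zpool create seedpool mirror c8t0d0 c9t0d0
 # zfs snapshot -r tank@seed
 # zfs send -R tank@seed | zfs receive -Fdu seedpool
 # zpool export seedpool      # then carry the disks to the remote site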

Only my opinion.

 -Original Message-
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of a.sm...@ukgrid.net
 Sent: 16 February 2011 16:46
 To: zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] ZFS send/recv initial data load
 
 On Feb 16, 2011, at 7:38 AM, whitetr6 at gmail.com wrote:
 
  My question is about the initial seed of the data. Is it possible
  to use a portable drive to copy the initial zfs filesystem(s) to the
  remote location and then make the subsequent incrementals over the
  network? If so, what would I need to do to make sure it is an exact
  copy? Thank you,
 
 Yes, you can send the initial seed snapshot to a file on a portable
 disk. for example:
 
  # zfs send tank/volume@seed > /myexternaldrive/zfssnap.data
 
 If the volume of data is too much to fit on a single disk then you can
 create a new pool spread across the number of disks you require, make
 a duplicate of the snapshot onto your new pool. Then from the new pool
 you can run a new zfs send when connected to your offsite server.
 
 thanks Andy.
 
 
 



Re: [zfs-discuss] Recover data from disk with zfs

2011-02-16 Thread Cindy Swearingen

Sergey,

I think you are saying that you had 4 separate ZFS storage pools on 4
separate disks and one ZFS pool/fs did not import successfully.

If you created a new storage pool on the disk for the pool that
failed to import then the data on that disk is no longer available
because it was overwritten with new pool info.

Is this what happened?

If a pool fails to import in the Solaris 10 9/10 release, we can
try to import it in recovery mode.
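
For example, using the pool name from your mail (the -n flag first checks
whether recovery is likely to succeed without actually modifying anything):

 # zpool import -Fn disk1
 # zpool import -F disk1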

Thanks,

Cindy

On 02/16/11 05:42, Sergey wrote:

Hello everybody! Please help me!

I have a Solaris 10 x86_64 server with five 40 GB HDDs.
The HDD with the root (/) and /usr (and other) partitions (UFS filesystem)
crashed and is dead.
The other 4 HDDs (ZFS) were each mounted as their own pool (zpool create disk1
c0t1d0, etc.).

I installed Solaris 10 x86_64 on a new disk and then mounted (zpool import)
the other 4 HDDs. 3 disks mounted successfully, but 1 doesn't mount (I can
create a new pool on this disk, but then it is empty).

How can I mount this disk or recover the data from it?
Sorry for my English.



Re: [zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread Stephan Budach

On 16.02.11 16:38, white...@gmail.com wrote:
Hi, I have a very limited amount of bandwidth between main office and 
a colocated rack of servers in a managed datacenter. My hope is to be 
able to zfs send/recv small incremental changes on a nightly basis as 
a secondary offsite backup strategy. My question is about the initial 
seed of the data. Is it possible to use a portable drive to copy the 
initial zfs filesystem(s) to the remote location and then make the 
subsequent incrementals over the network? If so, what would I need to 
do to make sure it is an exact copy? Thank you,

Mark
Just be aware that sending zfs snapshots involves not only the actual new
data that has been written to the source(s) but also the metadata.
If you don't need it, disabling atime may take away some of that load, but I
have learned that, depending on how the dataset gets accessed - in my case
it's all about file sharing via netatalk and samba - the stream resulting
from an incremental zfs send is always significantly greater than the data
that has been altered or newly written onto the dataset.
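
For example, turning off atime on the shared dataset (the name is only a
placeholder) avoids access-time-only metadata updates being carried along in
every incremental:

 # zfs set atime=off tank/share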




Re: [zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread Bill Sommerfeld

On 02/16/11 07:38, white...@gmail.com wrote:

 Is it possible to use a portable drive to copy the initial zfs
 filesystem(s) to the remote location and then make the subsequent
 incrementals over the network?

Yes.

 If so, what would I need to do to make sure it is an exact copy? Thank you,

Rough outline:

plug removable storage into source or a system near the source.
zpool create backup pool on removable storage
use an appropriate combination of zfs send and zfs receive to copy the bits.
zpool export backup pool.
unplug removable storage
move it
plug it in to remote server
zpool import backup pool
use zfs send -i to verify that incrementals work
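
Concretely, the outline above might look like this (pool, device, and host
names are only placeholders):

 # zpool create backup c10t0d0                     # on or near the source
 # zfs snapshot -r tank@seed
 # zfs send -R tank@seed | zfs receive -Fdu backup
 # zpool export backup
   ... move the drive, then on the remote server ...
 # zpool import backup
 # zfs send -i tank/data@seed tank/data@inc1 | ssh remote zfs receive backup/data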

(I did something like the above when setting up my home backup because I 
initially dinked around with the backup pool hooked up to a laptop and 
then moved it to a desktop system).


optional: use zpool attach to mirror the removable storage to something 
faster/better/..., then after the mirror completes zpool detach to free 
up the removable storage.


- Bill


Re: [zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread whitetr6
All of these responses have been very helpful and are much appreciated.  
Thank you all.

Mark

On Feb 16, 2011 2:54pm, Erik ABLESON eable...@mac.com wrote:

 Check out:

 http://www.infrageeks.com/groups/infrageeks/wiki/8fb35/zfs_autoreplicate_script.html

 It also works to an external hard disk with localhost as the destination
 server, although I don't know if that's the latest version, which skips
 ssh if it detects localhost as a destination.

 Cheers,

 Erik

 On 16 Feb. 2011, at 16:38, white...@gmail.com wrote:

  Hi, I have a very limited amount of bandwidth between main office and a
  colocated rack of servers in a managed datacenter. My hope is to be able
  to zfs send/recv small incremental changes on a nightly basis as a
  secondary offsite backup strategy. My question is about the initial seed
  of the data. Is it possible to use a portable drive to copy the initial
  zfs filesystem(s) to the remote location and then make the subsequent
  incrementals over the network? If so, what would I need to do to make
  sure it is an exact copy? Thank you,

  Mark



Re: [zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread Erik Trimble

On 2/16/2011 8:08 AM, Richard Elling wrote:

On Feb 16, 2011, at 7:38 AM, white...@gmail.com wrote:


Hi, I have a very limited amount of bandwidth between main office and a colocated rack of 
servers in a managed datacenter. My hope is to be able to zfs send/recv small incremental 
changes on a nightly basis as a secondary offsite backup strategy. My question is about 
the initial seed of the data. Is it possible to use a portable drive to copy 
the initial zfs filesystem(s) to the remote location and then make the subsequent 
incrementals over the network? If so, what would I need to do to make sure it is an exact 
copy? Thank you,

Yes, and this is a good idea. Once you have replicated a snapshot, it will be an
exact replica -- it is an all-or-nothing operation.  You can then make more 
replicas
or incrementally add snapshots.
  -- richard




To follow up on Richard's post, what you want to do is a perfectly good 
way to deal with moving large amounts of data via Sneakernet. :-)


I'd suggest that you create a full zfs filesystem on the external drive,
and use 'zfs send/receive' to copy a snapshot from the production box to
it, rather than trying to store just a file from the output of 'zfs
send'.  You can then 'zfs send/receive' that backup snapshot from the
external drive onto your remote backup machine when you carry the drive
over there later.


As Richard mentioned, that snapshot is unique, and it doesn't matter 
that you recovered it onto an external drive first, then copied that 
snapshot over to the backup machine.  It's a frozen snapshot, so you're 
all good for future incrementals.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] CPU Limited on Checksums?

2011-02-16 Thread Krunal Desai
On Wed, Feb 9, 2011 at 12:02 AM, Richard Elling
richard.ell...@gmail.com wrote:
 The data below does not show heavy CPU usage. Do you have data that
 does show heavy CPU usage?  mpstat would be a good start.

Here is mpstat output during a network copy; I think one of the CPUs
disappeared due to an L2 cache error.

movax@megatron:~# mpstat -p
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl set
  1  333   06  4057 3830 19467  140   27  2650  15611  48
 0  51   0

 Some ZFS checksums are always SHA-256. By default, data checksums are
 Fletcher4 on most modern ZFS implementations, unless dedup is enabled.
I see, thanks for the info.
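
For reference, the current settings on a pool or dataset can be checked with
something like:

 # zfs get checksum,dedup tank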

 Second, a copy from my desktop PC to my new zpool. (5900rpm drive over
 GigE to 2 6-drive RAID-Z2s). Load average are around ~3.

 Lockstat won't provide direct insight to the run queue (which is used to 
 calculate
 load average). Perhaps you'd be better off starting with prstat.
Ah, gotcha. I ran prstat, which is more of what I wanted:
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  1434 root        0K    0K run      0  -20   0:01:54  23% zpool-tank/136
  1515 root     9804K 3260K cpu1    59    0   0:00:00 0.1% prstat/1
  1438 root       14M 9056K run     59    0   0:00:00 0.0% smbd/16

zpool thread near the top of usage, which is what I suppose you would expect.
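
For a finer-grained view, per-thread microstate accounting can also help, e.g.:

 # prstat -mL 5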


Re: [zfs-discuss] fmadm faulty not showing faulty/offline disks?

2011-02-16 Thread Krunal Desai
On Wed, Feb 2, 2011 at 8:38 PM, Carson Gaspar car...@taltos.org wrote:
 Works For Me (TM).

 c7t0d0 is hanging off an LSI SAS3081E-R (SAS1068E chip) rev B3 MPT rev 105
 Firmware rev 011d (1.29.00.00) (IT FW)

 This is a SATA disk - I don't have any SAS disks behind a LSI1068E to test.

When I try to do a SMART status read (more than just a simple
identify), it looks like the 1068E drops the drive for a little bit. I
bought the Intel-branded LSI SAS3081E:
Current active firmware version is 0120 (1.32.00)
Firmware image's version is MPTFW-01.32.00.00-IT
  LSI Logic
x86 BIOS image's version is MPTBIOS-6.34.00.00 (2010.12.07)

kernel log messages:
Feb 17 00:54:05 megatron scsi: [ID 107833 kern.warning] WARNING:
/pci@0,0/pci8086,2e29@6/pci1000,3140@0 (mpt4):
Feb 17 00:54:05 megatron    Disconnected command timeout for Target 0
Feb 17 00:54:06 megatron scsi: [ID 365881 kern.info]
/pci@0,0/pci8086,2e29@6/pci1000,3140@0 (mpt4):
Feb 17 00:54:06 megatron    Log info 0x3114 received for target 0.
Feb 17 00:54:06 megatron    scsi_status=0x0, ioc_status=0x8048,
scsi_state=0xc
Feb 17 00:54:06 megatron scsi: [ID 365881 kern.info]
/pci@0,0/pci8086,2e29@6/pci1000,3140@0 (mpt4):
Feb 17 00:54:06 megatron    Log info 0x3113 received for target 0.
Feb 17 00:54:06 megatron    scsi_status=0x0, ioc_status=0x8048,
scsi_state=0xc
Feb 17 00:54:06 megatron scsi: [ID 365881 kern.info]
/pci@0,0/pci8086,2e29@6/pci1000,3140@0 (mpt4):
Feb 17 00:54:06 megatron    Log info 0x3113 received for target 0.
Feb 17 00:54:06 megatron    scsi_status=0x0, ioc_status=0x8048,
scsi_state=0xc
Feb 17 00:54:06 megatron scsi: [ID 365881 kern.info]
/pci@0,0/pci8086,2e29@6/pci1000,3140@0 (mpt4):
Feb 17 00:54:06 megatron    Log info 0x3113 received for target 0.
Feb 17 00:54:06 megatron    scsi_status=0x0, ioc_status=0x8048,
scsi_state=0xc
Feb 17 00:54:06 megatron scsi: [ID 365881 kern.info]
/pci@0,0/pci8086,2e29@6/pci1000,3140@0 (mpt4):
Feb 17 00:54:06 megatron    Log info 0x3113 received for target 0.
Feb 17 00:54:06 megatron    scsi_status=0x0, ioc_status=0x8048,
scsi_state=0xc
Feb 17 00:54:06 megatron scsi: [ID 107833 kern.notice]
/pci@0,0/pci8086,2e29@6/pci1000,3140@0 (mpt4):
Feb 17 00:54:06 megatron    mpt_flush_target discovered non-NULL
cmd in slot 33, tasktype 0x3
Feb 17 00:54:06 megatron scsi: [ID 365881 kern.info]
/pci@0,0/pci8086,2e29@6/pci1000,3140@0 (mpt4):
Feb 17 00:54:06 megatron    Cmd (0xff02dea63a40) dump for
Target 0 Lun 0:
Feb 17 00:54:06 megatron scsi: [ID 365881 kern.info]
/pci@0,0/pci8086,2e29@6/pci1000,3140@0 (mpt4):
Feb 17 00:54:06 megatron    cdb=[ ]
Feb 17 00:54:06 megatron scsi: [ID 365881 kern.info]
/pci@0,0/pci8086,2e29@6/pci1000,3140@0 (mpt4):
Feb 17 00:54:06 megatron    pkt_flags=0x8000 pkt_statistics=0x0
pkt_state=0x0
Feb 17 00:54:06 megatron scsi: [ID 365881 kern.info]
/pci@0,0/pci8086,2e29@6/pci1000,3140@0 (mpt4):
Feb 17 00:54:06 megatron    pkt_scbp=0x0 cmd_flags=0x2800024
Feb 17 00:54:06 megatron scsi: [ID 107833 kern.warning] WARNING:
/pci@0,0/pci8086,2e29@6/pci1000,3140@0 (mpt4):
Feb 17 00:54:06 megatron    ioc reset abort passthru

Fault management records some transport errors followed by recovery.
Any ideas? Disks are ST32000542AS.
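
For anyone digging into this, the transport ereports I mean can be listed with
something like:

 # fmdump -e        # one-line summary of error telemetry
 # fmdump -eV       # full detail
 # fmadm faulty     # per the subject, currently shows nothing for these disks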