[zfs-discuss] how to discover disks?

2009-07-06 Thread Hua-Ying Ling
Hi,

How do I discover the disk name to use for zfs commands, such as
c3d0s0?  I tried using the format command, but it only gave me the first 4
letters: c3d1.  Also, why do some commands accept only 4-letter disk
names while others require 6 letters?

Thanks
Hua-Ying
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to discover disks?

2009-07-06 Thread Carsten Aulbert
Hi

Hua-Ying Ling wrote:
 How do I discover the disk name to use for zfs commands such as:
 c3d0s0?  I tried using format command but it only gave me the first 4
 letters: c3d1.  Also why do some command accept only 4 letter disk
 names and others require 6 letters?


Usually I find

cfgadm -a

helpful enough for that (maybe adding '| grep disk' to it).

Why sometimes 4 and sometimes 6 characters:

c3d1 - this would be disk #1 on controller #3
c3d0s0 - this would be slice #0 (a partition) on disk #0 on controller #3

Usually there is also a t0 (target) in there, e.g.:

cfgadm -a|grep disk |head
sata0/0::dsk/c0t0d0            disk         connected    configured   ok
sata0/1::dsk/c0t1d0            disk         connected    configured   ok
sata0/2::dsk/c0t2d0            disk         connected    configured   ok
sata0/3::dsk/c0t3d0            disk         connected    configured   ok
sata0/4::dsk/c0t4d0            disk         connected    configured   ok
sata0/5::dsk/c0t5d0            disk         connected    configured   ok
sata0/6::dsk/c0t6d0            disk         connected    configured   ok
sata0/7::dsk/c0t7d0            disk         connected    configured   ok
sata1/0::dsk/c1t0d0            disk         connected    configured   ok
sata1/1::dsk/c1t1d0            disk         connected    configured   ok
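
A couple of related commands can also help map a name like c3d1 to an
actual device (a sketch; c3d0 here is just an example name):

  # each slice of a disk has its own device node
  ls /dev/dsk/c3d0*

  # print the slice (VTOC) table of an already labelled disk
  prtvtoc /dev/rdsk/c3d0s2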


HTH

Carsten
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to discover disks?

2009-07-06 Thread Hua-Ying Ling
When I use cfgadm -a, it seems to list only USB devices?

#cfgadm -a
Ap_Id                          Type         Receptacle   Occupant     Condition
usb2/1                         unknown      empty        unconfigured ok
usb2/2                         unknown      empty        unconfigured ok
usb2/3                         unknown      empty        unconfigured ok
usb3/1                         unknown      empty        unconfigured ok

I'm trying to convert a non-redundant storage pool to a mirrored pool.
I'm following the ZFS admin guide, page 71.

I currently have an existing rpool:

#zpool status

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
  c3d0s0    ONLINE       0     0     0

I want to mirror this drive, so I tried using format to get the disk name:

#format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c3d0 <DEFAULT cyl 24318 alt 2 hd 255 sec 63>
      /p...@0,0/pci-...@14,1/i...@0/c...@0,0
   1. c3d1 <drive type unknown>
      /p...@0,0/pci-...@14,1/i...@0/c...@1,0

So I tried

#zpool attach rpool c3d0s0 c3d1s0 // failed
cannot open '/dev/dsk/c3d1s0': No such device or address

#zpool attach rpool c3d0s0 c3d1 // failed
cannot label 'c3d1': EFI labeled devices are not supported on root pools.

Thoughts?

Thanks,
Hua-Ying

On Mon, Jul 6, 2009 at 2:37 AM, Carsten Aulbert <carsten.aulb...@aei.mpg.de> wrote:
 Hi

 Hua-Ying Ling wrote:
 How do I discover the disk name to use for zfs commands such as:
 c3d0s0?  I tried using format command but it only gave me the first 4
 letters: c3d1.  Also why do some command accept only 4 letter disk
 names and others require 6 letters?


 Usually i find

 cfgadm -a

 helpful enough for that (mayby adding '|grep disk' to it).

 Why sometimes 4 and sometimes 6 characters:

 c3d1 - this would be disk#1 on controller#3
 c3d0s0 - this would be slice #0 (partition) on disk #0 on controller #3

 Usually there is a also t0 there, e.g.:

 cfgadm -a|grep disk |head
 sata0/0::dsk/c0t0d0            disk         connected    configured   ok
 sata0/1::dsk/c0t1d0            disk         connected    configured   ok
 sata0/2::dsk/c0t2d0            disk         connected    configured   ok
 sata0/3::dsk/c0t3d0            disk         connected    configured   ok
 sata0/4::dsk/c0t4d0            disk         connected    configured   ok
 sata0/5::dsk/c0t5d0            disk         connected    configured   ok
 sata0/6::dsk/c0t6d0            disk         connected    configured   ok
 sata0/7::dsk/c0t7d0            disk         connected    configured   ok
 sata1/0::dsk/c1t0d0            disk         connected    configured   ok
 sata1/1::dsk/c1t1d0            disk         connected    configured   ok


 HTH

 Carsten
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to discover disks?

2009-07-06 Thread Jorgen Lundman


If you want to use the entire disk in a zpool, you should use the 
notation without the trailing s? part, i.e. c2d0. (SATA disks driven in 
IDE compatibility mode do not have the t? target part, whereas SCSI and 
SCSI-emulated devices, like CD-ROMs and USB, do.)


If you are using just part of a disk (one partition/slice), you use 
the s? notation, for example c2d0s6.


There is one caveat: x86 bootable HDDs need an SMI label; EFI labels 
will not work. So for a bootable root pool, it has to be a slice on an 
SMI-labelled disk.


Run format on the disk and create your slices the way you want them, 
probably just s0 spanning the entire disk (not counting the virtual 
s8 boot slice and, of course, the whole-disk slice s2).


Then write it out with an SMI label, and you can attach it to your root pool.
zpool usually reminds you to run installgrub on the new disk too.
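
A minimal sketch of that whole sequence, assuming the new disk is c3d1
and the root pool is rpool (adjust the names to your system):

  # select the disk in expert mode; use fdisk to create a Solaris
  # partition if needed, create slice 0 in the partition menu, then
  # 'label' and choose the SMI label type
  format -e c3d1

  # attach the new slice as a mirror of the existing root disk
  zpool attach rpool c3d0s0 c3d1s0

  # put the GRUB boot blocks on the new disk
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3d1s0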

I am not an expert on this; it is just what I have found out so far.

Lund




Hua-Ying Ling wrote:

When I use cfgadm -a it only seems to list usb devices?

#cfgadm -a
Ap_Id  Type Receptacle   Occupant Condition
usb2/1 unknown  emptyunconfigured ok
usb2/2 unknown  emptyunconfigured ok
usb2/3 unknown  emptyunconfigured ok
usb3/1 unknown  emptyunconfigured ok

I'm trying to convert a nonredundant storage pool to a mirrored pool.
I'm following the zfs admin guide on page 71.

I currently have a existing rpool:

#zpool status

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c3d0s0ONLINE   0 0 0

I want to mirror this drive, I tried using format to get the disk name

#format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c3d0 DEFAULT cyl 24318 alt 2 hd 255 sec 63
  /p...@0,0/pci-...@14,1/i...@0/c...@0,0
   1. c3d1 drive type unknown
  /p...@0,0/pci-...@14,1/i...@0/c...@1,0

So I tried

#zpool attach rpool c3d0s0 c3d1s0 // failed
cannot open '/dev/dsk/c3d1s0': No such device or address

#zpool attach rpool c3d0s0 c3d1 // failed
cannot label 'c3d1': EFI labeled devices are not supported on root pools.

Thoughts?

Thanks,
Hua-Ying

On Mon, Jul 6, 2009 at 2:37 AM, Carsten
Aulbertcarsten.aulb...@aei.mpg.de wrote:

Hi

Hua-Ying Ling wrote:

How do I discover the disk name to use for zfs commands such as:
c3d0s0?  I tried using format command but it only gave me the first 4
letters: c3d1.  Also why do some command accept only 4 letter disk
names and others require 6 letters?


Usually i find

cfgadm -a

helpful enough for that (mayby adding '|grep disk' to it).

Why sometimes 4 and sometimes 6 characters:

c3d1 - this would be disk#1 on controller#3
c3d0s0 - this would be slice #0 (partition) on disk #0 on controller #3

Usually there is a also t0 there, e.g.:

cfgadm -a|grep disk |head
sata0/0::dsk/c0t0d0disk connectedconfigured   ok
sata0/1::dsk/c0t1d0disk connectedconfigured   ok
sata0/2::dsk/c0t2d0disk connectedconfigured   ok
sata0/3::dsk/c0t3d0disk connectedconfigured   ok
sata0/4::dsk/c0t4d0disk connectedconfigured   ok
sata0/5::dsk/c0t5d0disk connectedconfigured   ok
sata0/6::dsk/c0t6d0disk connectedconfigured   ok
sata0/7::dsk/c0t7d0disk connectedconfigured   ok
sata1/0::dsk/c1t0d0disk connectedconfigured   ok
sata1/1::dsk/c1t1d0disk connectedconfigured   ok


HTH

Carsten
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



--
Jorgen Lundman   | lund...@lundman.net
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to discover disks?

2009-07-06 Thread Vikash Gupta
Hi,

The format output shows the 2nd disk as:
1. c3d1 <drive type unknown>

which usually means a hardware issue, or that the OS did not recognize its label/size.

Please check the physical disk and its connectivity; once that is OK, please run 
devfsadm and try the format command again.
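
For example (a sketch):

  # clean up stale /dev links and create nodes for newly seen devices
  devfsadm -Cv

  # then rescan
  format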


Thanks & Regards,
Vikash Gupta
Extn: 88-220-7318
Cell: 408-307-9718


-Original Message-
From: zfs-discuss-boun...@opensolaris.org 
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Hua-Ying Ling
Sent: Monday, July 06, 2009 12:23 AM
To: Carsten Aulbert
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] how to discover disks?

When I use cfgadm -a it only seems to list usb devices?

#cfgadm -a
Ap_Id  Type Receptacle   Occupant Condition
usb2/1 unknown  emptyunconfigured ok
usb2/2 unknown  emptyunconfigured ok
usb2/3 unknown  emptyunconfigured ok
usb3/1 unknown  emptyunconfigured ok

I'm trying to convert a nonredundant storage pool to a mirrored pool.
I'm following the zfs admin guide on page 71.

I currently have a existing rpool:

#zpool status

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c3d0s0ONLINE   0 0 0

I want to mirror this drive, I tried using format to get the disk name

#format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c3d0 DEFAULT cyl 24318 alt 2 hd 255 sec 63
  /p...@0,0/pci-...@14,1/i...@0/c...@0,0
   1. c3d1 drive type unknown
  /p...@0,0/pci-...@14,1/i...@0/c...@1,0

So I tried

#zpool attach rpool c3d0s0 c3d1s0 // failed
cannot open '/dev/dsk/c3d1s0': No such device or address

#zpool attach rpool c3d0s0 c3d1 // failed
cannot label 'c3d1': EFI labeled devices are not supported on root pools.

Thoughts?

Thanks,
Hua-Ying

On Mon, Jul 6, 2009 at 2:37 AM, Carsten
Aulbertcarsten.aulb...@aei.mpg.de wrote:
 Hi

 Hua-Ying Ling wrote:
 How do I discover the disk name to use for zfs commands such as:
 c3d0s0?  I tried using format command but it only gave me the first 4
 letters: c3d1.  Also why do some command accept only 4 letter disk
 names and others require 6 letters?


 Usually i find

 cfgadm -a

 helpful enough for that (mayby adding '|grep disk' to it).

 Why sometimes 4 and sometimes 6 characters:

 c3d1 - this would be disk#1 on controller#3
 c3d0s0 - this would be slice #0 (partition) on disk #0 on controller #3

 Usually there is a also t0 there, e.g.:

 cfgadm -a|grep disk |head
 sata0/0::dsk/c0t0d0            disk         connected    configured   ok
 sata0/1::dsk/c0t1d0            disk         connected    configured   ok
 sata0/2::dsk/c0t2d0            disk         connected    configured   ok
 sata0/3::dsk/c0t3d0            disk         connected    configured   ok
 sata0/4::dsk/c0t4d0            disk         connected    configured   ok
 sata0/5::dsk/c0t5d0            disk         connected    configured   ok
 sata0/6::dsk/c0t6d0            disk         connected    configured   ok
 sata0/7::dsk/c0t7d0            disk         connected    configured   ok
 sata1/0::dsk/c1t0d0            disk         connected    configured   ok
 sata1/1::dsk/c1t1d0            disk         connected    configured   ok


 HTH

 Carsten
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to discover disks?

2009-07-06 Thread Andrew Gabriel

Hua-Ying Ling wrote:

When I use cfgadm -a it only seems to list usb devices?

#cfgadm -a
Ap_Id  Type Receptacle   Occupant Condition
usb2/1 unknown  emptyunconfigured ok
usb2/2 unknown  emptyunconfigured ok
usb2/3 unknown  emptyunconfigured ok
usb3/1 unknown  emptyunconfigured ok
  


I think it only shows hot-swap capable devices, and presumably your 
disks are not being driven by a hot swap capable controller _and_ 
driver. For SATA disks, the controller must have a Solaris driver which 
drives the disks in native SATA mode (such as nv_sata(7D)), and not IDE 
compatibility mode (such as SATA disks driven by ata(7D)).
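
One quick way to see which driver is actually bound to the disk
controller is something like this (a sketch; the grep pattern is just
an example):

  # prtconf -D prints the device tree with the bound driver names;
  # look for ata (IDE compatibility) versus a native SATA driver
  # such as nv_sata or ahci
  prtconf -D | egrep -i 'ide|ata|sata'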


--
Andrew
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove the exported zpool

2009-07-06 Thread Juergen Nickelsen
Ketan no-re...@opensolaris.org writes:

 I had a pool which was exported and due to some issues on my SAN i
 was never able to import it again. Can anyone tell me how can i
 destroy the exported pool to free up the LUN.

I did that once; I *think* that was with the -f option to zpool
destroy.

Regards, Juergen.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disappearing snapshots

2009-07-06 Thread Juergen Nickelsen
DL Consulting no-re...@opensolaris.org writes:

 It takes daily snapshots and sends them to another machine as a
 backup. The sending and receiving is scripted and run from a
 cronjob. The problem is that some of the snapshots disappear from
 monster after they've been sent to the backup machine.

Do not use the snapshots made for the Time Slider feature. These are
under the control of the auto-snapshot service, which manages them for
exactly the Time Slider and not for anything else.

Snapshots are cheap; create your own for file system replication.
As you always need to keep the last common snapshot on both source
and target of the replication, you want snapshot creation and
deletion under your own control, not under the control of a service
that is made for something else.

For my own filesystem replication I have written a script that looks
at the snapshots on the target side, locates the most recent of those,
and then does an incremental replication of a newly created snapshot
relative to that last common one. The old common snapshot is then
destroyed once the replication has succeeded, so the new snapshot
becomes the last common one.
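
A minimal sketch of that kind of incremental replication (the pool,
filesystem, host and snapshot names below are made up):

  FS=tank/data
  LAST=repl-20090705       # last snapshot common to source and target
  NOW=repl-20090706        # new snapshot created for this run

  zfs snapshot $FS@$NOW
  zfs send -i $FS@$LAST $FS@$NOW | \
      ssh backuphost zfs receive -F tank/data-copy && \
      zfs destroy $FS@$LAST && \
      ssh backuphost zfs destroy tank/data-copy@$LAST

The destroys only run if the receive succeeded, so the new snapshot
becomes the last common one on both sides.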

If your replication gets out of sync, such that the last snapshot
on the target is no longer a common one, you must delete snapshots on
the target until the most recent common snapshot is the last one; if
no common snapshot remains, you have to start the replication anew by
deleting (or renaming) the file system on the target and doing a
non-incremental send of a source snapshot to the target.

Regards, Juergen.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove the exported zpool

2009-07-06 Thread Ketan
Already tried that ... :-( 


# zpool destroy -f emcpool2
cannot open 'emcpool2': no such pool
#
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-06 Thread Boyd Adamson
Phil Harman phil.har...@sun.com writes:

 Gary Mills wrote:
 The Solaris implementation of mmap(2) is functionally correct, but the
 wait for a 64 bit address space rather moved the attention of
 performance tuning elsewhere. I must admit I was surprised to see so
 much code out there that still uses mmap(2) for general I/O (rather
 than just to support dynamic linking).

Probably this is encouraged by documentation like this:

 The memory mapping interface is described in Memory Management
 Interfaces. Mapping files is the most efficient form of file I/O for
 most applications run under the SunOS platform.

Found at:

http://docs.sun.com/app/docs/doc/817-4415/fileio-2?l=en&a=view


Boyd.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-06 Thread Bob Friesenhahn

On Mon, 6 Jul 2009, Boyd Adamson wrote:


Probably this is encouraged by documentation like this:


The memory mapping interface is described in Memory Management
Interfaces. Mapping files is the most efficient form of file I/O for
most applications run under the SunOS platform.


Found at:

http://docs.sun.com/app/docs/doc/817-4415/fileio-2?l=ena=view


People often think of the main benefit of mmap() as reducing 
CPU consumption and buffer copies, but the mmap() family of programming 
interfaces is much richer than low-level read/write, pread/pwrite, or 
stdio, because madvise() provides the ability to influence I/O scheduling 
and to flush stale data from memory.  In recent Solaris, it also includes 
provisions that allow applications to improve their performance on 
NUMA systems.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to discover disks?

2009-07-06 Thread Cindy . Swearingen

Hi Hua-Ying,

Some disks don't have target identifiers, like your c3d0
and c3d1 disks.

To attach your c3d1 disk, you need to relabel it with an
SMI label and provide a slice (s0, for example).

See the steps here:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Replacing.2FRelabeling_the_Root_Pool_Disk
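
Roughly, the relabel boils down to something like this (a sketch, not a
substitute for the full procedure behind the link above):

  # expert mode is needed if the disk currently carries an EFI label
  format -e c3d1
  #  -> fdisk: create a Solaris partition if there isn't one
  #  -> partition: create slice 0, then 'label' and pick the SMI type

  # verify the resulting slice table
  prtvtoc /dev/rdsk/c3d1s2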

Cindy

P.S. For cfgadm output, you might need to use the cfgadm -al or maybe
-av syntax. The options/output of this command might depend on the
hardware type; I'm not quite sure what it needs in this case.



Hua-Ying Ling wrote:

When I use cfgadm -a it only seems to list usb devices?

#cfgadm -a
Ap_Id  Type Receptacle   Occupant Condition
usb2/1 unknown  emptyunconfigured ok
usb2/2 unknown  emptyunconfigured ok
usb2/3 unknown  emptyunconfigured ok
usb3/1 unknown  emptyunconfigured ok

I'm trying to convert a nonredundant storage pool to a mirrored pool.
I'm following the zfs admin guide on page 71.

I currently have a existing rpool:

#zpool status

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c3d0s0ONLINE   0 0 0

I want to mirror this drive, I tried using format to get the disk name

#format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c3d0 DEFAULT cyl 24318 alt 2 hd 255 sec 63
  /p...@0,0/pci-...@14,1/i...@0/c...@0,0
   1. c3d1 drive type unknown
  /p...@0,0/pci-...@14,1/i...@0/c...@1,0

So I tried

#zpool attach rpool c3d0s0 c3d1s0 // failed
cannot open '/dev/dsk/c3d1s0': No such device or address

#zpool attach rpool c3d0s0 c3d1 // failed
cannot label 'c3d1': EFI labeled devices are not supported on root pools.

Thoughts?

Thanks,
Hua-Ying

On Mon, Jul 6, 2009 at 2:37 AM, Carsten
Aulbertcarsten.aulb...@aei.mpg.de wrote:


Hi

Hua-Ying Ling wrote:


How do I discover the disk name to use for zfs commands such as:
c3d0s0?  I tried using format command but it only gave me the first 4
letters: c3d1.  Also why do some command accept only 4 letter disk
names and others require 6 letters?



Usually i find

cfgadm -a

helpful enough for that (mayby adding '|grep disk' to it).

Why sometimes 4 and sometimes 6 characters:

c3d1 - this would be disk#1 on controller#3
c3d0s0 - this would be slice #0 (partition) on disk #0 on controller #3

Usually there is a also t0 there, e.g.:

cfgadm -a|grep disk |head
sata0/0::dsk/c0t0d0disk connectedconfigured   ok
sata0/1::dsk/c0t1d0disk connectedconfigured   ok
sata0/2::dsk/c0t2d0disk connectedconfigured   ok
sata0/3::dsk/c0t3d0disk connectedconfigured   ok
sata0/4::dsk/c0t4d0disk connectedconfigured   ok
sata0/5::dsk/c0t5d0disk connectedconfigured   ok
sata0/6::dsk/c0t6d0disk connectedconfigured   ok
sata0/7::dsk/c0t7d0disk connectedconfigured   ok
sata1/0::dsk/c1t0d0disk connectedconfigured   ok
sata1/1::dsk/c1t1d0disk connectedconfigured   ok


HTH

Carsten
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-06 Thread Gary Mills
On Sat, Jul 04, 2009 at 07:18:45PM +0100, Phil Harman wrote:
 Gary Mills wrote:
 On Sat, Jul 04, 2009 at 08:48:33AM +0100, Phil Harman wrote:
   
 ZFS doesn't mix well with mmap(2). This is because ZFS uses the ARC  
 instead of the Solaris page cache. But mmap() uses the latter. So if  
 anyone maps a file, ZFS has to keep the two caches in sync.
 
 That's the first I've heard of this issue.  Our e-mail server runs
 Cyrus IMAP with mailboxes on ZFS filesystems.  Cyrus uses mmap(2)
 extensively.  I understand that Solaris has an excellent
 implementation of mmap(2).  ZFS has many advantages, snapshots for
 example, for mailbox storage.  Is there anything that we can be do to
 optimize the two caches in this environment?  Will mmap(2) one day
 play nicely with ZFS?
 
[..]
 Software engineering is always about prioritising resource. Nothing 
 prioritises performance tuning attention quite like compelling 
 competitive data. When Bart Smaalders and I wrote libMicro we generated 
 a lot of very compelling data. I also coined the phrase "If Linux is 
 faster, it's a Solaris bug". You will find quite a few (mostly fixed) 
 bugs with the synopsis "linux is faster than solaris" at 
 
 So, if mmap(2) playing nicely with ZFS is important to you, probably the 
 best thing you can do to help that along is to provide data that will 
 help build the business case for spending engineering resource on the issue.

First of all, how significant is the double caching in terms of
performance?  If the effect is small, I won't worry about it anymore.

What sort of data do you need?  Would a list of software products that
utilize mmap(2) extensively and could benefit from ZFS be suitable?

As for a business case, we just had an extended and catastrophic
performance degradation that was the result of two ZFS bugs.  If we
have another one like that, our director is likely to instruct us to
throw away all our Solaris toys and convert to Microsoft products.

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-06 Thread Andre van Eyssen

On Mon, 6 Jul 2009, Gary Mills wrote:


As for a business case, we just had an extended and catastrophic
performance degradation that was the result of two ZFS bugs.  If we
have another one like that, our director is likely to instruct us to
throw away all our Solaris toys and convert to Microsoft products.


If you change platform every time you get two bugs in a product, you must 
cycle platforms on a pretty regular basis!


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-06 Thread Bryan Allen
+--
| On 2009-07-07 01:29:11, Andre van Eyssen wrote:
| 
| On Mon, 6 Jul 2009, Gary Mills wrote:
| 
| As for a business case, we just had an extended and catastrophic
| performance degradation that was the result of two ZFS bugs.  If we
| have another one like that, our director is likely to instruct us to
| throw away all our Solaris toys and convert to Microsoft products.
| 
| If you change platform every time you get two bugs in a product, you must 
| cycle platforms on a pretty regular basis!

Given that policy, I don't imagine Windows will last very long anyway.
-- 
bda
cyberpunk is dead. long live cyberpunk.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Zfs destroy taking inordinately long time...

2009-07-06 Thread Daniel Liebster
I ran a zfs destroy on a 20TB volume on a Thumper running snv_117, and it's been 
2 hours now with a huge amount of read activity. In the past (2008.06) a destroy 
came back within minutes.
After a couple of hours, activity still looks like:

------------  -----  -----  -----  -----  -----  -----
dataPool      18.8T  21.2T    364      0  2.01M      0
rpool         26.5G   902G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
dataPool      18.8T  21.2T    342      0  1.95M      0
rpool         26.5G   902G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
dataPool      18.8T  21.2T    348      0  1.98M      0
rpool         26.5G   902G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----


Is this expected in snv_117? And if not, how do I start debugging?

Dan
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] getting actual size in bytes of a zfs fs

2009-07-06 Thread Harry Putnam
I think this has probably been discussed here, but I'm getting
confused about how to determine the actual disk usage of ZFS filesystems.

Here is an example:

 $ du -sb callisto
 46744   callisto

 $ du -sb callisto/.zfs/snapshot
  86076   callisto/.zfs/snapshot

Two questions then.

I do need to add those two together to get the actual disk usage, right?

Is something wrong here?  There are only 2 snapshots there,
and I've seen it mentioned repeatedly how little space snapshots
take.

It is rsync'd data from a remote host, but I did use the --inplace flag.
The whole command used:

  rsync -avvz -L --delete \
  --exclude-from=/www/rexcl/callisto_exclude.txt \
  --inplace --delete-excluded \
  rea...@callisto.jtan.com:/home/reader/public_html/ /www/callisto/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to discover disks?

2009-07-06 Thread Hua-Ying Ling
I'm confused by the output of the partition command; this is the
partition table created by the installer:

current partition table (original):
Total disk cylinders available: 24318 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 24316      186.27GB    (24316/0/0) 390636540
  1 unassigned    wm       0                0         (0/0/0)             0
  2     backup    wu       0 - 24317      186.29GB    (24318/0/0) 390668670
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 unassigned    wm       0                0         (0/0/0)             0

It seems like partitions 0, 2, and 8 are sharing the same part of the
disk.  How is this possible?

Hua-Ying



On Mon, Jul 6, 2009 at 11:27 AM, cindy.swearin...@sun.com wrote:
 Hi Hua-Ying,

 Some disks don't have target identifiers, like you c3d0
 and c3d1 disks.

 To attach your c3d1 disk, you need to relabel it with an
 SMI label and provide a slice, s0, for example.

 See the steps here:

 http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Replacing.2FRelabeling_the_Root_Pool_Disk

 Cindy

 P.S. For cfgadm output, you might need to use the cfgadm -al or maybe
 -av syntax. The options/output of this command might depend on the hardware
 types. I'm not quite sure what it needs in this case.


 Hua-Ying Ling wrote:

 When I use cfgadm -a it only seems to list usb devices?

 #cfgadm -a
 Ap_Id                          Type         Receptacle   Occupant
 Condition
 usb2/1                         unknown      empty        unconfigured ok
 usb2/2                         unknown      empty        unconfigured ok
 usb2/3                         unknown      empty        unconfigured ok
 usb3/1                         unknown      empty        unconfigured ok

 I'm trying to convert a nonredundant storage pool to a mirrored pool.
 I'm following the zfs admin guide on page 71.

 I currently have a existing rpool:

 #zpool status

  pool: rpool
  state: ONLINE
  scrub: none requested
 config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c3d0s0    ONLINE       0     0     0

 I want to mirror this drive, I tried using format to get the disk name

 #format
 Searching for disks...done


 AVAILABLE DISK SELECTIONS:
       0. c3d0 DEFAULT cyl 24318 alt 2 hd 255 sec 63
          /p...@0,0/pci-...@14,1/i...@0/c...@0,0
       1. c3d1 drive type unknown
          /p...@0,0/pci-...@14,1/i...@0/c...@1,0

 So I tried

 #zpool attach rpool c3d0s0 c3d1s0 // failed
 cannot open '/dev/dsk/c3d1s0': No such device or address

 #zpool attach rpool c3d0s0 c3d1 // failed
 cannot label 'c3d1': EFI labeled devices are not supported on root pools.

 Thoughts?

 Thanks,
 Hua-Ying

 On Mon, Jul 6, 2009 at 2:37 AM, Carsten
 Aulbertcarsten.aulb...@aei.mpg.de wrote:

 Hi

 Hua-Ying Ling wrote:

 How do I discover the disk name to use for zfs commands such as:
 c3d0s0?  I tried using format command but it only gave me the first 4
 letters: c3d1.  Also why do some command accept only 4 letter disk
 names and others require 6 letters?


 Usually i find

 cfgadm -a

 helpful enough for that (mayby adding '|grep disk' to it).

 Why sometimes 4 and sometimes 6 characters:

 c3d1 - this would be disk#1 on controller#3
 c3d0s0 - this would be slice #0 (partition) on disk #0 on controller #3

 Usually there is a also t0 there, e.g.:

 cfgadm -a|grep disk |head
 sata0/0::dsk/c0t0d0            disk         connected    configured   ok
 sata0/1::dsk/c0t1d0            disk         connected    configured   ok
 sata0/2::dsk/c0t2d0            disk         connected    configured   ok
 sata0/3::dsk/c0t3d0            disk         connected    configured   ok
 sata0/4::dsk/c0t4d0            disk         connected    configured   ok
 sata0/5::dsk/c0t5d0            disk         connected    configured   ok
 sata0/6::dsk/c0t6d0            disk         connected    configured   ok
 sata0/7::dsk/c0t7d0            disk         connected    configured   ok
 sata1/0::dsk/c1t0d0            disk         connected    configured   ok
 sata1/1::dsk/c1t1d0            disk         connected    configured   ok


 HTH

 Carsten
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to discover disks?

2009-07-06 Thread Cindy . Swearingen

Hua-Ying,

The partition table *is* confusing so don't try to make sense of it. :-)

Partition or slice 2 represents the entire disk, cylinders 0-24317.
You created slice 0, which is cylinders 1-24316. Slice 8 is a reserved,
legacy area for boot info on some x86 systems. You can ignore it.

Looks like a reasonable partition setup to me.
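
You can also see that layout non-interactively (a sketch):

  # slice 2 ("backup") always maps the whole disk, so its overlap with
  # slice 0 and the tiny s8 boot slice is expected, not an error
  prtvtoc /dev/rdsk/c3d0s2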

As long as the partition tables for your c3d0 and c3d1 disks are similar 
you should be able to attach the disk, like this:


# zpool attach rpool c3d0s0 c3d1s0

Don't forget to add the bootblocks to c3d1s0 as described in the
troubleshooting wiki.
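
Putting it together, one common way to do this, assuming c3d1 is still
blank (double-check the device names before running anything):

  # copy the slice table from the existing root disk to the new one
  # (the new disk needs a Solaris fdisk partition and SMI label first)
  prtvtoc /dev/rdsk/c3d0s2 | fmthard -s - /dev/rdsk/c3d1s2

  zpool attach rpool c3d0s0 c3d1s0

  # the boot blocks mentioned above
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3d1s0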

Cindy

Hua-Ying Ling wrote:

I'm confused by the output of the partition command, this is the
partition table created by the installer:

current partition table (original):
Total disk cylinders available: 24318 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 24316      186.27GB    (24316/0/0) 390636540
  1 unassigned    wm       0                0         (0/0/0)             0
  2     backup    wu       0 - 24317      186.29GB    (24318/0/0) 390668670
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 unassigned    wm       0                0         (0/0/0)             0

It seems like the partition 0,2,8 are sharing the same part of the
disk.  How is this possible?

Hua-Ying



On Mon, Jul 6, 2009 at 11:27 AM, cindy.swearin...@sun.com wrote:


Hi Hua-Ying,

Some disks don't have target identifiers, like you c3d0
and c3d1 disks.

To attach your c3d1 disk, you need to relabel it with an
SMI label and provide a slice, s0, for example.

See the steps here:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Replacing.2FRelabeling_the_Root_Pool_Disk

Cindy

P.S. For cfgadm output, you might need to use the cfgadm -al or maybe
-av syntax. The options/output of this command might depend on the hardware
types. I'm not quite sure what it needs in this case.


Hua-Ying Ling wrote:


When I use cfgadm -a it only seems to list usb devices?

#cfgadm -a
Ap_Id  Type Receptacle   Occupant
Condition
usb2/1 unknown  emptyunconfigured ok
usb2/2 unknown  emptyunconfigured ok
usb2/3 unknown  emptyunconfigured ok
usb3/1 unknown  emptyunconfigured ok

I'm trying to convert a nonredundant storage pool to a mirrored pool.
I'm following the zfs admin guide on page 71.

I currently have a existing rpool:

#zpool status

pool: rpool
state: ONLINE
scrub: none requested
config:

  NAMESTATE READ WRITE CKSUM
  rpool   ONLINE   0 0 0
c3d0s0ONLINE   0 0 0

I want to mirror this drive, I tried using format to get the disk name

#format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
 0. c3d0 DEFAULT cyl 24318 alt 2 hd 255 sec 63
/p...@0,0/pci-...@14,1/i...@0/c...@0,0
 1. c3d1 drive type unknown
/p...@0,0/pci-...@14,1/i...@0/c...@1,0

So I tried

#zpool attach rpool c3d0s0 c3d1s0 // failed
cannot open '/dev/dsk/c3d1s0': No such device or address

#zpool attach rpool c3d0s0 c3d1 // failed
cannot label 'c3d1': EFI labeled devices are not supported on root pools.

Thoughts?

Thanks,
Hua-Ying

On Mon, Jul 6, 2009 at 2:37 AM, Carsten
Aulbertcarsten.aulb...@aei.mpg.de wrote:



Hi

Hua-Ying Ling wrote:



How do I discover the disk name to use for zfs commands such as:
c3d0s0?  I tried using format command but it only gave me the first 4
letters: c3d1.  Also why do some command accept only 4 letter disk
names and others require 6 letters?



Usually i find

cfgadm -a

helpful enough for that (mayby adding '|grep disk' to it).

Why sometimes 4 and sometimes 6 characters:

c3d1 - this would be disk#1 on controller#3
c3d0s0 - this would be slice #0 (partition) on disk #0 on controller #3

Usually there is a also t0 there, e.g.:

cfgadm -a|grep disk |head
sata0/0::dsk/c0t0d0disk connectedconfigured   ok
sata0/1::dsk/c0t1d0disk connectedconfigured   ok
sata0/2::dsk/c0t2d0disk connectedconfigured   ok
sata0/3::dsk/c0t3d0disk connectedconfigured   ok
sata0/4::dsk/c0t4d0disk connectedconfigured   ok
sata0/5::dsk/c0t5d0disk connectedconfigured   ok
sata0/6::dsk/c0t6d0disk connectedconfigured   ok
sata0/7::dsk/c0t7d0disk 

Re: [zfs-discuss] Disappearing snapshots

2009-07-06 Thread DL Consulting
Thanks guys
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] getting actual size in bytes of a zfs fs

2009-07-06 Thread Robert Thurlow

Harry Putnam wrote:

I think this has probably been discussed here.. but I'm getting
confused about how to determine actual disk usage of zfs filesystems.

Here is an example:

 $ du -sb callisto
 46744   callisto

 $ du -sb callisto/.zfs/snapshot
  86076   callisto/.zfs/snapshot

Two questions then.

I do need to add those two to get the actual disk usage right?

Is something wrong here... there are only 2 snapshots there.
And I've seen it mentioned repeatedly about how little space snapshots
take. 


'du' does a stat() of each file it finds; it sees and reports
identical files in snapshots as full size.  'rsync' will also
work on all copies of a file.

To see space usage, you need to ask zfs itself:

NAME             USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT      10.4G  7.93G    18K  /rpool/ROOT
rpool/r...@rob      0      -    18K  -

The snapshot I just took and named after myself doesn't yet
take any space for itself.  If I delete a file, e.g. my
/var/crash/* files that I'm done with, I *may* see space
start to be accounted to the snapshot.
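
To see all snapshots and the space each one holds on to, something like
this works (a sketch):

  zfs list -t snapshot -r rpool          # USED/REFER per snapshot
  zfs get -r used,referenced rpool/ROOT  # same numbers as properties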

Rob T
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] getting actual size in bytes of a zfs fs

2009-07-06 Thread Erik Trimble

Robert Thurlow wrote:

Harry Putnam wrote:

I think this has probably been discussed here.. but I'm getting
confused about how to determine actual disk usage of zfs filesystems.

Here is an example:

 $ du -sb callisto
 46744   callisto

 $ du -sb callisto/.zfs/snapshot
  86076   callisto/.zfs/snapshot

Two questions then.

I do need to add those two to get the actual disk usage right?

Is something wrong here... there are only 2 snapshots there.
And I've seen it mentioned repeatedly about how little space snapshots
take. 


'du' does a stat() of each file it finds; it sees and reports
identical files in snapshots as full size.  'rsync' will also
work on all copies of a file.

To see space usage, you need to ask zfs itself:

NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT 10.4G  7.93G18K  /rpool/ROOT
rpool/r...@rob 0  -18K  -

The snapshot I just took and named after myself doesn't yet
take any space for itself.  If I delete a file, e.g. my
/var/crash/* files that I'm done with, I *may* see space
start to be accounted to the snapshot.

Rob T
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
As Robert pointed out, you DON'T want to use 'du', as it doesn't take into 
account that ZFS uses copy-on-write to reduce the amount of space that 
snapshots take up.  Think of it as similar to the situation you face 
with hard links (i.e. multiple identical files actually all referring to 
the same bits on the disk).



Remember, there are generally TWO concepts you don't want to confuse: a 
FILESYSTEM, and a POOL.  The pool is the larger object, inside which 
multiple filesystems, snapshots, and volumes can exist. A pool has a hard 
limit on the amount of space it can handle, as determined by the 
underlying storage objects that make it up (disks, arrays, LUNs, etc.).  
The other objects are of flexible size, up to the size of the pool that 
contains them.


By your question, you seem to be interested in the available/used space 
of a Pool.  You want the 'zpool' command for this, specifically:


zpool list

Look at the zpool(1M) man page.

To look at the size of FILESYSTEMS, you can use 'df' for a mounted 
filesystem, as normal.  Alternatively, use 'zfs list <filesystem>' to list 
a specific filesystem by name (e.g. data/foo/bar/baz), or 'zfs list 
-t <type>', where type is one of 'filesystem', 'snapshot', or 'volume', 
to list all occurrences of that type of object.


Look at the zfs(1M) man page.


Note that most of the output uses human-readable units, so it is going to 
report GB and TB, not bytes.
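
If you do want exact byte counts (as in the subject of this thread),
the parseable-output flags help, assuming your zfs version has them
(the dataset name here is just an example):

  # -H: no header, tab-separated; -p: exact numeric values in bytes
  zfs get -Hp used,available,referenced tank/callisto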



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-06 Thread Sanjeev
Bob,

Catching up late on this thread.

Would it be possible for you to collect the following data :
- /usr/sbin/lockstat -CcwP -n 5 -D 20 -s 40 sleep 5
- /usr/sbin/lockstat -HcwP -n 5 -D 20 -s 40 sleep 5
- /usr/sbin/lockstat -kIW -i 977 -D 20 -s 40 sleep 5

Or if you have access to the GUDs tool please collect data using that.

We need to understand how ARC plays a role here.

Thanks and regards,
Sanjeev.
On Sat, Jul 04, 2009 at 02:49:05PM -0500, Bob Friesenhahn wrote:
 On Sat, 4 Jul 2009, Jonathan Edwards wrote:

 this is only going to help if you've got problems in zfetch .. you'd 
 probably see this better by looking for high lock contention in zfetch 
 with lockstat

 This is what lockstat says when performance is poor:

 Adaptive mutex spin: 477 events in 30.019 seconds (16 events/sec)

 Count indv cuml rcnt nsec Lock   Caller  
 ---
47  10%  10% 0.00 5813 0x80256000 untimeout+0x24
46  10%  19% 0.00 2223 0xb0a2f200 taskq_thread+0xe3
38   8%  27% 0.00 2252 0xb0a2f200 cv_wait+0x70
29   6%  34% 0.00 1115 0x80256000 callout_execute+0xeb
26   5%  39% 0.00 3006 0xb0a2f200 taskq_dispatch+0x1b8
22   5%  44% 0.00 1200 0xa06158c0 post_syscall+0x206
18   4%  47% 0.00 3858 arc_eviction_mtx   arc_do_user_evicts+0x76
16   3%  51% 0.00 1352 arc_eviction_mtx   arc_buf_add_ref+0x2d
15   3%  54% 0.00 5376 0xb1adac28 taskq_thread+0xe3
11   2%  56% 0.00 2520 0xb1adac28 taskq_dispatch+0x1b8
 9   2%  58% 0.00 2158 0xbb909e20 pollwakeup+0x116
 9   2%  60% 0.00 2431 0xb1adac28 cv_wait+0x70
 8   2%  62% 0.00 3912 0x80259000 untimeout+0x24
 7   1%  63% 0.00 3679 0xb10dfbc0 polllock+0x3f
 7   1%  65% 0.00 2171 0xb0a2f2d8 cv_wait+0x70
 6   1%  66% 0.00  771 0xb3f23708 pcache_delete_fd+0xac
 6   1%  67% 0.00 4679 0xb0a2f2d8 taskq_dispatch+0x1b8
 5   1%  68% 0.00  500 0xbe555040 fifo_read+0xf8
 5   1%  69% 0.0015838 0x8025c000 untimeout+0x24
 4   1%  70% 0.00 1213 0xac44b558 sd_initpkt_for_buf+0x110
 4   1%  71% 0.00  638 0xa28722a0 polllock+0x3f
 4   1%  72% 0.00  610 0x80259000 timeout_common+0x39
 4   1%  73% 0.0010691 0x80256000 timeout_common+0x39
 3   1%  73% 0.00 1559 htable_mutex+0x78  htable_release+0x8a
 3   1%  74% 0.00 3610 0xbb909e20 cv_timedwait_sig+0x1c1
 3   1%  74% 0.00 1636 0xa240d410 
 ohci_allocate_periodic_in_resource+0x71
 2   0%  75% 0.00 5959 0xbe555040 fifo_read+0x5c
 2   0%  75% 0.00 3744 0xbe555040 polllock+0x3f
 2   0%  76% 0.00  635 0xb3f23708 pollwakeup+0x116
 2   0%  76% 0.00  709 0xb3f23708 cv_timedwait_sig+0x1c1
 2   0%  77% 0.00  831 0xb3dd2070 pcache_insert+0x13d
 2   0%  77% 0.00 5976 0xb3dd2070 pollwakeup+0x116
 2   0%  77% 0.00 1339 0xb1eb9b80 
 metaslab_group_alloc+0x136
 2   0%  78% 0.00 1514 0xb0a2f2d8 taskq_thread+0xe3
 2   0%  78% 0.00 4042 0xb0a22988 vdev_queue_io_done+0xc3
 2   0%  79% 0.00 3428 0xb0a21f08 vdev_queue_io_done+0xc3
 2   0%  79% 0.00 1002 0xac44b558 sd_core_iostart+0x37
 2   0%  79% 0.00 1387 0xa8c56d80 xbuf_iostart+0x7d
 2   0%  80% 0.00  698 0xa58a3318 sd_return_command+0x11b
 2   0%  80% 0.00  385 0xa58a3318 sd_start_cmds+0x115
 2   0%  81% 0.00  562 0xa5647800 ssfcp_scsi_start+0x30
 2   0%  81% 0.00 1620 0xa4162d58 ssfcp_scsi_init_pkt+0x1be
 2   0%  82% 0.00  897 0xa4162d58 ssfcp_scsi_start+0x42
 2   0%  82% 0.00  475 0xa4162b78 ssfcp_scsi_start+0x42
 2   0%  82% 0.00  697 0xa40fb158 sd_start_cmds+0x115
 2   0%  83% 0.0010901 0xa28722a0 fifo_write+0x5b
 2   0%  83% 0.00 4379 0xa28722a0 fifo_read+0xf8
 2   0%  84% 0.00 1534 0xa2638390 emlxs_tx_get+0x38
 2   0%  84% 0.00 1601 0xa2638350 emlxs_issue_iocb_cmd+0xc1
 2   0%  84% 0.00 6697 0xa2503f08 vdev_queue_io_done+0x7b
 2   0%  85% 0.00 4113 0xa24040b0 
 gcpu_ntv_mca_poll_wrapper+0x64
 2   0%  85% 0.00  928 0xfe85dc140658 pollwakeup+0x116
 1   0%  86% 0.00  404 iommulib_lock  lookup_cache+0x2c
 1   0%  86% 0.00 4867 pidlockthread_exit+0x6f
 1   0%  86% 0.00 1245 plocks+0x3c0