On 6/22/16 6:20, Youzhong Yang wrote:
> Thanks Robert. Yes the driver failed to attach:
> 
> # rem_drv nvme
> # add_drv nvme
> devfsadm: driver failed to attach: nvme
> Warning: Driver (nvme) successfully added to system but failed to attach
> 
> # modinfo | grep nvme
> 237 fffffffff8081000   6088 294   1  nvme (NVMe v1.0e)

I presume that each time this happens we're seeing more ereports being
generated; is that the case? If so, it might make sense to rig up a
DTrace script to see what's causing us to generate them and whether we
can correlate it with specific activity during attach and the like.

Robert
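One possible rig for that, sketched under the assumption that kernel-level ereport posting goes through fm_ereport_post() and the DDI routine ddi_fm_ereport_post() (verify the exact probe names on the target system first):

```shell
# Hedged sketch, illumos-only and needs root: fire on the kernel's
# ereport-posting routines and capture the stack that led to each one.
# Confirm the probe names first with:  dtrace -l | grep ereport
dtrace -n '
fbt::fm_ereport_post:entry,
fbt::ddi_fm_ereport_post:entry
{
        printf("%Y", walltimestamp);
        stack();
}'
# then, in another shell, re-trigger the failure while tracing:
#   rem_drv nvme; add_drv nvme
```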

> On Tue, Jun 21, 2016 at 8:29 PM, Robert Mustacchi <[email protected]> wrote:
> 
>> On 6/21/16 6:12, Youzhong wrote:
>>> Any thoughts?
>>
>> From my perspective, the thing I would look at is why we are seeing
>> these PCI Express errors. Is this happening when we access the NVMe
>> device from the driver somehow?
>>
>> If it's reproducible as part of attaching, it might be interesting to
>> see if we can figure out which accesses to these devices are causing that,
>> which may be an issue we're not properly handling with the device or a
>> driver bug somewhere.
>>
>> Robert
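For digging into those PCI Express errors, the ereports already logged can be examined with fmdump on illumos; the verbose output includes the detector's device path, which identifies the slot and function that raised each report:

```shell
fmdump -e     # one line per ereport: timestamp and class, e.g. ereport.io.pciex.*
fmdump -eV    # full payload, including the detector's device path
```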
>>
>>>> On Jun 17, 2016, at 1:17 PM, Youzhong Yang <[email protected]> wrote:
>>>>
>>>> Hi Hans,
>>>>
>>>> The drives won't show up in the output of 'diskinfo' or 'format'.
>>>>
>>>> Please refer to the attachment:
>>>>
>>>> - ls of /dev/dsk and /dev/rdsk reported broken symlinks
>>>> - once I ran devfsadm -C, those broken links disappeared, and iostat -En
>>>>   then reported those drives as blkdev*
>>>>
>>>> Thanks for your quick reply.
>>>>
>>>> -- Youzhong
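A portable sketch of that dangling-link check (the helper name find_broken is made up for illustration; /dev/dsk and /dev/rdsk are the directories in question on illumos):

```shell
# A dangling symlink is a link (-L) whose target no longer resolves
# ([ -e ] follows the link and fails). find_broken is a hypothetical
# helper; on the affected system `devfsadm -C` then prunes such links
# for devices that are gone.
find_broken() {
    dir="$1"
    for l in "$dir"/*; do
        if [ -L "$l" ] && [ ! -e "$l" ]; then
            printf 'broken: %s\n' "$l"
        fi
    done
}
```

Usage on the affected box would be `find_broken /dev/dsk` and `find_broken /dev/rdsk`.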
>>>>
>>>>> On Fri, Jun 17, 2016 at 12:49 PM, Hans Rosenfeld <[email protected]> wrote:
>>>>> Hi,
>>>>>
>>>>> On Fri, Jun 17, 2016 at 12:01:18PM -0400, Youzhong Yang wrote:
>>>>>> Since the NVMe driver has been in illumos for quite a while, we decided
>>>>>> to get an evaluation unit and see how it goes. Here is what we got:
>>>>>>
>>>>>> Baseboard: Supermicro X10DRU-i+ (System SYS-2028U-TN24R4T+)
>>>>>> Memory: 768G
>>>>>> CPU: 2 x Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz, each with 16
>>>>>> physical cores, 32 hyper-threaded cores
>>>>>> SATA SSDs: 2 x Intel DC S3100 1TB (SSDSC2KI010X601)
>>>>>> NVMe SSDs: 2 x Intel DC P3700 800GB (SSDPE2MD800G4)
>>>>>> NVMe SSDs: 22 x Intel DC P3600 400GB (SSDPE2ME400G4)
>>>>>> Ethernet: Ethernet Controller X710 for 10GbE SFP+
>>>>>
>>>>>> Attached are some files with information about the system, such as
>>>>>> prtconf, nvme.conf, etc. (we pulled out some drives, leaving only two
>>>>>> Intel DC S3100 SSDs, one Intel DC P3700, and three Intel DC P3600
>>>>>> SSDs in place).
>>>>>
>>>>>> c2t1d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
>>>>>> Model: INTEL SSDPE2ME400G4 Revision: 8DV10171 Serial No: CVMD50510009400FGN
>>>>>> Size: 400.09GB <400088457216 bytes>
>>>>>> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
>>>>>> Illegal Request: 0 Predictive Failure Analysis: 0
>>>>>> c0t1d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
>>>>>> Model: INTEL SSDPE2MD800G4 Revision: 8DV10171 Serial No: CVFT4222002Q800HGN
>>>>>> Size: 800.17GB <800166076416 bytes>
>>>>>> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
>>>>>> Illegal Request: 0 Predictive Failure Analysis: 0
>>>>>> c1t1d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
>>>>>> Model: INTEL SSDPE2ME400G4 Revision: 8DV10171 Serial No: CVMD5051000Z400FGN
>>>>>> Size: 400.09GB <400088457216 bytes>
>>>>>> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
>>>>>> Illegal Request: 0 Predictive Failure Analysis: 0
>>>>>> c3t1d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
>>>>>> Model: INTEL SSDPE2ME400G4 Revision: 8DV10171 Serial No: CVMD4450003A400FGN
>>>>>> Size: 400.09GB <400088457216 bytes>
>>>>>> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
>>>>>> Illegal Request: 0 Predictive Failure Analysis: 0
>>>>>
>>>>> Looks like iostat -En shows the remaining 4 NVMe devices correctly.
>>>>> They also show up correctly in messages and prtconf.
>>>>>
>>>>> What did you do to check whether they "show up"? format(1M) or some
>>>>> other tool?
>>>>>
>>>>> Hans
>>>>>
>>>>> --
>>>>> %SYSTEM-F-ANARCHISM, The operating system has been overthrown
>>>>>
>>>>
>>>> <nvme-disks.txt>


-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now