First, good that the problem at least is solved. It would be great if you
could open a PMR so this gets properly fixed; the daemon shouldn't
segfault, but rather print a message that the device is too big.
On the caching, it only gets used when you run out of pagepool or when you
run out of full
After restart it still doesn't seem to be in use.
> [root@ces1 ~]# mmdiag --lroc
>
> === mmdiag: lroc ===
> LROC Device(s): '0A6403AA5865389E#/dev/nvme0n1;' status Running
> Cache inodes 1 dirs 1 data 1 Config: maxFile 1073741824 stubFile 1073741824
> Max capacity: 1526184 MB, currently in use:
Wow, that was it.
> mmdiag --lroc
>
> === mmdiag: lroc ===
> LROC Device(s): '0A6403AA5865389E#/dev/nvme0n1;' status Running
> Cache inodes 1 dirs 1 data 1 Config: maxFile 1073741824 stubFile 1073741824
> Max capacity: 1526184 MB, currently in use: 0 MB
> Statistics from: Thu Dec 29 10:08:58
On 12/29/16 10:02 AM, Aaron Knister wrote:
> Interesting. Thanks Matt. I admit I'm somewhat grasping at straws here.
>
> That's a *really* long device path (and nested too), I wonder if
> that's causing issues.
Was thinking of trying just /dev/sdxx.
>
> What does a "tspreparedisk -S" show on that node?
I agree that is a very long name. Given this is an NVMe device, it should
show up as /dev/nvmeXYZ; I suggest reporting exactly that in nsddevices and
retrying.
I vaguely remember we have some fixed-length device-name limitation, but I
don't remember what the length is, so this would be my first guess.
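For reference, a minimal nsddevices user exit that reports the short NVMe name could look like the sketch below. The path /var/mmfs/etc/nsddevices follows the standard GPFS convention; the exact device name (nvme0n1) and the "generic" device type are assumptions for this example, not taken from the thread.

```shell
#!/bin/ksh
# Sketch of a /var/mmfs/etc/nsddevices user exit.
# Each output line is "device-name device-type", with the device
# name given relative to /dev.
echo "nvme0n1 generic"

# Exit status 0 tells the daemon to use ONLY the devices listed above;
# exit status 1 would additionally run the built-in device discovery.
return 0
```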
Interesting. Thanks Matt. I admit I'm somewhat grasping at straws here.
That's a *really* long device path (and nested too), I wonder if that's
causing issues.
What does a "tspreparedisk -S" show on that node?
Also, what does your nsddevices script look like? I'm wondering if you
could have