On 3/29/2018 10:51 PM, Andrew Lunn wrote:
Show all of the exposed regions with region sizes:
$ devlink region show
pci/0000:00:05.0/cr-space: size 1048576 snapshot [1 2]
So you have 2Mbytes of snapshot data. Is this held in the device, or
kernel memory?
This is allocated in devlink; the maximum number of snapshots is set by the
driver.
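
For example, a driver registering a region picks both the region size and the
snapshot cap, so the worst-case kernel memory is size * max_snapshots. A rough
sketch, assuming the devlink_region_create() signature proposed in this series
(the region name, sizes and function names below are only illustrative):

#include <linux/err.h>
#include <net/devlink.h>

/* Illustrative driver-side registration: the driver bounds how much
 * memory devlink may hold for this region by choosing the region size
 * and the maximum number of snapshots.
 */
#define EXAMPLE_CR_SPACE_SIZE          (1024 * 1024)  /* 1 MB per snapshot */
#define EXAMPLE_CR_SPACE_MAX_SNAPSHOTS 2              /* at most ~2 MB held by devlink */

static struct devlink_region *example_cr_space;

static int example_regions_register(struct devlink *devlink)
{
        example_cr_space = devlink_region_create(devlink, "cr-space",
                                                 EXAMPLE_CR_SPACE_MAX_SNAPSHOTS,
                                                 EXAMPLE_CR_SPACE_SIZE);
        return PTR_ERR_OR_ZERO(example_cr_space);
}
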
And it seems to want contiguous pages. How well does that work after
the system has been running for a while and memory is fragmented?

The allocation can be changed; there is no real need for contiguous pages.
It is important to note that the number of snapshots is limited by the driver;
this limit can be based on the dump size or the expected frequency of collection.
I also prefer not to pre-allocate this memory.
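
For example, backing the snapshot buffer with kvzalloc()/kvfree() would remove
the contiguous-page requirement, since those helpers fall back to vmalloc when
memory is fragmented. A rough sketch (the struct and helper names below are
only illustrative, not the current devlink internals):

#include <linux/mm.h>      /* kvzalloc(), kvfree() */
#include <linux/types.h>

/* Illustrative snapshot buffer: kvzalloc() first tries a physically
 * contiguous allocation and transparently falls back to vmalloc, so a
 * large snapshot does not depend on contiguous pages being available.
 */
struct example_snapshot {
        u64 data_len;
        u8 *data;
};

static int example_snapshot_alloc(struct example_snapshot *s, u64 len)
{
        s->data = kvzalloc(len, GFP_KERNEL);
        if (!s->data)
                return -ENOMEM;
        s->data_len = len;
        return 0;
}

static void example_snapshot_free(struct example_snapshot *s)
{
        kvfree(s->data);   /* handles both kmalloc- and vmalloc-backed memory */
        s->data = NULL;
}
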
Dump a snapshot:
$ devlink region dump pci/0000:00:05.0/fw-health snapshot 1
0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
0000000000000010 0000 0000 ffff ff04 0029 8c00 0028 8cc8
0000000000000020 0016 0bb8 0016 1720 0000 0000 c00f 3ffc
0000000000000030 bada cce5 bada cce5 bada cce5 bada cce5

Read a specific part of a snapshot:
$ devlink region read pci/0000:00:05.0/fw-health snapshot 1 address 0
        length 16
0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
Why a separate command? It seems to be just a subset of dump.
This is useful when debugging values at specific addresses, and it also
brings the API one step closer to a read and write API.
The functionality is useful, yes. But why two commands? Why not one
command, dump, which takes optional parameters?

Dump in devlink means providing all the data; saying "dump address x length y"
sounds confusing. Do you see this as a critical issue?

Also, I doubt write support will be accepted. That sounds like the
start of an API to allow a user space driver.

If this will be an issue, we will stay with read access only.


       Andrew
