___
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-le...@lists.01.org
A misbehaving qemu created a situation where the ACPI SRAT table
advertised one fewer proximity domains than intended. The NFIT table did
describe all the expected proximity domains. This caused the device dax
driver to assign an impossible target_node to the device, and when
hotplugged as system
On Thu, 2020-04-16 at 19:53 +0200, David Hildenbrand wrote:
> > > >
> > > Hm, I'm happy to make the changes, but EINVAL to me suggests there is a
> > > problem in the way this was called by the user. And in this case there
> > > really might not be much the user can change in case of buggy
On Fri, Apr 17, 2020 at 12:22:23AM +0530, Vaibhav Jain wrote:
> The 'for' loop in do_cmd() that generates multiple ioctls to
> libnvdimm assumes that each ioctl will result in transfer of
> 'iter->max_xfer' bytes. Hence after each successful iteration the
> buffer 'offset' is incremented by
On Thu, Apr 16, 2020 at 01:22:29AM +0800, Liu Bo wrote:
> On Tue, Apr 14, 2020 at 03:30:45PM -0400, Vivek Goyal wrote:
> > On Sat, Mar 28, 2020 at 06:06:06AM +0800, Liu Bo wrote:
> > > On Fri, Mar 27, 2020 at 10:01:14AM -0400, Vivek Goyal wrote:
> > > > On Thu, Mar 26, 2020 at 08:09:05AM +0800,
The 'for' loop in do_cmd() that generates multiple ioctls to
libnvdimm assumes that each ioctl will result in transfer of
'iter->max_xfer' bytes. Hence after each successful iteration the
buffer 'offset' is incremented by 'iter->max_xfer'.
This is in contrast to similar implementation in
On 16.04.20 19:25, David Hildenbrand wrote:
> On 16.04.20 19:23, Verma, Vishal L wrote:
>> On Thu, 2020-04-16 at 19:12 +0200, David Hildenbrand wrote:
>>> On 16.04.20 19:10, Vishal Verma wrote:
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 0a54ffac8c68..ddd3347edd54
On 16.04.20 19:23, Verma, Vishal L wrote:
> On Thu, 2020-04-16 at 19:12 +0200, David Hildenbrand wrote:
>> On 16.04.20 19:10, Vishal Verma wrote:
>>>
>>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>> index 0a54ffac8c68..ddd3347edd54 100644
>>> --- a/mm/memory_hotplug.c
>>> +++
On Thu, 2020-04-16 at 19:12 +0200, David Hildenbrand wrote:
> On 16.04.20 19:10, Vishal Verma wrote:
> >
> > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > index 0a54ffac8c68..ddd3347edd54 100644
> > --- a/mm/memory_hotplug.c
> > +++ b/mm/memory_hotplug.c
> > @@ -1005,6 +1005,11 @@
On 16.04.20 19:10, Vishal Verma wrote:
> A misbehaving qemu created a situation where the ACPI SRAT table
> advertised one fewer proximity domains than intended. The NFIT table did
> describe all the expected proximity domains. This caused the device dax
> driver to assign an impossible
On Thu, 2020-04-16 at 18:16 +0200, David Hildenbrand wrote:
> > >
> > > Doing that papers over something that is clearly a FW issue and makes
> > > it "my performance is suboptimal" deal with it OS problem. Really, is
> > > this something we have to care about. Your changelog talks about a Qemu
On 16.04.20 18:13, Verma, Vishal L wrote:
> On Thu, 2020-04-16 at 08:19 +0200, Michal Hocko wrote:
>> On Wed 15-04-20 20:32:00, Verma, Vishal L wrote:
I really do not like this. Why should we try to be clever and change the
node id requested by the caller? I would just stick with
On Thu, 2020-04-16 at 08:19 +0200, Michal Hocko wrote:
> On Wed 15-04-20 20:32:00, Verma, Vishal L wrote:
> > >
> > > I really do not like this. Why should we try to be clever and change the
> > > node id requested by the caller? I would just stick with node_possible
> > > check and be done with
For kernel versions older than 5.4, the numa_node attribute is not
present for regions, so 'ndctl list -U 1' fails to list namespaces.
Signed-off-by: Santosh Sivaraj
---
ndctl/lib/libndctl.c | 13 ++---
ndctl/lib/libndctl.sym | 1 +
ndctl/libndctl.h | 4
On Wed 15-04-20 20:32:00, Verma, Vishal L wrote:
> On Wed, 2020-04-15 at 12:43 +0200, Michal Hocko wrote:
> > On Tue 14-04-20 17:58:12, Vishal Verma wrote:
> > [...]
> > > +static int check_hotplug_node(int nid)
> > > +{
> > > + int alt_nid;
> > > +
> > > + if (node_possible(nid))
> > > +