On Thu, Jan 18, 2018 at 11:51 AM, David Hildenbrand wrote:
>> 1] Existing pmem driver & virtio for region discovery:
>> -
>> Use existing pmem driver which is tightly coupled with concepts of
>> namespaces, labels etc
>> from ACPI region discovery and re-implement these concepts with virtio so
On Thu, Jan 18, 2018 at 11:36 AM, Pankaj Gupta wrote:
On Thu, Jan 18, 2018 at 9:48 AM, David Hildenbrand wrote:
>> I'd like to emphasize again, that I would prefer a virtio-pmem only
>> solution.
>>
>> There are architectures out there (e.g. s390x) that don't support
>> NVDIMMs - there is no HW interface to expose any such stuff.
>>
>> However, with virtio-pmem, we could make it work also on architectures
Hi Dan,
Thanks for your reply.
> Not a flag, but a new "Address Range Type GUID". See section "5.2.25.2
> System Physical Address (SPA) Range Structure" in the ACPI 6.2A
> specification. Since it is a GUID we could define a Linux specific
> type for this case, but spec changes would allow non-Linux hypervisors
> to
On Fri, Nov 24, 2017 at 4:40 AM, Pankaj Gupta wrote:
[..]
> 1] Expose vNVDIMM memory range to KVM guest.
>
> - Add flag in ACPI NFIT table for this new memory type. Do we need
>   NVDIMM spec changes for this?
Not a flag, but a new "Address Range Type GUID". See section "5.2.25.2
System Physical Address (SPA) Range Structure" in the ACPI 6.2A
specification.
On 24/11/2017 14:02, Pankaj Gupta wrote:
> > - Suggestion by Paolo & Stefan (previously) to use virtio-blk makes
> >   sense if we just want a flush vehicle to send guest commands to the
> >   host and get a reply after asynchronous execution. There was previous
> >   discussion [1] with Rik & Dan on this.
> >
> > [1]
On 23/11/2017 17:14, Dan Williams wrote:
On 11/22/2017 02:19 AM, Rik van Riel wrote:
> We can go with the "best" interface for what
> could be a relatively slow flush (fsync on a
> file on ssd/disk on the host), which requires
> that the flushing task wait on completion
> asynchronously.

I'd like to clarify the interface of "wait on
On Tue, Nov 21, 2017 at 10:19 AM, Rik van Riel wrote:
> On Fri, 2017-11-03 at 14:21 +0800, Xiao Guangrong wrote:
>> On 11/03/2017 12:30 AM, Dan Williams wrote:
>> >
>> > Good point, I was assuming that the mmio flush interface would be
>> > discovered separately from the
On 11/03/2017 12:30 AM, Dan Williams wrote:
> On Thu, Nov 2, 2017 at 1:50 AM, Xiao Guangrong wrote:
> [..]
> Yes, the GUID will specifically identify this range as "Virtio Shared
> Memory" (or whatever name survives after a bikeshed debate). The
> libnvdimm core then needs to grow a new region type that mostly
> behaves the same as a "pmem" region, but drivers/nvdimm/pmem.c grows a
On 11/01/2017 11:20 PM, Dan Williams wrote:
> On 11/01/2017 12:25 PM, Dan Williams wrote:
[..]
>> It's not persistent memory if it requires a hypercall to make it
>> persistent. Unless memory writes can be made durable purely with cpu
>> instructions it's dangerous for it to be treated as a PMEM range.
>> Consider a guest that tried to map
On Tue, Oct 31, 2017 at 8:43 PM, Xiao Guangrong wrote:
> On 10/31/2017 10:20 PM, Dan Williams wrote:
>> On Tue, Oct 31, 2017 at 12:13 AM, Xiao Guangrong wrote:
On 07/27/2017 08:54 AM, Dan Williams wrote:
At that point, would it make sense to expose these special
virtio-pmem areas to the guest in a slightly different way,
so the regions that need virtio flushing are not bound by
the regular driver, and the regular driver can continue to
work for
On Wed, Jul 26, 2017 at 4:46 PM, Rik van Riel wrote:
On Wed, 2017-07-26 at 14:40 -0700, Dan Williams wrote:
> On Wed, Jul 26, 2017 at 2:27 PM, Rik van Riel
> wrote:
> > On Wed, 2017-07-26 at 09:47 -0400, Pankaj Gupta wrote:
> > > >
> > >
> > > Just want to summarize here (high level):
> > >
> > > This will require implementing
>
> On Tue, 2017-07-25 at 07:46 -0700, Dan Williams wrote:
> > On Tue, Jul 25, 2017 at 7:27 AM, Pankaj Gupta
> > wrote:
> > >
> > > Looks like the only way to send a flush (blk dev) from guest to host
> > > with nvdimm is using flush hint addresses. Is this the correct
On Tue, Jul 25, 2017 at 7:27 AM, Pankaj Gupta <pagu...@redhat.com> wrote:
>
>> Subject: Re: KVM "fake DAX" flushing interface - discussion
>>
>> On Mon 24-07-17 08:06:07, Pankaj Gupta wrote:
>> >
>> > > On Sun 23-07-17 13:10:34, Dan Williams wrote:
On Mon, Jul 24, 2017 at 8:48 AM, Jan Kara wrote:
> On Mon 24-07-17 08:10:05, Dan Williams wrote:
>> On Mon, Jul 24, 2017 at 5:37 AM, Jan Kara wrote:
[..]
>> This approach would turn into a full fsync on the host. The question
>> in my mind is whether there is any
On Sun, Jul 23, 2017 at 11:10 AM, Rik van Riel wrote:
On Sun, 2017-07-23 at 09:01 -0700, Dan Williams wrote:
> [ adding Ross and Jan ]
>
> On Sun, Jul 23, 2017 at 7:04 AM, Rik van Riel wrote:
> >
> > The goal is to increase density of guests, by moving page
> > cache into the host (where it can be easily reclaimed).
> >
> > If
[ adding Ross and Jan ]
On Sun, Jul 23, 2017 at 7:04 AM, Rik van Riel wrote:
> On Sat, 2017-07-22 at 12:34 -0700, Dan Williams wrote:
>> On Fri, Jul 21, 2017 at 8:58 AM, Stefan Hajnoczi wrote:
>> >
>> > Maybe the NVDIMM folks can comment on this idea.
On Fri, Jul 21, 2017 at 8:58 AM, Stefan Hajnoczi wrote:
> On Fri, Jul 21, 2017 at 09:29:15AM -0400, Pankaj Gupta wrote:
> > A] Problems to solve:
> > --
> >
> > 1] We are considering two approaches for 'fake DAX flushing interface'.
> >
> > 1.1] fake dax with NVDIMM flush hints & KVM async page fault
> >
> > - Existing interface.
> >
> > - The approach to use flush hint address
> >
> > Hello,
> >
> > We shared a proposal for 'KVM fake DAX flushing interface'.
> >
> > https://lists.gnu.org/archive/html/qemu-devel/2017-05/msg02478.html
> >
>
> In the above link:
> "Overall goal of project is to increase the number of virtual machines
> that can be run on a