Re: Follow-up on the CXL discussion at OFTC

2021-12-01 Thread Alex Bennée


Jonathan Cameron  writes:

> On Tue, 30 Nov 2021 09:21:58 -0800
> Ben Widawsky  wrote:
>
>> On 21-11-30 13:09:56, Jonathan Cameron wrote:
>> > On Mon, 29 Nov 2021 18:28:43 +
>> > Alex Bennée  wrote:
>> >   
>> > > Ben Widawsky  writes:
>> > >   
>> > > > On 21-11-26 12:08:08, Alex Bennée wrote:
>> > > >> 
>> > > >> Ben Widawsky  writes:
>> > > >> 

>> > >   
>> > > >> * Some means at least ensuring qtest can instantiate the device and not
>> > > >>   fall over. Obviously more testing is better but it can always be
>> > > >>   expanded on in later series.
>> > > >
>> > > > This was in the patch series. It could use more testing for sure,
>> > > > but I had basic functional testing in place via qtest.
>> > > 
>> > > More is always better but the basic qtest does ensure a device doesn't
>> > > segfault if it's instantiated.  
>> > 
>> > I'll confess this is a bit I haven't looked at yet. Will get Shameer
>> > to give me a hand.
>> > 
>> > Thanks  
>> 
>> I'd certainly feel better if we had more tests. I also suspect the qtest
>> I wrote originally no longer works. The biggest challenge I had was
>> getting gitlab CI working for me.
>
> Looks like it'll be tests that slow things down. *sigh*.

Hopefully the GitLab stuff has stabilised over the last year as we've
aggressively pushed out stuff that times out and also limited some tests
to only run on upstream staging branches.

The biggest hole is properly exercising KVM stuff (due to the
limitations of GitLab runners). As a result you fall back to TCG, which
can get slow if you're booting full distros with it.

> Why are there not enough days in the week?

"oh it's softfreeze already?" - a regular occurrence for me ;-)

>
> Jonathan

-- 
Alex Bennée



Re: Follow-up on the CXL discussion at OFTC

2021-12-01 Thread Jonathan Cameron
On Tue, 30 Nov 2021 09:21:58 -0800
Ben Widawsky  wrote:

> On 21-11-30 13:09:56, Jonathan Cameron wrote:
> > On Mon, 29 Nov 2021 18:28:43 +
> > Alex Bennée  wrote:
> >   
> > > Ben Widawsky  writes:
> > >   
> > > > On 21-11-26 12:08:08, Alex Bennée wrote:
> > > >> 
> > > >> Ben Widawsky  writes:
> > > >> 
> > > >> > On 21-11-19 02:29:51, Shreyas Shah wrote:
> > > >> >> Hi Ben
> > > >> >> 
> > > >> >> Are you planning to add the CXL 2.0 switch inside QEMU, or is it
> > > >> >> already added in one of the versions?
> > > >> >>  
> > > >> >
> > > >> > From me, there are no plans for QEMU anything until/unless upstream
> > > >> > thinks it will merge the existing patches, or provides feedback as to
> > > >> > what it would take to get them merged. If upstream doesn't see a point
> > > >> > in these patches, then I really don't see much value in continuing to
> > > >> > further them. Once hardware comes out, the value proposition is
> > > >> > certainly less.
> > > >> 
> > > >> I take it:
> > > >> 
> > > >>   Subject: [RFC PATCH v3 00/31] CXL 2.0 Support
> > > >>   Date: Mon,  1 Feb 2021 16:59:17 -0800
> > > >>   Message-Id: <20210202005948.241655-1-ben.widaw...@intel.com>
> > > >> 
> > > >> is the current state of the support? I saw there was a fair amount of
> > > >> discussion on the thread so assumed there would be a v4 forthcoming at
> > > >> some point.
> > > >
> > > > Hi Alex,
> > > >
> > > > There is a v4, however, we never really had a solid plan for the
> > > > primary issue which was around handling CXL memory expander devices
> > > > properly (both from an interleaving standpoint as well as having a
> > > > device which hosts multiple memory capacities, persistent and
> > > > volatile). I didn't feel it was worth sending a v4 unless someone
> > > > could say
> > > >
> > > > 1. we will merge what's there and fix later, or
> > > > 2. you must have a more perfect emulation in place, or
> > > > 3. we want to see usages for a real guest
> > > 
> > > I think 1. is acceptable if the community is happy there will be ongoing
> > > development and it's not just a code dump. Given it will have a
> > > MAINTAINERS entry I think that is demonstrated.  
> > 
> > My thought is also 1.  There are a few hacks we need to clean out but
> > nothing that should take too long.  I'm sure it'll take a rev or two more.
> > Right now for example, I've added support to arm-virt and maybe need to
> > move that over to a different machine model...
> >   
> 
> The most annoying thing about rebasing it is passing the ACPI tests. They keep
> changing upstream. Being able to at least merge up to there would be huge.

Guess I really need to take a look at the tests :)  It went in clean so
I didn't poke them. Maybe we were just lucky!  A bunch of ACPI infrastructure
had changed, which was the biggest update needed; amusingly, x86 kernel code
now triggers the issue around writes smaller than the sizes the mailbox
implementation supports.  For now I've just added the missing implementations,
as that removes a blocker on this going upstream.
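
For context, the mechanism involved is QEMU's MemoryRegionOps access-size
machinery: .valid describes what the guest may issue, .impl what the handlers
actually implement, and the memory core widens any access smaller than
.impl.min_access_size, which can clobber neighbouring register bytes on a
write. A sketch of what "adding the implementations" amounts to; the handler
names here are assumptions from the posted series, not a merged tree:

static const MemoryRegionOps mailbox_ops = {
    .read = mailbox_reg_read,           /* assumed names from the series */
    .write = mailbox_reg_write,
    .endianness = DEVICE_LITTLE_ENDIAN,
    .valid = {
        .min_access_size = 1,           /* what the guest may issue */
        .max_access_size = 8,
    },
    .impl = {
        .min_access_size = 1,           /* what the handlers now accept */
        .max_access_size = 8,
    },
};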

> 
> > > 
> > > What's the current use case? Testing drivers before real HW comes out?
> > > Will it still be useful after real HW comes out for people wanting to
> > > debug things without HW?  
> > 
> > CXL is continuing to expand in scope and capabilities and I don't see that
> > reducing any time soon (My guess is 3 years+ to just catch up with what is
> > under discussion today).  So I see two long term use cases:
> > 
> > 1) Automated verification that we haven't broken things.  I suspect no
> > one person is going to have a test farm covering all the corner cases.
> > So we'll need emulation + firmware + kernel based testing.
> >   
> 
> Does this exist in other forms? AFAICT for x86, there aren't many examples
> of this.

We run a bunch of stuff internally on a CI farm, targeting various trees,
though this is a complex case because it involves more elements than most
hardware tests.  Our friends in openEuler run a bunch more stuff as well, on
a mixture of physical and emulated machines on various architectures.  The
other distros have similar setups, though perhaps they don't provide as much
public info as our folks do.  We are a bit early for CXL support, so I don't
think we have yet moved beyond manual testing.  It'll come though, as it's
vital once customers start caring about the hardware they bought.

Otherwise, if we contribute the resources, there are various other orgs who
run tests on stable / mainline / next + various vendor trees. That stuff is
a mixture of real and virtual hardware and is used to verify stable releases
very quickly before Greg pushes them out.

Emulation-based testing is easier, obviously, and we do some of that + I
know others do. Once the CXL support is upstream, adding all the tuning
parameters to QEMU to start exercising corner cases will be needed to
support this.

> 
> > 2) New feature 

Re: Follow-up on the CXL discussion at OFTC

2021-11-30 Thread Ben Widawsky
On 21-11-30 13:09:56, Jonathan Cameron wrote:
> On Mon, 29 Nov 2021 18:28:43 +
> Alex Bennée  wrote:
> 
> > Ben Widawsky  writes:
> > 
> > > On 21-11-26 12:08:08, Alex Bennée wrote:  
> > >> 
> > >> Ben Widawsky  writes:
> > >>   
> > >> > On 21-11-19 02:29:51, Shreyas Shah wrote:  
> > >> >> Hi Ben
> > >> >> 
> > >> >> Are you planning to add the CXL 2.0 switch inside QEMU, or is it
> > >> >> already added in one of the versions?
> > >> >>
> > >> >
> > >> > From me, there are no plans for QEMU anything until/unless upstream
> > >> > thinks it will merge the existing patches, or provides feedback as to
> > >> > what it would take to get them merged. If upstream doesn't see a point
> > >> > in these patches, then I really don't see much value in continuing to
> > >> > further them. Once hardware comes out, the value proposition is
> > >> > certainly less.
> > >> 
> > >> I take it:
> > >> 
> > >>   Subject: [RFC PATCH v3 00/31] CXL 2.0 Support
> > >>   Date: Mon,  1 Feb 2021 16:59:17 -0800
> > >>   Message-Id: <20210202005948.241655-1-ben.widaw...@intel.com>
> > >> 
> > >> is the current state of the support? I saw there was a fair amount of
> > >> discussion on the thread so assumed there would be a v4 forthcoming at
> > >> some point.  
> > >
> > > Hi Alex,
> > >
> > > There is a v4, however, we never really had a solid plan for the
> > > primary issue which was around handling CXL memory expander devices
> > > properly (both from an interleaving standpoint as well as having a
> > > device which hosts multiple memory capacities, persistent and
> > > volatile). I didn't feel it was worth sending a v4 unless someone
> > > could say
> > >
> > > 1. we will merge what's there and fix later, or
> > > 2. you must have a more perfect emulation in place, or
> > > 3. we want to see usages for a real guest  
> > 
> > I think 1. is acceptable if the community is happy there will be ongoing
> > development and it's not just a code dump. Given it will have a
> > MAINTAINERS entry I think that is demonstrated.
> 
> My thought is also 1.  There are a few hacks we need to clean out but
> nothing that should take too long.  I'm sure it'll take a rev or two more.
> Right now for example, I've added support to arm-virt and maybe need to
> move that over to a different machine model...
> 

The most annoying thing about rebasing it is passing the ACPI tests. They keep
changing upstream. Being able to at least merge up to there would be huge.
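
For what it's worth, the usual way through that churn is QEMU's
bios-tables-test workflow: temporarily list the affected tables in
tests/qtest/bios-tables-test-allowed-diff.h, rerun the test to regenerate
the expected blobs under tests/data/acpi/, then empty the list again before
merge. A sketch of what the temporary entries look like; that a CXL series
adds a CEDT table is my assumption here, not taken from the series:

/* tests/qtest/bios-tables-test-allowed-diff.h -- sketch only; emptied
 * again once the expected blobs under tests/data/acpi/ are regenerated. */
"tests/data/acpi/q35/CEDT",
"tests/data/acpi/q35/DSDT",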

> > 
> > What's the current use case? Testing drivers before real HW comes out?
> > Will it still be useful after real HW comes out for people wanting to
> > debug things without HW?
> 
> CXL is continuing to expand in scope and capabilities and I don't see that
> reducing any time soon (My guess is 3 years+ to just catch up with what is
> under discussion today).  So I see two long term use cases:
> 
> 1) Automated verification that we haven't broken things.  I suspect no
> one person is going to have a test farm covering all the corner cases.
> So we'll need emulation + firmware + kernel based testing.
> 

Does this exist in other forms? AFAICT for x86, there aren't many examples
of this.

> 2) New feature prove-out.  We have already used it for some features that
> will appear in the next spec version. Obviously I can't say what or
> send that code out yet.  It's very useful and the spec draft has changed
> in various ways as a result.  I can't commit others to it, but Huawei will
> be doing more of this going forwards.  For that we need a stable base to
> which we add the new stuff once spec publication allows it.
> 

I can't commit for Intel, but I will say there's more latitude now to work on
projects like this compared to when I first wrote the patches. I have an
interest in continuing to develop this as well. I'm very interested in
supporting interleave and hotplug specifically.

> > 
> > >
> > > I had hoped we could merge what was there mostly as is and fix it up
> > > as we go. It's useful in the state it is now, and as time goes on, we
> > > find more use cases for it in a VMM, and not just driver development.
> > >  
> > >> 
> > >> Adding new subsystems to QEMU does seem to be a pain point for new
> > >> contributors. Patches tend to fall through the cracks of existing
> > >> maintainers who spend most of their time looking at stuff that directly
> > >> touches their files. There is also a reluctance to merge large chunks of
> > >> functionality without an identified maintainer (and maybe reviewers) who
> > >> can be the contact point for new patches. So in short you need:
> > >> 
> > >>  - Maintainer Reviewed-by/Acked-by on patches that touch other
> > >>    sub-systems
> > >
> > > This is the challenging one. I have Cc'd the relevant maintainers
> > > (hw/pci and hw/mem are the two) in the past, but I think their interest
> > > is lacking (and reasonably so, it is an entirely different subsystem).
> > 
> 

Re: Follow-up on the CXL discussion at OFTC

2021-11-30 Thread Jonathan Cameron
On Mon, 29 Nov 2021 18:28:43 +
Alex Bennée  wrote:

> Ben Widawsky  writes:
> 
> > On 21-11-26 12:08:08, Alex Bennée wrote:  
> >> 
> >> Ben Widawsky  writes:
> >>   
> >> > On 21-11-19 02:29:51, Shreyas Shah wrote:  
> >> >> Hi Ben
> >> >> 
> >> >> Are you planning to add the CXL 2.0 switch inside QEMU, or is it
> >> >> already added in one of the versions?
> >> >>
> >> >
> >> > From me, there are no plans for QEMU anything until/unless upstream
> >> > thinks it will merge the existing patches, or provides feedback as to
> >> > what it would take to get them merged. If upstream doesn't see a point
> >> > in these patches, then I really don't see much value in continuing to
> >> > further them. Once hardware comes out, the value proposition is
> >> > certainly less.
> >> 
> >> I take it:
> >> 
> >>   Subject: [RFC PATCH v3 00/31] CXL 2.0 Support
> >>   Date: Mon,  1 Feb 2021 16:59:17 -0800
> >>   Message-Id: <20210202005948.241655-1-ben.widaw...@intel.com>
> >> 
> >> is the current state of the support? I saw there was a fair amount of
> >> discussion on the thread so assumed there would be a v4 forthcoming at
> >> some point.  
> >
> > Hi Alex,
> >
> > There is a v4, however, we never really had a solid plan for the primary
> > issue which was around handling CXL memory expander devices properly
> > (both from an interleaving standpoint as well as having a device which
> > hosts multiple memory capacities, persistent and volatile). I didn't
> > feel it was worth sending a v4 unless someone could say
> >
> > 1. we will merge what's there and fix later, or
> > 2. you must have a more perfect emulation in place, or
> > 3. we want to see usages for a real guest  
> 
> I think 1. is acceptable if the community is happy there will be ongoing
> development and it's not just a code dump. Given it will have a
> MAINTAINERS entry I think that is demonstrated.

My thought is also 1.  There are a few hacks we need to clean out but
nothing that should take too long.  I'm sure it'll take a rev or two more.
Right now for example, I've added support to arm-virt and maybe need to
move that over to a different machine model...

> 
> What's the current use case? Testing drivers before real HW comes out?
> Will it still be useful after real HW comes out for people wanting to
> debug things without HW?

CXL is continuing to expand in scope and capabilities and I don't see that
reducing any time soon (My guess is 3 years+ to just catch up with what is
under discussion today).  So I see two long term use cases:

1) Automated verification that we haven't broken things.  I suspect no
one person is going to have a test farm covering all the corner cases.
So we'll need emulation + firmware + kernel based testing.

2) New feature prove-out.  We have already used it for some features that
will appear in the next spec version. Obviously I can't say what or
send that code out yet.  It's very useful and the spec draft has changed
in various ways as a result.  I can't commit others to it, but Huawei will
be doing more of this going forwards.  For that we need a stable base to
which we add the new stuff once spec publication allows it.

> 
> >
> > > I had hoped we could merge what was there mostly as is and fix it up
> > > as we go. It's useful in the state it is now, and as time goes on, we
> > > find more use cases for it in a VMM, and not just driver development.
> >  
> >> 
> >> Adding new subsystems to QEMU does seem to be a pain point for new
> >> contributors. Patches tend to fall through the cracks of existing
> >> maintainers who spend most of their time looking at stuff that directly
> >> touches their files. There is also a reluctance to merge large chunks of
> >> functionality without an identified maintainer (and maybe reviewers) who
> >> can be the contact point for new patches. So in short you need:
> >> 
> > >>  - Maintainer Reviewed-by/Acked-by on patches that touch other sub-systems
> >
> > This is the challenging one. I have Cc'd the relevant maintainers
> > (hw/pci and hw/mem are the two) in the past, but I think their interest
> > is lacking (and reasonably so, it is an entirely different subsystem).
> 
> So the best approach to that is to leave a Cc: tag in the patch itself
> on your next posting so we can see the maintainer did see it but didn't
> contribute a review tag. This is also a good reason to keep Message-Id
> tags in patches so we can go back to the original threads.
> 
> So in my latest PR you'll see:
> 
>   Signed-off-by: Willian Rampazzo 
>   Reviewed-by: Beraldo Leal 
>   Message-Id: <20211122191124.31620-1-willi...@redhat.com>
>   Signed-off-by: Alex Bennée 
>   Reviewed-by: Philippe Mathieu-Daudé 
>   Message-Id: <20211129140932.4115115-7-alex.ben...@linaro.org>
> 
> which shows the Message-Id from Willian's original posting and the
> latest Message-Id from my posting of the maintainer tree (I trim off my
> old ones).
> 
> >>  - Reviewed-by tags on the 

Re: Follow-up on the CXL discussion at OFTC

2021-11-29 Thread Alex Bennée


Ben Widawsky  writes:

> On 21-11-26 12:08:08, Alex Bennée wrote:
>> 
>> Ben Widawsky  writes:
>> 
>> > On 21-11-19 02:29:51, Shreyas Shah wrote:
>> >> Hi Ben
>> >> 
>> >> Are you planning to add the CXL 2.0 switch inside QEMU, or is it already
>> >> added in one of the versions?
>> >>  
>> >
>> > From me, there are no plans for QEMU anything until/unless upstream
>> > thinks it will merge the existing patches, or provides feedback as to
>> > what it would take to get them merged. If upstream doesn't see a point
>> > in these patches, then I really don't see much value in continuing to
>> > further them. Once hardware comes out, the value proposition is
>> > certainly less.
>> 
>> I take it:
>> 
>>   Subject: [RFC PATCH v3 00/31] CXL 2.0 Support
>>   Date: Mon,  1 Feb 2021 16:59:17 -0800
>>   Message-Id: <20210202005948.241655-1-ben.widaw...@intel.com>
>> 
>> is the current state of the support? I saw there was a fair amount of
>> discussion on the thread so assumed there would be a v4 forthcoming at
>> some point.
>
> Hi Alex,
>
> There is a v4, however, we never really had a solid plan for the primary issue
> which was around handling CXL memory expander devices properly (both from an
> interleaving standpoint as well as having a device which hosts multiple memory
> capacities, persistent and volatile). I didn't feel it was worth sending a v4
> unless someone could say
>
> 1. we will merge what's there and fix later, or
> 2. you must have a more perfect emulation in place, or
> 3. we want to see usages for a real guest

I think 1. is acceptable if the community is happy there will be ongoing
development and it's not just a code dump. Given it will have a
MAINTAINERS entry I think that is demonstrated.

What's the current use case? Testing drivers before real HW comes out?
Will it still be useful after real HW comes out for people wanting to
debug things without HW?

>
> I had hoped we could merge what was there mostly as is and fix it up as we go.
> It's useful in the state it is now, and as time goes on, we find more use cases
> for it in a VMM, and not just driver development.
>
>> 
>> Adding new subsystems to QEMU does seem to be a pain point for new
>> contributors. Patches tend to fall through the cracks of existing
>> maintainers who spend most of their time looking at stuff that directly
>> touches their files. There is also a reluctance to merge large chunks of
>> functionality without an identified maintainer (and maybe reviewers) who
>> can be the contact point for new patches. So in short you need:
>> 
>>  - Maintainer Reviewed-by/Acked-by on patches that touch other sub-systems
>
> This is the challenging one. I have Cc'd the relevant maintainers (hw/pci and
> hw/mem are the two) in the past, but I think their interest is lacking (and
> reasonably so, it is an entirely different subsystem).

So the best approach to that is to leave a Cc: tag in the patch itself
on your next posting so we can see the maintainer did see it but didn't
contribute a review tag. This is also a good reason to keep Message-Id
tags in patches so we can go back to the original threads.

So in my latest PR you'll see:

  Signed-off-by: Willian Rampazzo 
  Reviewed-by: Beraldo Leal 
  Message-Id: <20211122191124.31620-1-willi...@redhat.com>
  Signed-off-by: Alex Bennée 
  Reviewed-by: Philippe Mathieu-Daudé 
  Message-Id: <20211129140932.4115115-7-alex.ben...@linaro.org>

which shows the Message-Id from Willian's original posting and the
latest Message-Id from my posting of the maintainer tree (I trim off my
old ones).

>>  - Reviewed-by tags on the new sub-system patches from anyone who
>>    understands CXL
>
> I have/had those from Jonathan.
>
>>  - Some* in-tree testing (so it doesn't quietly bitrot)
>
> We had this, but it's stale now. We can bring this back up.
>
>>  - A patch adding the sub-system to MAINTAINERS with identified people
>
> That was there too. Since the original posting, I'd be happy to sign
> Jonathan up to this if he's willing.

Sounds good to me.

>> * Some means at least ensuring qtest can instantiate the device and not
>>   fall over. Obviously more testing is better but it can always be
>>   expanded on in later series.
>
> This was in the patch series. It could use more testing for sure, but I had
> basic functional testing in place via qtest.

More is always better but the basic qtest does ensure a device doesn't
segfault if it's instantiated.
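
Concretely, the bar is small; a minimal sketch of such a qtest, where the
cxl=on machine flag and the pxb-cxl properties are assumptions taken from
the posted series rather than any merged interface:

#include "qemu/osdep.h"
#include "libqtest.h"

/* If qtest_initf() returns, machine creation and device realize
 * completed without crashing -- that's the whole test. */
static void test_cxl_instantiate(void)
{
    QTestState *qts = qtest_initf("-machine q35,cxl=on "
                                  "-device pxb-cxl,id=cxl.0,"
                                  "bus=pcie.0,bus_nr=52");
    qtest_quit(qts);
}

int main(int argc, char **argv)
{
    g_test_init(&argc, &argv, NULL);
    qtest_add_func("/cxl/instantiate", test_cxl_instantiate);
    return g_test_run();
}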

>
>> 
>> Is that the feedback you were looking for?
>
> You validated my assumptions as to what's needed, but your first bullet is the
> one I can't seem to pin down.
>
> Thanks.
> Ben


-- 
Alex Bennée



Re: Follow-up on the CXL discussion at OFTC

2021-11-29 Thread Ben Widawsky
On 21-11-26 12:08:08, Alex Bennée wrote:
> 
> Ben Widawsky  writes:
> 
> > On 21-11-19 02:29:51, Shreyas Shah wrote:
> >> Hi Ben
> >> 
> >> Are you planning to add the CXL 2.0 switch inside QEMU, or is it already
> >> added in one of the versions?
> >>  
> >
> > From me, there are no plans for QEMU anything until/unless upstream
> > thinks it will merge the existing patches, or provides feedback as to
> > what it would take to get them merged. If upstream doesn't see a point
> > in these patches, then I really don't see much value in continuing to
> > further them. Once hardware comes out, the value proposition is
> > certainly less.
> 
> I take it:
> 
>   Subject: [RFC PATCH v3 00/31] CXL 2.0 Support
>   Date: Mon,  1 Feb 2021 16:59:17 -0800
>   Message-Id: <20210202005948.241655-1-ben.widaw...@intel.com>
> 
> is the current state of the support? I saw there was a fair amount of
> discussion on the thread so assumed there would be a v4 forthcoming at
> some point.

Hi Alex,

There is a v4, however, we never really had a solid plan for the primary issue
which was around handling CXL memory expander devices properly (both from an
interleaving standpoint as well as having a device which hosts multiple memory
capacities, persistent and volatile). I didn't feel it was worth sending a v4
unless someone could say
1. we will merge what's there and fix later, or
2. you must have a more perfect emulation in place, or
3. we want to see usages for a real guest

I had hoped we could merge what was there mostly as is and fix it up as we go.
It's useful in the state it is now, and as time goes on, we find more use cases
for it in a VMM, and not just driver development.

> 
> Adding new subsystems to QEMU does seem to be a pain point for new
> contributors. Patches tend to fall through the cracks of existing
> maintainers who spend most of their time looking at stuff that directly
> touches their files. There is also a reluctance to merge large chunks of
> functionality without an identified maintainer (and maybe reviewers) who
> can be the contact point for new patches. So in short you need:
> 
>  - Maintainer Reviewed-by/Acked-by on patches that touch other sub-systems

This is the challenging one. I have Cc'd the relevant maintainers (hw/pci and
hw/mem are the two) in the past, but I think their interest is lacking (and
reasonably so, it is an entirely different subsystem).

>  - Reviewed-by tags on the new sub-system patches from anyone who understands
>    CXL

I have/had those from Jonathan.

>  - Some* in-tree testing (so it doesn't quietly bitrot)

We had this, but it's stale now. We can bring this back up.

>  - A patch adding the sub-system to MAINTAINERS with identified people

That was there too. Since the original posting, I'd be happy to sign Jonathan up
to this if he's willing.

> 
> * Some means at least ensuring qtest can instantiate the device and not
>   fall over. Obviously more testing is better but it can always be
>   expanded on in later series.

This was in the patch series. It could use more testing for sure, but I had
basic functional testing in place via qtest.

> 
> Is that the feedback you were looking for?

You validated my assumptions as to what's needed, but your first bullet is the
one I can't seem to pin down.

Thanks.
Ben



Re: Follow-up on the CXL discussion at OFTC

2021-11-26 Thread Alex Bennée


Ben Widawsky  writes:

> On 21-11-19 02:29:51, Shreyas Shah wrote:
>> Hi Ben
>> 
>> Are you planning to add the CXL 2.0 switch inside QEMU, or is it already
>> added in one of the versions?
>>  
>
> From me, there are no plans for QEMU anything until/unless upstream thinks
> it will merge the existing patches, or provides feedback as to what it
> would take to get them merged. If upstream doesn't see a point in these
> patches, then I really don't see much value in continuing to further them.
> Once hardware comes out, the value proposition is certainly less.

I take it:

  Subject: [RFC PATCH v3 00/31] CXL 2.0 Support
  Date: Mon,  1 Feb 2021 16:59:17 -0800
  Message-Id: <20210202005948.241655-1-ben.widaw...@intel.com>

is the current state of the support? I saw there was a fair amount of
discussion on the thread so assumed there would be a v4 forthcoming at
some point.

Adding new subsystems to QEMU does seem to be a pain point for new
contributors. Patches tend to fall through the cracks of existing
maintainers who spend most of their time looking at stuff that directly
touches their files. There is also a reluctance to merge large chunks of
functionality without an identified maintainer (and maybe reviewers) who
can be the contact point for new patches. So in short you need:

 - Maintainer Reviewed-by/Acked-by on patches that touch other sub-systems
 - Reviewed-by tags on the new sub-system patches from anyone who understands
   CXL
 - Some* in-tree testing (so it doesn't quietly bitrot)
 - A patch adding the sub-system to MAINTAINERS with identified people

* Some means at least ensuring qtest can instantiate the device and not
  fall over. Obviously more testing is better but it can always be
  expanded on in later series.

Is that the feedback you were looking for?

-- 
Alex Bennée



Re: Follow-up on the CXL discussion at OFTC

2021-11-26 Thread Jonathan Cameron
On Fri, 19 Nov 2021 18:53:43 +
Jonathan Cameron  wrote:

> On Thu, 18 Nov 2021 17:52:07 -0800
> Ben Widawsky  wrote:
> 
> > On 21-11-18 15:20:34, Saransh Gupta1 wrote:  
> > > Hi Ben and Jonathan,
> > > 
> > > Thanks for your replies. I'm looking forward to the patches.
> > > 
> > > For QEMU, I see hotplug support as an item on the list and would like to 
> > > start working on it. It would be great if you can provide some pointers 
> > > about how I should go about it.
> > 
> > It's been a while, so I can't recall what's actually missing. I think it
> > should mostly behave like a normal PCIe endpoint.
> >   
> > > Also, which version of kernel and QEMU (maybe Jonathan's upcoming
> > > version) would be a good starting point for it?
> > 
> > If he rebased and claims it works I have no reason to doubt it :-). I have a
> > small fix on my v4 branch if you want to use the latest port patches.  
> 
> Thanks. I'd missed that one. Now pushed down into the original patch.
> 
> It occurred to me that technically I only know my rebase works on Arm64...
> Fingers crossed for x86.
> 
> Anyhow, I'll run more tests on it next week (possibly even including x86),

x86 tests throw up an issue with a 2-byte write to the mailbox registers.
For now I've papered over that by explicitly adding support; it's obvious
how to do it if you look at mailbox_reg_read.  I want to understand what
the source of that access is, though, before deciding whether this fix is
correct, and that might take a little bit of tracking down.
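
The shape of that paper-over, for anyone following along, is to accept each
access width explicitly in the write path, mirroring mailbox_reg_read. A
sketch only; CXLDeviceState, mbox_reg_state and the handler name are
assumptions based on the posted series:

/* Sketch: handle sub-word writes explicitly rather than assuming only
 * the 4/8-byte accesses the spec leads you to expect. */
static void mailbox_reg_write(void *opaque, hwaddr offset,
                              uint64_t value, unsigned size)
{
    CXLDeviceState *cxl_dstate = opaque;
    uint8_t *regs = (uint8_t *)cxl_dstate->mbox_reg_state;

    switch (size) {
    case 1:
        regs[offset] = value;
        break;
    case 2:
        stw_le_p(regs + offset, value);
        break;
    case 4:
        stl_le_p(regs + offset, value);
        break;
    case 8:
        stq_le_p(regs + offset, value);
        break;
    default:
        g_assert_not_reached();
    }
}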

Jonathan

> 
> Available at: 
> https://github.com/hisilicon/qemu/tree/cxl-hacks
> 
> For arm64 the description at
> https://people.kernel.org/jic23/ will almost work with this. 
> There is a bug, however, that I need to track down, which currently means
> you need to set the pxb uid to the same as the bus number.   Shouldn't take
> long to fix but it's Friday evening...
> (add uid=0x80 to the options for pxb-cxl)
> 
> I dropped the CMA patch from Avery from this tree as I need to improve
> the way it's getting hold of some parts of libSPDM and move to the
> current version of that library (rather than the old openSPDM).
> 
> Ben, if you don't mind me trying to push this forwards, I'll do a bit
> of cleanup and reordering then make use of the QEMU folks we have / know and
> try and start getting your hard work upstream.
> 
> Whilst I've not poked the various interfaces yet, this is working with
> a kernel tree that is current cxl/next + Ira's DOE series and Ben's region series
> + (for fun) my SPDM series.  That tree's a franken monster so I'm not planning
> to share it unless anyone has particular need of it.  Hopefully the various
> parts will move forwards this cycle anyway so I can stop having to spend
> as much time on rebases!
> 
> Jonathan 
> 
> >   
> > > 
> > > Thanks,
> > > Saransh
> > > 
> > > 
> > > 
> > > From:   "Jonathan Cameron" 
> > > To: "Ben Widawsky" 
> > > Cc: "Saransh Gupta1" , , 
> > > 
> > > Date:   11/17/2021 09:32 AM
> > > Subject:[EXTERNAL] Re: Follow-up on the CXL discussion at OFTC
> > > 
> > > 
> > > 
> > > On Wed, 17 Nov 2021 08:57:19 -0800
> > > Ben Widawsky  wrote:
> > > 
> > > > Hi Saransh. Please add the list for these kinds of questions. I've
> > > > converted your HTML mail, but going forward, the list will eat it,
> > > > so please use text only.
> > > > 
> > > > On 21-11-16 00:14:33, Saransh Gupta1 wrote:
> > > > >Hi Ben,
> > > > > 
> > > > >This is Saransh from IBM. Sorry to have (unintentionally) dropped out
> > > > >of the conversation on OFTC, I'm new to IRC.
> > > > >Just wanted to follow-up on the discussion there. We discussed helping
> > > > >with Linux patch reviews. On that front, I have identified some
> > > > >colleague(s) who can help me with this. Let me know if/how you
> > > > >want to proceed with that.
> > > > 
> > > > Currently the ball is in my court to re-roll the RFC v2 patches [1]
> > > > based on feedback from Dan. I've implemented all/most of it, but I'm
> > > > still debugging some issues with the result.

Re: Follow-up on the CXL discussion at OFTC

2021-11-19 Thread Ben Widawsky
On 21-11-19 18:53:43, Jonathan Cameron wrote:
> On Thu, 18 Nov 2021 17:52:07 -0800
> Ben Widawsky  wrote:
> 
> > On 21-11-18 15:20:34, Saransh Gupta1 wrote:
> > > Hi Ben and Jonathan,
> > > 
> > > Thanks for your replies. I'm looking forward to the patches.
> > > 
> > > For QEMU, I see hotplug support as an item on the list and would like to 
> > > start working on it. It would be great if you can provide some pointers 
> > > about how I should go about it.  
> > 
> > It's been a while, so I can't recall what's actually missing. I think it
> > should mostly behave like a normal PCIe endpoint.
> > 
> > > Also, which version of kernel and QEMU (maybe Jonathan's upcoming
> > > version) would be a good starting point for it?
> > 
> > If he rebased and claims it works I have no reason to doubt it :-). I have a
> > small fix on my v4 branch if you want to use the latest port patches.
> 
> Thanks. I'd missed that one. Now pushed down into the original patch.
> 
> It occurred to me that technically I only know my rebase works on Arm64...
> Fingers crossed for x86.
> 
> Anyhow, I'll run more tests on it next week (possibly even including x86),
> 
> Available at: 
> https://github.com/hisilicon/qemu/tree/cxl-hacks
> 
> For arm64 the description at
> https://people.kernel.org/jic23/ will almost work with this. 
> There is a bug, however, that I need to track down, which currently means
> you need to set the pxb uid to the same as the bus number.   Shouldn't take
> long to fix but it's Friday evening...
> (add uid=0x80 to the options for pxb-cxl)
> 
> I dropped the CMA patch from Avery from this tree as I need to improve
> the way it's getting hold of some parts of libSPDM and move to the
> current version of that library (rather than the old openSPDM).
> 
> Ben, if you don't mind me trying to push this forwards, I'll do a bit
> of cleanup and reordering then make use of the QEMU folks we have / know and
> try and start getting your hard work upstream.

I don't mind at all.

> 
> Whilst I've not poked the various interfaces yet, this is working with
> a kernel tree that is current cxl/next + Ira's DOE series and Ben's region series
> + (for fun) my SPDM series.  That tree's a franken monster so I'm not planning
> to share it unless anyone has particular need of it.  Hopefully the various
> parts will move forwards this cycle anyway so I can stop having to spend
> as much time on rebases!
> 
> Jonathan 
> 
> > 
> > > 
> > > Thanks,
> > > Saransh
> > > 
> > > 
> > > 
> > > From:   "Jonathan Cameron" 
> > > To: "Ben Widawsky" 
> > > Cc: "Saransh Gupta1" , , 
> > > 
> > > Date:   11/17/2021 09:32 AM
> > > Subject:[EXTERNAL] Re: Follow-up on the CXL discussion at OFTC
> > > 
> > > 
> > > 
> > > On Wed, 17 Nov 2021 08:57:19 -0800
> > > Ben Widawsky  wrote:
> > >   
> > > > Hi Saransh. Please add the list for these kinds of questions. I've
> > > > converted your HTML mail, but going forward, the list will eat it,
> > > > so please use text only.
> > > > 
> > > > On 21-11-16 00:14:33, Saransh Gupta1 wrote:  
> > > > >Hi Ben,
> > > > > 
> > > > >This is Saransh from IBM. Sorry to have (unintentionally) dropped out
> > > > >of the conversation on OFTC, I'm new to IRC.
> > > > >Just wanted to follow-up on the discussion there. We discussed helping
> > > > >with Linux patch reviews. On that front, I have identified some
> > > > >colleague(s) who can help me with this. Let me know if/how you
> > > > >want to proceed with that.
> > > > 
> > > > Currently the ball is in my court to re-roll the RFC v2 patches [1]
> > > > based on feedback from Dan. I've implemented all/most of it, but I'm
> > > > still debugging some issues with the result.
> > > >   
> > > > > 
> > > > >Maybe not urgently, but my team would also like to get an understanding
> > > > >of the missing pieces in QEMU. Initially our focus is on type3 memory
> > > > >access and hotplug support. Most of the work that my team does is

Re: Follow-up on the CXL discussion at OFTC

2021-11-19 Thread Jonathan Cameron
On Thu, 18 Nov 2021 17:52:07 -0800
Ben Widawsky  wrote:

> On 21-11-18 15:20:34, Saransh Gupta1 wrote:
> > Hi Ben and Jonathan,
> > 
> > Thanks for your replies. I'm looking forward to the patches.
> > 
> > For QEMU, I see hotplug support as an item on the list and would like to 
> > start working on it. It would be great if you can provide some pointers 
> > about how I should go about it.  
> 
> It's been a while, so I can't recall what's actually missing. I think it
> should mostly behave like a normal PCIe endpoint.
> 
> > Also, which version of kernel and QEMU (maybe Jonathan's upcoming version) 
> > would be a good starting point for it?  
> 
> If he rebased and claims it works I have no reason to doubt it :-). I have a
> small fix on my v4 branch if you want to use the latest port patches.

Thanks. I'd missed that one. Now pushed down into the original patch.

It occurred to me that technically I only know my rebase works on Arm64...
Fingers crossed for x86.

Anyhow, I'll run more tests on it next week (possibly even including x86),

Available at: 
https://github.com/hisilicon/qemu/tree/cxl-hacks

For arm64 the description at
https://people.kernel.org/jic23/ will almost work with this. 
There is a bug, however, that I need to track down, which currently means
you need to set the pxb uid to the same as the bus number.   Shouldn't take
long to fix but it's Friday evening...
(add uid=0x80 to the options for pxb-cxl)

I dropped the CMA patch from Avery from this tree as I need to improve
the way it's getting hold of some parts of libSPDM and move to the
current version of that library (rather than the old openSPDM).

Ben, if you don't mind me trying to push this forwards, I'll do a bit
of cleanup and reordering then make use of the QEMU folks we have / know and
try and start getting your hard work upstream.

Whilst I've not poked the various interfaces yet, this is working with
a kernel tree that is current cxl/next + Ira's DOE series and Ben's region series
+ (for fun) my SPDM series.  That tree's a franken monster so I'm not planning
to share it unless anyone has particular need of it.  Hopefully the various
parts will move forwards this cycle anyway so I can stop having to spend
as much time on rebases!

Jonathan 

> 
> > 
> > Thanks,
> > Saransh
> > 
> > 
> > 
> > From:   "Jonathan Cameron" 
> > To:     "Ben Widawsky" 
> > Cc: "Saransh Gupta1" , , 
> > 
> > Date:   11/17/2021 09:32 AM
> > Subject:[EXTERNAL] Re: Follow-up on the CXL discussion at OFTC
> > 
> > 
> > 
> > On Wed, 17 Nov 2021 08:57:19 -0800
> > Ben Widawsky  wrote:
> >   
> > > Hi Saransh. Please add the list for these kinds of questions. I've
> > > converted your HTML mail, but going forward, the list will eat it,
> > > so please use text only.
> > > 
> > > On 21-11-16 00:14:33, Saransh Gupta1 wrote:  
> > > >Hi Ben,
> > > > 
> > > >This is Saransh from IBM. Sorry to have (unintentionally) dropped out
> > > >of the conversation on OFTC, I'm new to IRC.
> > > >Just wanted to follow-up on the discussion there. We discussed helping
> > > >with Linux patch reviews. On that front, I have identified some
> > > >colleague(s) who can help me with this. Let me know if/how you
> > > >want to proceed with that.
> > > 
> > > Currently the ball is in my court to re-roll the RFC v2 patches [1]
> > > based on feedback from Dan. I've implemented all/most of it, but I'm
> > > still debugging some issues with the result.
> > >   
> > > > 
> > > >Maybe not urgently, but my team would also like to get an understanding
> > > >of the missing pieces in QEMU. Initially our focus is on type3 memory
> > > >access and hotplug support. Most of the work that my team does is
> > > >open-source, so contributing to the QEMU effort is another possible
> > > >line of collaboration.
> > > 
> > > If you haven't seen it already, check out my LPC talk [2]. The QEMU patches
> > > could use a lot of love. Mostly, I have little/no motivation until upstream
> > > shows an interest because I don't have time currently to make sure I don't break
> > > vs. upstream. If you want more details here, I can provide them, and I will Cc
> > > the qemu-devel mailing list; the end of the LPC talk [2] does have 

Re: Follow-up on the CXL discussion at OFTC

2021-11-18 Thread Ben Widawsky
On 21-11-19 02:29:51, Shreyas Shah wrote:
> Hi Ben
> 
> Are you planning to add the CXL 2.0 switch inside QEMU, or is it already
> added in one of the versions?
>  

From me, there are no plans for QEMU anything until/unless upstream thinks it
will merge the existing patches, or provides feedback as to what it would take
to get them merged. If upstream doesn't see a point in these patches, then I
really don't see much value in continuing to further them. Once hardware comes
out, the value proposition is certainly less.

Having said that, once I get the port/region patches merged for the Linux
driver, I do intend to go back and try to implement a basic switch so that we
can test those flows.

I admit, I'm curious why you're interested in switches.

> Regards,
> Shreyas
> 
> -Original Message-
> From: Ben Widawsky  
> Sent: Thursday, November 18, 2021 5:48 PM
> To: Shreyas Shah 
> Cc: Saransh Gupta1 ; Jonathan Cameron 
> ; linux-...@vger.kernel.org; 
> qemu-devel@nongnu.org
> Subject: Re: Follow-up on the CXL discussion at OFTC
> 
> On 21-11-18 22:52:56, Shreyas Shah wrote:
> > Hello Folks,
> > 
> > Any plan to add CXL2.0 switch ports in QEMU? 
> 
> What's your definition of plan?
> 
> > 
> > Regards,
> > Shreyas
> 
> [snip]



RE: Follow-up on the CXL discussion at OFTC

2021-11-18 Thread Shreyas Shah
Hi Ben

Are you planning to add the CXL 2.0 switch inside QEMU, or is it already
added in one of the versions?
 
Regards,
Shreyas

-Original Message-
From: Ben Widawsky  
Sent: Thursday, November 18, 2021 5:48 PM
To: Shreyas Shah 
Cc: Saransh Gupta1 ; Jonathan Cameron 
; linux-...@vger.kernel.org; qemu-devel@nongnu.org
Subject: Re: Follow-up on the CXL discussion at OFTC

On 21-11-18 22:52:56, Shreyas Shah wrote:
> Hello Folks,
> 
> Any plan to add CXL2.0 switch ports in QEMU? 

What's your definition of plan?

> 
> Regards,
> Shreyas

[snip]



Re: Follow-up on the CXL discussion at OFTC

2021-11-18 Thread Ben Widawsky
On 21-11-18 15:20:34, Saransh Gupta1 wrote:
> Hi Ben and Jonathan,
> 
> Thanks for your replies. I'm looking forward to the patches.
> 
> For QEMU, I see hotplug support as an item on the list and would like to 
> start working on it. It would be great if you can provide some pointers 
> about how I should go about it.

It's been a while, so I can't recall what's actually missing. I think it should
mostly behave like a normal PCIe endpoint.
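
If it does behave like a normal PCIe endpoint, the way other endpoint qtests
exercise hotplug is QMP device_add/device_del; a hedged sketch, where the
cxl-type3 driver name, the root-port bus id and the memdev property are all
assumptions based on the posted series:

/* Sketch: plug and unplug the device at runtime the way other PCIe
 * endpoint qtests do.  qtest_qmp_device_del() waits for the guest to
 * acknowledge the unplug (DEVICE_DELETED). */
static void test_cxl_hotplug(QTestState *qts)
{
    qtest_qmp_device_add(qts, "cxl-type3", "ct3-0",
                         "{'bus': 'rp0', 'memdev': 'cxl-mem0'}");
    qtest_qmp_device_del(qts, "ct3-0");
}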

> Also, which version of kernel and QEMU (maybe Jonathan's upcoming version) 
> would be a good starting point for it?

If he rebased and claims it works I have no reason to doubt it :-). I have a
small fix on my v4 branch if you want to use the latest port patches.

> 
> Thanks,
> Saransh
> 
> 
> 
> From:   "Jonathan Cameron" 
> To: "Ben Widawsky" 
> Cc: "Saransh Gupta1" , , 
> 
> Date:   11/17/2021 09:32 AM
> Subject:[EXTERNAL] Re: Follow-up on the CXL discussion at OFTC
> 
> 
> 
> On Wed, 17 Nov 2021 08:57:19 -0800
> Ben Widawsky  wrote:
> 
> > Hi Saransh. Please add the list for these kinds of questions. I've
> > converted your HTML mail, but going forward, the list will eat it,
> > so please use text only.
> > 
> > On 21-11-16 00:14:33, Saransh Gupta1 wrote:
> > >Hi Ben,
> > > 
> > >This is Saransh from IBM. Sorry to have (unintentionally) dropped out
> > >of the conversation on OFTC, I'm new to IRC.
> > >Just wanted to follow-up on the discussion there. We discussed helping
> > >with Linux patch reviews. On that front, I have identified some
> > >colleague(s) who can help me with this. Let me know if/how you
> > >want to proceed with that.
> > 
> > Currently the ball is in my court to re-roll the RFC v2 patches [1]
> > based on feedback from Dan. I've implemented all/most of it, but I'm
> > still debugging some issues with the result.
> > 
> > > 
> > >Maybe not urgently, but my team would also like to get an understanding
> > >of the missing pieces in QEMU. Initially our focus is on type3 memory
> > >access and hotplug support. Most of the work that my team does is
> > >open-source, so contributing to the QEMU effort is another possible
> > >line of collaboration.
> > 
> > If you haven't seen it already, check out my LPC talk [2]. The QEMU patches
> > could use a lot of love. Mostly, I have little/no motivation until upstream
> > shows an interest because I don't have time currently to make sure I don't break
> > vs. upstream. If you want more details here, I can provide them, and I will Cc
> > the qemu-devel mailing list; the end of the LPC talk [2] does have a list.
> Hi Ben, Saransh
> 
> I have a forward port of the series + DOE etc to near current QEMU that is
> lightly tested, and can look to push that out publicly later this week.
> 
> I'd also like to push QEMU support forwards and to start getting this
> upstream in QEMU + fill in some of the missing parts.
> 
> Was aiming to make progress on this a few weeks ago, but as ever other
> stuff got in the way.
> 
> +CC qemu-devel in case anyone else is also looking at this.
> 
> Jonathan
> 
> 
> 
> > 
> > > 
> > >Thanks for your help and guidance!
> > > 
> > >Best,
> > >Saransh Gupta
> > >Research Staff Member, IBM Research 
> > 
> > [1]: 
> https://lore.kernel.org/linux-cxl/20211022183709.1199701-1-ben.widaw...@intel.com/T/#t
>  
> 
> > [2]: 
https://www.youtube.com/watch?v=g89SLjt5Bd4&list=PLVsQ_xZBEyN3wA8Ej4BUjudXFbXuxhnfc&index=49
>  
> 
> 
> 
> 
> 
> 



Re: Follow-up on the CXL discussion at OFTC

2021-11-18 Thread Ben Widawsky
On 21-11-18 22:52:56, Shreyas Shah wrote:
> Hello Folks,
> 
> Any plan to add CXL2.0 switch ports in QEMU? 

What's your definition of plan?

> 
> Regards,
> Shreyas

[snip]



RE: Follow-up on the CXL discussion at OFTC

2021-11-18 Thread Shreyas Shah
Hello Folks,

Any plan to add CXL2.0 switch ports in QEMU? 

Regards,
Shreyas

-Original Message-
From: Saransh Gupta1  
Sent: Thursday, November 18, 2021 2:21 PM
To: Jonathan Cameron ; Ben Widawsky 

Cc: linux-...@vger.kernel.org; qemu-devel@nongnu.org
Subject: RE: Follow-up on the CXL discussion at OFTC

Hi Ben and Jonathan,

Thanks for your replies. I'm looking forward to the patches.

For QEMU, I see hotplug support as an item on the list and would like to start 
working on it. It would be great if you can provide some pointers about how I 
should go about it.
Also, which version of kernel and QEMU (maybe Jonathan's upcoming version) 
would be a good starting point for it?

Thanks,
Saransh



From:   "Jonathan Cameron" 
To: "Ben Widawsky" 
Cc: "Saransh Gupta1" , , 

Date:   11/17/2021 09:32 AM
Subject:    [EXTERNAL] Re: Follow-up on the CXL discussion at OFTC



On Wed, 17 Nov 2021 08:57:19 -0800
Ben Widawsky  wrote:

> Hi Saransh. Please add the list for these kinds of questions. I've
> converted your HTML mail, but going forward, the list will eat it,
> so please use text only.
> 
> On 21-11-16 00:14:33, Saransh Gupta1 wrote:
> >Hi Ben,
> > 
> >This is Saransh from IBM. Sorry to have (unintentionally) dropped out
> >of the conversation on OFTC, I'm new to IRC.
> >Just wanted to follow-up on the discussion there. We discussed helping
> >with Linux patch reviews. On that front, I have identified some
> >colleague(s) who can help me with this. Let me know if/how you
> >want to proceed with that.
> 
> Currently the ball is in my court to re-roll the RFC v2 patches [1]
> based on feedback from Dan. I've implemented all/most of it, but I'm
> still debugging some issues with the result.
> 
> > 
> >Maybe not urgently, but my team would also like to get an understanding
> >of the missing pieces in QEMU. Initially our focus is on type3 memory
> >access and hotplug support. Most of the work that my team does is
> >open-source, so contributing to the QEMU effort is another possible
> >line of collaboration.
> 
> If you haven't seen it already, check out my LPC talk [2]. The QEMU patches
> could use a lot of love. Mostly, I have little/no motivation until upstream
> shows an interest because I don't have time currently to make sure I don't break
> vs. upstream. If you want more details here, I can provide them, and I will Cc
> the qemu-devel mailing list; the end of the LPC talk [2] does have a list.
Hi Ben, Saransh

I have a forward port of the series + DOE etc to near current QEMU that is 
lightly tested, and can look to push that out publicly later this week.

I'd also like to push QEMU support forwards and to start getting this
upstream in QEMU + fill in some of the missing parts.

Was aiming to make progress on this a few weeks ago, but as ever other
stuff got in the way.

+CC qemu-devel in case anyone else is also looking at this.

Jonathan



> 
> > 
> >Thanks for your help and guidance!
> > 
> >Best,
> >Saransh Gupta
> >Research Staff Member, IBM Research 
> 
> [1]: 
https://lore.kernel.org/linux-cxl/20211022183709.1199701-1-ben.widaw...@intel.com/T/#t
 

> [2]: 
https://www.youtube.com/watch?v=g89SLjt5Bd4&list=PLVsQ_xZBEyN3wA8Ej4BUjudXFbXuxhnfc&index=49
 









RE: Follow-up on the CXL discussion at OFTC

2021-11-18 Thread Saransh Gupta1
Hi Ben and Jonathan,

Thanks for your replies. I'm looking forward to the patches.

For QEMU, I see hotplug support as an item on the list and would like to 
start working on it. It would be great if you can provide some pointers 
about how I should go about it.
Also, which version of kernel and QEMU (maybe Jonathan's upcoming version) 
would be a good starting point for it?

Thanks,
Saransh



From:   "Jonathan Cameron" 
To: "Ben Widawsky" 
Cc: "Saransh Gupta1" , , 

Date:   11/17/2021 09:32 AM
Subject:    [EXTERNAL] Re: Follow-up on the CXL discussion at OFTC



On Wed, 17 Nov 2021 08:57:19 -0800
Ben Widawsky  wrote:

> Hi Saransh. Please add the list for these kinds of questions. I've
> converted your HTML mail, but going forward, the list will eat it,
> so please use text only.
> 
> On 21-11-16 00:14:33, Saransh Gupta1 wrote:
> >Hi Ben,
> > 
> >This is Saransh from IBM. Sorry to have (unintentionally) dropped out
> >of the conversation on OFTC, I'm new to IRC.
> >Just wanted to follow-up on the discussion there. We discussed helping
> >with Linux patch reviews. On that front, I have identified some
> >colleague(s) who can help me with this. Let me know if/how you
> >want to proceed with that.
> 
> Currently the ball is in my court to re-roll the RFC v2 patches [1]
> based on feedback from Dan. I've implemented all/most of it, but I'm
> still debugging some issues with the result.
> 
> > 
> >Maybe not urgently, but my team would also like to get an understanding
> >of the missing pieces in QEMU. Initially our focus is on type3 memory
> >access and hotplug support. Most of the work that my team does is
> >open-source, so contributing to the QEMU effort is another possible
> >line of collaboration.
> 
> If you haven't seen it already, check out my LPC talk [2]. The QEMU patches
> could use a lot of love. Mostly, I have little/no motivation until upstream
> shows an interest because I don't have time currently to make sure I don't break
> vs. upstream. If you want more details here, I can provide them, and I will Cc
> the qemu-devel mailing list; the end of the LPC talk [2] does have a list.
Hi Ben, Saransh

I have a forward port of the series + DOE etc to near current QEMU that is
lightly tested, and can look to push that out publicly later this week.

I'd also like to push QEMU support forwards and to start getting this
upstream in QEMU + fill in some of the missing parts.

Was aiming to make progress on this a few weeks ago, but as ever other
stuff got in the way.

+CC qemu-devel in case anyone else is also looking at this.

Jonathan



> 
> > 
> >Thanks for your help and guidance!
> > 
> >Best,
> >Saransh Gupta
> >Research Staff Member, IBM Research 
> 
> [1]: 
https://lore.kernel.org/linux-cxl/20211022183709.1199701-1-ben.widaw...@intel.com/T/#t
 

> [2]: 
https://www.youtube.com/watch?v=g89SLjt5Bd4&list=PLVsQ_xZBEyN3wA8Ej4BUjudXFbXuxhnfc&index=49
 









Re: Follow-up on the CXL discussion at OFTC

2021-11-17 Thread Jonathan Cameron
On Wed, 17 Nov 2021 08:57:19 -0800
Ben Widawsky  wrote:

> Hi Saransh. Please add the list for these kinds of questions. I've converted
> your HTML mail, but going forward, the list will eat it, so please use text
> only.
> 
> On 21-11-16 00:14:33, Saransh Gupta1 wrote:
> >Hi Ben,
> > 
> >This is Saransh from IBM. Sorry to have (unintentionally) dropped out
> >of the conversation on OFTC, I'm new to IRC.
> >Just wanted to follow-up on the discussion there. We discussed helping
> >with Linux patch reviews. On that front, I have identified some
> >colleague(s) who can help me with this. Let me know if/how you
> >want to proceed with that.
> 
> Currently the ball is in my court to re-roll the RFC v2 patches [1] based on
> feedback from Dan. I've implemented all/most of it, but I'm still debugging
> some issues with the result.
> 
> > 
> >Maybe not urgently, but my team would also like to get an understanding
> >of the missing pieces in QEMU. Initially our focus is on type3 memory
> >access and hotplug support. Most of the work that my team does is
> >open-source, so contributing to the QEMU effort is another possible
> >line of collaboration.  
> 
> If you haven't seen it already, check out my LPC talk [2]. The QEMU patches
> could use a lot of love. Mostly, I have little/no motivation until upstream
> shows an interest because I don't have time currently to make sure I don't break
> vs. upstream. If you want more details here, I can provide them, and I will Cc
> the qemu-devel mailing list; the end of the LPC talk [2] does have a list.
Hi Ben, Saransh

I have a forward port of the series + DOE etc to near current QEMU that is
lightly tested, and can look to push that out publicly later this week.

I'd also like to push QEMU support forwards and to start getting this
upstream in QEMU + fill in some of the missing parts.

Was aiming to make progress on this a few weeks ago, but as ever other stuff
got in the way.

+CC qemu-devel in case anyone else is also looking at this.

Jonathan



> 
> > 
> >Thanks for your help and guidance!
> > 
> >Best,
> >Saransh Gupta
> >Research Staff Member, IBM Research  
> 
> [1]: 
> https://lore.kernel.org/linux-cxl/20211022183709.1199701-1-ben.widaw...@intel.com/T/#t
> [2]: 
> https://www.youtube.com/watch?v=g89SLjt5Bd4&list=PLVsQ_xZBEyN3wA8Ej4BUjudXFbXuxhnfc&index=49