Hi All
> A BPF track will join the annual LSF/MM Summit this year! Please read the
> updated description and CFP information below.
Well, if we are adding BPF to LSF/MM, I have to submit a request to discuss BPF
for block devices please!
There has been quite a bit of activity around the concept
From: Eric Wehage
Sent: December 31, 2018 11:20 PM
To: Bjorn Helgaas; yu
Cc: linux-...@vger.kernel.org; linux-kernel@vger.kernel.org; Logan Gunthorpe;
Stephen Bates; Jonathan Cameron; Alexander Duyck
Subject: RE: How to force RC to forward p2p TLPs
There is no method to force an RC to forward
>I use gen_pool_first_fit_align() as the pool allocation algorithm, allocating
>buffers with the requested alignment. But if a chunk's base address is not
>aligned to the requested alignment (for some reason), the returned
>address is not aligned either.
Alexey
Can you try using
Palmer
> I don't really know anything about this, but you're welcome to add a
>
>Reviewed-by: Palmer Dabbelt
Thanks. I think it would be good to get someone who's familiar with linux/mm to
take a look.
> if you think it'll help. I'm assuming you're targeting a different tree for
Hi Jason and Leon
> This year we expect to have close to a day set aside for RDMA related
> topics, including up to half a day for the thorny general kernel issues
> related to get_user_pages(), particularly as exacerbated by RDMA.
Looks like a great set of topics.
> RDMA and PCI peer
All
> Alex (or anyone else) can you point to where IOVA addresses are generated?
A case of RTFM perhaps (though a pointer to the code would still be
appreciated).
https://www.kernel.org/doc/Documentation/Intel-IOMMU.txt
Some exceptions to IOVA
---
Interrupt ranges are not
>I find this hard to believe. There's always the possibility that some
>part of the system doesn't support ACS so if the PCI bus addresses and
>IOVA overlap there's a good chance that P2P and ATS won't work at all on
>some hardware.
I tend to agree but this comes down to how
Hi Jerome
>Hope this helps with understanding the big picture. I oversimplify things and
>the devil is in the details.
This was a great primer thanks for putting it together. An LWN.net article
perhaps ;-)??
Stephen
Hi Jerome
>Note that on GPUs we would not rely on ATS for peer to peer. Some parts
>of the GPU (the DMA engines) do not necessarily support ATS. Yet those
>are the parts likely to be used in peer to peer.
OK this is good to know. I agree the DMA engine is probably one of the GPU
components
> Not to me. In the p2pdma code we specifically program DMA engines with
> the PCI bus address.
Ah yes of course. Brain fart on my part. We are not programming the P2PDMA
initiator with an IOVA but with the PCI bus address...
> So regardless of whether we are using the IOMMU or
> not, the
Hi Jerome
> As it is tied to PASID, this is done using the IOMMU, so look for callers
> of amd_iommu_bind_pasid() or intel_svm_bind_mm(). On the GPU side the
> existing user is the AMD GPU driver, see:
Ah thanks. This cleared things up for me. A quick search shows there are still
no users of
Hi Christian
> Why would a switch not identify that as a peer address? We use the PASID
>together with ATS to identify the address space which a transaction
>should use.
I think you are conflating two types of TLPs here. If the device supports ATS
then it will issue a Translation Request (TR) TLP to obtain
Hi Jerome
> Now inside that page table you can point GPU virtual addresses
> to use GPU memory or to use system memory. Those system memory entries can
> also be marked as ATS against a given PASID.
Thanks. This all makes sense.
But do you have examples of this in a kernel driver (if so can you
Christian
>Interesting point, give me a moment to check that. That finally makes
>all the hardware I have standing around here valuable :)
Yes. At the very least it provides an initial standards-based path for P2P DMAs
across RPs, which is something we have discussed on this list in
Jerome and Christian
> I think there is confusion here; Alex properly explained the scheme.
> A PCIe device does an ATS request to the IOMMU, which returns a valid
> translation for a virtual address. The device can then use that address
> directly without going through the IOMMU for translation.
So I went
Hi Don
>RDMA VFs lend themselves to NVMEoF w/device-assignment; they need a way to
>put NVMe 'resources' into an assignable/manageable object for
> 'IOMMU-grouping',
>which is really a 'DMA security domain' and less an 'IOMMU grouping
> domain'.
Ha, I like your term "DMA Security
Hi Logan
>Yeah, I'm having a hard time coming up with an easy enough solution for
>the user. I agree with Dan though, the bus renumbering risk would be
>fairly low in the custom hardware seeing the switches are likely going
>to be directly soldered to the same board with the CPU.
Hi Alex and Don
>Correct, the VM has no concept of the host's IOMMU groups, only the
> hypervisor knows about the groups,
But as I understand it these groups are usually passed through to VMs on a
per-group basis by the hypervisor? So IOMMU group 1 might be passed to VM A and
IOMMU
>Yeah, so based on the discussion I'm leaning toward just having a
>command line option that takes a list of BDFs and disables ACS for them.
>(Essentially as Dan has suggested.) This avoids the shotgun.
I concur that this seems to be where the conversation is taking us.
@Alex -
Hi Alex
>But it would be a much easier proposal to disable ACS when the IOMMU is
>not enabled, ACS has no real purpose in that case.
I guess one issue I have with this is that it disables IOMMU groups for all
Root Ports and not just the one(s) we wish to do p2pdma on.
>The
Hi Jerome
>I think there is confusion here; Alex properly explained the scheme.
>A PCIe device does an ATS request to the IOMMU, which returns a valid
>translation for a virtual address. The device can then use that address
>directly without going through the IOMMU for translation.
This makes
Hi Don
>Well, p2p DMA is a function of a cooperating 'agent' somewhere above the two
>devices.
>That agent should 'request' to the kernel that ACS be removed/circumvented
> (p2p enabled) btwn two endpoints.
>I recommend doing so via a sysfs method.
Yes we looked at something like this
Hi Dan
>It seems unwieldy that this is a compile time option and not a runtime
>option. Can't we have a kernel command line option to opt-in to this
>behavior rather than require a wholly separate kernel image?
I think because of the security implications associated with p2pdma and
Hi Christian
> AMD APUs must have the ACS flag set for the GPU integrated in the
> CPU when the IOMMU is enabled, or otherwise you will break SVM.
OK but in this case aren't you losing (many of) the benefits of P2P since all
DMAs will now get routed up to the IOMMU before being passed
> I'll see if I can get our PCI SIG people to follow this through
Hi Jonathan
Can you let me know if this moves forward within PCI-SIG? I would like to track
it. I can see this being doable between Root Ports that reside in the same Root
Complex but might become more challenging to
> That would be very nice but many devices do not support the internal
> route.
But Logan, in the NVMe case we are discussing movement within a single function
(i.e. from an NVMe namespace to an NVMe CMB on the same function). Bjorn is
discussing movement between two functions (PFs or VFs) in the
> I've seen the response that peers directly below a Root Port could not
> DMA to each other through the Root Port because of the "route to self"
> issue, and I'm not disputing that.
Bjorn
You asked me for a reference to RTS (route to self) in the PCIe specification.
As luck would have it I ended up in an
> P2P over PCI/PCI-X is quite common in devices like raid controllers.
Hi Dan
Do you mean between PCIe devices below the RAID controller? Isn't it pretty
novel to be able to support PCIe EPs below a RAID controller (as opposed to
SCSI based devices)?
> It would be useful if those
>I assume you want to exclude Root Ports because of multi-function
> devices and the "route to self" error. I was hoping for a reference
> to that so I could learn more about it.
Apologies Bjorn. This slipped through my net. I will try and get you a
reference for RTS in the next couple of
Hi Sinan
>If hardware doesn't support it, blacklisting should have been the right
>path and I still think that you should remove all switch business from the
> code.
>I did not hear enough justification for having a switch requirement
>for P2P.
We disagree. As does the
>> It sounds like you have very tight hardware expectations for this to work
>> at this moment. You also don't want to generalize this code for others and
>> address the shortcomings.
> No, that's the way the community has pushed this work
Hi Sinan
Thanks for all the input. As Logan has pointed
>Yes, I need to document that some more in hmm.txt...
Hi Jerome, thanks for the explanation. Can I suggest you update hmm.txt with
what you sent out?
> I am about to send an RFC for nouveau, I am still working out some bugs.
Great. I will keep an eye out for it. An example user of hmm will
> It seems people misunderstand HMM :(
Hi Jerome
Your unhappy face emoticon made me sad, so I went off to (re)read up on HMM.
Along the way I came up with a couple of things.
While hmm.txt is really nice to read, it makes no mention of DEVICE_PRIVATE and
DEVICE_PUBLIC. It also gives no
>http://nvmexpress.org/wp-content/uploads/NVM-Express-1.3-Ratified-TPs.zip
@Keith - my apologies.
@Christoph - thanks for the link
So my understanding of when the technical content surrounding new NVMe
Technical Proposals (TPs) could be discussed was wrong. I thought the TP
content could only be discussed
> We don't want to lump these all together without knowing which region you're
> allocating from, right?
In all seriousness I do agree with you on these points, Keith, in the long
term. We would consider adding property flags for the memory as it is added to
the p2p core, and then the allocator could
> There's a meaningful difference between writing to an NVMe CMB vs PMR
When the PMR spec becomes public we can discuss how best to integrate it into
the P2P framework (if at all) ;-).
Stephen
> No, locality matters. If you have a bunch of NICs and bunch of drives
> and the allocator chooses to put all P2P memory on a single drive your
> performance will suck horribly even if all the traffic is offloaded.
Sagi brought this up earlier in his comments about the _find_ function.
> I'm pretty sure the spec disallows routing-to-self so doing a P2P
> transaction in that sense isn't going to work unless the device
> specifically supports it and intercepts the traffic before it gets to
> the port.
This is correct. Unless the device intercepts the TLP before it hits the
>> We'd prefer to have a generic way to get p2pmem instead of restricting
>> ourselves to only using CMBs. We did work in the past where the P2P memory
>> was part of an IB adapter and not the NVMe card. So this won't work if it's
>> an NVMe only interface.
> It just seems like it it
> The intention of HMM is to be useful for all device memory that wishes
> to have struct pages for various reasons.
Hi Jerome, and thanks for your input! Understood. We have looked at HMM in the
past and long term I definitely would like to consider how we can add P2P
functionality to HMM for
> your kernel provider needs to decide whether they favor device assignment or
> p2p
Thanks Alex! The hardware requirements for P2P (switch, high performance EPs)
are such that we really only expect CONFIG_P2P_DMA to be enabled in specific
instances and in those instances the users have made a
> I agree, I don't think this series should target anything other than
> using p2p memory located in one of the devices expected to participate
> in the p2p transaction for a first pass..
I disagree. There is definitely interest in using a NVMe CMB as a bounce buffer
and in deploying
Thanks for the detailed review Bjorn!
>>
>> + Enabling this option will also disable ACS on all ports behind
>> + any PCIe switch. This effectively puts all devices behind any
>> + switch into the same IOMMU group.
>
> Does this really mean "all devices behind the same Root
>> So Oliver (CC) was having issues getting any of that to work for us.
>>
>> The problem is that according to him (I didn't double check the latest
>> patches) you effectively hotplug the PCIe memory into the system when
>> creating struct pages.
>>
>> This cannot possibly work for us. First
> > Ideally, we'd want to use an NVME CMB buffer as p2p memory. This would
> > save an extra PCI transfer as the NVME card could just take the data
> > out of its own memory. However, at this time, cards with CMB buffers
> > don't seem to be available.
> Can you describe what would be the plan
> Any plans adding the capability to nvme-rdma? Should be
> straight-forward... In theory, the use-case would be rdma backend
> fabric behind. Shouldn't be hard to test either...
Nice idea Sagi. Yes we have been starting to look at that. Though again we
would probably want to impose the
> On Feb 6, 2018, at 8:02 AM, Keith Busch wrote:
>
>> On Mon, Feb 05, 2018 at 03:32:23PM -0700, sba...@raithlin.com wrote:
>>
>> -	if (dev->cmb && (dev->cmbsz & NVME_CMBSZ_SQS)) {
>> +	if (dev->cmb && use_cmb_sqes && (dev->cmbsz & NVME_CMBSZ_SQS)) {
>
> Is this a prep patch for
>> This patch adds a new boot option to the pci kernel parameter called
>> "acs_disable" that will disable ACS. This is useful for PCI peer to
>> peer communication but can cause problems when IOVA isolation is
>> required and an IOMMU is enabled. Use with care.
> Eww.
Thanks for the feedback
> Do we still need #include ? For me, it compiles without it.
Yes we do. Kbuild reported a failure when I tried omitting it
(arm-multi_v7_defconfig).
> Reviewed-by: Daniel Mentz danielme...@google.com
Thanks for the review
Andrew, can you look at picking this up or do you want me to respin
> We have atomic_long_t for that. Please use it instead. It will be
> 64-bit on 64-bit archs, and 32-bit on 32-bit archs, which seems to
> fit your purpose here.
Thank you Mathieu! Yes, atomic_long_t looks perfect for this and addresses
Daniel’s concerns for 32-bit systems. I’ll prepare a v2
> I found that genalloc is very slow for large chunk sizes because
> bitmap_find_next_zero_area has to grind through that entire bitmap.
> Hence, I recommend avoiding genalloc for large chunk sizes.
Thanks for the feedback Daniel! We have been doing 16GiB without any noticeable
issues.
> I'm