On Wed, 2018-03-21 at 02:43 +, Chongyun Wu wrote:
> On 2018/3/21 0:51, Martin Wilck wrote:
> > The ppoll() calls of the uxlsnr thread are vital for proper
> > functioning of
> > multipathd. If the uxlsnr thread can't open the socket or fails to
> > call ppoll()
> > for other reasons, quit the d
libmultipath's prio routines can deal with pp->priority == PRIO_UNDEF
just fine. PRIO_UNDEF is just a very low priority. So there's
no reason to reject setting up a multipath map because paths have
undefined priority.
Signed-off-by: Martin Wilck
---
libmultipath/configure.c | 5 -
1 file cha
The ppoll() calls of the uxlsnr thread are vital for proper functioning of
multipathd. If the uxlsnr thread can't open the socket or fails to call ppoll()
for other reasons, quit the daemon. If we don't do that, multipathd may
hang in a state where it can't be terminated any more, because the uxlsn
On Tue, 20 Mar 2018, Mikulas Patocka wrote:
> > > Another problem with slub_max_order is that it would pad all caches to
> > > slub_max_order, even those that already have a power-of-two size (in that
> > > case, the padding is counterproductive).
> >
> > No it does not. Slub will calculate the co
On Wed, 21 Mar 2018, Christopher Lameter wrote:
> On Tue, 20 Mar 2018, Mikulas Patocka wrote:
>
> > > > Another problem with slub_max_order is that it would pad all caches to
> > > > slub_max_order, even those that already have a power-of-two size (in
> > > > that
> > > > case, the padding is
Early Alpha processors cannot write a single byte or word; they read 8
bytes, modify the value in registers, and write back 8 bytes.
The type blk_status_t is defined as one byte, and it is often written
asynchronously by I/O completion routines; this asynchronous modification
can corrupt content of nea
I'm getting a slab named "biovec-(1<<(21-12))". It is caused by unintended
expansion of the macro BIO_MAX_PAGES. This patch renames it to biovec-max.
Signed-off-by: Mikulas Patocka
Cc: sta...@vger.kernel.org # v4.14+
---
block/bio.c |4 ++--
1 file changed, 2 insertions(+), 2 deletions
On 3/21/18 10:42 AM, Mikulas Patocka wrote:
> Early alpha processors cannot write a single byte or word; they read 8
> bytes, modify the value in registers and write back 8 bytes.
>
> The type blk_status_t is defined as one byte, it is often written
> asynchronously by I/O completion routines, thi
On Wed, 21 Mar 2018, Jens Axboe wrote:
> On 3/21/18 10:42 AM, Mikulas Patocka wrote:
> > Early alpha processors cannot write a single byte or word; they read 8
> > bytes, modify the value in registers and write back 8 bytes.
> >
> > The type blk_status_t is defined as one byte, it is often writ
On 3/21/18 11:00 AM, Mikulas Patocka wrote:
>
>
> On Wed, 21 Mar 2018, Jens Axboe wrote:
>
>> On 3/21/18 10:42 AM, Mikulas Patocka wrote:
>>> Early alpha processors cannot write a single byte or word; they read 8
>>> bytes, modify the value in registers and write back 8 bytes.
>>>
>>> The type b
On Wed, 21 Mar 2018, Mikulas Patocka wrote:
> > You should not be using the slab allocators for these. Allocate higher
> order pages or numbers of consecutive smaller pages from the page
> > allocator. The slab allocators are written for objects smaller than page
> > size.
>
> So, do you argue
One other thought: If you want to improve the behavior for large scale
objects allocated through kmalloc/kmemcache then we would certainly be
glad to entertain those ideas.
F.e. you could optimize the allocations > 2x PAGE_SIZE so that they do not
allocate powers of two pages. It would be relativel
On Wed, 21 Mar 2018, Matthew Wilcox wrote:
> I don't know if that's a good idea. That will contribute to fragmentation
> if the allocation is held onto for a short-to-medium length of time.
> If the allocation is for a very long period of time then those pages
> would have been unavailable anyway
On Wed, 21 Mar 2018, Matthew Wilcox wrote:
> On Wed, Mar 21, 2018 at 12:39:33PM -0500, Christopher Lameter wrote:
> > One other thought: If you want to improve the behavior for large scale
> > objects allocated through kmalloc/kmemcache then we would certainly be
> > glad to entertain those idea
On Wed, 21 Mar 2018, Christopher Lameter wrote:
> On Wed, 21 Mar 2018, Mikulas Patocka wrote:
>
> > > You should not be using the slab allocators for these. Allocate higher
> > > order pages or numbers of consecutive smaller pages from the page
> > > allocator. The slab allocators are written
On Wed, 21 Mar 2018, Mikulas Patocka wrote:
> > > F.e. you could optimize the allocations > 2x PAGE_SIZE so that they do not
> > > allocate powers of two pages. It would be relatively easy to make
> > > kmalloc_large round the allocation to the next page size and then allocate
> > > N consecutive p
On Wed, 21 Mar 2018, Christopher Lameter wrote:
> On Wed, 21 Mar 2018, Mikulas Patocka wrote:
>
> > > > F.e. you could optimize the allocations > 2x PAGE_SIZE so that they do
> > > > not
> > > > allocate powers of two pages. It would be relatively easy to make
> > > > kmalloc_large round the al
On Wed, 21 Mar 2018, Mikulas Patocka wrote:
> So, what would you recommend for allocating 640KB objects while minimizing
> wasted space?
> * alloc_pages - rounds up to the next power of two
> * kmalloc - rounds up to the next power of two
> * alloc_pages_exact - O(n*log n) complexity; and causes m
On Wed, 21 Mar 2018, Matthew Wilcox wrote:
> > Have a look at include/linux/mempool.h.
>
> That's not what mempool is for. mempool is a cache of elements that were
> allocated from slab in the first place. (OK, technically, you don't have
> to use slab as the allocator, but since there is no all
On Wed, 21 Mar 2018, Christopher Lameter wrote:
> On Wed, 21 Mar 2018, Mikulas Patocka wrote:
>
> > So, what would you recommend for allocating 640KB objects while minimizing
> > wasted space?
> > * alloc_pages - rounds up to the next power of two
> > * kmalloc - rounds up to the next power of
On Wed, 21 Mar 2018, Christopher Lameter wrote:
> One other thought: If you want to improve the behavior for large scale
> objects allocated through kmalloc/kmemcache then we would certainly be
> glad to entertain those ideas.
>
> F.e. you could optimize the allocations > 2x PAGE_SIZE so that th
On Wed, 2018-03-21 at 01:54 +, Chongyun Wu wrote:
> Is there any special operation or conditions to reproduce the dead lock?
> I have used the SCSI sysfs delete attribute to remove stale devices in my
> previous patch and test many times, but I haven't encountered any
> deadlock problems.
Hell
On Wed, 21 Mar 2018, Mikulas Patocka wrote:
> For example, if someone creates a slab cache with the flag SLAB_CACHE_DMA,
> and he allocates an object from this cache and this allocation races with
> the user writing to /sys/kernel/slab/cache/order - then the allocator can
> for a small period of t
On Wed, 21 Mar 2018, Christopher Lameter wrote:
> On Wed, 21 Mar 2018, Mikulas Patocka wrote:
>
> > For example, if someone creates a slab cache with the flag SLAB_CACHE_DMA,
> > and he allocates an object from this cache and this allocation races with
> > the user writing to /sys/kernel/slab/c
On 3/21/18 10:49 AM, Mikulas Patocka wrote:
> I'm getting a slab named "biovec-(1<<(21-12))". It is caused by unintended
> expansion of the macro BIO_MAX_PAGES. This patch renames it to biovec-max.
Applied, thanks.
--
Jens Axboe
--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.co
On 2018/3/22 3:56, Bart Van Assche wrote:
> On Wed, 2018-03-21 at 01:54 +, Chongyun Wu wrote:
>> Is there any special operation or conditions to reproduce the dead lock?
>> I have used the SCSI sysfs delete attribute to remove stale devices in my
>> previous patch and test many times, but I haven
On 2018/3/20 23:14, Bart Van Assche wrote:
> On Tue, 2018-03-20 at 16:12 +0100, Xose Vazquez Perez wrote:
>> On 03/20/2018 03:58 PM, Bart Van Assche wrote:
>>
>>> It is on purpose that the SCSI core does not remove stale SCSI device nodes.
>>> If you want that these stale SCSI device nodes get remo