Hi Jens,
On Thu, Mar 30, 2017 at 07:38:26PM -0600, Jens Axboe wrote:
> On 03/30/2017 05:45 PM, Minchan Kim wrote:
> > On Thu, Mar 30, 2017 at 09:35:56AM -0600, Jens Axboe wrote:
> >> On 03/30/2017 09:08 AM, Minchan Kim wrote:
> >>> Hi Jens,
> >>>
> >>> It seems you miss this.
> >>> Could you handle this?
On 03/30/2017 05:45 PM, Minchan Kim wrote:
> On Thu, Mar 30, 2017 at 09:35:56AM -0600, Jens Axboe wrote:
>> On 03/30/2017 09:08 AM, Minchan Kim wrote:
>>> Hi Jens,
>>>
>>> It seems you miss this.
>>> Could you handle this?
>>
>> I can, but I'm a little confused. The comment talks about replacing
>> the one I merged with this one, I can't do that.
On Thu, Mar 30, 2017 at 09:35:56AM -0600, Jens Axboe wrote:
> On 03/30/2017 09:08 AM, Minchan Kim wrote:
> > Hi Jens,
> >
> > It seems you miss this.
> > Could you handle this?
>
> I can, but I'm a little confused. The comment talks about replacing
> the one I merged with this one, I can't do that.
On 03/30/2017 09:08 AM, Minchan Kim wrote:
> Hi Jens,
>
> It seems you miss this.
> Could you handle this?
I can, but I'm a little confused. The comment talks about replacing
the one I merged with this one, I can't do that. I'm assuming you
are talking about this commit:
commit
Hi Jens,
It seems you miss this.
Could you handle this?
Thanks.
On Thu, Mar 9, 2017 at 2:28 PM, Minchan Kim wrote:
< snip>
> Jens, Could you replace the one merged with this? And I don't want
> to add stable mark in this patch because I feel it need enough
> testing in 64K page system I
On Wed, Mar 08, 2017 at 08:58:02AM +0100, Johannes Thumshirn wrote:
> On 03/08/2017 06:11 AM, Minchan Kim wrote:
> > And could you test this patch? It avoids split bio so no need new bio
> > allocations and makes zram code simple.
> >
> > From f778d7564d5cd772f25bb181329362c29548a257 Mon Sep 17 00:00:00 2001
On 03/08/2017 06:11 AM, Minchan Kim wrote:
> And could you test this patch? It avoids split bio so no need new bio
> allocations and makes zram code simple.
>
> From f778d7564d5cd772f25bb181329362c29548a257 Mon Sep 17 00:00:00 2001
> From: Minchan Kim
> Date: Wed, 8 Mar 2017 13:35:29 +0900
>
Hi Johannes,
On Tue, Mar 07, 2017 at 10:51:45AM +0100, Johannes Thumshirn wrote:
> On 03/07/2017 09:55 AM, Minchan Kim wrote:
> > On Tue, Mar 07, 2017 at 08:48:06AM +0100, Hannes Reinecke wrote:
> >> On 03/07/2017 08:23 AM, Minchan Kim wrote:
> >>> Hi Hannes,
> >>>
> >>> On Tue, Mar 7, 2017 at 4:00 PM, Hannes Reinecke wrote:
On 03/07/2017 08:23 AM, Minchan Kim wrote:
> Hi Hannes,
>
> On Tue, Mar 7, 2017 at 4:00 PM, Hannes Reinecke wrote:
>> On 03/07/2017 06:22 AM, Minchan Kim wrote:
>>> Hello Johannes,
>>>
>>> On Mon, Mar 06, 2017 at 11:23:35AM +0100, Johannes Thumshirn wrote:
zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec.
On 03/07/2017 09:55 AM, Minchan Kim wrote:
> On Tue, Mar 07, 2017 at 08:48:06AM +0100, Hannes Reinecke wrote:
>> On 03/07/2017 08:23 AM, Minchan Kim wrote:
>>> Hi Hannes,
>>>
>>> On Tue, Mar 7, 2017 at 4:00 PM, Hannes Reinecke wrote:
On 03/07/2017 06:22 AM, Minchan Kim wrote:
> Hello Johannes,
On Tue, Mar 07, 2017 at 08:48:06AM +0100, Hannes Reinecke wrote:
> On 03/07/2017 08:23 AM, Minchan Kim wrote:
> > Hi Hannes,
> >
> > On Tue, Mar 7, 2017 at 4:00 PM, Hannes Reinecke wrote:
> >> On 03/07/2017 06:22 AM, Minchan Kim wrote:
> >>> Hello Johannes,
> >>>
> >>> On Mon, Mar 06, 2017 at 11:23:35AM +0100, Johannes Thumshirn wrote:
On 03/07/2017 06:22 AM, Minchan Kim wrote:
> Hello Johannes,
>
> On Mon, Mar 06, 2017 at 11:23:35AM +0100, Johannes Thumshirn wrote:
>> zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When using
>> the NVMe over Fabrics loopback target which potentially sends a huge bulk of
>> pages attached to the bio's bvec this results in a kernel panic because of
>> array out of bounds accesses in zram_decompress_page().
Hi Hannes,
On Tue, Mar 7, 2017 at 4:00 PM, Hannes Reinecke wrote:
> On 03/07/2017 06:22 AM, Minchan Kim wrote:
>> Hello Johannes,
>>
>> On Mon, Mar 06, 2017 at 11:23:35AM +0100, Johannes Thumshirn wrote:
>>> zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When using
>>> the NVMe over Fabrics loopback target which potentially sends a huge bulk of
>>> pages attached to the bio's bvec this results in a kernel panic because of
>>> array out of bounds accesses in zram_decompress_page().
Hello Johannes,
On Mon, Mar 06, 2017 at 11:23:35AM +0100, Johannes Thumshirn wrote:
> zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When using
> the NVMe over Fabrics loopback target which potentially sends a huge bulk of
> pages attached to the bio's bvec this results in a kernel panic because of
> array out of bounds accesses in zram_decompress_page().
On 03/06/2017 01:18 PM, Andrew Morton wrote:
> On Mon, 6 Mar 2017 08:21:11 -0700 Jens Axboe wrote:
>
>> On 03/06/2017 03:23 AM, Johannes Thumshirn wrote:
>>> zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When using
>>> the NVMe over Fabrics loopback target which potentially sends a huge bulk of
>>> pages attached to the bio's bvec this results in a kernel panic because of
>>> array out of bounds accesses in zram_decompress_page().
On Mon, 6 Mar 2017 08:21:11 -0700 Jens Axboe wrote:
> On 03/06/2017 03:23 AM, Johannes Thumshirn wrote:
> > zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When using
> > the NVMe over Fabrics loopback target which potentially sends a huge bulk of
> > pages attached to the bio's bvec this results in a kernel panic because of
> > array out of bounds accesses in zram_decompress_page().
On 03/06/2017 03:23 AM, Johannes Thumshirn wrote:
> zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When using
> the NVMe over Fabrics loopback target which potentially sends a huge bulk of
> pages attached to the bio's bvec this results in a kernel panic because of
> array out of bounds accesses in zram_decompress_page().
On (03/06/17 11:23), Johannes Thumshirn wrote:
> zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When using
> the NVMe over Fabrics loopback target which potentially sends a huge bulk of
> pages attached to the bio's bvec this results in a kernel panic because of
> array out of bounds accesses in zram_decompress_page().
On 03/06/2017 11:23 AM, Johannes Thumshirn wrote:
> zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When using
> the NVMe over Fabrics loopback target which potentially sends a huge bulk of
> pages attached to the bio's bvec this results in a kernel panic because of
> array out of bounds accesses in zram_decompress_page().
zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When using
the NVMe over Fabrics loopback target which potentially sends a huge bulk of
pages attached to the bio's bvec this results in a kernel panic because of
array out of bounds accesses in zram_decompress_page().