On 16.11.2017 09:38, Qu Wenruo wrote:
>
>
> On November 16, 2017 14:54, Nikolay Borisov wrote:
>>
>>
>> On 16.11.2017 04:18, Qu Wenruo wrote:
>>> Hi all,
>>>
>>> [Background]
>>> Recently I'm considering the possibility to use checksum from filesystem
>>> to enhance device-mapper raid.
>>>
>>> The
On November 16, 2017 14:54, Nikolay Borisov wrote:
>
>
> On 16.11.2017 04:18, Qu Wenruo wrote:
>> Hi all,
>>
>> [Background]
>> Recently I'm considering the possibility to use checksum from filesystem
>> to enhance device-mapper raid.
>>
>> The idea behind it is quite simple, since most modern
On 11/16/2017 4:22 PM, Ingo Molnar wrote:
> * Byungchul Park wrote:
> > On Sat, Nov 11, 2017 at 10:45:24PM +0900, Byungchul Park wrote:
> > > This is the big one including all of version 3.
> > >
> > > You can take only this.
> >
> > Hello Ingo,
> >
> > Could you consider this?
> Yeah, I'll have a look
* Byungchul Park wrote:
> On Sat, Nov 11, 2017 at 10:45:24PM +0900, Byungchul Park wrote:
> > This is the big one including all of version 3.
> >
> > You can take only this.
>
> Hello Ingo,
>
> Could you consider this?
Yeah, I'll have a look in a few days, but right
On 15 November 2017 at 14:07, Adrian Hunter wrote:
> On 15/11/17 12:55, Ulf Hansson wrote:
>> Linus, Adrian,
>>
>> Apologize for sidetracking the discussion, just wanted to add some
>> minor comments.
>>
>> [...]
>>
>>>
> But what I think is nice in doing it around
On 16.11.2017 04:18, Qu Wenruo wrote:
> Hi all,
>
> [Background]
> Recently I'm considering the possibility to use checksum from filesystem
> to enhance device-mapper raid.
>
> The idea behind it is quite simple, since most modern filesystems have
> checksum for their metadata, and even some
On 11/15/2017 05:08 PM, Ming Lei wrote:
> Once blk_set_queue_dying() is done in blk_cleanup_queue(), we call
> blk_freeze_queue() and wait for q->q_usage_counter becoming zero. But if
> there are tasks blocked in get_request(), q->q_usage_counter can never
> become zero. So we have to wake up all
For now, wait_for_completion() / complete() works with lockdep.
Add lock_page() / unlock_page() and its family to lockdep support.
Byungchul Park (3):
lockdep: Apply crossrelease to PG_locked locks
lockdep: Apply lock_acquire(release) on __Set(__Clear)PageLocked
lockdep: Move data of
Although lock_page() and its family can cause deadlocks, lockdep has not
worked with them, because unlock_page() might be called in a different
context from the acquire context, which violates lockdep's assumption.
Now that CONFIG_LOCKDEP_CROSSRELEASE has been introduced, lockdep can work
with page
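The broken assumption is easy to demonstrate outside the kernel: the context that releases the "lock" need not be the context that acquired it, so owner-based tracking cannot pair the two. A hypothetical userspace Python sketch (all names invented):

```python
import threading

page_lock = threading.Lock()   # stands in for the PG_locked bit

def io_completion():
    # Runs in a different context than the locker, the way unlock_page()
    # can run from an I/O completion path rather than from the task
    # that called lock_page().
    page_lock.release()

page_lock.acquire()            # "lock_page()" in process context
worker = threading.Thread(target=io_completion)
worker.start()
worker.join()
assert not page_lock.locked()  # released by a context that never acquired it
```

An owner-based validator sees an acquire with no matching release in that context; crossrelease instead records the release wherever it happens and pairs it up with the pending acquire afterwards.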
CONFIG_LOCKDEP_PAGELOCK needs to keep lockdep_map_cross per page. Since
it's a debug feature, it's preferred to keep it in struct page_ext
rather than struct page. Move it to struct page_ext.
Signed-off-by: Byungchul Park
---
include/linux/mm_types.h | 4 ---
Usually the PG_locked bit is updated by lock_page() or unlock_page().
However, it can also be updated through __SetPageLocked() or
__ClearPageLocked(). These have to be considered as well, so that
acquires and releases stay paired.
Furthermore, e.g. __SetPageLocked() in add_to_page_cache_lru() is called
Hi all,
[Background]
Recently I'm considering the possibility to use checksum from filesystem
to enhance device-mapper raid.
The idea behind it is quite simple, since most modern filesystems have
checksum for their metadata, and even some (btrfs) have checksum for data.
And for btrfs RAID1/10
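As a toy illustration of the idea (userspace Python, invented names; plain crc32 standing in for the crc32c that btrfs actually uses), a checksum-directed read would try each mirror and return the first copy that matches the filesystem's checksum:

```python
import zlib

def read_with_repair(mirrors, checksums):
    """Return one verified copy of each block.

    mirrors:   list of mirrors, each a list of blocks (bytes)
    checksums: filesystem-provided checksum per block

    A mirror whose copy fails verification is skipped, so silent
    corruption on one leg of a RAID1 is masked by the other leg.
    """
    data = []
    for i, want in enumerate(checksums):
        for mirror in mirrors:
            if zlib.crc32(mirror[i]) == want:
                data.append(mirror[i])
                break
        else:
            raise IOError("block %d: no mirror matches its checksum" % i)
    return data
```

With the checksum coming from the filesystem, the RAID layer no longer has to guess which copy is good when the two legs disagree.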
Once blk_set_queue_dying() is done in blk_cleanup_queue(), we call
blk_freeze_queue() and wait for q->q_usage_counter becoming zero. But if
there are tasks blocked in get_request(), q->q_usage_counter can never
become zero. So we have to wake up all these tasks in blk_set_queue_dying()
first.
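The ordering matters: the dying flag must be set before the waiters are woken, or a woken task can simply block again. A minimal Python model of the fix (invented names, illustrating only the wake-then-drain order):

```python
import threading

class ToyQueue:
    """Toy model of blk_set_queue_dying(): tasks blocked waiting for a
    request must be woken once the queue is marked dying, otherwise the
    usage count never drains and cleanup waits forever."""

    def __init__(self):
        self.dying = False
        self.cond = threading.Condition()

    def get_request(self):
        with self.cond:
            while not self.dying:
                self.cond.wait()      # task sleeps until a tag frees up
            return None               # queue is dying: fail, don't block

    def set_queue_dying(self):
        with self.cond:
            self.dying = True         # mark dying first...
            self.cond.notify_all()    # ...then wake every blocked task
```

Without the notify_all(), a thread parked in get_request() would sleep forever and the freeze could never complete.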
On Sat, Nov 11, 2017 at 10:45:24PM +0900, Byungchul Park wrote:
> This is the big one including all of version 3.
>
> You can take only this.
Hello Ingo,
Could you consider this?
I want to offer a better base to anyone who helps enhance the
documentation. Of course, in the case you agree with this
On Wed, Nov 15, 2017 at 03:27:29PM -0800, Shaohua Li wrote:
> For io.low, latency target 0 is legit. 0 for rbps/wbps/rios/wios is ok
> too. And we use 0 to clear io.low settings.
>
> Cc: Tejun Heo
> Signed-off-by: Shaohua Li
Acked-by: Tejun Heo
For io.low, latency target 0 is legit. 0 for rbps/wbps/rios/wios is ok
too. And we use 0 to clear io.low settings.
Cc: Tejun Heo
Signed-off-by: Shaohua Li
---
block/blk-throttle.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
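For reference, a hedged sketch of what setting and clearing would look like on a cgroup2 mount (the path, the 8:16 device numbers and the values are all made up here; this is only a config illustration of "0 clears the setting"):

```shell
# Assumed cgroup2 layout; device numbers and values are illustrative.
cd /sys/fs/cgroup/mygroup
echo "8:16 rbps=10485760 latency=20" > io.low   # set a low limit
echo "8:16 rbps=0 latency=0" > io.low           # 0 clears the settings
```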
Hi!
> On 9/14/17 00:37, Philipp Guendisch wrote:
> > This patch adds a software based secure erase option to improve data
> > confidentiality. The CONFIG_BLK_DEV_SECURE_ERASE option enables a mount
> > flag called 'sw_secure_erase'. When you mount a volume with this flag,
> > every discard call
On 14/11/17 23:17, Linus Walleij wrote:
> We have the following risk factors:
>
> - Observed performance degradation of 1% (on x86 SDHI I guess)
> - The kernel crashes if SD card is removed (both patch sets)
I haven't been able to reproduce that. Do you have more information?
On 15/11/17 12:55, Ulf Hansson wrote:
> Linus, Adrian,
>
> Apologize for sidetracking the discussion, just wanted to add some
> minor comments.
>
> [...]
>
>>
But what I think is nice in doing it around
each request is that since mmc_put_card() calls mmc_release_host()
contains
On Wed, Nov 15, 2017 at 07:28:00PM +0900, James Bottomley wrote:
> On Wed, 2017-11-15 at 18:09 +0800, Ming Lei wrote:
> > On Tue, Nov 14, 2017 at 10:14:52AM -0800, James Bottomley wrote:
> > >
> > > On Tue, 2017-11-14 at 08:55 +0800, Ming Lei wrote:
> > > >
> > > > Hi James,
> > > >
> > > > On
Linus, Adrian,
Apologize for sidetracking the discussion, just wanted to add some
minor comments.
[...]
>
>>> But what I think is nice in doing it around
>>> each request is that since mmc_put_card() calls mmc_release_host()
>>> contains this:
>>>
>>> if (--host->claim_cnt) { (...)
>>>
>>> So
On Wed, 2017-11-15 at 18:09 +0800, Ming Lei wrote:
> On Tue, Nov 14, 2017 at 10:14:52AM -0800, James Bottomley wrote:
> >
> > On Tue, 2017-11-14 at 08:55 +0800, Ming Lei wrote:
> > >
> > > Hi James,
> > >
> > > On Mon, Nov 13, 2017 at 10:55:52AM -0800, James Bottomley wrote:
> > > >
> > > >
>
Hi all,
In order to be compliant with pass-through drive behavior, RAID queue
limits should be calculated so that the minimal I/O, optimal I/O and
discard granularity sizes are met from a single-drive perspective.
Currently the MD driver ignores the queue limits reported by members, and all
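One plausible way to combine member limits so that a single drive's expectations are still met (a sketch of the stated goal, not necessarily what the MD patches actually do) is to take the strictest minimum and the least common multiple of the granularities:

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def combine_queue_limits(members):
    """members: per-drive limits in bytes (io_min, io_opt,
    discard_granularity). Any I/O aligned to the combined limits is
    then aligned for every member drive."""
    return {
        "io_min": max(m["io_min"] for m in members),
        "io_opt": reduce(lcm, (m["io_opt"] for m in members)),
        "discard_granularity": reduce(
            lcm, (m["discard_granularity"] for m in members)),
    }
```

Taking the lcm rather than the max guarantees that the array's optimal I/O size is a whole multiple of each member's, so no member ever sees a partial stripe of its own preferred unit.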
[...]
>> Moreover, for reasons brought up while reviewing Adrian's series,
>> regarding if mq is "ready", and because I see that the diff for patch
>> 12 is small, I suggest that we just skip the step adding a Kconfig
>> option to allow an opt-in of the mq path. In other words, *the* patch
>>
On Tue, Nov 14, 2017 at 10:14:52AM -0800, James Bottomley wrote:
> On Tue, 2017-11-14 at 08:55 +0800, Ming Lei wrote:
> > Hi James,
> >
> > On Mon, Nov 13, 2017 at 10:55:52AM -0800, James Bottomley wrote:
> > >
> > > On Sat, 2017-11-11 at 10:43 +0800, Ming Lei wrote:
> > > >
> > > > So from