Re: [patch 3/4] mm: filemap: pass __GFP_WRITE from grab_cache_page_write_begin()
On Tue, Sep 20, 2011 at 03:45:14PM +0200, Johannes Weiner wrote:

Tell the page allocator that pages allocated through grab_cache_page_write_begin() are expected to become dirty soon.

Signed-off-by: Johannes Weiner jwei...@redhat.com

Reviewed-by: Minchan Kim minchan@gmail.com
--
Kind regards,
Minchan Kim
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [patch 1/4 v2] mm: exclude reserved pages from dirtyable memory
On Wed, Sep 28, 2011 at 01:55:51PM +0900, Minchan Kim wrote:
Hi Hannes,
On Fri, Sep 23, 2011 at 04:38:17PM +0200, Johannes Weiner wrote:

The amount of dirtyable pages should not include the full number of free pages: there is a number of reserved pages that the page allocator and kswapd always try to keep free.

The closer (reclaimable pages - dirty pages) is to the number of reserved pages, the more likely it becomes for reclaim to run into dirty pages:

       +----------+ ---
       |   anon   |  |
       +----------+  |
       |          |  |
       |          |  -- dirty limit new    -- flusher new
       |   file   |  |
       |          |  |
       |          |  -- dirty limit old    -- flusher old
       |          |  |
       +----------+ --- reclaim
       | reserved |
       +----------+
       |  kernel  |
       +----------+

This patch introduces a per-zone dirty reserve that takes both the lowmem reserve as well as the high watermark of the zone into account, and a global sum of those per-zone values that is subtracted from the global amount of dirtyable pages. The lowmem reserve is unavailable to page cache allocations and kswapd tries to keep the high watermark free. We don't want to end up in a situation where reclaim has to clean pages in order to balance zones.

Not treating reserved pages as dirtyable on a global level is only a conceptual fix. In reality, dirty pages are not distributed equally across zones and reclaim runs into dirty pages on a regular basis.

But it is important to get this right before tackling the problem on a per-zone level, where the distance between reclaim and the dirty pages is mostly much smaller in absolute numbers.
Signed-off-by: Johannes Weiner jwei...@redhat.com
---
 include/linux/mmzone.h |    6 ++++++
 include/linux/swap.h   |    1 +
 mm/page-writeback.c    |    6 ++++--
 mm/page_alloc.c        |   19 +++++++++++++++++++
 4 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 1ed4116..37a61e7 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -317,6 +317,12 @@ struct zone {
 	 */
 	unsigned long		lowmem_reserve[MAX_NR_ZONES];
 
+	/*
+	 * This is a per-zone reserve of pages that should not be
+	 * considered dirtyable memory.
+	 */
+	unsigned long		dirty_balance_reserve;
+
 #ifdef CONFIG_NUMA
 	int node;
 	/*
diff --git a/include/linux/swap.h b/include/linux/swap.h
index b156e80..9021453 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -209,6 +209,7 @@ struct swap_list_t {
 /* linux/mm/page_alloc.c */
 extern unsigned long totalram_pages;
 extern unsigned long totalreserve_pages;
+extern unsigned long dirty_balance_reserve;
 extern unsigned int nr_free_buffer_pages(void);
 extern unsigned int nr_free_pagecache_pages(void);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index da6d263..c8acf8a 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -170,7 +170,8 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
 			NODE_DATA(node)->node_zones[ZONE_HIGHMEM];
 
 		x += zone_page_state(z, NR_FREE_PAGES) +
-		     zone_reclaimable_pages(z);
+		     zone_reclaimable_pages(z) -
+		     zone->dirty_balance_reserve;
 	}
 	/*
 	 * Make sure that the number of highmem pages is never larger
@@ -194,7 +195,8 @@ static unsigned long determine_dirtyable_memory(void)
 {
 	unsigned long x;
 
-	x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages();
+	x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages() -
+	    dirty_balance_reserve;
 
 	if (!vm_highmem_is_dirtyable)
 		x -= highmem_dirtyable_memory(x);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1dba05e..f8cba89 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -96,6 +96,14 @@ EXPORT_SYMBOL(node_states);
 
 unsigned long totalram_pages __read_mostly;
 unsigned long totalreserve_pages __read_mostly;
+/*
+ * When calculating the number of globally allowed dirty pages, there
+ * is a certain number of per-zone reserves that should not be
+ * considered dirtyable memory.  This is the sum of those reserves
+ * over all existing zones that contribute dirtyable memory.
+ */
+unsigned long dirty_balance_reserve __read_mostly;
+
 int percpu_pagelist_fraction;
 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;
 
@@ -5076,8 +5084,19 @@ static void calculate_totalreserve_pages(void)
 			if (max > zone->present_pages)
 				max = zone->present_pages;
Re: [patch 1/2/4] mm: writeback: cleanups in preparation for per-zone dirty limits
On Fri, Sep 23, 2011 at 04:41:07PM +0200, Johannes Weiner wrote:
On Thu, Sep 22, 2011 at 10:52:42AM +0200, Johannes Weiner wrote:
On Wed, Sep 21, 2011 at 04:02:26PM -0700, Andrew Morton wrote:

Should we rename determine_dirtyable_memory() to global_dirtyable_memory(), to get some sense of its relationship with zone_dirtyable_memory()?

Sounds good.

---
The next patch will introduce per-zone dirty limiting functions in addition to the traditional global dirty limiting. Rename determine_dirtyable_memory() to global_dirtyable_memory() before adding the zone-specific version, and fix up its documentation. Also, move the functions to determine the dirtyable memory and the function to calculate the dirty limit based on that together so that their relationship is more apparent and that they can be commented on as a group.

Signed-off-by: Johannes Weiner jwei...@redhat.com
Acked-by: Mel Gorman m...@suse.de
--
Mel Gorman
SUSE Labs
Re: [patch 2/2/4] mm: try to distribute dirty pages fairly across zones
On Fri, Sep 23, 2011 at 04:42:48PM +0200, Johannes Weiner wrote:

The maximum number of dirty pages that exist in the system at any time is determined by a number of pages considered dirtyable and a user-configured percentage of those, or an absolute number in bytes.

This number of dirtyable pages is the sum of memory provided by all the zones in the system minus their lowmem reserves and high watermarks, so that the system can retain a healthy number of free pages without having to reclaim dirty pages.

But there is a flaw in that we have a zoned page allocator which does not care about the global state but rather the state of individual memory zones. And right now there is nothing that prevents one zone from filling up with dirty pages while other zones are spared, which frequently leads to situations where kswapd, in order to restore the watermark of free pages, does indeed have to write pages from that zone's LRU list. This can interfere so badly with IO from the flusher threads that major filesystems (btrfs, xfs, ext4) mostly ignore write requests from reclaim already, taking away the VM's only possibility to keep such a zone balanced, aside from hoping the flushers will soon clean pages from that zone.

Enter per-zone dirty limits. They are to a zone's dirtyable memory what the global limit is to the global amount of dirtyable memory, and try to make sure that no single zone receives more than its fair share of the globally allowed dirty pages in the first place. As the number of pages considered dirtyable excludes the zones' lowmem reserves and high watermarks, the maximum number of dirty pages in a zone is such that the zone can always be balanced without requiring page cleaning.

As this is a placement decision in the page allocator and pages are dirtied only after the allocation, this patch allows allocators to pass __GFP_WRITE when they know in advance that the page will be written to and become dirty soon. 
The page allocator will then attempt to allocate from the first zone of the zonelist - which on NUMA is determined by the task's NUMA memory policy - that has not exceeded its dirty limit.

At first glance, it would appear that the diversion to lower zones can increase pressure on them, but this is not the case. With a full high zone, allocations will be diverted to lower zones eventually, so it is more of a shift in timing of the lower zone allocations. Workloads that previously could fit their dirty pages completely in the higher zone may be forced to allocate from lower zones, but the number of pages that 'spill over' is itself limited by the lower zones' dirty constraints, and thus unlikely to become a problem.

For now, the problem of unfair dirty page distribution remains for NUMA configurations where the zones allowed for allocation are in sum not big enough to trigger the global dirty limits, wake up the flusher threads and remedy the situation. Because of this, an allocation that could not succeed on any of the considered zones is allowed to ignore the dirty limits before going into direct reclaim or even failing the allocation, until a future patch changes the global dirty throttling and flusher thread activation so that they take individual zone states into account.

Signed-off-by: Johannes Weiner jwei...@redhat.com
Acked-by: Mel Gorman mgor...@suse.de
--
Mel Gorman
SUSE Labs
Re: [GIT PULL] ENOSPC rework and random fixes for next merge window
Excerpts from Josef Bacik's message of 2011-09-26 17:36:32 -0400: Hello, Chris can you pull from git://github.com/josefbacik/linux.git for-chris I've pulled this into a new integration-test branch where I'm starting to pile things in for the next merge window. Thanks Josef! -chris
Re: [PATCH] Btrfs: only inherit btrfs specific flags when creating files
On 09/27/2011 08:59 PM, Liu Bo wrote:
On 09/27/2011 11:02 PM, Josef Bacik wrote:

Xfstests 79 was failing because we were inheriting the S_APPEND flag when we weren't supposed to. There isn't any specific documentation on this so I'm taking the test as the standard of how things work, and having S_APPEND set on a directory doesn't mean that S_APPEND gets inherited by its children according to this test. So only inherit btrfs specific things. This will let us set compress/nocompress on specific directories and everything in the directories will inherit this flag, same with nodatacow. With this patch test 79 passes. Thanks,

I've checked ext3/4, they have such comments:

/* Flags that should be inherited by new inodes from their parent. */
#define EXT3_FL_INHERITED (EXT3_SECRM_FL | EXT3_UNRM_FL | EXT3_COMPR_FL |\
			   EXT3_SYNC_FL | EXT3_IMMUTABLE_FL | EXT3_APPEND_FL |\
			   EXT3_NODUMP_FL | EXT3_NOATIME_FL | EXT3_COMPRBLK_FL|\
			   EXT3_NOCOMPR_FL | EXT3_JOURNAL_DATA_FL |\
			   EXT3_NOTAIL_FL | EXT3_DIRSYNC_FL)

It shows EXT[3,4]_APPEND_FL should be inherited from their parent, is this the standard?

I have no idea actually, it was just failing on xfstest 79 and when I took out the inheritance thing it passed so I took the test to be the standard, maybe we should open this up to a wider audience. Thanks,
Josef
Re: [PATCH] Btrfs: fix missing clear_extent_bit
On 09/28/2011 06:00 AM, Liu Bo wrote: We forget to clear inode's dirty_bytes and EXTENT_DIRTY at the end of write. We don't set EXTENT_DIRTY unless we failed to read a block and that's to keep track of the area we are re-reading, unless I'm missing something? Thanks, Josef
Re: [GIT PULL] scrub updates for 3.2
On 28.09.2011 15:17, Arne Jansen wrote:
Hi Chris,
I rebased my readahead-patches for scrub to your current integration-test branch (83f4e90fd11) and pushed it to:
g...@github.com:sensille/linux.git for-chris

git://github.com/sensille/linux.git for-chris of course...

It just contains the readahead patch, which gives a significant performance improvement for scrub. Currently scrub is the only consumer.
Thanks,
Arne

Arne Jansen (7):
  btrfs: add an extra wait mode to read_extent_buffer_pages
  btrfs: add READAHEAD extent buffer flag
  btrfs: state information for readahead
  btrfs: initial readahead code and prototypes
  btrfs: hooks for readahead
  btrfs: test ioctl for readahead
  btrfs: use readahead API for scrub

 fs/btrfs/Makefile    |    3 +-
 fs/btrfs/ctree.h     |   21 ++
 fs/btrfs/disk-io.c   |   85 +-
 fs/btrfs/disk-io.h   |    2 +
 fs/btrfs/extent_io.c |    9 +-
 fs/btrfs/extent_io.h |    4 +
 fs/btrfs/ioctl.c     |   93 +-
 fs/btrfs/ioctl.h     |   16 +
 fs/btrfs/reada.c     |  949 ++
 fs/btrfs/scrub.c     |  116 +++
 fs/btrfs/volumes.c   |    8 +
 fs/btrfs/volumes.h   |    8 +
 12 files changed, 1239 insertions(+), 75 deletions(-)
 create mode 100644 fs/btrfs/reada.c
Re: File compression control, again.
Li Zefan lizf at cn.fujitsu.com writes:

See this Per file/directory controls for COW and compression:
http://marc.info/?l=linux-btrfs&m=130078867208491&w=2
And the user tool patch (which got no reply):
http://marc.info/?l=linux-btrfs&m=130311215721242&w=2
So you can create a directory, and set the no-compress flag for it, and then any file created in that dir will inherit the flag.

Thanks, Li, but how do I set the no-compress flag? The patched chattr you mention can only set the FS_COMPR_FL. The 'C' argument is now used for FS_NOCOW_FL. Could we use another flag for copy-on-write control in chattr?
Re: [PATCH] Btrfs: fix missing clear_extent_bit
Excerpts from Josef Bacik's message of 2011-09-28 08:34:03 -0400: On 09/28/2011 06:00 AM, Liu Bo wrote: We forget to clear inode's dirty_bytes and EXTENT_DIRTY at the end of write. We don't set EXTENT_DIRTY unless we failed to read a block and that's to keep track of the area we are re-reading, unless I'm missing something? Thanks, Josef and I have been talking about this one on IRC. We do set EXTENT_DIRTY during set_extent_delalloc, but as far as I can tell we no longer need to. Can you please experiment with just not setting the dirty bit during delalloc instead? -chris
Re: [PATCH] Btrfs: only inherit btrfs specific flags when creating files
On Wed, Sep 28, 2011 at 08:26:09AM -0400, Josef Bacik wrote: It shows EXT[3,4]_APPEND_FL should be inherited from their parent, is this the standard? I have no idea actually, it was just failing on xfstest 79 and when I took out the inheritance thing it passed so I took the test to be the standard, maybe we should open this up to a wider audience. Thanks, We had a little discussion on this when Stefan Behrens made this test generic, and the conclusion was that the other filesystems should adopt the xfs behaviour.
Re: File compression control, again.
Li Zefan lizf at cn.fujitsu.com writes:

See this Per file/directory controls for COW and compression:
http://marc.info/?l=linux-btrfs&m=130078867208491&w=2

Thanks again! I wrote a program to see if ioctl compression control works ( https://gist.github.com/1248085 ) and it does! : )
Re: [patch 2/2/4] mm: try to distribute dirty pages fairly across zones
On Wed, Sep 28, 2011 at 09:11:54AM +0200, Johannes Weiner wrote:
On Wed, Sep 28, 2011 at 02:56:40PM +0900, Minchan Kim wrote:
On Fri, Sep 23, 2011 at 04:42:48PM +0200, Johannes Weiner wrote:

The maximum number of dirty pages that exist in the system at any time is determined by a number of pages considered dirtyable and a user-configured percentage of those, or an absolute number in bytes.

It's an explanation of the old approach.

What do you mean? This does not change with this patch. We still have a number of dirtyable pages and a limit that is applied relatively to this number.

This number of dirtyable pages is the sum of memory provided by all the zones in the system minus their lowmem reserves and high watermarks, so that the system can retain a healthy number of free pages without having to reclaim dirty pages.

It's an explanation of the new approach.

Same here, this aspect is also not changed with this patch!

But there is a flaw in that we have a zoned page allocator which does not care about the global state but rather the state of individual memory zones. And right now there is nothing that prevents one zone from filling up with dirty pages while other zones are spared, which frequently leads to situations where kswapd, in order to restore the watermark of free pages, does indeed have to write pages from that zone's LRU list. This can interfere so badly with IO from the flusher threads that major filesystems (btrfs, xfs, ext4) mostly ignore write requests from reclaim already, taking away the VM's only possibility to keep such a zone balanced, aside from hoping the flushers will soon clean pages from that zone.

It's an explanation of the old approach, again! Shouldn't we move the above phrase about the new approach down below?

Everything above describes the current behaviour (at the point of this patch, so respecting lowmem_reserve e.g. is part of the current behaviour by now) and its problems. And below follows a description of how the patch tries to fix it. 
It seems that it's not a good choice to use old and new terms. Hannes, please ignore, it's not a biggie.
--
Kind regards,
Minchan Kim
Re: [patch 1/4 v2] mm: exclude reserved pages from dirtyable memory
On Wed, Sep 28, 2011 at 09:50:54AM +0200, Johannes Weiner wrote:
On Wed, Sep 28, 2011 at 01:55:51PM +0900, Minchan Kim wrote:
Hi Hannes,
On Fri, Sep 23, 2011 at 04:38:17PM +0200, Johannes Weiner wrote:

The amount of dirtyable pages should not include the full number of free pages: there is a number of reserved pages that the page allocator and kswapd always try to keep free.

The closer (reclaimable pages - dirty pages) is to the number of reserved pages, the more likely it becomes for reclaim to run into dirty pages:

       +----------+ ---
       |   anon   |  |
       +----------+  |
       |          |  |
       |          |  -- dirty limit new    -- flusher new
       |   file   |  |
       |          |  |
       |          |  -- dirty limit old    -- flusher old
       |          |  |
       +----------+ --- reclaim
       | reserved |
       +----------+
       |  kernel  |
       +----------+

This patch introduces a per-zone dirty reserve that takes both the lowmem reserve as well as the high watermark of the zone into account, and a global sum of those per-zone values that is subtracted from the global amount of dirtyable pages. The lowmem reserve is unavailable to page cache allocations and kswapd tries to keep the high watermark free. We don't want to end up in a situation where reclaim has to clean pages in order to balance zones.

Not treating reserved pages as dirtyable on a global level is only a conceptual fix. In reality, dirty pages are not distributed equally across zones and reclaim runs into dirty pages on a regular basis.

But it is important to get this right before tackling the problem on a per-zone level, where the distance between reclaim and the dirty pages is mostly much smaller in absolute numbers.
Re: [PATCH] Btrfs: fix missing clear_extent_bit
On 09/28/2011 09:44 PM, Chris Mason wrote: Excerpts from Josef Bacik's message of 2011-09-28 08:34:03 -0400: On 09/28/2011 06:00 AM, Liu Bo wrote: We forget to clear inode's dirty_bytes and EXTENT_DIRTY at the end of write. We don't set EXTENT_DIRTY unless we failed to read a block and that's to keep track of the area we are re-reading, unless I'm missing something? Thanks, Josef and I have been talking about this one on IRC. We do set EXTENT_DIRTY during set_extent_delalloc, but as far as I can tell we no longer need to. Can you please experiment with just not setting the dirty bit during delalloc instead?

Sure. So this EXTENT_DIRTY is only for METADATA use.
thanks, liubo
-chris