From: ChenLiang <chenlian...@huawei.com>

Changes in V5 (vs V4):
* Fix two issues: cache_insert did not update a page that was already in
  the cache, and xbzrle_encode_buffer risked running on data that was
  still changing.
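The hot-page policy the series aims for can be sketched roughly as below. This is a hypothetical, simplified illustration, not the actual page_cache.c: the slot count, the CacheItem layout, and the cache_insert rules here are invented for the example. The idea is that an entry for the same address is refreshed in place, while a slot holding a page dirtied in the current sync round is kept rather than evicted.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define CACHE_SLOTS 256
#define PAGE_SIZE   4096

/* Hypothetical cache entry; the real page_cache.c differs. */
typedef struct {
    uint64_t addr;       /* page address; 0 marks an empty slot */
    uint64_t sync_count; /* dirty-bitmap sync round of last update */
    uint8_t *data;       /* cached copy of the page */
} CacheItem;

static CacheItem cache[CACHE_SLOTS];

/* Sketch of the "don't evict hot pages" idea: same-address entries are
 * always updated in place, but an occupied slot for a different address
 * is only reused when its contents are stale (an older sync round). */
static int cache_insert(uint64_t addr, const uint8_t *page, uint64_t sync)
{
    CacheItem *it = &cache[(addr / PAGE_SIZE) % CACHE_SLOTS];

    if (it->addr != addr && it->addr != 0 && it->sync_count == sync) {
        return -1; /* slot holds a page dirtied this round: keep it */
    }
    if (!it->data) {
        it->data = malloc(PAGE_SIZE);
    }
    memcpy(it->data, page, PAGE_SIZE);
    it->addr = addr;
    it->sync_count = sync;
    return 0;
}
```

With this policy, a colliding page from the same sync round is rejected instead of evicting the cached (hot) entry, which is the behavior change the summary below describes.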
a. Optimize XBZRLE to remarkably decrease cache misses; compression
   efficiency increases more than fifty-fold. Before this patch set, the
   cache missed almost every time whenever the number of cache items was
   less than the number of dirty pages. Now the hot pages in the cache
   are no longer replaced by other pages.
b. Reduce data copies.
c. Fix one corruption issue.

ChenLiang (10):
  XBZRLE: Fix one XBZRLE corruption issue
  migration: Add counts of updating the dirty bitmap
  migration: expose the bitmap_sync_count to the end user
  migration: expose xbzrle cache miss rate
  XBZRLE: optimize XBZRLE to decrease the cache misses
  XBZRLE: rebuild the cache_is_cached function
  xbzrle: don't check the value in the vm ram repeatedly
  xbzrle: check 8 bytes at a time after a concurrency scene
  migration: optimize xbzrle by reducing data copy
  migration: clear the dead code

 arch_init.c                    |  74 +++++++++++++++++------------
 docs/xbzrle.txt                |   8 ++++
 hmp.c                          |   4 ++
 include/migration/migration.h  |   2 +
 include/migration/page_cache.h |  10 ++--
 migration.c                    |   3 ++
 page_cache.c                   | 101 +++++++++++------------------------
 qapi-schema.json               |   9 +++-
 qmp-commands.hx                |  15 ++++--
 xbzrle.c                       |  48 ++++++++++++++------
 10 files changed, 144 insertions(+), 130 deletions(-)

-- 
1.7.12.4
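As an aside on the "check 8 bytes at a time" patch: the core idea of scanning two page buffers a 64-bit word at a time, falling back to byte granularity only around the first mismatching word, can be sketched as below. This is hypothetical illustration code, not the actual xbzrle.c; the function name and signature are invented for the example.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch (not the real QEMU code): return the index of the
 * first byte that differs between old and cur, or -1 if they match.
 * Comparing 8 bytes per iteration cuts the number of loop steps for the
 * common long runs of unchanged data. */
static long first_diff(const uint8_t *old, const uint8_t *cur, long len)
{
    long i = 0;

    /* Fast path: compare one 64-bit word at a time while buffers agree.
     * memcpy avoids unaligned-access and strict-aliasing problems. */
    for (; i + 8 <= len; i += 8) {
        uint64_t a, b;
        memcpy(&a, old + i, 8);
        memcpy(&b, cur + i, 8);
        if (a != b) {
            break;
        }
    }

    /* Slow path: pin down the exact differing byte (also handles the
     * tail shorter than 8 bytes). */
    for (; i < len; i++) {
        if (old[i] != cur[i]) {
            return i;
        }
    }
    return -1; /* buffers are identical */
}
```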