dio_free_pagev(pages, num_pages, false);
ceph_osdc_put_request(req);
if (ret)
        break;
On 2015/9/28 11:17, Yan, Zheng wrote:
Hi, zhucaifeng
The patch generally looks good, but I think there is a minor flaw; see
my comment below. Besides, could you write a patch
That is my reason why the iov_iter APIs are unfit for combining multiple
iovs into one page vector.
Best Regards
On Wed, Sep 30, 2015 at 5:40 PM, zhucaifeng <zhucaif...@unissoft-nj.com> wrote:
Hi, Yan
The iov_iter APIs seem unsuitable for the direct io manipulation below.
On 2015/9/30 20:13, Yan, Zheng wrote:
On Wed, Sep 30, 2015 at 5:40 PM, zhucaifeng <zhucaif...@unissoft-nj.com> wrote:
Hi, Yan
The iov_iter APIs seem unsuitable for the direct io manipulation below.
In dio_alloc_pagev, this loop invariant has been ensured indirectly.
Please see my inline comments.
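As a hypothetical illustration of that invariant (the function name, layout, and numbers below are assumptions for this sketch, not code from the actual patch): because every iov after the first begins on a page boundary of the combined buffer, the running byte count alone determines which page slot the next iov fills, so the loop never needs an explicit per-iov alignment check. The assert makes the "ensured indirectly" condition visible.

```c
/* Hypothetical userspace model of the dio_alloc_pagev loop invariant;
 * combined_pages() is an illustrative stand-in, not ceph code. */
#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE 4096ul

/* Total pages a combined page vector needs, while asserting the
 * invariant: each iov after the first starts on a page boundary of
 * the combined buffer (first_off + bytes-so-far is page-aligned). */
static unsigned long combined_pages(unsigned long first_off,
                                    const unsigned long *lens, int n)
{
        unsigned long bytes = 0;

        for (int i = 0; i < n; i++) {
                if (i > 0)      /* invariant, guaranteed by earlier checks */
                        assert((first_off + bytes) % PAGE_SIZE == 0);
                bytes += lens[i];
        }
        return (first_off + bytes + PAGE_SIZE - 1) / PAGE_SIZE;
}

int main(void)
{
        /* first iov ends page-aligned, middle iov is whole pages */
        unsigned long lens[] = { PAGE_SIZE - 100, 2 * PAGE_SIZE, 300 };

        printf("%lu pages\n", combined_pages(100, lens, 3));
        return 0;
}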
I'll deliver a new patch for the newest kernel, using the iov_iter APIs.
Best Regards!
Hi, all
When using cephfs, we find that cephfs direct io is very slow.
For example, installing Windows 7 takes more than 1 hour on a
virtual machine whose disk is a file in cephfs.
The cause is that when doing direct io, both ceph_sync_direct_write
and ceph_sync_read iterate iov elements one by one.