Re: Read ahead affect Ceph read performance much

2013-07-31 Thread Li Wang
On 07/29/2013 05:24 AM, Li Wang wrote: We performed an Iozone read test on a 32-node HPC server. Regarding the hardware of each node, the CPU is very powerful, as is the network, with a bandwidth of 1.5 GB/s. 64 GB memory

RE: Read ahead affect Ceph read performance much

2013-07-30 Thread Chen, Xiaoxi
On 07/29/2013 05:24 AM, Li Wang wrote: We performed an Iozone read test on a 32-node HPC server. Regarding the hardware of each node, the CPU is very powerful

Read ahead affect Ceph read performance much

2013-07-29 Thread Li Wang
We performed an Iozone read test on a 32-node HPC server. Regarding the hardware of each node, the CPU is very powerful, as is the network, with a bandwidth of 1.5 GB/s, and 64 GB of memory; the I/O is relatively slow, with a throughput measured locally by ‘dd’ of around 70 MB/s. We configured a Ceph cluster
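The thread revolves around the Linux readahead tunable. As a hedged sketch (the device path is a placeholder, not a value from the thread), the post's local 'dd' measurement and the default readahead window can be inspected roughly like this:

```shell
# Measure raw local read throughput, roughly as the post's 'dd' figure.
# iflag=direct bypasses the page cache so the number reflects the disk itself.
# /dev/sdb is a placeholder device.
dd if=/dev/sdb of=/dev/null bs=4M count=256 iflag=direct

# Inspect the block device's readahead window; blockdev reports it in
# 512-byte sectors, so the common kernel default of 256 sectors is 128 KB.
blockdev --getra /dev/sdb
ra_sectors=256
ra_kb=$(( ra_sectors * 512 / 1024 ))
echo "readahead = ${ra_kb} KB"
```

The sector-to-kilobyte conversion matters because the two interfaces for the same setting (`blockdev --setra` and `queue/read_ahead_kb`) use different units.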

Re: Read ahead affect Ceph read performance much

2013-07-29 Thread Andrey Korolyov
Wow, very glad to hear that. I tried with the regular FS tunable and there was almost no effect in the regular test, so I thought that reads could not be improved at all in this direction. On Mon, Jul 29, 2013 at 2:24 PM, Li Wang liw...@ubuntukylin.com wrote: We performed an Iozone read test on a
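The "regular FS tunable" mentioned here is the per-device readahead window. A hedged sketch of raising it, for an RBD block device and for a CephFS kernel mount (device names, sizes, monitor address, and mount point are illustrative assumptions, not values from the thread):

```shell
# Raise readahead on an RBD block device to 4 MB (rbd0 is a placeholder).
# read_ahead_kb is in kilobytes; blockdev --setra is in 512-byte sectors,
# so 8192 sectors * 512 bytes = 4194304 bytes = 4 MB.
echo 4096 > /sys/class/block/rbd0/queue/read_ahead_kb
blockdev --setra 8192 /dev/rbd0

# The CephFS kernel client takes its readahead cap at mount time via the
# rasize option, given in bytes (monitor address and path are placeholders).
mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin,rasize=4194304
```

Whether a larger window helps depends on the workload: sequential reads benefit, while random reads can be hurt by the wasted prefetch.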

Re: Read ahead affect Ceph read performance much

2013-07-29 Thread Mark Nelson
On 07/29/2013 05:24 AM, Li Wang wrote: We performed an Iozone read test on a 32-node HPC server. Regarding the hardware of each node, the CPU is very powerful, as is the network, with a bandwidth of 1.5 GB/s, and 64 GB of memory; the I/O is relatively slow, with a throughput measured locally by ‘dd’ of around