Agree with David

It's being cached; you can try:
- oflag=direct (or conv=fdatasync) options for dd
- monitoring the system cache during the dd run
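For example, a minimal sketch of both suggestions (writing to a temp file here rather than the CephFS mount from the thread, and with a small illustrative size; on the real mount you would point dd at the CephFS path and could also try oflag=direct to bypass the page cache entirely):

```shell
# Illustrative target; in the thread this would be the CephFS mount,
# e.g. /cephtest/test
target=$(mktemp)

# Buffered write: with 64 GB of RAM, an ~11 GB dd lands in the page
# cache, so the ~3 GB/s reported is memory speed, not Ceph speed
dd if=/dev/zero of="$target" bs=1M count=64 2>&1 | tail -n1

# conv=fdatasync: fsync before dd exits, so the reported rate
# includes the actual writeback
dd if=/dev/zero of="$target" bs=1M count=64 conv=fdatasync 2>&1 | tail -n1

# Snapshot of how much dirty data is waiting to be written back;
# run repeatedly (e.g. under watch) during the dd to monitor the cache
grep -E '^(Dirty|Writeback):' /proc/meminfo

rm -f "$target"
```

With conv=fdatasync the reported throughput should drop from page-cache speed to something closer to what the network and OSDs can actually sustain.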


- Karan -

On Fri, Jun 17, 2016 at 1:58 AM, David <[email protected]> wrote:

> I'm probably misunderstanding the question, but if you're getting 3 GB/s
> from your dd, you're already caching. Can you provide some more detail on
> what you're trying to achieve?
> On 16 Jun 2016 21:53, "Patrick McGarry" <[email protected]> wrote:
>
>> Moving this over to ceph-user where it’ll get the eyeballs you need.
>>
>> On Mon, Jun 13, 2016 at 2:58 AM, Marcus Strasser
>> <[email protected]> wrote:
>> > Hello!
>> >
>> >
>> >
>> > I have a little test cluster with 2 servers. Each server has an OSD
>> > with 800 GB, and there is a 10 Gbps link between the servers.
>> >
>> > On a Ceph client I have configured a CephFS, mounted with the kernel
>> > client. The client is also connected with a 10 Gbps link.
>> >
>> > All 3 machines run Debian with a 4.5.5 kernel and 64 GB of memory.
>> > There is no special configuration.
>> >
>> > Now the question:
>> >
>> > When I run dd (~11 GB) in the CephFS mount, I get a result of 3 GB/s:
>> >
>> > dd if=/dev/zero of=/cephtest/test bs=1M count=10240
>> >
>> > Is it possible to transfer the data faster (use the full capacity of
>> > the network) and cache it with the memory?
>> >
>> >
>> >
>> > Thanks,
>> >
>> > Marcus Strasser
>> >
>> >
>> >
>> >
>> >
>> > Marcus Strasser
>> >
>> > Linux Systeme
>> >
>> > Russmedia IT GmbH
>> >
>> > A-6850 Schwarzach, Gutenbergstr. 1
>> >
>> >
>> >
>> > T +43 5572 501-872
>> >
>> > F +43 5572 501-97872
>> >
>> > [email protected]
>> >
>> > highspeed.vol.at
>> >
>> >
>> >
>> >
>>
>>
>>
>> --
>>
>> Best Regards,
>>
>> Patrick McGarry
>> Director Ceph Community || Red Hat
>> http://ceph.com  ||  http://community.redhat.com
>> @scuttlemonkey || @ceph
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
