Re: [ceph-users] RBD+KVM problems with sequential read

2014-02-07 Thread Konrad Gutkowski
Hi, On 07.02.2014 at 08:14 Ирек Фасихов malm...@gmail.com wrote: [...] Why might sequential reads be so slow? Any ideas on this issue? IIRC you need to set the readahead for the device higher (inside the VM) to compensate for network RTT: blockdev --setra x /dev/vda Thanks. -- Best regards,
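A sketch of the suggested tuning inside the guest (the 4096-sector value, i.e. 2 MB, is an illustrative choice, not from the thread):

  # raise readahead on the virtio disk inside the VM (units: 512-byte sectors)
  blockdev --setra 4096 /dev/vda
  # verify the new value
  blockdev --getra /dev/vda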

Re: [ceph-users] RBD Caching - How to enable?

2014-02-07 Thread Alexandre DERUMIER
This page reads "If you set rbd_cache=true, you must set cache=writeback or risk data loss." [...] Because if you don't set cache=writeback in qemu, qemu doesn't send flush requests. And if you enable the rbd cache and your host crashes, you'll lose data and possibly corrupt the filesystem. If you [...]
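For illustration, the matching pair of settings might look like this (pool and image names are placeholders, not from the thread):

  # ceph.conf on the client: enable the librbd cache
  [client]
      rbd cache = true

  # qemu drive option using writeback, so guest flushes reach librbd
  qemu-system-x86_64 ... -drive file=rbd:rbd/vm-disk,cache=writeback,if=virtio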

Re: [ceph-users] filesystem fragmentation on ext4 OSD

2014-02-07 Thread Christian Kauhaus
On 06.02.2014 16:24, Mark Nelson wrote: Hi Christian, can you tell me a little bit about how you are using Ceph and what kind of IO you are doing? Just forgot to mention: we're running Ceph 0.72.2 on Linux 3.10 (both on the storage servers and inside the VMs) and Qemu-KVM 1.5.3. Regards Christian --

[ceph-users] Questions about coming cache pool

2014-02-07 Thread Alexandre DERUMIER
Hi, I have some questions about the coming cache pool feature. Is it only a cache (is the data on both the cache pool and the main pool), or is the data migrated from the main pool to the cache pool? Do we need to enable replication on the cache pool? What happens if we lose OSDs from the cache pool? [...]
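For reference, the feature later shipped as cache tiering; a sketch of the setup commands as they eventually landed (pool names are placeholders):

  # put hot-pool in front of cold-pool as a writeback cache tier
  ceph osd tier add cold-pool hot-pool
  ceph osd tier cache-mode hot-pool writeback
  # route client IO for cold-pool through the cache tier
  ceph osd tier set-overlay cold-pool hot-pool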

Re: [ceph-users] RBD+KVM problems with sequential read

2014-02-07 Thread Ирек Фасихов
echo noop > /sys/block/vda/queue/scheduler
echo 1000 > /sys/block/vda/queue/nr_requests
echo 8192 > /sys/block/vda/queue/read_ahead_kb
[root@nfs tmp]# dd if=test of=/dev/null
39062500+0 records in
39062500+0 records out
20000000000 bytes (20 GB) copied, 244.024 s, 82.0 MB/s
Changing these parameters [...]
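Note that dd's default 512-byte block size adds per-request overhead; a hedged variant of the same test with a larger block size (the 4M value is illustrative, not from the thread):

  dd if=test of=/dev/null bs=4M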

Re: [ceph-users] RBD+KVM problems with sequential read

2014-02-07 Thread Daniel Schwager
Set up a bigger value for read_ahead_kb? I tested with 256 MB read ahead cache. ( From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ирек Фасихов Sent: Friday, February 07, 2014 10:55 AM To: Konrad Gutkowski Cc: ceph-users@lists.ceph.com Subject: Re: [...]

Re: [ceph-users] RBD+KVM problems with sequential read

2014-02-07 Thread Daniel Schwager
I'm sorry, but I did not understand you :) Sorry (-: My finger hit the RETURN key too fast... Try setting a bigger value for the read ahead cache, maybe 256 MB? echo 262144 > /sys/block/vda/queue/read_ahead_kb Try also the fio performance tool - it will show more detailed information.
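A minimal fio sequential-read sketch along these lines (device path, block size, and runtime are illustrative choices, not from the thread):

  fio --name=seqread --filename=/dev/vda --rw=read --bs=4M \
      --ioengine=libaio --iodepth=16 --direct=1 --runtime=60 --time_based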

Re: [ceph-users] rbd-fuse rbd_list: error %d Numerical result out of range

2014-02-07 Thread Graeme Lambert
Hi, Does anyone know what the issue is with this? Thanks *Graeme* On 06/02/14 13:21, Graeme Lambert wrote: Hi all, Can anyone advise on the problem below with rbd-fuse? From http://mail.blameitonlove.com/lists/ceph-devel/msg14723.html it looks like this has happened before but [...]
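For context: "Numerical result out of range" is ERANGE, which rbd_list() returns when the caller's name buffer is too small, and the literal %d in the subject looks like an unexpanded format placeholder in rbd-fuse's error message; both points are inferences from the error text, not confirmed in the thread. A quick sanity check outside rbd-fuse (pool name is a placeholder):

  # list images directly; if this works, the pool itself is fine
  rbd ls -p rbd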

Re: [ceph-users] filesystem fragmentation on ext4 OSD

2014-02-07 Thread Mark Nelson
On 02/06/2014 01:41 PM, Christian Kauhaus wrote: On 06.02.2014 16:24, Mark Nelson wrote: Hi Christian, can you tell me a little bit about how you are using Ceph and what kind of IO you are doing? Sure. We're using it almost exclusively for serving VM images that are accessed from Qemu's [...]

Re: [ceph-users] radosgw machines virtualization

2014-02-07 Thread Dominik Mostowiec
Thanks! -- Regards Dominik 2014-02-06 14:18 GMT+01:00 Dan van der Ster daniel.vanders...@cern.ch: Hi, Our three radosgw's are OpenStack VMs. Seems to work for our (limited) testing, and I don't see a reason why it shouldn't work. Cheers, Dan -- Dan van der Ster || Data Storage Services || [...]

Re: [ceph-users] filesystem fragmentation on ext4 OSD

2014-02-07 Thread Christian Kauhaus
On 07.02.2014 14:42, Mark Nelson wrote: Ok, so the reason I was wondering about the use case is if you were doing RBD specifically. Fragmentation has been something we've periodically kind of battled with but still see in some cases. BTRFS especially can get pretty spectacularly fragmented [...]
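One hedged way to quantify fragmentation on an ext4 OSD (the data path is illustrative):

  # count extents per object file; many extents per file means heavy fragmentation
  find /var/lib/ceph/osd/ceph-0/current -type f -print0 \
      | xargs -0 filefrag | sort -t: -k2 -rn | head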

Re: [ceph-users] RGW Replication

2014-02-07 Thread Craig Lewis
I have confirmed this in production, with the default max-entries. I have a bucket that I'm no longer writing to. Radosgw-agent had stopped replicating this bucket. radosgw-admin bucket stats shows that the slave is missing ~600k objects. I uploaded a 1 byte file to the bucket. On the [...]
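A hedged way to compare the two sides (bucket name is a placeholder):

  # run on master and slave zones and compare object counts
  radosgw-admin bucket stats --bucket=mybucket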

[ceph-users] [ANN] ceph-deploy 1.3.5 released!

2014-02-07 Thread Alfredo Deza
Hi All, There is a new release of ceph-deploy, the easy deployment tool for Ceph. Although this is primarily a bug-fix release, the library that ceph-deploy uses to connect to remote hosts (execnet) was updated to the latest stable release. A full list of changes can be found in the [...]
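If upgrading via pip, the usual invocation would be something like this (assuming a pip-based install rather than distro packages):

  pip install -U ceph-deploy
  ceph-deploy --version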

Re: [ceph-users] filesystem fragmentation on ext4 OSD

2014-02-07 Thread Christian Balzer
On Fri, 07 Feb 2014 18:46:31 +0100 Christian Kauhaus wrote: On 07.02.2014 14:42, Mark Nelson wrote: Ok, so the reason I was wondering about the use case is if you were doing RBD specifically. Fragmentation has been something we've periodically kind of battled with but still see in some [...]

Re: [ceph-users] filesystem fragmentation on ext4 OSD

2014-02-07 Thread Sage Weil
On Sat, 8 Feb 2014, Christian Balzer wrote: On Fri, 07 Feb 2014 18:46:31 +0100 Christian Kauhaus wrote: On 07.02.2014 14:42, Mark Nelson wrote: Ok, so the reason I was wondering about the use case is if you were doing RBD specifically. Fragmentation has been something we've [...]

Re: [ceph-users] RBD Caching - How to enable?

2014-02-07 Thread Peter Matulis
On 02/07/2014 03:11 AM, Alexandre DERUMIER wrote: This page reads "If you set rbd_cache=true, you must set cache=writeback or risk data loss." [...] If you enable writeback, the guest sends flush requests. If the host crashes, you'll lose data but it won't corrupt the guest filesystem. So the [...]

Re: [ceph-users] filesystem fragmentation on ext4 OSD

2014-02-07 Thread Sage Weil
On Sat, 8 Feb 2014, Christian Balzer wrote: On Fri, 7 Feb 2014 19:22:54 -0800 (PST) Sage Weil wrote: On Sat, 8 Feb 2014, Christian Balzer wrote: On Fri, 07 Feb 2014 18:46:31 +0100 Christian Kauhaus wrote: On 07.02.2014 14:42, Mark Nelson wrote: Ok, so the reason I was [...]