Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Josh Durgin
On 06/08/2015 11:19 AM, Alexandre DERUMIER wrote: Hi, looking at the latest version of QEMU, it seems this has already been the behaviour since rbd_cache parsing was added to rbd.c by Josh in 2012

[ceph-users] Ceph hangs on starting

2015-06-08 Thread Karanvir Singh
Hi, I am trying to compile and create packages for the latest ceph version (519c3c9) from the hammer branch on an ARM platform. For google-perftools I am compiling from https://code.google.com/p/gperftools/ . The packages are generated fine. I have used the same branch/commit and commands to create package
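
For context, building gperftools from that source tree typically follows the standard autotools flow; a sketch only (exact flags may differ on ARM):

    git clone https://code.google.com/p/gperftools/ gperftools
    cd gperftools
    ./autogen.sh && ./configure && make -j4   # generate build files, configure, build
    sudo make install                          # install libtcmalloc/libprofiler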

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Alexandre DERUMIER
Hi, looking at the latest version of QEMU, it seems this has already been the behaviour since rbd_cache parsing was added to rbd.c by Josh in 2012
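
The behaviour being discussed is that QEMU's rbd.c derives rbd_cache from the drive's cache= flag; a minimal sketch (pool/image names are made up):

    # cache=writeback on the drive makes QEMU enable RBD caching
    qemu-system-x86_64 \
      -drive format=raw,file=rbd:rbd/vm1:id=admin,cache=writeback
    # roughly equivalent to setting it explicitly in the rbd filename options:
    #   -drive format=raw,file=rbd:rbd/vm1:rbd_cache=true,cache=writeback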

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Andrey Korolyov
On Mon, Jun 8, 2015 at 10:43 PM, Josh Durgin jdur...@redhat.com wrote: On 06/08/2015 11:19 AM, Alexandre DERUMIER wrote: Hi, looking at the latest version of QEMU, it seems this has already been the behaviour since rbd_cache parsing was added to rbd.c by Josh in 2012

Re: [ceph-users] OSD trashed by simple reboot (Debian Jessie, systemd?)

2015-06-08 Thread Mark Kirkwood
Right - I see from the 0.80.8 notes that we merged a fix for #9073. However (unfortunately) there were a number of patches that we experimented with on this issue - and this looks like one of the earlier ones (i.e. not what we merged into master at the time), which is a bit confusing (maybe it was

Re: [ceph-users] monitor election

2015-06-08 Thread Gregory Farnum
On Thu, Jun 4, 2015 at 1:13 AM, Luis Periquito periqu...@gmail.com wrote: Hi all, I've seen several chats on the monitor elections, and how the one with the lowest IP is always the master. Is there any way to change or influence this behaviour? Other than changing the IP of the monitor

Re: [ceph-users] Cephfs: one ceph account per directory?

2015-06-08 Thread Francois Lafont
Hi, Gregory Farnum wrote: 1. Can you confirm to me that currently it's impossible to restrict the read and write access of a ceph account to a specific directory of a cephfs? It's sadly impossible to restrict access to the filesystem hierarchy at this time, yes. By making use of the file

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Andrey Korolyov
On Mon, Jun 8, 2015 at 6:50 PM, Jason Dillaman dilla...@redhat.com wrote: Hmm ... looking at the latest version of QEMU, it appears that the RBD cache settings are changed prior to reading the configuration file instead of overriding the value after the configuration file has been read [1].

Re: [ceph-users] rbd format v2 support

2015-06-08 Thread David Z
Hi Ilya, Thanks for the reply. I knew that a v2 image can be mapped when using the default striping parameters, without --stripe-unit or --stripe-count. It is just that the rbd performance (IOPS/bandwidth) we tested hasn't met our goal. We found at this point the OSDs seemed not to be the bottleneck, so we want
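
As a sketch of the distinction being made (image names are examples): a v2 image with default striping maps fine with krbd, while non-default striping parameters are what the kernel client rejects:

    rbd create --image-format 2 --size 10240 rbd/test-v2
    sudo rbd map rbd/test-v2                    # works: default striping
    rbd create --image-format 2 --size 10240 \
        --stripe-unit 65536 --stripe-count 8 rbd/test-v2-striped
    sudo rbd map rbd/test-v2-striped            # expected to fail on kernels without fancy-striping support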

Re: [ceph-users] Complete freeze of a cephfs client (unavoidable hard reboot)

2015-06-08 Thread Francois Lafont
Hi, On 27/05/2015 22:34, Gregory Farnum wrote: Sorry for the delay; I've been traveling. No problem, me too; I'm not really fast to answer. ;) Ok, I see. According to the online documentation, the way to close a cephfs client session is: ceph daemon mds.$id session ls # to get
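
The usual pair of commands here, as a sketch (the session id argument is whatever "session ls" reports for the stuck client):

    ceph daemon mds.$id session ls                  # list client sessions and note the stuck client's id
    ceph daemon mds.$id session evict <session-id>  # then evict that session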

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Alexandre DERUMIER
"In the short-term, you can remove the rbd cache setting from your ceph.conf" That's not true; you need to remove the ceph.conf file. Removing rbd_cache is not enough, or the default rbd_cache=false will apply. I have done tests; here is the result matrix: host ceph.conf : no rbd_cache : guest

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Alexandre DERUMIER
oops, sorry, my bad, I had the wrong settings when testing. You are right: removing rbd_cache from ceph.conf is enough to remove the override.
host conf : no value : guest cache=writeback : result : cache
host conf : rbd_cache=false : guest cache=writeback : result : nocache (wrong)
host

[ceph-users] rbd_cache, limiting read on high iops around 40k

2015-06-08 Thread Alexandre DERUMIER
Hi, I'm doing benchmarks (ceph master branch) with randread 4k, qdepth=32, and rbd_cache=true seems to limit the iops to around 40k.
no cache:
1 client - rbd_cache=false - 1 osd : 38300 iops
1 client - rbd_cache=false - 2 osd : 69073 iops
1 client - rbd_cache=false - 3 osd : 78292 iops
cache
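
A benchmark of this shape can be reproduced with fio's rbd engine; a minimal sketch, assuming a pre-created test image named "bench" in pool "rbd":

    fio --name=randread-4k --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=bench --rw=randread --bs=4k \
        --iodepth=32 --runtime=60 --time_based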

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Alexandre DERUMIER
The previous matrix was with ceph giant. With ceph = giant, rbd_cache=true by default, so cache=none does not work if a ceph.conf exists.
host conf : no value : guest cache=writeback : result : cache
host conf : rbd_cache=false : guest cache=writeback : result : nocache (wrong)
host
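
The practical upshot for the hypervisor-side configuration, as a sketch:

    # /etc/ceph/ceph.conf on the QEMU host
    [client]
    # leave "rbd cache" unset so the guest's cache= flag decides;
    # on the affected versions, an explicit "rbd cache = false" here
    # overrides a guest disk configured with cache=writeback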

Re: [ceph-users] OSD trashed by simple reboot (Debian Jessie, systemd?)

2015-06-08 Thread Mark Kirkwood
Trying out some tests on my pet VMs with 0.80.9 does not elicit any journal failures... However, ISTR that running on bare metal was the most reliable way to reproduce... (proceeding - currently I cannot get ceph-deploy to install this configuration... I'll investigate further tomorrow)!

Re: [ceph-users] Multiple journals and an OSD on one SSD doable?

2015-06-08 Thread Christian Balzer
Hello, On Mon, 8 Jun 2015 18:01:28 +1200 cameron.scr...@solnet.co.nz wrote: Just used the method in the link you sent me to test one of the EVO 850s; with one job it reached a speed of around 2.5MB/s, but it didn't max out until around 32 jobs at 24MB/s. I'm not the author of that page,

Re: [ceph-users] OSD trashed by simple reboot (Debian Jessie, systemd?)

2015-06-08 Thread Christian Balzer
Mark, one would hope you can't with 0.80.9 as per the release notes, while 0.80.7 definitely was susceptible. Christian On Mon, 08 Jun 2015 20:05:20 +1200 Mark Kirkwood wrote: Trying out some tests on my pet VMs with 0.80.9 does not elicit any journal failures...However ISTR that running

Re: [ceph-users] ceph-disk activate /dev/sda1 seem to get stuck?

2015-06-08 Thread Christian Balzer
Hello, All I can tell you is that I'm seeing the same thing frequently on Debian Jessie, and that it indeed seems to be a race condition between udev and ceph-deploy (ceph-disk). I solved this by killing off the process stuck on the target node (the one with the tmp/mnt directory) and then doing
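
A rough sketch of that workaround (the PID, mount path, and final retry step are illustrative, not from the original message):

    ps aux | grep ceph-disk                  # find the stuck ceph-disk/udev process
    sudo kill 12345                          # kill it
    sudo umount /var/lib/ceph/tmp/mnt.XXXX   # release the leftover tmp mount, if present
    sudo ceph-disk activate /dev/sda1        # retry the activation by hand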

Re: [ceph-users] Multiple journals and an OSD on one SSD doable?

2015-06-08 Thread Jan Schermer
I recently did some testing of a few SSDs and found some surprising, and some not so surprising, things:
1) performance varies wildly with firmware, especially with cheaper drives
2) performance varies with time - even with S3700 - slows down after ~40-80GB and then creeps back up
3) cheaper

Re: [ceph-users] Multiple journals and an OSD on one SSD doable?

2015-06-08 Thread Christian Balzer
On Mon, 8 Jun 2015 09:44:54 +0200 Jan Schermer wrote: I recently did some testing of a few SSDs and found some surprising, and some not so surprising things: 1) performance varies wildly with firmware, especially with cheaper drives 2) performance varies with time - even with S3700 -

Re: [ceph-users] Multiple journals and an OSD on one SSD doable?

2015-06-08 Thread Jan Schermer
On 08 Jun 2015, at 10:07, Christian Balzer ch...@gol.com wrote: On Mon, 8 Jun 2015 09:44:54 +0200 Jan Schermer wrote: I recently did some testing of a few SSDs and found some surprising, and some not so surprising things: 1) performance varies wildly with firmware, especially with

Re: [ceph-users] Multiple journals and an OSD on one SSD doable?

2015-06-08 Thread Cameron . Scrace
Just used the method in the link you sent me to test one of the EVO 850s; with one job it reached a speed of around 2.5MB/s, but it didn't max out until around 32 jobs at 24MB/s: sudo fio --filename=/dev/sdh --direct=1 --sync=1 --rw=write --bs=4k --numjobs=32 --iodepth=1 --runtime=60
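
The command is cut off above; a plausible complete invocation of this sync-write journal test looks like the following (the final two flags are assumptions, not from the original message):

    sudo fio --filename=/dev/sdh --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=32 --iodepth=1 --runtime=60 --group_reporting --name=journal-test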

Re: [ceph-users] ceph-disk activate /dev/sda1 seem to get stuck?

2015-06-08 Thread Jelle de Jong
On 05/06/15 21:50, Jelle de Jong wrote: I am new to ceph and I am trying to build a cluster for testing. After running: ceph-deploy osd prepare --zap-disk ceph02:/dev/sda it seems the udev rules find the disk and try to activate it, but then it gets stuck:

Re: [ceph-users] Multiple journals and an OSD on one SSD doable?

2015-06-08 Thread Jan Schermer
On 08 Jun 2015, at 10:40, Christian Balzer ch...@gol.com wrote: On Mon, 8 Jun 2015 10:12:02 +0200 Jan Schermer wrote: On 08 Jun 2015, at 10:07, Christian Balzer ch...@gol.com wrote: On Mon, 8 Jun 2015 09:44:54 +0200 Jan Schermer wrote: I recently did some testing of a few SSDs and

[ceph-users] rbd cache + libvirt

2015-06-08 Thread Arnaud Virlet
Hi. We currently use libvirt VMs with a Ceph RBD pool for storage. By default we want disk cache=writeback for all disks in libvirt. In /etc/ceph/ceph.conf we have rbd cache = true, and in each VM's XML we set cache=writeback for all disks. We want to use one ocfs2
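
For per-disk cache modes, libvirt already allows setting the cache attribute on each disk's driver element individually; a sketch (pool, image, host, and device names are invented):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='libvirt-pool/vm1-disk1'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>   <!-- OCFS2-backed volume -->
      <source file='/mnt/ocfs2/vm1-disk2.img'/>
      <target dev='vdb' bus='virtio'/>
    </disk>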

[ceph-users] osd crashing

2015-06-08 Thread Cristian Falcas
Hello, On a fresh install of ceph, I started to get these errors on one osd:
0> 2015-06-08 14:22:25.582417 7f21d9239880 -1 osd/OSD.h: In function 'OSDMapRef OSDService::get_map(epoch_t)' thread 7f21d9239880 time 2015-06-08 14:22:25.579846
osd/OSD.h: 716: FAILED assert(ret)
ceph version

Re: [ceph-users] ceph-disk activate /dev/sda1 seem to get stuck?

2015-06-08 Thread Jelle de Jong
Thank you for taking the time to reply. I removed the file /lib/udev/rules.d/95-ceph-osd.rules from all my nodes and tried to recreate the OSDs. The pastebin below is an example of the commands:
ceph-deploy disk zap ceph02:/dev/sdc
ceph-deploy osd prepare --zap-disk ceph02:sda:/dev/sdc

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Andrey Korolyov
On Mon, Jun 8, 2015 at 1:24 PM, Arnaud Virlet avir...@easter-eggs.com wrote: Hi. We currently use libvirt VMs with a Ceph RBD pool for storage. By default we want disk cache=writeback for all disks in libvirt. In /etc/ceph/ceph.conf we have rbd cache = true, and in each VM's XML we set

Re: [ceph-users] Multiple journals and an OSD on one SSD doable?

2015-06-08 Thread Christian Balzer
On Mon, 8 Jun 2015 10:12:02 +0200 Jan Schermer wrote: On 08 Jun 2015, at 10:07, Christian Balzer ch...@gol.com wrote: On Mon, 8 Jun 2015 09:44:54 +0200 Jan Schermer wrote: I recently did some testing of a few SSDs and found some surprising, and some not so surprising things:

[ceph-users] how do i install ceph from apt on debian jessie?

2015-06-08 Thread Jelle de Jong
Hello everybody, I could not get ceph to work with the ceph packages shipped with Debian jessie: http://paste.debian.net/211771/ So I tried apt pinning to use the eu.ceph.com apt repository, but there are too many unresolved dependencies. This is my apt configuration:
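
For reference, a pinning setup of the kind being attempted might look like this (the repository path and priority are assumptions, not from the original message):

    # /etc/apt/sources.list.d/ceph.list
    deb http://eu.ceph.com/debian-hammer/ jessie main

    # /etc/apt/preferences.d/ceph.pref
    Package: *
    Pin: origin "eu.ceph.com"
    Pin-Priority: 1001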

Re: [ceph-users] how do i install ceph from apt on debian jessie?

2015-06-08 Thread Jelle de Jong
On 08/06/15 13:22, Jelle de Jong wrote: I could not get ceph to work with the ceph packages shipped with Debian jessie: http://paste.debian.net/211771/ So I tried apt pinning to use the eu.ceph.com apt repository, but there are too many unresolved dependencies. This is my

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Andrey Korolyov
On Mon, Jun 8, 2015 at 2:48 PM, Arnaud Virlet avir...@easter-eggs.com wrote: Thanks for your reply. On 06/08/2015 12:31 PM, Andrey Korolyov wrote: On Mon, Jun 8, 2015 at 1:24 PM, Arnaud Virlet avir...@easter-eggs.com wrote: Hi. We currently use libvirt VMs with a Ceph RBD pool for storage. By

Re: [ceph-users] how do i install ceph from apt on debian jessie?

2015-06-08 Thread Christian Balzer
On Mon, 08 Jun 2015 14:14:51 +0200 Jelle de Jong wrote: On 08/06/15 13:22, Jelle de Jong wrote: I could not get ceph to work with the ceph packages shipped with Debian jessie: http://paste.debian.net/211771/ So I tried apt pinning to use the eu.ceph.com apt repository, but

[ceph-users] ceph breizh camp

2015-06-08 Thread eric mourgaya
Hey, the next Ceph Breizh camp will take place in Rennes (Brittany) on 16 June. The meetup will begin at 10:00 at: IRISA - Institut de Recherche en Informatique et Systèmes Aléatoires, 263 Avenue Général Leclerc, 35000 Rennes, building IRISA/Inria 12 F, allée Jean Perrin

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Arnaud Virlet
Thanks for your reply. On 06/08/2015 12:31 PM, Andrey Korolyov wrote: On Mon, Jun 8, 2015 at 1:24 PM, Arnaud Virlet avir...@easter-eggs.com wrote: Hi. We currently use libvirt VMs with a Ceph RBD pool for storage. By default we want disk cache=writeback for all disks in libvirt. In

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Jan Schermer
Isn't the right parameter "network=writeback" for network devices like RBD? Jan On 08 Jun 2015, at 12:31, Andrey Korolyov and...@xdel.ru wrote: On Mon, Jun 8, 2015 at 1:24 PM, Arnaud Virlet avir...@easter-eggs.com wrote: Hi. We currently use libvirt VMs with a Ceph RBD pool for storage. By

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Andrey Korolyov
On Mon, Jun 8, 2015 at 3:44 PM, Arnaud Virlet avir...@easter-eggs.com wrote: On 06/08/2015 01:59 PM, Andrey Korolyov wrote: Do I understand you right that you are using a certain template engine for both OCFS- and RBD-backed volumes within a single VM's config, and that it does not allow per-disk

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Arnaud Virlet
On 06/08/2015 01:59 PM, Andrey Korolyov wrote: Do I understand you right that you are using a certain template engine for both OCFS- and RBD-backed volumes within a single VM's config, and that it does not allow per-disk cache mode separation in the suggested way? My VM has 3 disks on the RBD backend.

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Andrey Korolyov
On Mon, Jun 8, 2015 at 6:36 PM, Arnaud Virlet avir...@easter-eggs.com wrote: On 06/08/2015 03:17 PM, Andrey Korolyov wrote: On Mon, Jun 8, 2015 at 3:44 PM, Arnaud Virlet avir...@easter-eggs.com wrote: On 06/08/2015 01:59 PM, Andrey Korolyov wrote: Do I understand you right that you

Re: [ceph-users] Blueprint Submission Open for CDS Jewel

2015-06-08 Thread Haomai Wang
Hi Patrick, it looks confusing to use this. Do we need to upload a txt file to describe the blueprint instead of editing it directly online? On Wed, May 27, 2015 at 5:05 AM, Patrick McGarry pmcga...@redhat.com wrote: It's that time again, time to gird up our loins and submit blueprints for all

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Jason Dillaman
Hmm ... looking at the latest version of QEMU, it appears that the RBD cache settings are changed prior to reading the configuration file instead of overriding the value after the configuration file has been read [1]. Try specifying the path to a new configuration file via the
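
A sketch of that suggestion, pointing the drive at a second configuration file with caching disabled (the file name and pool/image are invented):

    # /etc/ceph/ceph-nocache.conf
    [client]
    rbd cache = false

    # then reference it from the drive's rbd filename options:
    qemu-system-x86_64 \
      -drive format=raw,file=rbd:rbd/vm1:conf=/etc/ceph/ceph-nocache.conf,cache=none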