Re: [ceph-users] how to test journal?

2017-11-21 Thread Loris Cuoghi
On Tue, 21 Nov 2017 10:52:43 +0200, Rudi Ahlers wrote: > [snip, snap] > > Maybe I'm confusing the terminology. I have created a DB and WAL > device for my Bluestore, but I presume that's not a journal > (anymore?). They never were; please read further in the
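
For context, BlueStore's DB and WAL devices are not FileStore journals; they hold RocksDB metadata and its write-ahead log. A minimal sketch of creating such an OSD with ceph-volume on Luminous (the data disk and NVMe partitions below are placeholder names):

    ceph-volume lvm create --bluestore --data /dev/sdb \
        --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2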

Re: [ceph-users] Moving bluestore WAL and DB after bluestore creation

2017-11-16 Thread Loris Cuoghi
On Wed, 15 Nov 2017 19:46:48 +, Shawn Edwards wrote: > On Wed, Nov 15, 2017, 11:07 David Turner > wrote: > > > I'm not going to lie. This makes me dislike Bluestore quite a > > bit. Using multiple OSDs to an SSD journal allowed you to

Re: [ceph-users] How to set up bluestore manually?

2017-07-03 Thread Loris Cuoghi
If yes, how to handle > them in the CRUSH map (I have different categories of OSD hosts for > different use cases, split by appropriate CRUSH rules). > > Thanks > > Martin > > -Original Message- > From: Loris Cuoghi [mailto:loris.cuo...@artific

Re: [ceph-users] How to set up bluestore manually?

2017-07-03 Thread Loris Cuoghi
On Mon, 3 Jul 2017 11:30:04 +, Martin Emrich wrote: > Hi! > > Thanks for the hint, but I get this error: > > [ceph_deploy][ERROR ] ConfigError: Cannot load config: [Errno 2] No > such file or directory: 'ceph.conf'; has `ceph-deploy new` been run > in this
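
That error usually just means ceph-deploy was not run from the working directory holding the cluster's ceph.conf. A minimal sketch, with a placeholder monitor hostname:

    mkdir my-cluster && cd my-cluster
    ceph-deploy new mon1    # generates ceph.conf in the current directory
    # run all further ceph-deploy commands from this same directory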

Re: [ceph-users] CephFS

2017-01-17 Thread Loris Cuoghi
Hello, On 17/01/2017 at 13:38, Kingsley Tart wrote: How did you find the fuse client performed? I'm more interested in the fuse client because I'd like to use CephFS for shared volumes, and my understanding of the kernel client is that it uses the volume as a block device. I think you're
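
For reference, both CephFS clients mount a shared filesystem (neither exposes it as a block device; the block-device case is RBD). A rough sketch of the two mounts, with placeholder monitor and paths:

    # FUSE client
    ceph-fuse -m mon1:6789 /mnt/cephfs
    # kernel client
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret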

Re: [ceph-users] ceph.com outages

2017-01-16 Thread Loris Cuoghi
> > On Mon, Jan 16, 2017 at 9:07 AM, Patrick McGarry <pmcga...@redhat.com> wrote: >> Hey cephers, >> >> Please bear with us as we migrate ceph.com as there may be some >> outages. They should be quick and over soon. Thanks!

Re: [ceph-users] How to fix: HEALTH_ERR 45 pgs are stuck inactive for more than 300 seconds; 19 pgs degraded; 45 pgs stuck inactive; 19 pgs stuck unclean; 19 pgs undersized; recovery 2514/5028 objects

2017-01-16 Thread Loris Cuoghi
Hello, On 16/01/2017 at 11:50, Stéphane Klein wrote: Hi, I have two OSD and Mon nodes. I'm going to add a third OSD and Mon to this cluster, but before that I want to fix this error: > > [SNIP SNAP] You've just created your cluster. With the standard CRUSH rules you need one OSD on three
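
With the default replication size of 3 and a host-level failure domain, a two-host cluster can never place all replicas, hence the undersized/degraded PGs. Adding the third host is the real fix; as a stopgap (at the cost of redundancy) the pool size can be lowered. A sketch, assuming a pool named rbd:

    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1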

Re: [ceph-users] rbd cache mode with qemu

2016-08-30 Thread Loris Cuoghi
Hello, On 30/08/2016 at 14:08, Steffen Weißgerber wrote: Hello, after correcting the configuration for different qemu VMs with rbd disks (we removed the cache=writethrough option to have the default writeback mode) we have a strange behaviour after restarting the VMs. For most of them the

Re: [ceph-users] osds udev rules not triggered on reboot (jewel, jessie)

2016-04-27 Thread Loris Cuoghi
they at least are started at boot time. Great! But only if the disks keep their device names, right? Best regards, Karsten 2016-04-27 13:57 GMT+02:00 Loris Cuoghi <l...@stella-telecom.fr>: On 27/04/2016 13:51, Karsten Heymann wrote: Hi Loris, thank you for your feedback. As I plan

Re: [ceph-users] osds udev rules not triggered on reboot (jewel, jessie)

2016-04-27 Thread Loris Cuoghi
really would expect Ceph to work out of the box with the version in jessie, so I'd rather try to find the cause of the problem and help fix it. That's my plan too, updating was just a troubleshooting method. ;) best regards 2016-04-27 13:36 GMT+02:00 Loris Cuoghi <l...@stella-telecom

Re: [ceph-users] osds udev rules not triggered on reboot (jewel, jessie)

2016-04-27 Thread Loris Cuoghi
Hi Karsten, I've had the same experience updating our test cluster (Debian 8) from Infernalis to Jewel. I've updated udev/systemd to the version in testing (so, from 215 to 229), and it worked much better at reboot. So... Are the udev rules written for the udev version in Red Hat (219) or
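
Until the udev rules cooperate, a possible workaround (untested on this exact setup; the OSD id below is a placeholder) is to activate the prepared OSDs once and let systemd start them at boot instead of relying on udev:

    ceph-disk activate-all          # activate every prepared OSD partition
    systemctl enable ceph-osd@0     # repeat per OSD id so systemd starts it at boot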

Re: [ceph-users] Qemu+RBD recommended cache mode and AIO settings

2016-03-22 Thread Loris Cuoghi
Hi Jason, On 22/03/2016 at 14:12, Jason Dillaman wrote: We actually recommend that OpenStack be configured to use writeback cache [1]. If the guest OS is properly issuing flush requests, the cache will still provide crash-consistency. By default, the cache will automatically start up in
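
The behaviour described here (the cache starting in writethrough and switching to writeback only after the first guest flush) is driven by the client-side librbd options; a minimal ceph.conf sketch:

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true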

Re: [ceph-users] Qemu+RBD recommended cache mode and AIO settings

2016-03-22 Thread Loris Cuoghi
Hi Wido, On 22/03/2016 at 13:52, Wido den Hollander wrote: Hi, I've been looking on the internet regarding two settings which might influence performance with librbd. When attaching a disk with Qemu you can set a few things: - cache - aio The default for libvirt (in both CloudStack and
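
Both knobs end up as attributes of the disk's driver element in the libvirt domain XML; a sketch of an RBD disk definition (pool, image and monitor names are placeholders):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' io='threads'/>
      <source protocol='rbd' name='vms/vm-disk-1'>
        <host name='mon1' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>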

Re: [ceph-users] cephx capabilities to forbid rbd creation

2016-03-15 Thread Loris Cuoghi
would like to prove us wrong? :) On 15/03/2016 at 22:33, David Casier wrote: > Hi, > Maybe (not tested): > [osd] allow * object_prefix ? > > > > 2016-03-15 22:18 GMT+01:00 Loris Cuoghi <l...@stella-telecom.fr>: >> Hi David, >> >> One pool per virtual

Re: [ceph-users] cephx capabilities to forbid rbd creation

2016-03-15 Thread Loris Cuoghi
2016-02-12 3:34 GMT+01:00 Loris Cuoghi <l...@stella-telecom.fr>: >> Hi! >> >> We are on version 9.2.0, 5 mons and 80 OSDs distributed on 10 hosts. >> >> How could we twist cephx capabilities so as to forbid our KVM+QEMU+libvirt >> hosts any RBD creati

Re: [ceph-users] Can I rebuild object maps while VMs are running ?

2016-03-08 Thread Loris Cuoghi
On 07/03/2016 at 17:58, Jason Dillaman wrote: > Documentation of these new RBD features is definitely lacking and I've opened a tracker ticket to improve it [1]. > > [1] http://tracker.ceph.com/issues/15000 > Hey, thank you Jason! :) > That's disheartening to hear that your RBD images were
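
For the record, the operations discussed in this thread roughly look like the following (the image name is a placeholder; object-map requires exclusive-lock, and fast-diff requires object-map):

    rbd feature enable vms/vm-disk-1 exclusive-lock object-map fast-diff
    rbd object-map rebuild vms/vm-disk-1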

Re: [ceph-users] Can I rebuild object maps while VMs are running ?

2016-03-05 Thread Loris Cuoghi
On 05/03/2016 at 09:35, Lindsay Mathieson wrote: > On 5/03/2016 3:31 AM, Christoph Adomeit wrote: >> I just updated our ceph-cluster to infernalis and now I want to >> enable the new image features. > > > Semi related - is there a description of these new features somewhere? > Been there, done

[ceph-users] cephx capabilities to forbid rbd creation

2016-02-11 Thread Loris Cuoghi
Hi! We are on version 9.2.0, 5 mons and 80 OSDs distributed on 10 hosts. How could we twist cephx capabilities so as to forbid our KVM+QEMU+libvirt hosts any RBD creation capability? We currently have an rbd-user key like so: caps: [mon] allow r caps: [osd] allow x
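
As a point of comparison, a tighter capability set (a sketch with a placeholder pool name, untested for this exact goal) confines the client to a single pool rather than granting execute on everything; on its own it still does not prevent creating images inside that pool, which is the open question in this thread:

    ceph auth caps client.rbd-user \
        mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms'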

Re: [ceph-users] Max Replica Size

2016-02-09 Thread Loris Cuoghi
Pool ;) On 10/02/2016 at 06:43, Shinobu Kinjo wrote: What is poll? Rgds, Shinobu - Original Message - From: "Swapnil Jain" To: ceph-users@lists.ceph.com Sent: Wednesday, February 10, 2016 2:20:08 PM Subject: [ceph-users] Max Replica Size Hi, What is the maximum

Re: [ceph-users] Ceph rdb question about possibilities

2016-01-28 Thread Loris Cuoghi
On 28/01/2016 at 11:06, Sándor Szombat wrote: Hello all! I checked CephFS, but it is still beta, unfortunately. I've started working with RBD. Is it possible to create an image in a pool, mount it as a block device (for example /dev/rbd0), format it like a regular disk, and mount it on 2 hosts? I tried
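
The RBD part of the question is straightforward (sketch below, with a placeholder image name), but mounting an ordinary filesystem such as XFS or ext4 from two hosts at once will corrupt it; concurrent access needs a cluster filesystem on top of the image, or CephFS instead:

    rbd create rbd/shared-image --size 10240   # size in MB
    rbd map rbd/shared-image                   # appears as /dev/rbd0
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt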

Re: [ceph-users] OSD behavior, in case of its journal disk (either HDD or SSD) failure

2016-01-25 Thread Loris Cuoghi
On 25/01/2016 at 15:28, Mihai Gheorghe wrote: As far as I know you will not lose data, but it will be inaccessible until you bring the journal back online. http://www.sebastien-han.fr/blog/2014/11/27/ceph-recover-osds-after-ssd-journal-failure/ After this, we should be able to restart the
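
The linked post boils down to roughly the following (a sketch only, with a placeholder OSD id; flushing requires the old journal to still be readable, and an uncleanly lost journal can mean lost writes, in which case rebuilding the OSD is the safer route):

    systemctl stop ceph-osd@3        # or: service ceph stop osd.3
    ceph-osd -i 3 --flush-journal    # only possible while the old journal is readable
    # point the journal symlink at the new device, then:
    ceph-osd -i 3 --mkjournal
    systemctl start ceph-osd@3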

Re: [ceph-users] Ceph Write process

2016-01-25 Thread Loris Cuoghi
On 25/01/2016 at 11:04, Sam Huracan wrote: > Hi Cephers, > > When a Ceph write is made, does it write to all File Stores of the Primary OSD > and Secondary OSDs before sending the ACK to the client, or does it write to the journal > of the OSD and send the ACK without writing to the File Store? > > I think it would write to

Re: [ceph-users] Intel S3710 400GB and Samsung PM863 480GB fio results

2015-12-23 Thread Loris Cuoghi
On 22/12/2015 at 20:03, koukou73gr wrote: Even the cheapest stuff nowadays has some more or less decent wear leveling algorithm built into its controller, so this won't be a problem. Wear leveling algorithms cycle the blocks internally so wear evens out on the whole disk. But it would wear

Re: [ceph-users] ceph journal failed?

2015-12-22 Thread Loris Cuoghi
On 22/12/2015 at 09:42, yuyang wrote: Hello, everyone, [snip snap] Hi > If the SSD fails or goes down, can the OSD still work? > Is the OSD down, or can it only be read? If you don't have a journal anymore, the OSD has already quit, as it can't continue writing, nor can it assure data consistency, since

Re: [ceph-users] active+undersized+degraded

2015-12-17 Thread Loris Cuoghi
On 17/12/2015 at 13:57, Loris Cuoghi wrote: On 17/12/2015 at 13:52, Burkhard Linke wrote: Hi, On 12/17/2015 01:41 PM, Dan Nica wrote: And the osd tree: $ ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -1 21.81180 root default -2 21.81180 host rimu 0 7.27060

Re: [ceph-users] active+undersized+degraded

2015-12-17 Thread Loris Cuoghi
On 17/12/2015 at 13:52, Burkhard Linke wrote: Hi, On 12/17/2015 01:41 PM, Dan Nica wrote: And the osd tree: $ ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -1 21.81180 root default -2 21.81180 host rimu 0 7.27060 osd.0 up 1.0 1.0
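
The tree shows a single host ('rimu') holding all OSDs, while the default CRUSH rule spreads replicas across hosts, so the PGs stay undersized. Adding more hosts is the proper fix; for a test setup, a rule with an OSD-level failure domain can be used instead (a sketch in pre-Luminous syntax; check the assigned rule id with ceph osd crush rule dump):

    ceph osd crush rule create-simple replicated_osd default osd
    ceph osd pool set rbd crush_ruleset 1    # assuming the new rule got id 1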

Re: [ceph-users] SSD only pool without journal

2015-12-17 Thread Loris Cuoghi
On 17/12/2015 at 16:47, Misa wrote: Hello everyone, does it make sense to create an SSD-only pool from OSDs without a journal? From my point of view, the SSDs are so fast that an OSD journal on the SSD will not make much of a difference. Cheers Misa

Re: [ceph-users] [crush] Selecting the current rack

2015-11-25 Thread Loris Cuoghi
On 25/11/2015 at 14:37, Emmanuel Lacour wrote: On 24/11/2015 at 21:48, Gregory Farnum wrote: Yeah, this is the old "two copies in one rack, a third copy elsewhere" replication scheme that lots of stuff likes but CRUSH doesn't really support. Assuming new enough clients and servers (some of the
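
The usual approximation (a sketch, not necessarily what was proposed further down this thread) is a rule that picks two racks and two hosts in each; with a pool size of 3 this yields two copies in one rack and the third in another:

    rule two_racks {
            ruleset 1
            type replicated
            min_size 2
            max_size 4
            step take default
            step choose firstn 2 type rack
            step chooseleaf firstn 2 type host
            step emit
    }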

Re: [ceph-users] Changing CRUSH map ids

2015-11-02 Thread Loris Cuoghi
ts cluster. Thanks All :) On 02/11/2015 at 16:14, Gregory Farnum wrote: Regardless of what the crush tool does, I wouldn't muck around with the IDs of the OSDs. The rest of Ceph will probably not handle it well if the crush IDs don't match the OSD numbers. -Greg On Monday, November 2, 2

[ceph-users] Changing CRUSH map ids

2015-11-02 Thread Loris Cuoghi
Hi All, We're currently on version 0.94.5 with three monitors and 75 OSDs. I've peeked at the decompiled CRUSH map, and I see that all IDs are commented with '# Here be dragons!', or more literally: '# do not change unnecessarily'. Now, what would happen if an incautious user would happen
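
For context, the usual round trip for inspecting and editing the map looks like this (filenames are arbitrary); the id comments warn against renumbering because changing bucket/OSD ids alters placement and breaks the mapping to the actual OSDs:

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt, then recompile and inject it:
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new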

Re: [ceph-users] Changing CRUSH map ids

2015-11-02 Thread Loris Cuoghi
On 02/11/2015 at 12:47, Wido den Hollander wrote: On 02-11-15 12:30, Loris Cuoghi wrote: Hi All, We're currently on version 0.94.5 with three monitors and 75 OSDs. I've peeked at the decompiled CRUSH map, and I see that all IDs are commented with '# Here be dragons!', or more literally