Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-08-29 Thread Sebastien Han
your ceph setting) didn’t bring much, now I can reach 3,5K IOPS. By any chance, would it be possible for you to test with a single OSD SSD? On 28 Aug 2014, at 18:11, Sebastien Han sebastien@enovance.com wrote: Hey all, It has been a while since the last performance-related thread on the ML

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-08-29 Thread Sebastien Han
@Dan: thanks for sharing your config, with all your flags I don’t seem to get more than 3,4K IOPS and they even seem to slow me down :( This is really weird. Yes I already tried to run two simultaneous processes and got only half of 3,4K for each of them. @Kasper: thanks for these results, I believe

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-01 Thread Sebastien Han
, at 11:13, Sebastien Han sebastien@enovance.com wrote: Mark, thanks a lot for experimenting with this for me. I’m gonna try master soon and will tell you how much I can get. It’s interesting to see that using 2 SSDs brings more performance, even though both SSDs are under-utilized… They should

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-02 Thread Sebastien Han
original - From: Sebastien Han sebastien@enovance.com To: Somnath Roy somnath@sandisk.com Cc: ceph-users@lists.ceph.com Sent: Tuesday 2 September 2014 02:19:16 Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS Mark and all, Ceph IOPS performance

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-02 Thread Sebastien Han
, Sebastien Han sebastien@enovance.com wrote: Hey, Well I ran an fio job that simulates (more or less) what ceph is doing (journal writes with dsync and o_direct) and the ssd gave me 29K IOPS too. I could do this, but for me it definitely looks like a major waste since we don’t even

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-02 Thread Sebastien Han
to bench them with firefly and master. Is a debian wheezy gitbuilder repository available? (I'm a bit lazy to compile all the packages) - Original mail - From: Sebastien Han sebastien@enovance.com To: Alexandre DERUMIER aderum...@odiso.com Cc: ceph-users@lists.ceph.com, Cédric

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-08 Thread Sebastien Han
/2014 22:11, Sebastien Han wrote: Hi Warren, What do you mean exactly by secure erase? At the firmware level with the manufacturer's software? The SSDs were pretty new so I don’t think we hit that sort of thing. I believe that only aged SSDs have this behaviour but I might be wrong. Sorry I forgot

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-16 Thread Sebastien Han
records written 70565888 bytes (71 MB) copied, 70.4098 s, 1.0 MB/s I'll do tests with an intel s3500 tomorrow to compare - Original mail - From: Sebastien Han sebastien@enovance.com To: Warren Wang warren_w...@cable.comcast.com Cc: ceph-users@lists.ceph.com Sent: Monday 8

Re: [ceph-users] vdb busy error when attaching to instance

2014-09-16 Thread Sebastien Han
Did you follow this ceph.com/docs/master/rbd/rbd-openstack/ to configure your env? On 12 Sep 2014, at 14:38, m.channappa.nega...@accenture.com wrote: Hello Team, I have configured ceph as a multibackend for openstack. I have created 2 pools. 1. Volumes (replication size = 3)
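For reference, a minimal sketch of the pool and client setup that document walks through (pool names, PG counts and caps below are illustrative, not taken from the original mail):
$ ceph osd pool create volumes 128
$ ceph osd pool set volumes size 3
$ ceph osd pool create images 128
$ ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
$ ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'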

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-23 Thread Sebastien Han
] On Behalf Of Sebastien Han Sent: Tuesday, September 16, 2014 9:33 PM To: Alexandre DERUMIER Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS Hi, Thanks for keeping us updated on this subject. dsync is definitely killing

Re: [ceph-users] rbd + openstack nova instance snapshots?

2014-10-01 Thread Sebastien Han
Hi, Unfortunately this is expected. If you take a snapshot you should not expect a clone but a RBD snapshot. Please see this BP: https://blueprints.launchpad.net/nova/+spec/implement-rbd-snapshots-instead-of-qemu-snapshots A major part of the code is ready, however we missed nova-specs feature

Re: [ceph-users] rbd + openstack nova instance snapshots?

2014-10-01 Thread Sebastien Han
On 01 Oct 2014, at 15:26, Jonathan Proulx j...@jonproulx.com wrote: On Wed, Oct 1, 2014 at 2:57 AM, Sebastien Han sebastien@enovance.com wrote: Hi, Unfortunately this is expected. If you take a snapshot you should not expect a clone but a RBD snapshot. Unfortunate that it doesn't

Re: [ceph-users] RBD on openstack glance+cinder CoW?

2014-10-08 Thread Sebastien Han
Hum I just tried on a devstack and on firefly stable, it works for me. Looking at your config it seems that the glance_api_version=2 is put in the wrong section. Please move it to [DEFAULT] and let me know if it works. On 08 Oct 2014, at 14:28, Nathan Stratton nat...@robotics.net wrote: On
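For clarity, a hedged sketch of the placement being suggested in cinder.conf (the backend section name below is illustrative):
[DEFAULT]
glance_api_version = 2
enabled_backends = rbd

[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes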

Re: [ceph-users] Micro Ceph summit during the OpenStack summit

2014-10-13 Thread Sebastien Han
Hey all, I just saw this thread, I’ve been working on this and was about to share it: https://etherpad.openstack.org/p/kilo-ceph Since the ceph etherpad is down I think we should switch to this one as an alternative. Loic, feel free to work on this one and add more content :). On 13 Oct 2014,

Re: [ceph-users] Performance doesn't scale well on a full ssd cluster.

2014-10-16 Thread Sebastien Han
Mark, please read this: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg12486.html On 16 Oct 2014, at 19:19, Mark Wu wud...@gmail.com wrote: Thanks for the detailed information. but I am already using fio with the rbd engine. Almost 4 volumes can reach the peak. On 17 October 2014

Re: [ceph-users] All SSD storage and journals

2014-10-27 Thread Sebastien Han
There were some investigations as well around F2FS (https://www.kernel.org/doc/Documentation/filesystems/f2fs.txt), but the last time I tried to install an OSD dir under f2fs it failed. I tried to run the OSD on f2fs, however ceph-osd mkfs got stuck on an xattr test: fremovexattr(10,

Re: [ceph-users] Tool or any command to inject metadata/data corruption on rbd

2014-12-04 Thread Sebastien Han
AFAIK there is no tool to do this. You simply rm the object or dd new content into the object (fill it with zeros). On 04 Dec 2014, at 13:41, Mallikarjun Biradar mallikarjuna.bira...@gmail.com wrote: Hi all, I would like to know which tool or cli that all users are using to simulate
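A minimal sketch of that approach, assuming a pool named rbd and a hypothetical object name (both illustrative):
$ rados -p rbd ls | head                              # pick a victim object
$ rados -p rbd rm rbd_data.1234.0000000000            # remove the object entirely, or:
$ dd if=/dev/zero of=/tmp/zero bs=4k count=1
$ rados -p rbd put rbd_data.1234.0000000000 /tmp/zero # overwrite its content with zeros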

Re: [ceph-users] Suitable SSDs for journal

2014-12-04 Thread Sebastien Han
Eneko, I do have a plan to push a performance initiative section to ceph.com/docs sooner or later, so people can submit their own results through GitHub PRs. On 04 Dec 2014, at 16:09, Eneko Lacunza elacu...@binovo.es wrote: Thanks, will look back in the list archive. On 04/12/14 15:47,

Re: [ceph-users] Watch for fstrim running on your Ubuntu systems

2014-12-09 Thread Sebastien Han
Good to know. Thanks for sharing! On 09 Dec 2014, at 10:21, Wido den Hollander w...@42on.com wrote: Hi, Last sunday I got a call early in the morning that a Ceph cluster was having some issues. Slow requests and OSDs marking each other down. Since this is a 100% SSD cluster I was a bit

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Sebastien Han
Discard works with virtio-scsi controllers for disks in QEMU. Just use discard=unmap in the disk section (scsi disk). On 12 Dec 2014, at 13:17, Max Power mailli...@ferienwohnung-altenbeken.de wrote: Wido den Hollander w...@42on.com hat am 12. Dezember 2014 um 12:53 geschrieben: It
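A hedged example of the libvirt disk definition this refers to (pool, image and target names are illustrative); discard=unmap only takes effect when the disk sits on a virtio-scsi controller:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <source protocol='rbd' name='volumes/vmdisk01'/>
  <target dev='sda' bus='scsi'/>
</disk>
<controller type='scsi' model='virtio-scsi'/>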

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Sebastien Han
Hi, The general recommended ratio (for me at least) is 3 journals per SSD. Using a 200GB Intel DC S3700 is great. If you’re going with a low perf scenario I don’t think you should bother buying SSDs, just remove them from the picture and do 12 SATA 7.2K 4TB. For medium and medium ++ perf using

Re: [ceph-users] Openstack Nova not removing RBD volumes after removing of instance

2014-04-04 Thread Sebastien Han
: @enovance On 04 Apr 2014, at 09:56, Mariusz Gronczewski mariusz.gronczew...@efigence.com wrote: Nope, one from RDO packages http://openstack.redhat.com/Main_Page On Thu, 3 Apr 2014 23:22:15 +0200, Sebastien Han sebastien@enovance.com wrote: Are you running Havana with josh’s branch? (https

Re: [ceph-users] ceph osd creation error ---Please help me

2014-04-08 Thread Sebastien Han
Try ceph auth del osd.1 And then repeat step 6 Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 11 bis, rue Roquépine - 75008 Paris Web : www.enovance.com - Twitter : @enovance On 08
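For context, a hedged sketch of the delete/re-register sequence (the keyring path assumes the default OSD data directory and may differ in your setup):
$ ceph auth del osd.1
$ ceph auth add osd.1 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-1/keyring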

Re: [ceph-users] ceph-brag installation

2014-04-22 Thread Sebastien Han
Hey Loïc, The machine was set up a while ago :). The server side is ready, there is just no graphical interface, everything appears as plain text. It’s not necessary to upgrade. Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72

Re: [ceph-users] rdb - huge disk - slow ceph

2014-04-22 Thread Sebastien Han
To speed up the deletion, you can remove the rbd_header object (if the image is empty) and then remove the image. For example: $ rados -p rbd ls huge.rbd rbd_directory $ rados -p rbd rm huge.rbd $ time rbd rm huge 2013-12-10 09:35:44.168695 7f9c4a87d780 -1 librbd::ImageCtx: error finding header: (2) No
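The same sequence laid out step by step (using the image and pool names from the example above):
$ rados -p rbd ls          # shows huge.rbd and rbd_directory
$ rados -p rbd rm huge.rbd # remove the header object of the (empty) image
$ time rbd rm huge         # the image removal now returns quickly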

Re: [ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-04-25 Thread Sebastien Han
This is a COW clone, but the BP you pointed to doesn’t match the feature you described. This might explain Greg’s answer. The BP refers to the libvirt_image_type functionality for Nova. What do you get now when you try to create a volume from an image? Sébastien Han Cloud Engineer Always

Re: [ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-04-25 Thread Sebastien Han
giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 11 bis, rue Roquépine - 75008 Paris Web : www.enovance.com - Twitter : @enovance On 25 Apr 2014, at 16:37, Sebastien Han sebastien@enovance.com wrote: g signature.asc Description: Message signed

Re: [ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-04-28 Thread Sebastien Han
: @enovance On 25 Apr 2014, at 18:16, Sebastien Han sebastien@enovance.com wrote: I just tried, I have the same problem, it looks like a regression… It’s weird because the code didn’t change that much during the Icehouse cycle. I just reported the bug here: https://bugs.launchpad.net/cinder

Re: [ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-04-28 Thread Sebastien Han
- Twitter : @enovance On 28 Apr 2014, at 16:10, Maciej Gałkiewicz mac...@shellycloud.com wrote: On 28 April 2014 15:58, Sebastien Han sebastien@enovance.com wrote: FYI It’s fixed here: https://review.openstack.org/#/c/90644/1 I already have this patch and it didn't help. Have it fixed

Re: [ceph-users] Help -Ceph deployment in Single node Like Devstack

2014-05-09 Thread Sebastien Han
http://www.sebastien-han.fr/blog/2014/05/01/vagrant-up-install-ceph-in-one-command/ Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 11 bis, rue Roquépine - 75008 Paris Web :

Re: [ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-05-15 Thread Sebastien Han
, at 09:02, Maciej Gałkiewicz mac...@shellycloud.com wrote: On 15 May 2014 04:05, Maciej Gałkiewicz mac...@shellycloud.com wrote: On 28 April 2014 16:11, Sebastien Han sebastien@enovance.com wrote: Yes yes, just restart cinder-api and cinder-volume. It worked for me. In my case the image

Re: [ceph-users] Storage Multi Tenancy

2014-05-16 Thread Sebastien Han
Jeroen, Actually this is more a question for the OpenStack ML. None of the use cases you described are possible at the moment. The only thing you can get is shared resources across all the tenants, you can’t really pin any resource to a specific tenant. This could be done I guess, but not

[ceph-users] Is it still unsafe to map a RBD device on an OSD server?

2014-06-10 Thread Sebastien Han
Hi all, A couple of years ago, I heard that it wasn’t safe to map a krbd block on an OSD host. It was more or less like mounting an NFS mount on the NFS server: we can potentially end up with some deadlocks. At least, I tried again recently and didn’t encounter any problem. What do you think?

Re: [ceph-users] question about feature set mismatch

2014-06-10 Thread Sebastien Han
FYI I encountered the same problem for krbd, removing the ec pool didn’t solve my problem. I’m running 3.13 Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood. Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 11 bis, rue Roquépine - 75008

Re: [ceph-users] Is it still unsafe to map a RBD device on an OSD server?

2014-06-10 Thread Sebastien Han
, Jean-Charles LOPEZ jeanchlo...@mac.com wrote: Hi Sébastien, still the case. Depending on what you do, the OSD process will get to a hang and will suicide. Regards JC On Jun 10, 2014, at 09:46, Sebastien Han sebastien@enovance.com wrote: Hi all, A couple of years ago, I

Re: [ceph-users] qemu image create failed

2014-07-15 Thread Sebastien Han
Can you connect to your Ceph cluster? You can pass options to the cmd line like this: $ qemu-img create -f rbd rbd:instances/vmdisk01:id=leseb:conf=/etc/ceph/ceph-leseb.conf 2G Cheers. Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood. Phone: +33 (0)1 49 70 99
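If useful, the same connection options can be passed to inspect the image afterwards (user id and conf path are the ones from the example):
$ qemu-img info rbd:instances/vmdisk01:id=leseb:conf=/etc/ceph/ceph-leseb.conf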

Re: [ceph-users] Moving Journal to SSD

2014-08-11 Thread Sebastien Han
Hi Dane, If you deployed with ceph-deploy, you will see that the journal is just a symlink. Take a look at /var/lib/ceph/osd/osd-id/journal The link should point to the first partition of your hard disk drive, so no filesystem for the journal, just a block device. Roughly you should try:
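The quoted mail is cut off here; a hedged sketch of the usual journal-move procedure (OSD id, device name and service commands are illustrative, not necessarily what the original mail contained):
$ stop ceph-osd id=0                 # or: service ceph stop osd.0
$ ceph-osd -i 0 --flush-journal
$ ln -sf /dev/sdb1 /var/lib/ceph/osd/ceph-0/journal
$ ceph-osd -i 0 --mkjournal
$ start ceph-osd id=0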

Re: [ceph-users] presentation videos from Ceph Day London?

2013-10-31 Thread Sebastien Han
Nothing has been recorded as far as I know. However I’ve seen some guys from Scality recording sessions with a cam. Scality? Are you there? :) Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com

Re: [ceph-users] Intel 520/530 SSD for ceph

2013-11-22 Thread Sebastien Han
I used a blocksize of 350k as my graphs show me that this is the average workload we have on the journal. Pretty interesting metric Stefan. Has anyone seen the same behaviour? Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72

Re: [ceph-users] LevelDB Backend For Ceph OSD Preview

2013-11-25 Thread Sebastien Han
Nice job Haomai! Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Victoire - 75009 Paris Web : www.enovance.com - Twitter : @enovance On 25 Nov 2013, at 02:50, Haomai

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-25 Thread Sebastien Han
Hi, 1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/) This has been in production for more than a year now and heavily tested before. Performance was not expected since the frontend servers mainly do reads (90%). Cheers. Sébastien Han Cloud Engineer Always give

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-25 Thread Sebastien Han
:39 AM, Sebastien Han sebastien@enovance.com wrote: Hi, 1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/) This has been in production for more than a year now and heavily tested before. Performance was not expected since the frontend servers mainly do reads (90

Re: [ceph-users] LevelDB Backend For Ceph OSD Preview

2013-11-26 Thread Sebastien Han
: 10, rue de la Victoire - 75009 Paris Web : www.enovance.com - Twitter : @enovance On 25 Nov 2013, at 10:00, Sebastien Han sebastien@enovance.com wrote: Nice job Haomai! Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99

Re: [ceph-users] how to Testing cinder and glance with CEPH

2013-11-26 Thread Sebastien Han
Hi, Well after restarting the services run: $ cinder create 1 Then you can check both status in Cinder and Ceph: For Cinder run: $ cinder list For Ceph run: $ rbd -p cinder-pool ls If the image is there, you’re good. Cheers. Sébastien Han Cloud Engineer Always give 100%. Unless

Re: [ceph-users] Docker

2013-11-29 Thread Sebastien Han
Hi guys! Some experiment here: http://www.sebastien-han.fr/blog/2013/09/19/how-I-barely-got-my-first-ceph-mon-running-in-docker/ Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10,

Re: [ceph-users] Journal, SSD and OS

2013-12-05 Thread Sebastien Han
Hi guys, I won’t do a RAID 1 with SSDs since they both write the same data. Thus, they are more likely to “almost” die at the same time. What I will try to do instead is to use both disks in JBOD mode (or degraded RAID0). Then I will create a tiny root partition for the OS. Then I’ll still have

Re: [ceph-users] Journal, SSD and OS

2013-12-06 Thread Sebastien Han
way to monitor SSD write life SMART data - at least it gives a guide as to device condition compared to its rated life. That can be done with smartmontools, but it would be nice to have it on the InkTank dashboard for example. On 2013-12-05 14:26, Sebastien Han wrote: Hi guys, I won’t

Re: [ceph-users] My experience with ceph now documentted

2013-12-17 Thread Sebastien Han
The ceph doc is currently being updated. See https://github.com/ceph/ceph/pull/906 Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Victoire - 75009 Paris Web :

Re: [ceph-users] [Ceph-community] Ceph User Committee elections : call for participation

2014-01-01 Thread Sebastien Han
...@dachary.org wrote: On 01/01/2014 02:39, Sebastien Han wrote: Hi, I’m not sure I have full visibility of the role but I will be more than happy to take over. I believe that I can allocate some time for this. Your name is added to the http://pad.ceph.com/p/ceph-user-committee

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Sebastien Han
Hi Alexandre, Are you going with a 10Gb network? It’s not an issue for IOPS but more for the bandwidth. If so read the following: I personally won’t go with a ratio of 1:6 for the journal. I guess 1:5 (or even 1:4) is preferable. SAS 10K gives you around 140MB/sec for sequential writes. So if

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Sebastien Han
Hum the Crucial m500 is pretty slow. The biggest one doesn’t even reach 300MB/s. Intel DC S3700 100G showed around 200MB/sec for us. Actually, I don’t know the price difference between the crucial and the intel but the intel looks more suitable for me. Especially after Mark’s comment.

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Sebastien Han
On 15 Jan 2014, at 15:46, Stefan Priebe s.pri...@profihost.ag wrote: Am 15.01.2014 15:44, schrieb Mark Nelson: On 01/15/2014 08:39 AM, Stefan Priebe wrote: Am 15.01.2014 15:34, schrieb Sebastien Han: Hum the Crucial m500 is pretty slow. The biggest one doesn’t even reach 300MB/s. Intel DC

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Sebastien Han
- 75009 Paris Web : www.enovance.com - Twitter : @enovance On 15 Jan 2014, at 15:49, Sebastien Han sebastien@enovance.com wrote: Sorry I was only looking at the 4K aligned results. Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1

Re: [ceph-users] Openstack Havana release installation with ceph

2014-01-24 Thread Sebastien Han
Usually you would like to start here: http://ceph.com/docs/master/rbd/rbd-openstack/ Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Victoire - 75009 Paris Web :

Re: [ceph-users] OSD port usage

2014-01-24 Thread Sebastien Han
Greg, Do you have any estimation about how heartbeat messages use the network? How busy is it? At some point (if the cluster gets big enough), could this degrade the network performance? Will it make sense to have a separate network for this? So in addition to public and storage we will have

Re: [ceph-users] OSD port usage

2014-01-24 Thread Sebastien Han
I agree but somehow this generates more traffic too. We just need to find a good balance. But I don’t think this will change the scenario where the cluster network is down and OSDs die because of this… Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone:

Re: [ceph-users] OSD port usage

2014-01-24 Thread Sebastien Han
2014, at 18:22, Gregory Farnum g...@inktank.com wrote: On Friday, January 24, 2014, Sebastien Han sebastien@enovance.com wrote: Greg, Do you have any estimation about how heartbeat messages use the network? How busy is it? Not very. It's one very small message per OSD peer per...second

Re: [ceph-users] During copy new rbd image is totally thick

2014-02-03 Thread Sebastien Han
I have the same behaviour here. I believe this is somehow expected since you’re calling “copy”, clone will do the cow. Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la

Re: [ceph-users] get virtual size and used

2014-02-03 Thread Sebastien Han
Hi, $ rbd diff rbd/toto | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }' Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Victoire - 75009 Paris Web :

Re: [ceph-users] Meetup in Frankfurt, before the Ceph day

2014-02-05 Thread Sebastien Han
Hi Alexandre, We have a meet up in Paris. Please see: http://www.meetup.com/Ceph-in-Paris/events/158942372/ Cheers. Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la

Re: [ceph-users] Block Devices and OpenStack

2014-02-17 Thread Sebastien Han
Hi, Can I see your ceph.conf? I suspect that [client.cinder] and [client.glance] sections are missing. Cheers. Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Victoire -
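A hedged example of the sections in question (keyring paths follow the usual convention and may differ in your deployment):
[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring

[client.glance]
keyring = /etc/ceph/ceph.client.glance.keyring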

Re: [ceph-users] Block Devices and OpenStack

2014-02-17 Thread Sebastien Han
auth_service_required = cephx auth_client_required = cephx filestore_xattr_use_omap = true If I provide the admin.keyring file to the openstack node (in /etc/ceph) it works fine and the issue is gone. Thanks Ashish On Mon, Feb 17, 2014 at 2:03 PM, Sebastien Han sebastien@enovance.com wrote

Re: [ceph-users] Unable top start instance in openstack

2014-02-20 Thread Sebastien Han
Which distro and packages? libvirt_image_type is broken on cloud archive, please patch with https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd Cheers. Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail:

Re: [ceph-users] How to Configure Cinder to access multiple pools

2014-02-25 Thread Sebastien Han
Hi, Please have a look at the cinder multi-backend functionality: examples here: http://www.sebastien-han.fr/blog/2013/04/25/ceph-and-cinder-multi-backend/ Cheers. Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail:
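A minimal sketch of the multi-backend layout the linked post describes (section, pool and backend names are illustrative):
[DEFAULT]
enabled_backends = rbd-sas,rbd-ssd

[rbd-sas]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-sas
volume_backend_name = RBD_SAS

[rbd-ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-ssd
volume_backend_name = RBD_SSD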

Re: [ceph-users] storage

2014-02-25 Thread Sebastien Han
Hi, RBD blocks are stored as objects on a filesystem usually under: /var/lib/ceph/osd/osd.id/current/pg.id/ RBD is just an abstraction layer. Cheers. Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail:

Re: [ceph-users] Size of objects in Ceph

2014-02-25 Thread Sebastien Han
Hi, The value can be set during the image creation. Start with this: http://ceph.com/docs/master/man/8/rbd/#striping Followed by the example section. Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail:

Re: [ceph-users] qemu-rbd

2014-03-17 Thread Sebastien Han
There is a RBD engine for FIO, have a look at http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 11

Re: [ceph-users] qemu non-shared storage migration of nova instances?

2014-03-17 Thread Sebastien Han
Hi, I use the following live migration flags: VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST It deletes the libvirt.xml and re-creates it on the other side. Cheers. Sébastien Han Cloud Engineer Always give 100%. Unless you're giving blood.”
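For context, these flags are typically set through the nova.conf live_migration_flag option; a hedged sketch (on older releases the option sits at the top level rather than under [libvirt]):
[libvirt]
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST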

Re: [ceph-users] how to enable MDS service in a running Ceph cluster

2013-03-15 Thread Sebastien Han
Hi, * Edit `ceph.conf` and add an MDS section like so: [mds] mds data = /var/lib/ceph/mds/mds.$id keyring = /var/lib/ceph/mds/mds.$id/mds.$id.keyring [mds.0] host = {hostname} * Create the authentication key (if you use cephx): $ sudo ceph auth get-or-create mds.0 mds 'allow rwx' mds 'allow *' osd 'allow *'

Re: [ceph-users] using Ceph FS as OpenStack Glance's backend

2013-03-21 Thread Sebastien Han
Hi, Storing the image as an object with RADOS or RGW will result in a single big object stored somewhere in Ceph. However with RBD the image is spread across thousands of objects across the entire cluster. At the end, you get way more performance by using RBD since you intensively use the entire

Re: [ceph-users] Journal size

2013-03-27 Thread Sebastien Han
Yes I will :-), thank you for pointing this out to me. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood." PHONE: +33 (0)1 49 70 99 72 – MOBILE: +33 (0)6 52 84 44 70 EMAIL: sebastien@enovance.com – SKYPE: han.sbastien ADDRESS: 10, rue de la Victoire – 75009 Paris WEB

Re: [ceph-users] Object location

2013-03-27 Thread Sebastien Han
Ok, I just noticed that the documentation seems to be wrong, the correct command to find the location of an object is: $ ceph osd map pool-name object-name Then, the error that you raised is pretty strange because even if the object doesn't exist, the command will calculate the eventual location. Could

Re: [ceph-users] Object location

2013-03-27 Thread Sebastien Han
re – 75009 Paris WEB: www.enovance.com – TWITTER: @enovance On Mar 27, 2013, at 11:36 PM, Sebastien Han sebastien@enovance.com wrote: Ok, I just noticed that the documentation seems to be wrong, the correct command to find the location of an object is: $ ceph osd map pool-name object-name Then, the erro

Re: [ceph-users] Puppet modules for Ceph finally landed!

2013-03-28 Thread Sebastien Han
: +33 (0)1 49 70 99 72 – MOBILE: +33 (0)6 52 84 44 70 EMAIL: sebastien@enovance.com – SKYPE: han.sbastien ADDRESS: 10, rue de la Victoire – 75009 Paris WEB: www.enovance.com – TWITTER: @enovance On Mar 28, 2013, at 2:08 PM, Mark Nelson mark.nel...@inktank.com wrote: On 03/28/2013 04:34 AM, Sebastien Han w

Re: [ceph-users] Question about Backing Up RBD Volumes in Openstack

2013-04-09 Thread Sebastien Han
Dave, OpenStack runs the "qemu-img snapshot" command to create a snapshot, here's the method: https://github.com/openstack/nova/blob/stable/folsom/nova/virt/libvirt/utils.py#L335-L347 So the memory is _not_ saved, only the disk is. Note that it's always hard to make a consistent snapshot. I assume that

Re: [ceph-users] Online resizing RBD kernel module

2013-04-09 Thread Sebastien Han
Good to know that it also works for RBD qemu-driver. I'm not really surprised though :). Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood." PHONE: +33 (0)1 49 70 99 72 – MOBILE: +33 (0)6 52 84 44 70 EMAIL: sebastien@enovance.com – SKYPE: han.sbastien ADDRESS: 10, rue de la

Re: [ceph-users] qemu-1.4.2 rbd-fixed ubuntu packages

2013-05-28 Thread Sebastien Han
re – 75009 Paris Web: www.enovance.com – Twitter: @enovance On May 29, 2013, at 12:19 AM, Sebastien Han sebastien@enovance.com wrote: Wolgang, I'm interested, and I assume I'm not the only one, thus can't you just make it public for everyone? Thanks. Sébastien Han Cloud Engineer "Always give 100%. Unles

Re: [ceph-users] Live Migration: KVM-Libvirt Shared-storage

2013-06-05 Thread Sebastien Han
I did, what would you like to know? Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood." Phone: +33 (0)1 49 70 99 72 – Mobile: +33 (0)6 52 84 44 70 Email: sebastien@enovance.com – Skype: han.sbastien Address: 10, rue de la Victoire – 75009 Paris Web: www.enovance.com – Twitter

Re: [ceph-users] QEMU -drive setting (if=none) for rbd

2013-06-13 Thread Sebastien Han
OpenStack doesn't know how to set different caching options for attached block devices. See the following blueprint: https://blueprints.launchpad.net/nova/+spec/enable-rbd-tuning-options This might be implemented for Havana. Cheers. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving

Re: [ceph-users] Live Migrations with cephFS

2013-06-16 Thread Sebastien Han
In OpenStack, a VM booted from a volume (where the disk is located on RBD) supports live migration without any problems. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood." Phone: +33 (0)1 49 70 99 72 – Mobile: +33 (0)6 52 84 44 70 Email: sebastien@enovance.com – Skype

Re: [ceph-users] Live Migrations with cephFS

2013-06-17 Thread Sebastien Han
Thank you, Sebastien Han. I am sure many are thankful you've published your thoughts and experiences with Ceph and even OpenStack. Thanks Bo! :) If I may, I would like to reword my question/statement with greater clarity: To force all instances to always boot from RBD volumes, would a person would

Re: [ceph-users] Problem with multiple hosts RBD + Cinder

2013-06-20 Thread Sebastien Han
Hi, No, this must always be the same UUID. You can only specify one in cinder.conf. Btw nova does the attachment, this is why it needs the uuid and secret. The first secret import generates a UUID, then always re-use the same one for all your compute nodes, do something like: secret ephemeral='no'
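A hedged sketch of defining the same secret with a fixed UUID on every compute node (the UUID and key file below are placeholders):
$ cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
$ virsh secret-define --file secret.xml
$ virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key)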

Re: [ceph-users] Problem with multiple hosts RBD + Cinder

2013-06-21 Thread Sebastien Han
You're welcome, cool :) Yes, start from the libvirt section. Cheers! Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood." Phone: +33 (0)1 49 70 99 72 – Mobile: +33 (0)6 52 84 44 70 Email: sebastien@enovance.com – Skype: han.sbastien Address: 10, rue de la Victoire – 75009 Paris Web

[ceph-users] RADOS Bench strange behavior

2013-07-09 Thread Sebastien Han
Hi all, While running some benchmarks with the internal rados benchmarker I noticed something really strange. First of all, this is the line I used to run it: $ sudo rados -p 07:59:54_performance bench 300 write -b 4194304 -t 1 --no-cleanup So I want to test an IO with a concurrency of 1. I had a look

Re: [ceph-users] Openstack on ceph rbd installation failure

2013-07-23 Thread Sebastien Han
Can you send your ceph.conf too? Is /etc/ceph/ceph.conf present? Is the key of user volume present too? Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood." Phone: +33 (0)1 49 70 99 72 – Mobile: +33 (0)6 52 84 44 70 Email: sebastien@enovance.com – Skype: han.sbastien Address

Re: [ceph-users] RBD Mapping

2013-07-23 Thread Sebastien Han
Hi Greg, Just tried the list watchers on an rbd with the QEMU driver and I got: root@ceph:~# rados -p volumes listwatchers rbd_header.789c2ae8944a watcher=client.30882 cookie=1 I also tried with the kernel module but didn't see anything… No IP addresses anywhere… :/, any idea? Nice tip btw

Re: [ceph-users] RBD Mapping

2013-07-23 Thread Sebastien Han
Arf, no worries. Even after a quick dive into the logs, I haven't found anything (default log level). Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood." Phone: +33 (0)1 49 70 99 72 – Mobile: +33 (0)6 52 84 44 70 Email: sebastien@enovance.com – Skype: han.sbastien Address

Re: [ceph-users] Journals on all SSD cluster

2015-01-21 Thread Sebastien Han
It has been proven that the OSDs can’t take advantage of the SSD, so I’ll probably collocate both journal and osd data. Search the ML for [Single OSD performance on SSD] Can't go over 3, 2K IOPS You will see that there is no difference in terms of performance between the following: * 1 SSD

Re: [ceph-users] how do I show active ceph configuration

2015-01-21 Thread Sebastien Han
You can use the admin socket: $ ceph daemon mon.id config show or locally ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok config show On 21 Jan 2015, at 19:46, Robert Fantini robertfant...@gmail.com wrote: Hello Is there a way to see running / active ceph.conf configuration items?

Re: [ceph-users] Ceph as backend for Swift

2015-01-08 Thread Sebastien Han
You can have a look of what I did here with Christian: * https://github.com/stackforge/swift-ceph-backend * https://github.com/enovance/swiftceph-ansible If you have further question just let us know. On 08 Jan 2015, at 15:51, Robert LeBlanc rob...@leblancnet.us wrote: Anyone have a

Re: [ceph-users] reset osd perf counters

2015-01-14 Thread Sebastien Han
It was added in 0.90 On 13 Jan 2015, at 00:11, Gregory Farnum g...@gregs42.com wrote: perf reset on the admin socket. I'm not sure what version it went in to; you can check the release logs if it doesn't work on whatever you have installed. :) -Greg On Mon, Jan 12, 2015 at 2:26 PM,

Re: [ceph-users] Spark/Mesos on top of Ceph/Btrfs

2015-01-14 Thread Sebastien Han
Hey What do you want to use from Ceph? RBD? CephFS? It is not really clear, you mentioned ceph/btrfs which makes me either think of using btrfs for the OSD store or btrfs on top of an RBD device. Later you mentioned HDFS, does that mean you want to use CephFS? I don’t know much about Mesos, but

Re: [ceph-users] Sparse RBD instance snapshots in OpenStack

2015-03-12 Thread Sebastien Han
Several patches aim to solve that by using RBD snapshots instead of QEMU snapshots. Unfortunately I doubt we will have something ready for OpenStack Juno. Hopefully Liberty will be the release that fixes that. Having RAW images is not that bad since booting from that snapshot will do a clone.

Re: [ceph-users] ceph cluster on docker containers

2015-03-29 Thread Sebastien Han
You can have a look at: https://github.com/ceph/ceph-docker On 23 Mar 2015, at 17:16, Pavel V. Kaygorodov pa...@inasan.ru wrote: Hi! I'm using ceph cluster, packed to a number of docker containers. There are two things, which you need to know: 1. Ceph OSDs are using FS attributes,

Re: [ceph-users] OSD on LVM volume

2015-02-24 Thread Sebastien Han
A while ago, I managed to have this working but this was really tricky. See my comment here: https://github.com/ceph/ceph-ansible/issues/9#issuecomment-37127128 One use case I had was a system with 2 SSD for the OS and a couple of OSDs. Both SSD were in RAID1 and the system was configured with

[ceph-users] Ceph recovery network?

2015-04-26 Thread Sebastien Han
Hi list, While reading this http://ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-networks, I came across the following sentence: “You can also establish a separate cluster network to handle OSD heartbeat, object replication and recovery traffic” I didn’t know it was

Re: [ceph-users] Ceph is Full

2015-04-28 Thread Sebastien Han
You can try to push the full ratio a bit further and then delete some objects. On 28 Apr 2015, at 15:51, Ray Sun xiaoq...@gmail.com wrote: More detail about ceph health detail [root@controller ~]# ceph health detail HEALTH_ERR 20 pgs backfill_toofull; 20 pgs degraded; 20 pgs stuck unclean;
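A hedged sketch on a pre-Luminous cluster (ratio value, pool and object names are illustrative; lower the ratio back once space has been freed):
$ ceph pg set_full_ratio 0.98
$ ceph health detail | grep full
$ rados -p <pool> rm <large-object>      # or remove unused RBD images/snapshots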

Re: [ceph-users] Ceph is Full

2015-04-29 Thread Sebastien Han
With mon_osd_full_ratio you should restart the monitors and this shouldn’t be a problem. For the unclean PGs, it looks like something is preventing them from becoming healthy; look at the state of the OSDs responsible for these 2 PGs. On 29 Apr 2015, at 05:06, Ray Sun xiaoq...@gmail.com wrote: mon osd

Re: [ceph-users] Ceph recovery network?

2015-04-27 Thread Sebastien Han
'cluster network = network/mask' in your ceph.conf. It is useful to remember that replication, recovery and backfill traffic are pretty much the same thing, just at different points in time. On Sun, Apr 26, 2015 at 4:39 PM, Sebastien Han sebastien@enovance.com wrote: Hi list, While
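For reference, a minimal ceph.conf sketch of the two networks (subnets are illustrative):
[global]
public network  = 192.168.0.0/24
cluster network = 192.168.1.0/24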

Re: [ceph-users] Find out the location of OSD Journal

2015-05-11 Thread Sebastien Han
Under the OSD directory, you can look at where the symlink points. It is generally called ‘journal’ and should point to a device. On 06 May 2015, at 06:54, Patrik Plank p.pl...@st-georgen-gusen.at wrote: Hi, I can't remember on which drive I installed which OSD journal :-|| Is there any
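A quick way to check it, assuming the default data path and OSD id 0 (both illustrative):
$ ls -l /var/lib/ceph/osd/ceph-0/journal
$ readlink -f /var/lib/ceph/osd/ceph-0/journal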
