your ceph setting) didn't bring much, now I can reach 3.5K IOPS.
By any chance, would it be possible for you to test with a single OSD SSD?
On 28 Aug 2014, at 18:11, Sebastien Han sebastien@enovance.com wrote:
Hey all,
It has been a while since the last performance-related thread on the ML.
@Dan: thanks for sharing your config, with all your flags I don’t seem to get
more than 3.4K IOPS, and they even seem to slow me down :( This is really weird.
Yes, I already tried to run two simultaneous processes and each of them only got
half of 3.4K.
@Kasper: thanks for these results, I believe
, at 11:13, Sebastien Han sebastien@enovance.com wrote:
Mark, thanks a lot for experimenting with this for me.
I’m gonna try master soon and will tell you how much I can get.
It's interesting to see that using 2 SSDs brings more performance, even though
both SSDs are under-utilized…
They should
- Original message -
From: Sebastien Han sebastien@enovance.com
To: Somnath Roy somnath@sandisk.com
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, 2 September 2014 02:19:16
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3.2K
IOPS
Mark and all, Ceph IOPS performance
, Sebastien Han sebastien@enovance.com wrote:
Hey,
Well, I ran an fio job that simulates (more or less) what Ceph is doing
(journal writes with dsync and O_DIRECT), and the SSD gave me 29K IOPS too.
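(For illustration only, this is not the exact job file used here: a journal-style write test with O_DIRECT plus synchronous writes, where /dev/sdX is a placeholder for the journal SSD, could look like:)
$ sudo fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test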
I could do this, but for me it definitely looks like a major waste since we
don’t even
to bench them with firefly and master.
Is a Debian wheezy gitbuilder repository available? (I'm a bit lazy to
compile all the packages.)
- Original message -
From: Sebastien Han sebastien@enovance.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-users@lists.ceph.com, Cédric
/2014 22:11, Sebastien Han wrote:
Hi Warren,
What do you mean exactly by secure erase? At the firmware level with the
manufacturer's software?
The SSDs were pretty new, so I don't think we hit that sort of thing. I believe
that only aged SSDs have this behaviour, but I might be wrong.
Sorry I forgot
records written
70565888 bytes (71 MB) copied, 70.4098 s, 1.0 MB/s
I'll do tests with an Intel S3500 tomorrow to compare.
- Original message -
From: Sebastien Han sebastien@enovance.com
To: Warren Wang warren_w...@cable.comcast.com
Cc: ceph-users@lists.ceph.com
Sent: Monday 8
Did you follow this ceph.com/docs/master/rbd/rbd-openstack/ to configure your
env?
On 12 Sep 2014, at 14:38, m.channappa.nega...@accenture.com wrote:
Hello Team,
I have configured Ceph as a multi-backend for OpenStack.
I have created 2 pools:
1. Volumes (replication size = 3)
] On Behalf Of
Sebastien Han
Sent: Tuesday, September 16, 2014 9:33 PM
To: Alexandre DERUMIER
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3.2K
IOPS
Hi,
Thanks for keeping us updated on this subject.
dsync is definitely killing
Hi,
Unfortunately this is expected.
If you take a snapshot you should not expect a clone but an RBD snapshot.
Please see this BP:
https://blueprints.launchpad.net/nova/+spec/implement-rbd-snapshots-instead-of-qemu-snapshots
A major part of the code is ready, however we missed nova-specs feature
On 01 Oct 2014, at 15:26, Jonathan Proulx j...@jonproulx.com wrote:
On Wed, Oct 1, 2014 at 2:57 AM, Sebastien Han
sebastien@enovance.com wrote:
Hi,
Unfortunately this is expected.
If you take a snapshot you should not expect a clone but an RBD snapshot.
Unfortunate that it doesn't
Hum I just tried on a devstack and on firefly stable, it works for me.
Looking at your config, it seems that glance_api_version=2 is in the
wrong section.
Please move it to [DEFAULT] and let me know if it works.
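(As a sketch of what is meant, not your exact file: the option should sit at the top level of cinder.conf rather than under a backend section.)
[DEFAULT]
glance_api_version = 2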
On 08 Oct 2014, at 14:28, Nathan Stratton nat...@robotics.net wrote:
On
Hey all,
I just saw this thread, I’ve been working on this and was about to share it:
https://etherpad.openstack.org/p/kilo-ceph
Since the ceph etherpad is down I think we should switch to this one as an
alternative.
Loic, feel free to work on this one and add more content :).
On 13 Oct 2014,
Mark, please read this:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg12486.html
On 16 Oct 2014, at 19:19, Mark Wu wud...@gmail.com wrote:
Thanks for the detailed information, but I am already using fio with the rbd
engine. Almost 4 volumes can reach the peak.
17 October 2014
There were also some investigations around F2FS
(https://www.kernel.org/doc/Documentation/filesystems/f2fs.txt); the last time
I tried to install an OSD dir under f2fs, it failed.
I tried to run the OSD on f2fs; however, ceph-osd mkfs got stuck on an xattr test:
fremovexattr(10,
AFAIK there is no tool to do this.
You simply rm the object or dd new content into the object (fill it with zeros).
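(A rough sketch of what I mean, with made-up pool and object names; the dd step produces 4 MB of zeros which then overwrite the object.)
$ rados -p rbd rm my-object
$ dd if=/dev/zero of=/tmp/zeros bs=4M count=1
$ rados -p rbd put my-object /tmp/zeros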
On 04 Dec 2014, at 13:41, Mallikarjun Biradar
mallikarjuna.bira...@gmail.com wrote:
Hi all,
I would like to know which tool or CLI users are using to simulate
Eneko,
I do have a plan to push a performance initiative section to ceph.com/docs
sooner or later, so people can submit their own results through GitHub PRs.
On 04 Dec 2014, at 16:09, Eneko Lacunza elacu...@binovo.es wrote:
Thanks, will look back in the list archive.
On 04/12/14 15:47,
Good to know. Thanks for sharing!
On 09 Dec 2014, at 10:21, Wido den Hollander w...@42on.com wrote:
Hi,
Last Sunday I got a call early in the morning that a Ceph cluster was
having some issues. Slow requests and OSDs marking each other down.
Since this is a 100% SSD cluster I was a bit
Discard works with virtio-scsi controllers for disks in QEMU.
Just use discard=unmap in the disk section (scsi disk).
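(For example, the relevant part of the libvirt domain XML typically looks like the following; the device names are illustrative.)
<controller type='scsi' model='virtio-scsi'/>
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <!-- rbd source and auth elements omitted -->
  <target dev='sda' bus='scsi'/>
</disk>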
On 12 Dec 2014, at 13:17, Max Power mailli...@ferienwohnung-altenbeken.de
wrote:
Wido den Hollander w...@42on.com wrote on 12 December 2014 at 12:53:
It
Hi,
The general recommended ratio (for me at least) is 3 journals per SSD. Using
200GB Intel DC S3700 is great.
If you're going with a low-perf scenario, I don't think you should bother buying
SSDs; just remove them from the picture and do 12 SATA 7.2K 4TB drives.
For medium and medium ++ perf using
: @enovance
On 04 Apr 2014, at 09:56, Mariusz Gronczewski
mariusz.gronczew...@efigence.com wrote:
Nope, one from RDO packages http://openstack.redhat.com/Main_Page
On Thu, 3 Apr 2014 23:22:15 +0200, Sebastien Han
sebastien@enovance.com wrote:
Are you running Havana with josh’s branch?
(https
Try ceph auth del osd.1
And then repeat step 6
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance
On 08
Hey Loïc,
The machine was set up a while ago :).
The server side is ready, there is just no graphical interface, everything
appears as plain text.
It’s not necessary to upgrade.
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
To speed up the deletion, you can remove the rbd_header object (if the image is
empty) and then remove the image.
For example:
$ rados -p rbd ls
huge.rbd
rbd_directory
$ rados -p rbd rm huge.rbd
$ time
rbd rm huge
2013-12-10 09:35:44.168695 7f9c4a87d780 -1 librbd::ImageCtx: error finding
header: (2)
No
This is a COW clone, but the BP you pointed to doesn't match the feature you
described. This might explain Greg's answer.
The BP refers to the libvirt_image_type functionality for Nova.
What do you get now when you try to create a volume from an image?
Sébastien Han
Cloud Engineer
Always
giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance
On 25 Apr 2014, at 16:37, Sebastien Han sebastien@enovance.com wrote:
: @enovance
On 25 Apr 2014, at 18:16, Sebastien Han sebastien@enovance.com wrote:
I just tried, I have the same problem, it looks like a regression…
It’s weird because the code didn’t change that much during the Icehouse cycle.
I just reported the bug here: https://bugs.launchpad.net/cinder
- Twitter : @enovance
On 28 Apr 2014, at 16:10, Maciej Gałkiewicz mac...@shellycloud.com wrote:
On 28 April 2014 15:58, Sebastien Han sebastien@enovance.com wrote:
FYI It’s fixed here: https://review.openstack.org/#/c/90644/1
I already have this patch and it didn't help. Have it fixed
http://www.sebastien-han.fr/blog/2014/05/01/vagrant-up-install-ceph-in-one-command/
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web :
, at 09:02, Maciej Gałkiewicz mac...@shellycloud.com wrote:
On 15 May 2014 04:05, Maciej Gałkiewicz mac...@shellycloud.com wrote:
On 28 April 2014 16:11, Sebastien Han sebastien@enovance.com wrote:
Yes yes, just restart cinder-api and cinder-volume.
It worked for me.
In my case the image
Jeroen,
Actually this is more a question for the OpenStack ML.
None of the use cases you described are possible at the moment.
The only thing you can get is shared resources across all the tenants; you
can't really pin any resource to a specific tenant.
This could be done I guess, but not
Hi all,
A couple of years ago, I heard that it wasn't safe to map a krbd block device on
an OSD host.
It was more or less like mounting an NFS share on the NFS server; we could
potentially end up with some deadlocks.
At any rate, I tried again recently and didn't encounter any problems.
What do you think?
FYI I encountered the same problem with krbd; removing the EC pool didn't solve
my problem.
I’m running 3.13
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008
, Jean-Charles LOPEZ jeanchlo...@mac.com
wrote:
Hi Sébastien,
still the case. Depending on what you do, the OSD process will hang
and then suicide.
Regards
JC
On Jun 10, 2014, at 09:46, Sebastien Han sebastien@enovance.com wrote:
Hi all,
A couple of years ago, I
Can you connect to your Ceph cluster?
You can pass options to the cmd line like this:
$ qemu-img create -f rbd
rbd:instances/vmdisk01:id=leseb:conf=/etc/ceph/ceph-leseb.conf 2G
Cheers.
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.
Phone: +33 (0)1 49 70 99
Hi Dane,
If you deployed with ceph-deploy, you will see that the journal is just a
symlink.
Take a look at /var/lib/ceph/osd/osd-id/journal
The link should point to the first partition of your hard drive, so there is no
filesystem for the journal, just a block device.
Roughly you should try:
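(The message is cut off here; as a rough illustration of the kind of check meant, assuming OSD id 0:)
$ ls -l /var/lib/ceph/osd/ceph-0/journal
$ readlink -f /var/lib/ceph/osd/ceph-0/journal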
Nothing has been recorded as far as I know.
However I’ve seen some guys from Scality recording sessions with a cam.
Scality? Are you there? :)
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
I used a block size of 350k as my graphs show me that this is the
average workload we have on the journal.
Pretty interesting metric Stefan.
Has anyone seen the same behaviour?
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Nice job Haomai!
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enovance.com - Twitter : @enovance
On 25 Nov 2013, at 02:50, Haomai
Hi,
1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/)
This has been in production for more than a year now and was heavily tested before.
High performance was not expected since the frontend servers mainly do reads (90%).
Cheers.
Sébastien Han
Cloud Engineer
Always give
:39 AM, Sebastien Han sebastien@enovance.com
wrote:
Hi,
1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/)
This has been in production for more than a year now and was heavily tested
before.
High performance was not expected since the frontend servers mainly do reads (90
: 10, rue de la Victoire - 75009 Paris
Web : www.enovance.com - Twitter : @enovance
On 25 Nov 2013, at 10:00, Sebastien Han sebastien@enovance.com wrote:
Nice job Haomai!
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99
Hi,
Well after restarting the services run:
$ cinder create 1
Then you can check both status in Cinder and Ceph:
For Cinder run:
$ cinder list
For Ceph run:
$ rbd -p cinder-pool ls
If the image is there, you’re good.
Cheers.
Sébastien Han
Cloud Engineer
Always give 100%. Unless
Hi guys!
Some experiment here:
http://www.sebastien-han.fr/blog/2013/09/19/how-I-barely-got-my-first-ceph-mon-running-in-docker/
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10,
Hi guys,
I won't do a RAID 1 with SSDs since they both write the same data.
Thus, they are more likely to "almost" die at the same time.
What I will try to do instead is to use both disks in JBOD mode (or degraded
RAID 0).
Then I will create a tiny root partition for the OS.
Then I’ll still have
way to monitor SSD write life via SMART data - at least it
gives a guide as to device condition compared to its rated life. That can be
done with smartmontools, but it would be nice to have it on the Inktank
dashboard, for example.
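(For instance, something along these lines; the attribute name varies by vendor, Media_Wearout_Indicator being what Intel drives report.)
$ sudo smartctl -A /dev/sda | grep -i wear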
On 2013-12-05 14:26, Sebastien Han wrote:
Hi guys,
I won’t
The ceph doc is currently being updated. See
https://github.com/ceph/ceph/pull/906
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web :
...@dachary.org wrote:
On 01/01/2014 02:39, Sebastien Han wrote:
Hi,
I'm not sure I have full visibility of the role, but I will be more
than happy to take over.
I believe that I can allocate some time for this.
Your name is added to the
http://pad.ceph.com/p/ceph-user-committee
Hi Alexandre,
Are you going with a 10Gb network? It’s not an issue for IOPS but more for the
bandwidth. If so read the following:
I personally wouldn't go with a ratio of 1:6 for the journal. I guess 1:5 (or even
1:4) is preferable.
SAS 10K gives you around 140MB/sec for sequential writes.
So if
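(The message is cut off above; to make the sizing arithmetic explicit, as an illustration only: at roughly 140 MB/s of sequential writes per SAS 10K drive, a journal SSD in front of 4 disks has to absorb about 4 x 140 = 560 MB/s, 5 disks about 700 MB/s, and 6 disks about 840 MB/s.)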
Hum the Crucial m500 is pretty slow. The biggest one doesn’t even reach 300MB/s.
Intel DC S3700 100G showed around 200MB/sec for us.
Actually, I don't know the price difference between the Crucial and the Intel,
but the Intel looks more suitable to me. Especially after Mark's comment.
On 15 Jan 2014, at 15:46, Stefan Priebe s.pri...@profihost.ag wrote:
Am 15.01.2014 15:44, schrieb Mark Nelson:
On 01/15/2014 08:39 AM, Stefan Priebe wrote:
Am 15.01.2014 15:34, schrieb Sebastien Han:
Hum the Crucial m500 is pretty slow. The biggest one doesn’t even
reach 300MB/s.
Intel DC
- 75009 Paris
Web : www.enovance.com - Twitter : @enovance
On 15 Jan 2014, at 15:49, Sebastien Han sebastien@enovance.com wrote:
Sorry I was only looking at the 4K aligned results.
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1
Usually you would like to start here:
http://ceph.com/docs/master/rbd/rbd-openstack/
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web :
Greg,
Do you have any estimate of how much of the network the heartbeat messages use?
How busy is it?
At some point (if the cluster gets big enough), could this degrade the network
performance? Would it make sense to have a separate network for this?
So in addition to public and storage we will have
I agree but somehow this generates more traffic too. We just need to find a
good balance.
But I don’t think this will change the scenario where the cluster network is
down and OSDs die because of this…
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone:
2014, at 18:22, Gregory Farnum g...@inktank.com wrote:
On Friday, January 24, 2014, Sebastien Han sebastien@enovance.com wrote:
Greg,
Do you have any estimate of how much of the network the heartbeat messages use?
How busy is it?
Not very. It's one very small message per OSD peer per...second
I have the same behaviour here.
I believe this is somewhat expected since you're calling "copy"; clone will do
the COW.
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la
Hi,
$ rbd diff rbd/toto | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web :
Hi Alexandre,
We have a meet up in Paris.
Please see: http://www.meetup.com/Ceph-in-Paris/events/158942372/
Cheers.
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la
Hi,
Can I see your ceph.conf?
I suspect that [client.cinder] and [client.glance] sections are missing.
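(Roughly what I would expect to see there, as a sketch; the keyring paths below are just the usual convention, adjust them to your setup.)
[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring
[client.glance]
keyring = /etc/ceph/ceph.client.glance.keyring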
Cheers.
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire -
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
If I provide the admin keyring file to the OpenStack node (in /etc/ceph) it works
fine and the issue is gone.
Thanks
Ashish
On Mon, Feb 17, 2014 at 2:03 PM, Sebastien Han sebastien@enovance.com
wrote
Which distro and packages?
libvirt_image_type is broken in the Cloud Archive; please patch with
https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd
Cheers.
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail:
Hi,
Please have a look at the cinder multi-backend functionality: examples here:
http://www.sebastien-han.fr/blog/2013/04/25/ceph-and-cinder-multi-backend/
Cheers.
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail:
Hi,
RBD blocks are stored as objects on a filesystem usually under:
/var/lib/ceph/osd/osd.id/current/pg.id/
RBD is just an abstraction layer.
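(As an illustration, assuming a pool named rbd; the object name and OSD id are placeholders, and the path layout is the filestore one mentioned above.)
$ rados -p rbd ls | head
$ ceph osd map rbd <object-name>
$ ls /var/lib/ceph/osd/<osd.id>/current/<pg.id>_head/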
Cheers.
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail:
Hi,
The value can be set during the image creation.
Start with this: http://ceph.com/docs/master/man/8/rbd/#striping
Followed by the example section.
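(For example, something like this creates a 10 GB image with a 64 KB stripe unit striped across 16 objects; the pool, image name and values are illustrative.)
$ rbd create mypool/myimage --size 10240 --stripe-unit 65536 --stripe-count 16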
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail:
There is an RBD engine for fio; have a look at
http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
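(A minimal job file sketch for it, assuming a pool named rbd and a pre-created test image called fio_test; the client name must match an existing cephx user.)
[rbd-test]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio_test
rw=randwrite
bs=4k
iodepth=32
runtime=60
time_based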
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11
Hi,
I use the following live migration flags:
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST
It deletes the libvirt.xml and re-creates it on the other side.
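(For reference, this is set in nova.conf, typically like so; whether it lives under [libvirt] or [DEFAULT] depends on the release.)
[libvirt]
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST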
Cheers.
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.”
Hi,
* Edit `ceph.conf` and add an MDS section like so:
[mds]
mds data = /var/lib/ceph/mds/mds.$id
keyring = /var/lib/ceph/mds/mds.$id/mds.$id.keyring
[mds.0]
host = {hostname}
* Create the authentication key (if you use cephx):
$ sudo ceph auth get-or-create mds.0 mds 'allow rwx' mds 'allow *' osd 'allow *'
Hi,
Storing the image as an object with RADOS or RGW will result in a single big object
stored somewhere in Ceph. However, with RBD the image is spread across thousands of
objects across the entire cluster. In the end, you get way more performance by using
RBD since you intensively use the entire
Yes I will :-), thank you for pointing this out to me.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99 72 - Mobile: +33 (0)6 52 84 44 70
Email: sebastien@enovance.com - Skype: han.sbastien
Address: 10, rue de la Victoire - 75009 Paris
Web
Ok,
I just noticed that the documentation seems to be wrong; the correct command to find
the location of an object is:
$ ceph osd map pool-name object-name
Then, the error that you raised is pretty strange, because even if the object doesn't
exist the command will calculate its eventual location. Could
re - 75009 Paris
WEB: www.enovance.com - TWITTER: @enovance
On Mar 27, 2013, at 11:36 PM, Sebastien Han sebastien@enovance.com wrote:
Ok, I just noticed that the documentation seems to be wrong; the correct command to
find the location of an object is: $ ceph osd map pool-name object-name. Then, the erro
: +33 (0)1 49 70 99 72 - Mobile: +33 (0)6 52 84 44 70
Email: sebastien@enovance.com - Skype: han.sbastien
Address: 10, rue de la Victoire - 75009 Paris
Web: www.enovance.com - Twitter: @enovance
On Mar 28, 2013, at 2:08 PM, Mark Nelson mark.nel...@inktank.com wrote:
On 03/28/2013 04:34 AM, Sebastien Han w
Dave,
OpenStack does a "qemu-img snapshot" command to create a snapshot; here's the method:
https://github.com/openstack/nova/blob/stable/folsom/nova/virt/libvirt/utils.py#L335-L347
So the memory is _not_ saved, only the disk is. Note that it's always hard to make a
consistent snapshot. I assume that
Good to know that it also works for the RBD QEMU driver. I'm not really surprised
though :).
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99 72 - Mobile: +33 (0)6 52 84 44 70
Email: sebastien@enovance.com - Skype: han.sbastien
Address: 10, rue de la
re - 75009 Paris
Web: www.enovance.com - Twitter: @enovance
On May 29, 2013, at 12:19 AM, Sebastien Han sebastien@enovance.com wrote:
Wolfgang,
I'm interested, and I assume I'm not the only one, so can't you just make it public
for everyone?
Thanks.
Sébastien Han
Cloud Engineer
"Always give 100%. Unles
I did, what would you like to know?
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99 72 - Mobile: +33 (0)6 52 84 44 70
Email: sebastien@enovance.com - Skype: han.sbastien
Address: 10, rue de la Victoire - 75009 Paris
Web: www.enovance.com - Twitter
OpenStack doesn't know how to set different caching options for attached block
devices. See the following blueprint:
https://blueprints.launchpad.net/nova/+spec/enable-rbd-tuning-options
This might be implemented for Havana.
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving
In OpenStack, a VM booted from a volume (where the disk is located on RBD) supports
live migration without any problems.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99 72 - Mobile: +33 (0)6 52 84 44 70
Email: sebastien@enovance.com - Skype
Thank you, Sebastien Han. I am sure many are thankful you've published your thoughts
and experiences with Ceph and even OpenStack.
Thanks Bo! :)
If I may, I would like to reword my question/statement with greater clarity: to force
all instances to always boot from RBD volumes, would a person
Hi,
No, this must always be the same UUID. You can only specify one in cinder.conf.
Btw, Nova does the attachment; this is why it needs the UUID and the secret.
The first secret import generates a UUID; then always re-use the same one for all
your compute nodes. Do something like:
secret ephemeral='no'
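(The snippet above is cut off; as an illustration of the usual pattern, not the exact text that followed: define one fixed UUID in a small secret.xml, import it on every compute node, and feed it the Ceph key. The UUID below is only an example value.)
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
$ virsh secret-define --file secret.xml
$ virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
    --base64 $(ceph auth get-key client.cinder)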
You're welcome, cool :)
Yes, start from the libvirt section.
Cheers!
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99 72 - Mobile: +33 (0)6 52 84 44 70
Email: sebastien@enovance.com - Skype: han.sbastien
Address: 10, rue de la Victoire - 75009 Paris
Web
Hi all,
While running some benchmarks with the internal rados benchmarker, I noticed something
really strange. First of all, this is the line I used to run it:
$ sudo rados -p 07:59:54_performance bench 300 write -b 4194304 -t 1 --no-cleanup
So I want to test IO with a concurrency of 1. I had a look
Can you send your ceph.conf too?
Is /etc/ceph/ceph.conf present? Is the key of the volume user present too?
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99 72 - Mobile: +33 (0)6 52 84 44 70
Email: sebastien@enovance.com - Skype: han.sbastien
Address
Hi Greg,
I just tried the list watchers on an RBD with the QEMU driver and I got:
root@ceph:~# rados -p volumes listwatchers rbd_header.789c2ae8944a
watcher=client.30882 cookie=1
I also tried with the kernel module but didn't see anything…
No IP addresses anywhere… :/, any idea?
Nice tip btw
Arf, no worries. Even after a quick dive into the logs, I haven't found anything
(default log level).
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99 72 - Mobile: +33 (0)6 52 84 44 70
Email: sebastien@enovance.com - Skype: han.sbastien
Address
It has been proven that the OSDs can't take advantage of the SSD, so I'll
probably collocate both the journal and the OSD data.
Search the ML for "[Single OSD performance on SSD] Can't go over 3.2K IOPS".
You will see that there is no difference in terms of performance between the
following:
* 1 SSD
You can use the admin socket:
$ ceph daemon mon.id config show
or locally:
$ ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok config show
On 21 Jan 2015, at 19:46, Robert Fantini robertfant...@gmail.com wrote:
Hello
Is there a way to see the running / active ceph.conf configuration items?
You can have a look at what I did here with Christian:
* https://github.com/stackforge/swift-ceph-backend
* https://github.com/enovance/swiftceph-ansible
If you have further questions, just let us know.
On 08 Jan 2015, at 15:51, Robert LeBlanc rob...@leblancnet.us wrote:
Anyone have a
It was added in 0.90
On 13 Jan 2015, at 00:11, Gregory Farnum g...@gregs42.com wrote:
perf reset on the admin socket. I'm not sure what version it went in
to; you can check the release logs if it doesn't work on whatever you
have installed. :)
-Greg
On Mon, Jan 12, 2015 at 2:26 PM,
Hey
What do you want to use from Ceph? RBD? CephFS?
It is not really clear; you mentioned ceph/btrfs, which makes me think either of
using btrfs for the OSD store or btrfs on top of an RBD device.
Later you mentioned HDFS; does that mean you want to use CephFS?
I don’t know much about Mesos, but
Several patches aim to solve that by using RBD snapshots instead of QEMU
snapshots.
Unfortunately I doubt we will have something ready for OpenStack Juno.
Hopefully Liberty will be the release that fixes that.
Having RAW images is not that bad since booting from that snapshot will do a
clone.
You can have a look at: https://github.com/ceph/ceph-docker
On 23 Mar 2015, at 17:16, Pavel V. Kaygorodov pa...@inasan.ru wrote:
Hi!
I'm using ceph cluster, packed to a number of docker containers.
There are two things, which you need to know:
1. Ceph OSDs are using FS attributes,
A while ago, I managed to have this working but this was really tricky.
See my comment here:
https://github.com/ceph/ceph-ansible/issues/9#issuecomment-37127128
One use case I had was a system with 2 SSDs for the OS and a couple of OSDs.
Both SSDs were in RAID 1 and the system was configured with
Hi list,
While reading this
http://ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-networks,
I came across the following sentence:
"You can also establish a separate cluster network to handle OSD heartbeat,
object replication and recovery traffic"
I didn’t know it was
You can try to push the full ratio a bit further and then delete some objects.
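(For example, on releases of that era the temporary bump could be done with something like the following; treat it as a stopgap and put the ratio back afterwards.)
$ ceph pg set_full_ratio 0.98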
On 28 Apr 2015, at 15:51, Ray Sun xiaoq...@gmail.com wrote:
More detail about ceph health detail
[root@controller ~]# ceph health detail
HEALTH_ERR 20 pgs backfill_toofull; 20 pgs degraded; 20 pgs stuck unclean;
With mon_osd_full_ratio you should restart the monitors, and this shouldn't be a
problem.
For the unclean PGs, it looks like something is preventing them from being healthy;
look at the state of the OSDs responsible for these 2 PGs.
On 29 Apr 2015, at 05:06, Ray Sun xiaoq...@gmail.com wrote:
mon osd
'cluster network = network/mask' in your ceph.conf.
It is useful to remember that replication, recovery and backfill
traffic are pretty much the same thing, just at different points in
time.
On Sun, Apr 26, 2015 at 4:39 PM, Sebastien Han
sebastien@enovance.com wrote:
Hi list,
While
Under the OSD directory, you can look at where the symlink points. It is
generally called 'journal' and it should point to a device.
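(A quick way to list them all, as a sketch assuming the default ceph-deploy path layout:)
$ for j in /var/lib/ceph/osd/ceph-*/journal; do echo "$j -> $(readlink -f "$j")"; done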
On 06 May 2015, at 06:54, Patrik Plank p.pl...@st-georgen-gusen.at wrote:
Hi,
I can't remember on which drive I installed which OSD journal :-||
Is there any