Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-08-31 Thread Alexandre DERUMIER
Allegedly this SSD model (128G m550) can do 75K 4k random write IOPS (running fio on the filesystem I've seen 70K IOPS, so that is reasonably believable). So anyway we are not getting anywhere near the max IOPS from our devices. Hi, just check this:
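For reference, a raw 4k random-write baseline of that kind is usually measured along these lines (a sketch only; the mount point, job size and queue depth are assumptions, not taken from the thread):

    # 4k random writes against the SSD-backed filesystem, bypassing the page cache
    fio --name=randwrite-baseline --directory=/mnt/ssd --size=4G \
        --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
        --direct=1 --runtime=60 --time_based --group_reporting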

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-02 Thread Alexandre DERUMIER
Hi Sebastien, I got 6340 IOPS on a single OSD SSD. (journal and data on the same partition). Shouldn't it be better to have 2 partitions, 1 for the journal and 1 for data? (I'm thinking about filesystem write syncs) - Mail original - De: Sebastien Han sebastien@enovance.com À:

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-02 Thread Alexandre DERUMIER
Do you get the same results if you launch 2 fio benchmarks in parallel on 2 different rbd volumes? - Mail original - De: Sebastien Han sebastien@enovance.com À: Cédric Lemarchand c.lemarch...@yipikai.org Cc: Alexandre DERUMIER aderum...@odiso.com, ceph-users@lists.ceph.com Envoyé

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-02 Thread Alexandre DERUMIER
Han sebastien@enovance.com À: Alexandre DERUMIER aderum...@odiso.com Cc: ceph-users@lists.ceph.com, Cédric Lemarchand c.lemarch...@yipikai.org Envoyé: Mardi 2 Septembre 2014 15:25:05 Objet: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS Well the last time I ran

Re: [ceph-users] Ceph Day Paris Schedule Posted

2014-09-05 Thread Alexandre DERUMIER
I was waiting for the schedule; the topics seem interesting. I'm going to register now :) BTW, are the talks in French or English? (As I see Loic, Sebastien and Yann as speakers) - Mail original - De: Patrick McGarry patr...@inktank.com À: Ceph Devel ceph-de...@vger.kernel.org,

Re: [ceph-users] resizing the OSD

2014-09-05 Thread Alexandre DERUMIER
Is there a way to resize the OSD without bringing the cluster down? What is the HEALTH state of your cluster? If it's OK, simply replace the OSD disk with a bigger one? - Mail original - De: JIten Shah jshah2...@me.com À: ceph-us...@ceph.com Envoyé: Samedi 6 Septembre 2014

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-11 Thread Alexandre DERUMIER
: Le 11/09/2014 08:20, Alexandre DERUMIER a écrit : Hi Sebastien, here are my first results with the Crucial m550 (I'll send results with the Intel s3500 later): - 3 nodes - dell r620 without expander backplane - sas controller : LSI 9207 (no hardware raid or cache) - 2 x E5-2603v2 1.8GHz

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-11 Thread Alexandre DERUMIER
=write --bs=4k --numjobs=2 --group_reporting --invalidate=0 --name=ab --sync=1 bw=177575KB/s, iops=44393 - Mail original - De: Alexandre DERUMIER aderum...@odiso.com À: Cedric Lemarchand ced...@yipikai.org Cc: ceph-users@lists.ceph.com Envoyé: Vendredi 12 Septembre 2014 04:55:21
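The truncated fragment above looks like a 4k synchronous sequential-write test of the journal SSD; a plausible full invocation (the target device and the --direct flag are assumptions) would be:

    # synchronous 4k writes, roughly the pattern a ceph journal generates
    fio --filename=/dev/sdb --direct=1 --rw=write --bs=4k \
        --numjobs=2 --group_reporting --invalidate=0 --name=ab --sync=1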

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-12 Thread Alexandre DERUMIER
/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sdb 0,00 29,00 0,00 3075,00 0,00 36748,50 23,90 0,29 0,10 0,00 0,10 0,05 15,20 So, the write bottleneck seems to be in ceph. I will send the s3500 results today - Mail original - De: Alexandre DERUMIER

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-12 Thread Alexandre DERUMIER
svctm %util sdb 0,00 1563,00 0,00 9880,00 0,00 75223,50 15,23 2,09 0,21 0,00 0,21 0,07 80,00 - Mail original - De: Alexandre DERUMIER aderum...@odiso.com À: Cedric Lemarchand ced...@yipikai.org Cc: ceph-users@lists.ceph.com Envoyé: Vendredi 12

Re: [ceph-users] OpTracker optimization

2014-09-13 Thread Alexandre DERUMIER
Hi, as a ceph user, it would be wonderful to have it for Giant; the optracker performance impact is really huge (see my SSD benchmark on the ceph-users mailing list). Regards, Alexandre Derumier - Mail original - De: Somnath Roy somnath@sandisk.com À: Samuel Just sam.j...@inktank.com Cc: Sage

[ceph-users] best libleveldb version ?

2014-09-15 Thread Alexandre DERUMIER
Hi, I would like to know which libleveldb should be used with firefly. I'm using Debian wheezy, which provides a really old libleveldb (I don't use it), and wheezy-backports provides 1.17. But in the Inktank repositories I see that 1.9 is provided for some distros. So, what is the best/tested

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-16 Thread Alexandre DERUMIER
be great if we could share experience about ceph and ssd. Alexandre. - Mail original - De: Sebastien Han sebastien@enovance.com À: Alexandre DERUMIER aderum...@odiso.com Cc: ceph-users@lists.ceph.com Envoyé: Mardi 16 Septembre 2014 15:32:59 Objet: Re: [ceph-users] [Single OSD

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-17 Thread Alexandre DERUMIER
- De: Somnath Roy somnath@sandisk.com À: Mark Kirkwood mark.kirkw...@catalyst.net.nz, Alexandre DERUMIER aderum...@odiso.com, Sebastien Han sebastien@enovance.com Cc: ceph-users@lists.ceph.com Envoyé: Mercredi 17 Septembre 2014 03:22:05 Objet: RE: [ceph-users] [Single OSD

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-18 Thread Alexandre DERUMIER
tomorrow to compare firefly and giant. - Mail original - De: Jian Zhang jian.zh...@intel.com À: Sebastien Han sebastien@enovance.com, Alexandre DERUMIER aderum...@odiso.com Cc: ceph-users@lists.ceph.com Envoyé: Jeudi 18 Septembre 2014 08:12:32 Objet: RE: [ceph-users] [Single OSD

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-19 Thread Alexandre DERUMIER
= 5 filestore_op_threads = 4 bw=62094KB/s, iops=15523 giant with same tuning --- bw=247073KB/s, iops=61768 ! I think I could reach more, but my 2 gigabit links are saturated. - Mail original - De: Alexandre DERUMIER aderum...@odiso.com À: Jian Zhang jian.zh

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-19 Thread Alexandre DERUMIER
- De: Jian Zhang jian.zh...@intel.com À: Alexandre DERUMIER aderum...@odiso.com Cc: ceph-users@lists.ceph.com Envoyé: Vendredi 19 Septembre 2014 10:21:38 Objet: RE: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS Thanks for this great information. We are using

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-24 Thread Alexandre DERUMIER
bound on node (8 cores E5-2603 v2 @ 1.80GHz 100% cpu for 2 osd) - Mail original - De: Sebastien Han sebastien@enovance.com À: Jian Zhang jian.zh...@intel.com Cc: Alexandre DERUMIER aderum...@odiso.com, ceph-users@lists.ceph.com Envoyé: Mardi 23 Septembre 2014 17:41:38 Objet: Re

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-25 Thread Alexandre DERUMIER
Sep 2014 20:49:21 +0200 (CEST) Alexandre DERUMIER wrote: What about writes with Giant? I'm around - 4k iops (4k random) with 1osd (1 node - 1 osd) - 8k iops (4k random) with 2 osd (1 node - 2 osd) - 16K iops (4k random) with 4 osd (2 nodes - 2 osd by node) - 22K iops (4k random

Re: [ceph-users] Ceph Developer Summit: Hammer

2014-10-01 Thread Alexandre DERUMIER
Hi, any news about this blueprint ? https://wiki.ceph.com/Planning/Blueprints/Giant/rbd%3A_journaling Regards, Alexandre - Mail original - De: Sage Weil sw...@redhat.com À: Patrick McGarry patr...@inktank.com Cc: Ceph-User ceph-us...@ceph.com, ceph-de...@vger.kernel.org Envoyé:

Re: [ceph-users] ceph at Universite de Lorraine

2014-10-10 Thread Alexandre DERUMIER
Hi Stéphane, Inktank also provides support through Ceph Enterprise, and also early design help: http://www.inktank.com/enterprise/ Someone told me: there is no need for professional support, just buy the equipment, install it and let it run ceph. I think it depends on whether you have time or human

Re: [ceph-users] Performance doesn't scale well on a full ssd cluster.

2014-10-17 Thread Alexandre DERUMIER
Hi, With 0.86, the following options and disabling debugging can obviously improve things: osd enable op tracker = false. I think this one has been optimized by Somnath: https://github.com/ceph/ceph/commit/184773d67aed7470d167c954e786ea57ab0ce74b - Mail original - De: Mark Wu
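For illustration, the option is normally set per OSD in ceph.conf (the section placement is an assumption; the option name is as quoted above):

    # /etc/ceph/ceph.conf
    [osd]
    osd enable op tracker = false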

Re: [ceph-users] Same rbd mount from multiple servers

2014-10-21 Thread Alexandre DERUMIER
Thank you for your quick response! Okay I see, is there any preferred clustered FS in this case? OCFS2, GFS? Hi, I'm using ocfs2 on top of rbd in production; it works fine. (You need to disable writeback/rbd_cache.) - Mail original - De: Mihály Árva-Tóth
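A minimal sketch of the client-side setting mentioned above, so that no mapper keeps stale cached data (the section name is standard; everything else about the setup is an assumption):

    # /etc/ceph/ceph.conf on every host that maps the shared image
    [client]
    rbd cache = false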

[ceph-users] rhel7 krbd backported module repo ?

2014-11-02 Thread Alexandre DERUMIER
Hi, I would like to know if a repository is available for rhel7/centos7 with the latest krbd module backported. I know that such a module is available in the ceph enterprise repos, but is it available for non-subscribers? Regards, Alexandre ___ ceph-users

Re: [ceph-users] rhel7 krbd backported module repo ?

2014-11-03 Thread Alexandre DERUMIER
Not that I know of. krbd *fixes* are getting backported to stable kernels regularly though. Thanks. (I was thinking more about new feature support, like the coming discard support in 3.18 for example) - Mail original - De: Ilya Dryomov ilya.dryo...@inktank.com À: Alexandre DERUMIER

Re: [ceph-users] rhel7 krbd backported module repo ?

2014-11-03 Thread Alexandre DERUMIER
: Alexandre DERUMIER aderum...@odiso.com, ceph-users ceph-users@lists.ceph.com Envoyé: Lundi 3 Novembre 2014 10:17:51 Objet: Re: [ceph-users] rhel7 krbd backported module repo ? There's this one: http://gitbuilder.ceph.com/kmod-rpm-rhel7beta-x86_64-basic/ref/rhel7/x86_64/ But that hasn't

Re: [ceph-users] Full backup/restore of Ceph cluster?

2014-11-05 Thread Alexandre DERUMIER
Is RBD snapshotting what I'm looking for? Is this even possible? Yes, you can use rbd snapshotting with export / import: http://ceph.com/dev-notes/incremental-snapshots-with-rbd/ But you need to do it for each rbd volume. Here is a script to do it: http://www.rapide.nl/blog/item/ceph_-_rbd_replication
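The export/import cycle behind those links typically looks like this sketch (pool, image, snapshot and host names are placeholders, not from the thread):

    # assumes the destination image already exists (e.g. from an initial rbd export | rbd import)
    rbd snap create rbd/vm-disk@backup1
    rbd export-diff rbd/vm-disk@backup1 - | ssh backup-host rbd import-diff - rbd/vm-disk
    # subsequent runs send only the delta between two snapshots
    rbd export-diff --from-snap backup1 rbd/vm-disk@backup2 - | \
        ssh backup-host rbd import-diff - rbd/vm-disk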

Re: [ceph-users] Full backup/restore of Ceph cluster?

2014-11-05 Thread Alexandre DERUMIER
...@opdemand.com À: Alexandre DERUMIER aderum...@odiso.com Cc: ceph-users@lists.ceph.com Envoyé: Mercredi 5 Novembre 2014 10:08:49 Objet: Re: [ceph-users] Full backup/restore of Ceph cluster? Hi Alexandre, Thanks for the link! Unless I'm misunderstanding, this is to replicate an RBD volume from

Re: [ceph-users] Typical 10GbE latency

2014-11-07 Thread Alexandre DERUMIER
Mellanox is also doing ethernet now, http://www.mellanox.com/page/products_dyn?product_family=163&mtag=sx1012 for example - 220nsec for 40GbE - 280nsec for 10GbE. And I think it's also possible to do RoCE (RDMA over Ethernet) with Mellanox ConnectX-3 adapters - Mail original - De:

Re: [ceph-users] Typical 10GbE latency

2014-11-11 Thread Alexandre DERUMIER
I don't have 10GbE yet, but here is my result with simple LACP on 2 gigabit links with a Cisco 6500: rtt min/avg/max/mdev = 0.179/0.202/0.221/0.019 ms (seems to be lower than your 10GbE Nexus) - Mail original - De: Wido den Hollander w...@42on.com À: ceph-users@lists.ceph.com Envoyé:

Re: [ceph-users] Typical 10GbE latency

2014-11-12 Thread Alexandre DERUMIER
Is this with an 8192 byte payload? Oh, sorry, it was with 1500. I'll try to send a report with 8192 tomorrow. - Mail original - De: Robert LeBlanc rob...@leblancnet.us À: Alexandre DERUMIER aderum...@odiso.com Cc: Wido den Hollander w...@42on.com, ceph-users@lists.ceph.com Envoyé
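The payload-size comparison being discussed can be reproduced with plain ping (the host name is a placeholder; with a standard 1500-byte MTU the 8192-byte probe is fragmented unless jumbo frames are enabled):

    ping -c 100 -s 56   storage-node    # default-sized payload
    ping -c 100 -s 8192 storage-node    # 8192-byte payload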

Re: [ceph-users] RBD read-ahead didn't improve 4K read performance

2014-11-20 Thread Alexandre DERUMIER
Hi, I haven't tested rbd readahead yet, but maybe you are reaching a qemu limit (by default qemu can use only 1 thread/1 core to manage I/Os; check your qemu CPU usage). Do you have some performance results? How many IOPS? But I have had a 4x improvement in qemu-kvm with virtio-scsi + num_queues +

Re: [ceph-users] Virtual machines using RBD remount read-only on OSD slow requests

2014-11-24 Thread Alexandre DERUMIER
Hi, try to mount your filesystems with the errors=continue option. From the mount(8) man page: errors={continue|remount-ro|panic} Define the behaviour when an error is encountered. (Either ignore errors and just mark the filesystem erroneous and continue, or remount the filesystem read-only, or
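For example (device and mount point are placeholders), either at mount time or persistently in fstab:

    mount -o errors=continue /dev/vdb1 /data
    # or in /etc/fstab:
    # /dev/vdb1  /data  ext4  defaults,errors=continue  0  2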

Re: [ceph-users] Fastest way to shrink/rewrite rbd image ?

2014-12-01 Thread Alexandre DERUMIER
I think if you enable TRIM support on your RBD, then run fstrim on your filesystems inside the guest (assuming ext4 / XFS guest filesystem), Ceph should reclaim the trimmed space. Yes, it's working fine. (You need to use virtio-scsi and enable the discard option.) - Mail original - De:
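A sketch of the pieces involved on the QEMU side (image and device names are placeholders; only the virtio-scsi controller and discard=unmap are the point here):

    # attach the rbd image through virtio-scsi with discard passthrough
    qemu-system-x86_64 ... \
        -device virtio-scsi-pci,id=scsi0 \
        -drive file=rbd:rbd/vm-disk,format=raw,if=none,id=drive0,cache=writeback,discard=unmap \
        -device scsi-hd,drive=drive0,bus=scsi0.0

    # inside the guest, release unused blocks back to ceph
    fstrim -v /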

Re: [ceph-users] Fastest way to shrink/rewrite rbd image ?

2014-12-01 Thread Alexandre DERUMIER
- Mail original - De: Daniel Swarbrick daniel.swarbr...@profitbricks.com À: ceph-users@lists.ceph.com Envoyé: Lundi 1 Décembre 2014 13:32:15 Objet: Re: [ceph-users] Fastest way to shrink/rewrite rbd image ? On 01/12/14 10:22, Alexandre DERUMIER wrote: Yes, it's working fine

Re: [ceph-users] 答复: Re: RBD read-ahead didn't improve 4K read performance

2014-12-04 Thread Alexandre DERUMIER
that sequential 4k I/Os are aggregated, so bigger and fewer I/Os go to ceph. So performance should improve. - Mail original - De: duan xufeng duan.xuf...@zte.com.cn À: Alexandre DERUMIER aderum...@odiso.com Cc: ceph-users ceph-us...@ceph.com, si dawei si.da...@zte.com.cn Envoyé

Re: [ceph-users] rbd snapshot slow restore

2014-12-16 Thread Alexandre DERUMIER
Alexandre Derumier Ingénieur système et stockage Fixe : 03 20 68 90 88 Fax : 03 20 68 90 81 45 Bvd du Général Leclerc 59100 Roubaix 12 rue Marivaux 75002 Paris MonSiteEstLent.com - Blog dédié à la webperformance et la gestion de pics de trafic De: Wido den Hollander w

Re: [ceph-users] rbd snapshot slow restore

2014-12-16 Thread Alexandre DERUMIER
with distributed storage) - Mail original - De: aderumier aderum...@odiso.com À: Wido den Hollander w...@42on.com Cc: ceph-users ceph-users@lists.ceph.com Envoyé: Mardi 16 Décembre 2014 17:02:12 Objet: Re: [ceph-users] rbd snapshot slow restore Alexandre Derumier Ingénieur système et

Re: [ceph-users] Help with SSDs

2014-12-17 Thread Alexandre DERUMIER
What do you mean by not playing well with D_SYNC? Hi, check this blog: http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/ - Mail original - De: Mikaël Cluseau mclus...@isi.nc À: Bryson McCutcheon brysonmccutch...@gmail.com,
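The test described in that post boils down to small synchronous writes against the raw device; a minimal (and destructive) sketch, with the device path a placeholder:

    # O_DIRECT + O_DSYNC 4k writes, the access pattern of a ceph journal
    # WARNING: writes directly to the device, destroying its contents
    dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync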

[ceph-users] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-12 Thread Alexandre DERUMIER
Hello, I know that qemu live migration with disks using cache=writeback is not safe with storage like NFS, iSCSI... Is it also true with rbd? If yes, is it possible to manually disable writeback online with QMP? Best Regards, Alexandre ___

Re: [ceph-users] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-12 Thread Alexandre DERUMIER
, but the impact of having no caching by qemu is easily in the order of 2 magnitudes. Beers, Christian Cheers Alex On 12/04/14 16:01, Alexandre DERUMIER wrote: Hello, I known that qemu live migration with disk with cache=writeback are not safe with storage like nfs,iscsi

Re: [ceph-users] [Qemu-devel] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-18 Thread Alexandre DERUMIER
, that is; cache.writeback is guest visible and can therefore only be toggled by the guest) Yes, that's what I have in mind: toggling cache.direct=on before migration, then disabling it after the migration. - Mail original - De: Kevin Wolf kw...@redhat.com À: Alexandre DERUMIER aderum

Re: [ceph-users] [Qemu-devel] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-20 Thread Alexandre DERUMIER
...@inktank.com À: Alexandre DERUMIER aderum...@odiso.com, Kevin Wolf kw...@redhat.com Cc: ceph-users@lists.ceph.com, qemu-devel qemu-de...@nongnu.org Envoyé: Samedi 19 Avril 2014 00:33:12 Objet: Re: [ceph-users] [Qemu-devel] qemu + rbd block driver with cache=writeback, is live migration safe ? On 04

Re: [ceph-users] [Qemu-devel] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-24 Thread Alexandre DERUMIER
: Kevin Wolf kw...@redhat.com À: Josh Durgin josh.dur...@inktank.com Cc: Alexandre DERUMIER aderum...@odiso.com, ceph-users@lists.ceph.com, qemu-devel qemu-de...@nongnu.org Envoyé: Mardi 22 Avril 2014 11:08:08 Objet: Re: [Qemu-devel] [ceph-users] qemu + rbd block driver with cache=writeback

Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Alexandre DERUMIER
This is very good news, congratulations! (Do you know if the Ceph Enterprise subscription price will remain the same? I'm looking to get support next year.) - Mail original - De: Sage Weil s...@inktank.com À: ceph-de...@vger.kernel.org, ceph-us...@ceph.com Envoyé: Mercredi 30 Avril

Re: [ceph-users] v0.80 Firefly released

2014-05-07 Thread Alexandre DERUMIER
Do we need a journal when using this back-end? No, there is no journal with the key-value store. - Mail original - De: Kenneth Waegeman kenneth.waege...@ugent.be À: Sage Weil s...@inktank.com Cc: ceph-us...@ceph.com Envoyé: Mercredi 7 Mai 2014 15:06:50 Objet: Re: [ceph-users] v0.80

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-07 Thread Alexandre DERUMIER
Hi Christian, have you tried without raid6, to have more OSDs? (How many disks do you have behind the raid6?) Also, I know that direct I/Os can be quite slow with ceph; maybe you can try without --direct=1 and also enable rbd_cache in ceph.conf: [client] rbd cache = true - Mail

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-08 Thread Alexandre DERUMIER
): 9.92578 Min bandwidth (MB/sec): 0 Average Latency: 0.0444653 Stddev Latency: 0.121887 Max latency: 2.80917 Min latency: 0.001958 --- So this is even worse, just about 1500 IOPS. Regards, Christian -Greg On Wednesday, May 7, 2014, Alexandre DERUMIER aderum...@odiso.com wrote

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-08 Thread Alexandre DERUMIER
) - Mail original - De: Christian Balzer ch...@gol.com À: ceph-users@lists.ceph.com Cc: Alexandre DERUMIER aderum...@odiso.com Envoyé: Jeudi 8 Mai 2014 08:52:15 Objet: Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices On Thu, 08 May 2014 08:41:54 +0200 (CEST

Re: [ceph-users] ceph firefly PGs in active+clean+scrubbing state

2014-05-12 Thread Alexandre DERUMIER
Hi, I observe the same behaviour on a test ceph cluster (upgrade from emperor to firefly) cluster 819ea8af-c5e2-4e92-81f5-4348e23ae9e8 health HEALTH_OK monmap e3: 3 mons at ..., election epoch 12, quorum 0,1,2 0,1,2 osdmap e94: 12 osds: 12 up, 12 in pgmap v19001: 592

Re: [ceph-users] Ceph booth at http://www.solutionslinux.fr/

2014-05-12 Thread Alexandre DERUMIER
I'll be there! (Do you know if it'll be possible to buy some ceph t-shirts?) - Mail original - De: Loic Dachary l...@dachary.org À: ceph-users ceph-users@lists.ceph.com Envoyé: Lundi 12 Mai 2014 16:38:31 Objet: [ceph-users] Ceph booth at http://www.solutionslinux.fr/ Hi Ceph,

Re: [ceph-users] ceph firefly PGs in active+clean+scrubbing state

2014-05-12 Thread Alexandre DERUMIER
on the IRC channel yesterday that this is a known problem with Firefly which is due to be fixed with the release (possibly today?) of 0.80.1. Simon On 12/05/14 14:53, Alexandre DERUMIER wrote: Hi, I observe the same behaviour on a test ceph cluster (upgrade from emperor to firefly

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-13 Thread Alexandre DERUMIER
. That should be interesting to try at a variety of block sizes. You could also try running RADOS bench and smalliobench at a few different sizes. -Greg On Wednesday, May 7, 2014, Alexandre DERUMIER aderum...@odiso.com wrote: Hi Christian, Do you have tried without

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-13 Thread Alexandre DERUMIER
storage and may benefit from SSD backed OSDs, though may also be limited on high performance SSDs. Maybe Inktank could comment on the 4000 IOPS per OSD? - Mail original - De: Christian Balzer ch...@gol.com À: ceph-users@lists.ceph.com Cc: Alexandre DERUMIER aderum...@odiso.com Envoyé

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-13 Thread Alexandre DERUMIER
% MonSiteEstLent.com - Blog dédié à la webperformance et la gestion de pics de trafic - Mail original - De: Christian Balzer ch...@gol.com À: Alexandre DERUMIER aderum...@odiso.com Cc: ceph-users@lists.ceph.com Envoyé: Mardi 13 Mai 2014 14:38:57 Objet: Re: [ceph-users] Slow

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-13 Thread Alexandre DERUMIER
in 2007...) So, it misses some features like crc32 and sse4, for example, which can help ceph a lot. (I'll try to do some OSD tuning (threads, ...) to see if I can improve performance. - Mail original - De: Christian Balzer ch...@gol.com À: Alexandre DERUMIER aderum...@odiso.com Cc

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-13 Thread Alexandre DERUMIER
on client is at 100%, cpu of osd are around 70%/1 core now. So, there seems to be a bottleneck client-side somewhere. (I remember some tests from Stefan Priebe on this mailing list, with a full SSD cluster, having almost the same results) - Mail original - De: Alexandre DERUMIER aderum

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-13 Thread Alexandre DERUMIER
direct writes can be pretty slow; that's why rbd_cache is recommended, to aggregate small writes into bigger ones) - Mail original - De: Christian Balzer ch...@gol.com À: Alexandre DERUMIER aderum...@odiso.com Cc: ceph-users@lists.ceph.com Envoyé: Mardi 13 Mai 2014 18:31:18 Objet: Re

Re: [ceph-users] Performance stats

2014-05-15 Thread Alexandre DERUMIER
Thanks for the reply. I need an FIO installation file for the Ubuntu platform, and also could you send any links for examples and documentation. Hi, you can build it (rbd support was added recently): http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html #apt-get install
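Once fio is built with RBD support, a job file along these lines drives an image directly through librbd (pool, image and client names are placeholders):

    # rbd.fio -- run with: fio rbd.fio
    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=fio-test
    bs=4k
    rw=randwrite
    iodepth=32

    [rbd-job]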

Re: [ceph-users] Ceph booth in Paris at solutionlinux.fr

2014-05-20 Thread Alexandre DERUMIER
It was nice to meet you guys! I'll try to come to the next meetup (or any ceph workshop). It would be fantastic to have some kind of full-day ceph meetup in the future; too many questions for this (too) short meetup ;) See you soon, Regards, Alexandre - Mail original - De: Loic

[ceph-users] full osd ssd cluster advise : replication 2x or 3x ?

2014-05-22 Thread Alexandre DERUMIER
Hi, I'm looking to build a full OSD SSD cluster, with this config: 6 nodes, each node with 10 OSD/SSD drives (dual 10gbit network), 1 journal + data on each OSD. The SSD drives will be enterprise grade, maybe Intel S3500 800GB (a well-known SSD) or the new Samsung SSD PM853T 960GB (don't have too much

Re: [ceph-users] full osd ssd cluster advise : replication 2x or 3x ?

2014-05-22 Thread Alexandre DERUMIER
: replication 2x or 3x ? Hello, On Thu, 22 May 2014 18:00:56 +0200 (CEST) Alexandre DERUMIER wrote: Hi, I'm looking to build a full osd ssd cluster, with this config: What is your main goal for that cluster, high IOPS, high sequential writes or reads? Remember my Slow IOPS on RBD... thread

Re: [ceph-users] full osd ssd cluster advise : replication 2x or 3x ?

2014-05-23 Thread Alexandre DERUMIER
) my main concern is to know if it's really needed to have 3x replication (mainly for cost). But I can wait for lower SSD prices next year, and go to 3x if necessary. - Mail original - De: Alexandre DERUMIER aderum...@odiso.com À: Christian Balzer ch...@gol.com Cc: ceph

Re: [ceph-users] collectd / graphite / grafana .. calamari?

2014-05-23 Thread Alexandre DERUMIER
https://github.com/rochaporto/collectd-ceph It has a set of collectd plugins pushing metrics which mostly map what the ceph commands return. In the setup we have it pushes them to graphite and the displays rely on grafana (check for a screenshot in the link above). Thanks for sharing

Re: [ceph-users] pgs incomplete; pgs stuck inactive; pgs stuck unclean

2014-05-23 Thread Alexandre DERUMIER
Hi, if you use Debian, try to use a recent kernel from backports (3.10); also check your libleveldb1 version, it should be 1.9.0-1~bpo70+1 (the Debian wheezy version is too old). I don't see it in the ceph repo: http://ceph.com/debian-firefly/pool/main/l/leveldb/ (only for squeeze, ~bpo60+1), but you
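On wheezy the backported package can be pulled in roughly like this (a sketch; assumes wheezy-backports is already listed in sources.list):

    apt-get -t wheezy-backports install libleveldb1
    dpkg -l libleveldb1    # verify the installed version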

Re: [ceph-users] pgs incomplete; pgs stuck inactive; pgs stuck unclean

2014-05-23 Thread Alexandre DERUMIER
jan.zel...@id.unibe.ch À: aderum...@odiso.com Cc: ceph-users@lists.ceph.com Envoyé: Vendredi 23 Mai 2014 13:36:04 Objet: AW: [ceph-users] pgs incomplete; pgs stuck inactive; pgs stuck unclean -Ursprüngliche Nachricht- Von: Alexandre DERUMIER [mailto:aderum...@odiso.com] Gesendet

Re: [ceph-users] ceph-deploy or manual?

2014-05-27 Thread Alexandre DERUMIER
Are people using automation tools like puppet or ansible? http://www.sebastien-han.fr/blog/2014/05/01/vagrant-up-install-ceph-in-one-command/ enjoy ;) - Mail original - De: Don Talton (dotalton) dotal...@cisco.com À: ceph-users@lists.ceph.com Envoyé: Mardi 27 Mai 2014 18:19:00

Re: [ceph-users] Ceph User Committee : welcome Eric Mourgaya

2014-05-29 Thread Alexandre DERUMIER
Congratulations Eric ! - Mail original - De: Loic Dachary l...@dachary.org À: ceph-users ceph-users@lists.ceph.com Envoyé: Jeudi 29 Mai 2014 12:28:56 Objet: [ceph-users] Ceph User Committee : welcome Eric Mourgaya Hi Ceph, Welcome Eric Mourgaya, head of the Ceph User Committee

Re: [ceph-users] ceph nodes operanting system suggested

2014-05-29 Thread Alexandre DERUMIER
Hi, I think you can check this wiki: http://ceph.com/docs/master/start/os-recommendations/ Currently, only Ubuntu 12.04 is deeply tested by Inktank (but I think it'll be rhel7 soon ;). The wiki hasn't been updated yet for firefly. I know that ceph enterprise users are using dumpling for

Re: [ceph-users] full osd ssd cluster advise : replication 2x or 3x ?

2014-05-29 Thread Alexandre DERUMIER
jagiello.luk...@gmail.com À: Alexandre DERUMIER aderum...@odiso.com Cc: ceph-users@lists.ceph.com Envoyé: Mercredi 28 Mai 2014 01:25:40 Objet: Re: [ceph-users] full osd ssd cluster advise : replication 2x or 3x ? Hi, I've got 16 nodes cluster ssd only. Each node is 6x600GB, 10Gbit uplink

Re: [ceph-users] Designing a cluster with ceph and benchmark (ceph vs ext4)

2014-06-01 Thread Alexandre DERUMIER
hi! See design here: http://adminlinux.com.br/cluster_design.txt # dpkg -l |grep ceph ii ceph 0.41-1ubuntu2.1 distributed storage ii ceph-common0.41-1ubuntu2.1 common utilities to mount and interact

[ceph-users] mellanox SX1012 ethernet|infiniband switch, somebody use it for ceph ?

2014-06-02 Thread Alexandre DERUMIER
it for ceph? If yes, what about stability/performance? With the coming RDMA support in ceph, it seems to be the perfect solution (and the price is very good). Regards, Alexandre Derumier ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com

Re: [ceph-users] mellanox SX1012 ethernet|infiniband switch, somebody use it for ceph ?

2014-06-02 Thread Alexandre DERUMIER
Of Alexandre DERUMIER Sent: Monday, June 2, 2014 12:30 PM To: ceph-users Subject: [ceph-users] mellanox SX1012 ethernet|infiniband switch, somebody use it for ceph ? Hi, I'm looking for a fast and cheap 10gbe ethernet switch. I just found this: Mellanox SX1012 http

Re: [ceph-users] mellanox SX1012 ethernet|infiniband switch, somebody use it for ceph ?

2014-06-02 Thread Alexandre DERUMIER
I just found this: http://www.mellanox.com/related-docs/whitepapers/WP_Deploying_Ceph_over_High_Performance_Networks.pdf Good to see that ceph is starting to be tested by hardware vendors :) The whitepaper includes rados bench and fio results - Mail original - De: Alexandre DERUMIER aderum

Re: [ceph-users] Ceph networks, to bond or not to bond?

2014-06-05 Thread Alexandre DERUMIER
Hi, My low-budget setup consists of two gigabit switches, capable of LACP, but not stackable. For redundancy, I'd like to have my links spread evenly over both switches. If you want to do LACP across both switches, they need to be stackable (or use active-backup bonding; see the sketch below). My question is where I
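A sketch of the active-backup fallback mentioned above, in Debian /etc/network/interfaces style (addresses and interface names are placeholders):

    auto bond0
    iface bond0 inet static
        address 192.168.0.10
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode active-backup
        bond-miimon 100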

Re: [ceph-users] Ceph networks, to bond or not to bond?

2014-06-05 Thread Alexandre DERUMIER
). No multipathing like iscsi for example. - Mail original - De: Sven Budde sven.bu...@itgration-gmbh.de À: Alexandre DERUMIER aderum...@odiso.com Cc: ceph-users@lists.ceph.com Envoyé: Jeudi 5 Juin 2014 18:27:32 Objet: AW: [ceph-users] Ceph networks, to bond or not to bond? Hi Alexandre

[ceph-users] french proxmox meetup

2014-06-05 Thread Alexandre DERUMIER
Hi everybody, we are going to hold our first French Proxmox meetup in Paris in September: http://www.meetup.com/Proxmox-VE-French-Meetup/ And of course, we'll talk about ceph integration in proxmox. So if you are interested, feel free to join us! Regards, Alexandre

Re: [ceph-users] french proxmox meetup

2014-06-06 Thread Alexandre DERUMIER
. Thanks for the info Loic. For the moment, I have a room in my company building in Paris (for 10-15 people), but good to know. I'm waiting for the next ceph meetup too :) - Mail original - De: Loic Dachary l...@dachary.org À: Alexandre DERUMIER aderum...@odiso.com, ceph-users ceph

Re: [ceph-users] Minimal io block in rbd

2014-06-10 Thread Alexandre DERUMIER
So for every 1 KB read, will rbd read 4MB from the hdd? And for a write? rados supports partial reads/writes. Note that with erasure coding, a write needs to fully rewrite the object (so 4MB). I think that with a key-value-store backend (like leveldb), reads/writes are full too. Some interesting notes here:

[ceph-users] tiering : hit_set_count hit_set_period memory usage ?

2014-06-11 Thread Alexandre DERUMIER
Hi, I'm reading tiering doc here http://ceph.com/docs/firefly/dev/cache-pool/ The hit_set_count and hit_set_period define how much time each HitSet should cover, and how many such HitSets to store. Binning accesses over time allows Ceph to independently determine whether an object was

Re: [ceph-users] tiering : hit_set_count hit_set_period memory usage ?

2014-06-11 Thread Alexandre DERUMIER
backup jobs running each week, reading all this cold data) - Mail original - De: Gregory Farnum g...@inktank.com À: Alexandre DERUMIER aderum...@odiso.com Cc: ceph-users ceph-users@lists.ceph.com Envoyé: Mercredi 11 Juin 2014 21:56:29 Objet: Re: [ceph-users] tiering

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-23 Thread Alexandre DERUMIER
Hi Greg, So the only way to improve performance would be to not use O_DIRECT (as this should bypass the rbd cache as well, right?). Yes, indeed O_DIRECT bypasses the cache. BTW, do you need to use mysql with O_DIRECT? The default innodb_flush_method is fdatasync, so it should work with the cache. (But you
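For illustration, the my.cnf knob being discussed (fdatasync is the stock InnoDB default; O_DIRECT is the variant that bypasses the caches):

    # /etc/mysql/my.cnf
    [mysqld]
    # default: innodb_flush_method = fdatasync  (works with the rbd cache)
    innodb_flush_method = O_DIRECT              # bypasses the page cache / rbd cache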

[ceph-users] ceph.com centos7 repository ?

2014-07-09 Thread Alexandre DERUMIER
Hi, I would like to know if a centos7 repository will be available soon, or can I use the current rhel7 one for the moment? http://ceph.com/rpm-firefly/rhel7/x86_64/ Cheers, Alexandre ___ ceph-users mailing list ceph-users@lists.ceph.com

[ceph-users] inktank-mellanox webinar access ?

2014-07-10 Thread Alexandre DERUMIER
Hi, sorry to spam the mailing list, but there is an Inktank-Mellanox webinar in 10 minutes, and I haven't received access even though I registered yesterday (same for my co-worker), and the webinar's Mellanox contact email (conta...@mellanox.com) does not exist. Maybe somebody from

Re: [ceph-users] inktank-mellanox webinar access ?

2014-07-10 Thread Alexandre DERUMIER
OK, sorry, we finally received the login, a bit late. Sorry again for spamming the mailing list - Mail original - De: Alexandre DERUMIER aderum...@odiso.com À: ceph-users ceph-us...@ceph.com Envoyé: Jeudi 10 Juillet 2014 16:55:22 Objet: [ceph-users] inktank-mellanox webinar access

Re: [ceph-users] Mixing CEPH versions on new ceph nodes...

2014-07-14 Thread Alexandre DERUMIER
Hi, But in reality (yum update or by using ceph-deploy install nodename) - the package manager does restart ALL ceph services on that node on its own... Debian packages don't restart ceph services on package update; maybe it's a bug in the rpm packaging? - Mail original - De:

Re: [ceph-users] running Firefly client (0.80.1) against older version (dumpling 0.67.10) cluster?

2014-08-14 Thread Alexandre DERUMIER
Same question here. I'm a contributor on proxmox, and we don't know if we can upgrade librbd safely for users with a dumpling cluster. Also, for ceph enterprise, does Inktank support dumpling enterprise + firefly librbd? - Mail original - De: Nigel Williams

Re: [ceph-users] Is it safe to enable rbd cache with qemu?

2014-08-22 Thread Alexandre DERUMIER
Hi, for RHEL5 I'm not sure barriers are supported; they are maybe not implemented in virtio devices, LVM, dm-raid and some filesystems, depending on the kernel version. Not sure what is backported in the rhel5 kernel. See http://monolight.cc/2011/06/barriers-caches-filesystems/ - Mail original -

Re: [ceph-users] Is it safe to enable rbd cache with qemu?

2014-08-23 Thread Alexandre DERUMIER
/3 - Mail original - De: Yufang yufang521...@gmail.com À: Alexandre DERUMIER aderum...@odiso.com Cc: ceph-users@lists.ceph.com Envoyé: Vendredi 22 Août 2014 18:05:32 Objet: Re: [ceph-users] Is it safe to enable rbd cache with qemu? Thanks, Alexandre. But what about Windows? Does

[ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Alexandre DERUMIER
bug,.) Does someone have experience with Supermicro, and can you give me advice on a good motherboard model? Best Regards, Alexandre Derumier ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Alexandre DERUMIER
applications. The journal is purely sequential (small seq blocks, IIRC Stephan mentioned 370k blocks). I will instead use an SSD with large sequential capabilities like the 525 series 120GB. OK, thanks! - Mail original - De: Sebastien Han sebastien@enovance.com À: Alexandre DERUMIER

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Alexandre DERUMIER
It seems that the S3700 has a supercapacitor too: http://www.thessdreview.com/our-reviews/s3700/ The S3700 has power loss protection to keep a sudden outage from corrupting data, but if the system detects a fault in the two capacitors powering the system, it will voluntarily disable the volatile cache

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Alexandre DERUMIER
didn't know that Supermicro has a 2U Twin with 2x12 disks; seems really interesting! - Mail original - De: Robert van Leeuwen robert.vanleeu...@spilgames.com À: Alexandre DERUMIER aderum...@odiso.com, ceph-users@lists.ceph.com Envoyé: Mercredi 15 Janvier 2014 14:36:32 Objet: RE: [ceph-users

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Alexandre DERUMIER
) On 01/15/2014 08:04 AM, Alexandre DERUMIER wrote: We are using Supermicro 2uTwin nodes. These have 2 nodes in 2u with each 12 disks. We use X9DRT-HF+ mainboards, 2x Intel DC S3500 SSD and 10x 2.5 1TB 7.2k HDD Seagate Constellation.2 They have SAS2008 controllers on board which can

Re: [ceph-users] Ceph / Dell hardware recommendation

2014-01-15 Thread Alexandre DERUMIER
Hi Derek, thanks for the information about the r720xd. It seems that a 24-drive chassis is also available. What is the advantage of using the flexbay for SSDs? Bypassing the backplane? - Mail original - De: Derek Yarnell de...@umiacs.umd.edu À: ceph-users@lists.ceph.com Envoyé: Mercredi 15 Janvier

Re: [ceph-users] Ceph / Dell hardware recommendation

2014-01-15 Thread Alexandre DERUMIER
recommendation Le 15/01/2014 17:34, Alexandre DERUMIER a écrit : Hi Derek, thanks for the information about r720xd. Seem that 24 drive chassis is also available. What is the advantage to use flexbay for ssd ? Bypass the back-plane ? From what I understand the flexbay are inside

Re: [ceph-users] One specific OSD process using much more CPU than all the others

2014-01-23 Thread Alexandre DERUMIER
Thanks. Do I need to rebuild the whole ceph packages with libleveldb-dev? Or can I simply backport libleveldb1 and use the ceph packages from the Inktank repository? - Mail original - De: Sylvain Munaut s.mun...@whatever-company.com À: Alexandre DERUMIER aderum...@odiso.com Cc: Mark

Re: [ceph-users] One specific OSD process using much more CPU than all the others

2014-01-23 Thread Alexandre DERUMIER
/Packages So, I think Inktank should add libleveldb1_1.9.0-1~bpo70+1_amd64.deb to the wheezy repo. - Mail original - De: Stefan Priebe s.pri...@profihost.ag À: Alexandre DERUMIER aderum...@odiso.com, Sylvain Munaut s.mun...@whatever-company.com Cc: ceph-users@lists.ceph.com Envoyé: Jeudi

Re: [ceph-users] Proxmox VE Ceph Server released (beta)

2014-01-24 Thread Alexandre DERUMIER
Any comment and feedback is welcome! I think it would be great to add some OSD statistics (IO/s, ...); I think it's possible through the ceph API. Also maybe an email alerting system when an OSD state changes (up/down/...) - Mail original - De: Martin Maurer mar...@proxmox.com À:

Re: [ceph-users] Meetup in Frankfurt, before the Ceph day

2014-02-05 Thread Alexandre DERUMIER
Hi Loic, do you know if a ceph meetup is planned soon in France or Belgium? I missed FOSDEM this year, and I'd be very happy to meet some ceph users/devs. Regards, Alexandre - Mail original - De: Loic Dachary l...@dachary.org À: ceph-users ceph-users@lists.ceph.com Envoyé:
