Allegedly this SSD model (128G m550) can do 75K 4k random write IOPS
(running fio on the filesystem I've seen 70K IOPS, so that's reasonably
believable). So anyway, we are not getting anywhere near the max IOPS
from our devices.
Hi,
Just check this:
Hi Sebastien,
I got 6340 IOPS on a single OSD SSD. (journal and data on the same
partition).
Shouldn't it be better to have 2 partitions, 1 for the journal and 1 for the data?
(I'm thinking about filesystem write syncs.)
- Original Message -
From: Sebastien Han sebastien@enovance.com
To:
Do you get the same results if you launch 2 fio benchmarks in parallel on 2
different rbd volumes?
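For reference, a minimal sketch of what such a parallel test could look like; the volume names, device paths, and fio parameters are assumptions, not taken from this thread:

# map two test volumes, then run one 4k random-write job against each in parallel
rbd map testvol1    # exposes /dev/rbd0 (assumed)
rbd map testvol2    # exposes /dev/rbd1 (assumed)
fio --name=vol1 --filename=/dev/rbd0 --rw=randwrite --bs=4k --iodepth=32 \
    --direct=1 --runtime=60 --time_based &
fio --name=vol2 --filename=/dev/rbd1 --rw=randwrite --bs=4k --iodepth=32 \
    --direct=1 --runtime=60 --time_based &
wait    # let both benchmarks finish before comparing aggregate IOPS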
- Original Message -
From: Sebastien Han sebastien@enovance.com
To: Cédric Lemarchand c.lemarch...@yipikai.org
Cc: Alexandre DERUMIER aderum...@odiso.com, ceph-users@lists.ceph.com
Sent
Han sebastien@enovance.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-users@lists.ceph.com, Cédric Lemarchand c.lemarch...@yipikai.org
Sent: Tuesday, September 2, 2014 15:25:05
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3.2K
IOPS
Well the last time I ran
I was waiting for the schedule; the topics seem interesting.
I'm going to register now :)
BTW, are the talks in French or English? (I see Loic, Sebastien, and Yann
listed as speakers.)
- Original Message -
From: Patrick McGarry patr...@inktank.com
To: Ceph Devel ceph-de...@vger.kernel.org,
Is there a way to resize the OSD without bringing the cluster down?
What is the HEALTH state of your cluster?
If it's OK, simply replace the OSD disk with a bigger one.
- Original Message -
From: JIten Shah jshah2...@me.com
To: ceph-us...@ceph.com
Sent: Saturday, September 6, 2014
On 11/09/2014 08:20, Alexandre DERUMIER wrote:
Hi Sebastien,
here are my first results with the Crucial m550 (I'll send results with the Intel
s3500 later):
- 3 nodes
- Dell R620 without expander backplane
- SAS controller: LSI 9207 (no hardware RAID or cache)
- 2 x E5-2603v2 1.8GHz
=write --bs=4k --numjobs=2
--group_reporting --invalidate=0 --name=ab --sync=1
bw=177575KB/s, iops=44393
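The start of the fio command line is cut off above; a plausible full invocation using the flags that survived might look like this (the target filename, the --direct flag, and the exact --rw mode are assumptions):

fio --filename=/dev/sdb --direct=1 --rw=write --bs=4k --numjobs=2 \
    --group_reporting --invalidate=0 --name=ab --sync=1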
- Original Message -
From: Alexandre DERUMIER aderum...@odiso.com
To: Cedric Lemarchand ced...@yipikai.org
Cc: ceph-users@lists.ceph.com
Sent: Friday, September 12, 2014 04:55:21
rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await
w_await svctm %util
sdb 0,00 29,00 0,00 3075,00 0,00 36748,50 23,90 0,29 0,10 0,00 0,10 0,05 15,20
So the write bottleneck seems to be in Ceph.
I will send the s3500 results today.
- Original Message -
From: Alexandre DERUMIER
svctm %util
sdb 0,00 1563,00 0,00 9880,00 0,00 75223,50 15,23 2,09 0,21 0,00 0,21 0,07 80,00
- Original Message -
From: Alexandre DERUMIER aderum...@odiso.com
To: Cedric Lemarchand ced...@yipikai.org
Cc: ceph-users@lists.ceph.com
Sent: Friday, September 12
Hi,
as a Ceph user, it would be wonderful to have it for Giant;
the optracker performance impact is really huge (see my SSD benchmark on the
ceph-users mailing list).
Regards,
Alexandre Derumier
- Original Message -
From: Somnath Roy somnath@sandisk.com
To: Samuel Just sam.j...@inktank.com
Cc: Sage
Hi,
I would like to know which libleveldb should be used with Firefly.
I'm using Debian wheezy, which provides a really old libleveldb (I don't use it),
and wheezy-backports provides 1.17.
But in the Inktank repositories, I see that 1.9 is provided for some distributions.
So, what is the best/tested
be great if we could
share experience about Ceph and SSDs.
Alexandre.
- Original Message -
From: Sebastien Han sebastien@enovance.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, September 16, 2014 15:32:59
Subject: Re: [ceph-users] [Single OSD
-
From: Somnath Roy somnath@sandisk.com
To: Mark Kirkwood mark.kirkw...@catalyst.net.nz, Alexandre DERUMIER
aderum...@odiso.com, Sebastien Han sebastien@enovance.com
Cc: ceph-users@lists.ceph.com
Sent: Wednesday, September 17, 2014 03:22:05
Subject: RE: [ceph-users] [Single OSD
tomorrow to compare firefly and giant.
- Original Message -
From: Jian Zhang jian.zh...@intel.com
To: Sebastien Han sebastien@enovance.com, Alexandre DERUMIER
aderum...@odiso.com
Cc: ceph-users@lists.ceph.com
Sent: Thursday, September 18, 2014 08:12:32
Subject: RE: [ceph-users] [Single OSD
= 5
filestore_op_threads = 4
bw=62094KB/s, iops=15523
giant with same tuning
---
bw=247073KB/s, iops=61768 !
I think I could reach more, but my 2 gigabit links are saturated.
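Only the tail of the tuning block survives above; a hedged reconstruction of the kind of ceph.conf section being compared (the option name behind the bare '= 5' is an assumption):

[osd]
osd op threads = 5          # assumed; only '= 5' survives in the quote above
filestore op threads = 4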
- Original Message -
From: Alexandre DERUMIER aderum...@odiso.com
To: Jian Zhang jian.zh
-
From: Jian Zhang jian.zh...@intel.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-users@lists.ceph.com
Sent: Friday, September 19, 2014 10:21:38
Subject: RE: [ceph-users] [Single OSD performance on SSD] Can't go over 3.2K
IOPS
Thanks for this great information.
We are using
bound on the node (8 cores E5-2603 v2 @ 1.80GHz, 100% CPU
for 2 OSDs)
- Original Message -
From: Sebastien Han sebastien@enovance.com
To: Jian Zhang jian.zh...@intel.com
Cc: Alexandre DERUMIER aderum...@odiso.com, ceph-users@lists.ceph.com
Sent: Tuesday, September 23, 2014 17:41:38
Subject: Re
Sep 2014 20:49:21 +0200 (CEST) Alexandre DERUMIER wrote:
What about writes with Giant?
I'm around
- 4k iops (4k random) with 1osd (1 node - 1 osd)
- 8k iops (4k random) with 2 osd (1 node - 2 osd)
- 16K iops (4k random) with 4 osd (2 nodes - 2 osd by node)
- 22K iops (4k random
Hi,
any news about this blueprint?
https://wiki.ceph.com/Planning/Blueprints/Giant/rbd%3A_journaling
Regards,
Alexandre
- Original Message -
From: Sage Weil sw...@redhat.com
To: Patrick McGarry patr...@inktank.com
Cc: Ceph-User ceph-us...@ceph.com, ceph-de...@vger.kernel.org
Sent:
Hi Stéphane,
Inktank also provides support through Ceph Enterprise, and also early design help.
http://www.inktank.com/enterprise/
Someone told me: there is no need for professional support, just buy the
equipment, install it, and let it run Ceph.
I think it depends on whether you have the time or the human
Hi,
With 0.86, the following option, together with disabling debugging, can bring an
obvious improvement:
osd enable op tracker = false
I think this one has been optimized by Somnath
https://github.com/ceph/ceph/commit/184773d67aed7470d167c954e786ea57ab0ce74b
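A minimal sketch of the kind of ceph.conf settings being discussed; the debug lines are common examples of "disabling debugging" from that era, assumptions rather than quotes from this thread:

[osd]
osd enable op tracker = false   # skip per-op tracking overhead
debug osd = 0/0                 # silence OSD debug logging
debug filestore = 0/0
debug ms = 0/0                  # silence messenger debug logging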
- Original Message -
From: Mark Wu
Thank you for your quick response! Okay, I see; is there any preferred
clustered FS in this case? OCFS2, GFS?
Hi, I'm using OCFS2 on top of rbd in production; it works fine. (You need to
disable writeback/rbd_cache.)
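A minimal sketch of the client-side part of that, assuming "disable writeback/rbd_cache" refers to the librbd cache setting:

[client]
rbd cache = false    # no client-side writeback caching under the shared OCFS2 filesystem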
- Original Message -
From: Mihály Árva-Tóth
Hi,
I would like to know if a repository is available for RHEL7/CentOS7 with the
latest krbd module backported.
I know that such a module is available in the Ceph Enterprise repos, but is it
available for non-subscribers?
Regards,
Alexandre
Not that I know of. krbd *fixes* are getting backported to stable
kernels regularly though.
Thanks. (I was thinking more about new feature support, like the upcoming discard
support in 3.18, for example.)
- Original Message -
From: Ilya Dryomov ilya.dryo...@inktank.com
To: Alexandre DERUMIER
: Alexandre DERUMIER aderum...@odiso.com, ceph-users
ceph-users@lists.ceph.com
Sent: Monday, November 3, 2014 10:17:51
Subject: Re: [ceph-users] rhel7 krbd backported module repo ?
There's this one:
http://gitbuilder.ceph.com/kmod-rpm-rhel7beta-x86_64-basic/ref/rhel7/x86_64/
But that hasn't
Is RBD snapshotting what I'm looking for? Is this even possible?
Yes, you can use rbd snapshotting with export/import:
http://ceph.com/dev-notes/incremental-snapshots-with-rbd/
But you need to do it for each rbd volume.
Here is a script to do it:
http://www.rapide.nl/blog/item/ceph_-_rbd_replication
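A minimal sketch of the snapshot + incremental export/import cycle described in that post; pool, image, and snapshot names are placeholders:

rbd snap create rbd/myimage@backup1                   # point-in-time snapshot
rbd export-diff rbd/myimage@backup1 myimage.diff      # export changes up to the snapshot
rbd import-diff myimage.diff backup/myimage           # replay the diff onto the remote copy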
...@opdemand.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-users@lists.ceph.com
Sent: Wednesday, November 5, 2014 10:08:49
Subject: Re: [ceph-users] Full backup/restore of Ceph cluster?
Hi Alexandre,
Thanks for the link! Unless I'm misunderstanding, this is to replicate an RBD
volume from
Mellanox is also doing Ethernet now; see
http://www.mellanox.com/page/products_dyn?product_family=163mtag=sx1012
for example:
- 220nsec for 40GbE
- 280nsec for 10GbE
And I think it's also possible to do RoCE (RDMA over Ethernet) with Mellanox
ConnectX-3 adapters.
- Original Message -
From:
I don't have 10GbE yet, but here are my results with simple LACP on 2 gigabit
links with a Cisco 6500:
rtt min/avg/max/mdev = 0.179/0.202/0.221/0.019 ms
(Seems to be lower than your 10GbE Nexus.)
- Original Message -
From: Wido den Hollander w...@42on.com
To: ceph-users@lists.ceph.com
Sent:
Is this with an 8192-byte payload?
Oh, sorry, it was with 1500.
I'll try to send a report with 8192 tomorrow.
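For reference, a minimal sketch of that kind of comparison (the host name is a placeholder):

ping -c 10 -s 1500 storage-node    # payload size from the report above
ping -c 10 -s 8192 storage-node    # 8192-byte payload; needs jumbo frames to avoid fragmentation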
- Original Message -
From: Robert LeBlanc rob...@leblancnet.us
To: Alexandre DERUMIER aderum...@odiso.com
Cc: Wido den Hollander w...@42on.com, ceph-users@lists.ceph.com
Sent
Hi,
I haven't tested rbd readahead yet,
but maybe you are hitting a qemu limit (by default qemu can use only 1 thread/1
core to manage I/Os; check your qemu CPU usage).
Do you have some performance results? How many IOPS?
But I have had a 4x improvement in qemu-kvm, with virtio-scsi + num_queues +
Hi,
try to mount your filesystems with the errors=continue option.
From the mount(8) man page:
errors={continue|remount-ro|panic}
Define the behaviour when an error is encountered. (Either ignore errors
and just mark the filesystem erroneous and continue, or remount the
filesystem read-only, or
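The man-page excerpt is truncated above; in practice the option is used like this (device and mount point are placeholders):

mount -o errors=continue /dev/rbd0 /mnt/data
# or persistently in /etc/fstab:
# /dev/rbd0  /mnt/data  ext4  defaults,errors=continue  0  2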
I think if you enable TRIM support on your RBD, then run fstrim on your
filesystems inside the guest (assuming ext4 / XFS guest filesystem),
Ceph should reclaim the trimmed space.
Yes, it's working fine.
(You need to use virtio-scsi and enable the discard option.)
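A minimal sketch of the qemu side; image and drive names are placeholders, and discard=unmap is what passes TRIM through on virtio-scsi:

qemu-system-x86_64 ... \
  -device virtio-scsi-pci \
  -drive file=rbd:rbd/myimage,format=raw,if=none,id=drive0,discard=unmap,cache=none \
  -device scsi-hd,drive=drive0
# then, inside the guest:
fstrim -v /    # returns trimmed blocks to Ceph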
- Original Message -
From:
- Original Message -
From: Daniel Swarbrick daniel.swarbr...@profitbricks.com
To: ceph-users@lists.ceph.com
Sent: Monday, December 1, 2014 13:32:15
Subject: Re: [ceph-users] Fastest way to shrink/rewrite rbd image ?
On 01/12/14 10:22, Alexandre DERUMIER wrote:
Yes, it's working fine:
sequential 4k I/Os are aggregated, so bigger and fewer I/Os
are going to Ceph.
So performance should improve.
- Original Message -
From: duan xufeng duan.xuf...@zte.com.cn
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-users ceph-us...@ceph.com, si dawei si.da...@zte.com.cn
Sent
Alexandre Derumier
Systems and Storage Engineer
Phone: 03 20 68 90 88
Fax: 03 20 68 90 81
45 Bvd du Général Leclerc 59100 Roubaix
12 rue Marivaux 75002 Paris
MonSiteEstLent.com - Blog dedicated to web performance and handling traffic
peaks
From: Wido den Hollander w
with distributed
storage)
- Original Message -
From: aderumier aderum...@odiso.com
To: Wido den Hollander w...@42on.com
Cc: ceph-users ceph-users@lists.ceph.com
Sent: Tuesday, December 16, 2014 17:02:12
Subject: Re: [ceph-users] rbd snapshot slow restore
Alexandre Derumier
Systems and Storage Engineer
What do you mean by not playing well with D_SYNC?
Hi, check this blog:
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
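The test in that post boils down to writing with O_DIRECT and O_DSYNC; a sketch along those lines (the device path is a placeholder, and this overwrites data on the target):

dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync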
- Original Message -
From: Mikaël Cluseau mclus...@isi.nc
To: Bryson McCutcheon brysonmccutch...@gmail.com,
Hello,
I know that qemu live migration with disks with cache=writeback is not safe
with storage like NFS, iSCSI...
Is it also true with rbd?
If yes, is it possible to manually disable writeback online with QMP?
Best Regards,
Alexandre
but the impact of having no caching by qemu is easily on the order of two
orders of magnitude.
Beers,
Christian
Cheers
Alex
On 12/04/14 16:01, Alexandre DERUMIER wrote:
Hello,
I know that qemu live migration with disks with cache=writeback is
not safe with storage like NFS, iSCSI
, that is, cache.writeback is guest
visible and can therefore only be toggled by the guest)
Yes, that's what I have in mind: toggling cache.direct=on before migration,
then disabling it after the migration.
- Original Message -
From: Kevin Wolf kw...@redhat.com
To: Alexandre DERUMIER aderum
...@inktank.com
To: Alexandre DERUMIER aderum...@odiso.com, Kevin Wolf kw...@redhat.com
Cc: ceph-users@lists.ceph.com, qemu-devel qemu-de...@nongnu.org
Sent: Saturday, April 19, 2014 00:33:12
Subject: Re: [ceph-users] [Qemu-devel] qemu + rbd block driver with
cache=writeback, is live migration safe ?
On 04
: Kevin Wolf kw...@redhat.com
To: Josh Durgin josh.dur...@inktank.com
Cc: Alexandre DERUMIER aderum...@odiso.com, ceph-users@lists.ceph.com,
qemu-devel qemu-de...@nongnu.org
Sent: Tuesday, April 22, 2014 11:08:08
Subject: Re: [Qemu-devel] [ceph-users] qemu + rbd block driver with
cache=writeback
This is very good news, congratulations!
(Do you know if the Ceph Enterprise subscription price will remain the same?
I'm looking to take out support next year.)
- Original Message -
From: Sage Weil s...@inktank.com
To: ceph-de...@vger.kernel.org, ceph-us...@ceph.com
Sent: Wednesday, April 30
Do we need a journal when using this back-end?
No, there is no journal with the key-value store.
- Original Message -
From: Kenneth Waegeman kenneth.waege...@ugent.be
To: Sage Weil s...@inktank.com
Cc: ceph-us...@ceph.com
Sent: Wednesday, May 7, 2014 15:06:50
Subject: Re: [ceph-users] v0.80
Hi Christian,
Have you tried without RAID6, to have more OSDs?
(How many disks do you have behind the RAID6?)
Also, I know that direct I/Os can be quite slow with Ceph,
so maybe you can try without --direct=1
and also enable rbd_cache:
ceph.conf
[client]
rbd cache = true
- Original Message -
): 9.92578
Min bandwidth (MB/sec): 0
Average Latency: 0.0444653
Stddev Latency: 0.121887
Max latency: 2.80917
Min latency: 0.001958
---
So this is even worse, just about 1500 IOPS.
Regards,
Christian
-Greg
On Wednesday, May 7, 2014, Alexandre DERUMIER aderum...@odiso.com
wrote
)
- Original Message -
From: Christian Balzer ch...@gol.com
To: ceph-users@lists.ceph.com
Cc: Alexandre DERUMIER aderum...@odiso.com
Sent: Thursday, May 8, 2014 08:52:15
Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal and backing
devices
On Thu, 08 May 2014 08:41:54 +0200 (CEST
Hi, I observe the same behaviour on a test ceph cluster (upgrade from emperor
to firefly)
cluster 819ea8af-c5e2-4e92-81f5-4348e23ae9e8
health HEALTH_OK
monmap e3: 3 mons at ..., election epoch 12, quorum 0,1,2 0,1,2
osdmap e94: 12 osds: 12 up, 12 in
pgmap v19001: 592
I'll be there!
(Do you know if it'll be possible to buy some Ceph t-shirts?)
- Original Message -
From: Loic Dachary l...@dachary.org
To: ceph-users ceph-users@lists.ceph.com
Sent: Monday, May 12, 2014 16:38:31
Subject: [ceph-users] Ceph booth at http://www.solutionslinux.fr/
Hi Ceph,
on the IRC channel yesterday that this is a known problem
with Firefly which is due to be fixed with the release (possibly today?)
of 0.80.1.
Simon
On 12/05/14 14:53, Alexandre DERUMIER wrote:
Hi, I observe the same behaviour on a test ceph cluster (upgrade from emperor
to firefly
. That should be interesting to try at a
variety of block sizes. You could also try running RADOS bench and
smalliobench at a few different sizes.
-Greg
On Wednesday, May 7, 2014, Alexandre DERUMIER aderum...@odiso.com
wrote:
Hi Christian,
Have you tried without
storage and may benefit from SSD-backed OSDs, though it may also be limited on
high-performance SSDs.
Maybe Inktank could comment on the 4000 IOPS per OSD?
- Original Message -
From: Christian Balzer ch...@gol.com
To: ceph-users@lists.ceph.com
Cc: Alexandre DERUMIER aderum...@odiso.com
Sent
MonSiteEstLent.com - Blog dedicated to web performance and handling traffic
peaks
- Original Message -
From: Christian Balzer ch...@gol.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, May 13, 2014 14:38:57
Subject: Re: [ceph-users] Slow
in 2007...)
So it's missing some features like CRC32 and SSE4, for example, which can help
Ceph a lot.
(I'll try to do some OSD tuning (threads, ...) to see if I can improve
performance.)
- Original Message -
From: Christian Balzer ch...@gol.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc
on the client is at 100%;
CPU of the OSDs is around 70% of 1 core now.
So there seems to be a bottleneck somewhere on the client side.
(I remember some tests from Stefan Priebe on this mailing list, with a full SSD
cluster,
having almost the same results.)
- Original Message -
From: Alexandre DERUMIER aderum
direct writes can be
pretty slow;
that's why rbd_cache is recommended, to aggregate small writes into bigger ones.)
- Original Message -
From: Christian Balzer ch...@gol.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, May 13, 2014 18:31:18
Subject: Re
Thanks for the reply. I need an fio installation package for the Ubuntu platform,
and could you also send any links to examples and documentation?
Hi, you can build it yourself (rbd support was added recently):
http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
#apt-get install
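The apt-get line above is cut off; a hedged sketch of the build steps that post describes, where the exact dependency package names are assumptions:

apt-get install git build-essential librbd-dev librados-dev
git clone https://github.com/axboe/fio.git
cd fio
./configure    # should detect librbd and enable the rbd ioengine
make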
It was nice to meet you guys!
I'll try to come to the next meetup (or any Ceph workshop).
It would be fantastic to have some kind of full-day Ceph meetup in the future;
too many questions for this (too) short meetup ;)
See you soon,
Regards,
Alexandre
- Original Message -
From: Loic
Hi,
I'm looking to build a full SSD OSD cluster with this config:
6 nodes,
each node with 10 OSDs/SSD drives (dual 10gbit network), with 1 journal + data on
each OSD.
The SSD drives will be enterprise grade,
maybe Intel SC3500 800GB (a well-known SSD)
or the new Samsung SSD PM853T 960GB (don't have too much
: replication 2x or 3x ?
Hello,
On Thu, 22 May 2014 18:00:56 +0200 (CEST) Alexandre DERUMIER wrote:
Hi,
I'm looking to build a full SSD OSD cluster with this config:
What is your main goal for that cluster, high IOPS, high sequential writes
or reads?
Remember my Slow IOPS on RBD... thread.)
My main concern is to know whether it's really necessary to have 3x replication
(mainly for cost reasons).
But I can wait for lower SSD prices next year and go to 3x if necessary.
- Original Message -
From: Alexandre DERUMIER aderum...@odiso.com
To: Christian Balzer ch...@gol.com
Cc: ceph
https://github.com/rochaporto/collectd-ceph
It has a set of collectd plugins pushing metrics which mostly map what
the ceph commands return. In the setup we have it pushes them to
graphite and the displays rely on grafana (check for a screenshot in
the link above).
Thanks for sharing
Hi,
if you use Debian,
try to use a recent kernel from backports (3.10).
Also check your libleveldb1 version; it should be 1.9.0-1~bpo70+1 (the Debian
wheezy version is too old).
I don't see it in the Ceph repo:
http://ceph.com/debian-firefly/pool/main/l/leveldb/
(only for squeeze, ~bpo60+1)
but you
jan.zel...@id.unibe.ch
To: aderum...@odiso.com
Cc: ceph-users@lists.ceph.com
Sent: Friday, May 23, 2014 13:36:04
Subject: AW: [ceph-users] pgs incomplete; pgs stuck inactive; pgs stuck unclean
-Original Message-
From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
Sent
Are people using automation tools like puppet or ansible?
http://www.sebastien-han.fr/blog/2014/05/01/vagrant-up-install-ceph-in-one-command/
enjoy ;)
- Original Message -
From: Don Talton (dotalton) dotal...@cisco.com
To: ceph-users@lists.ceph.com
Sent: Tuesday, May 27, 2014 18:19:00
Congratulations Eric!
- Original Message -
From: Loic Dachary l...@dachary.org
To: ceph-users ceph-users@lists.ceph.com
Sent: Thursday, May 29, 2014 12:28:56
Subject: [ceph-users] Ceph User Committee : welcome Eric Mourgaya
Hi Ceph,
Welcome Eric Mourgaya, head of the Ceph User Committee
Hi,
I think you can check this wiki:
http://ceph.com/docs/master/start/os-recommendations/
Currently, only Ubuntu 12.04 is deeply tested by Inktank (but I think it'll
be RHEL7 soon ;)
The wiki hasn't been updated yet for Firefly.
I know that Ceph Enterprise users are using Dumpling for
jagiello.luk...@gmail.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-users@lists.ceph.com
Sent: Wednesday, May 28, 2014 01:25:40
Subject: Re: [ceph-users] full osd ssd cluster advise : replication 2x or 3x ?
Hi,
I've got a 16-node cluster, SSD only. Each node has 6x600GB drives and a 10Gbit uplink.
hi!
See design here: http://adminlinux.com.br/cluster_design.txt
# dpkg -l | grep ceph
ii ceph         0.41-1ubuntu2.1   distributed storage
ii ceph-common  0.41-1ubuntu2.1   common utilities to mount and interact
it for Ceph? If yes, what about stability/performance?
With the coming RDMA support in Ceph, it seems to be the perfect solution (and
the price is very good).
Regards,
Alexandre Derumier
Alexandre DERUMIER
Sent: Monday, June 2, 2014 12:30 PM
To: ceph-users
Subject: [ceph-users] mellanox SX1012 ethernet|infiniband switch,
somebody use it for ceph ?
Hi,
I'm looking for a fast and cheap 10GbE Ethernet switch.
I just found this:
Mellanox SX1012
http
I just found this:
http://www.mellanox.com/related-docs/whitepapers/WP_Deploying_Ceph_over_High_Performance_Networks.pdf
Good to see that Ceph is starting to be tested by hardware vendors :)
The whitepaper includes rados bench and fio results.
- Original Message -
From: Alexandre DERUMIER aderum
Hi,
My low-budget setup consists of two gigabit switches, capable of LACP,
but not stackable. For redundancy, I'd like to have my links spread
evenly over both switches.
If you want to do LACP across both switches, they need to be stackable
(or use active-backup bonding; see the sketch below).
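A minimal sketch of the active-backup alternative in Debian-style /etc/network/interfaces; interface names and addresses are placeholders:

auto bond0
iface bond0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode active-backup    # works across two independent, non-stacked switches
    bond-miimon 100            # link monitoring interval in ms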
My question where I
). No
multipathing like iSCSI, for example.
- Original Message -
From: Sven Budde sven.bu...@itgration-gmbh.de
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-users@lists.ceph.com
Sent: Thursday, June 5, 2014 18:27:32
Subject: AW: [ceph-users] Ceph networks, to bond or not to bond?
Hi Alexandre
Hi everybody,
we are going to hold our first French Proxmox meetup in Paris in September:
http://www.meetup.com/Proxmox-VE-French-Meetup/
And of course, we'll talk about Ceph integration in Proxmox.
So if you are interested, feel free to join us!
Regards,
Alexandre
Thanks for the info, Loic.
For the moment, I have a room in my company's building in Paris (for 10-15
people), but good to know.
I'm waiting for the next Ceph meetup too :)
- Original Message -
From: Loic Dachary l...@dachary.org
To: Alexandre DERUMIER aderum...@odiso.com, ceph-users
ceph
So for every 1 KB read, will rbd read 4MB from the HDD? And for writes?
RADOS supports partial reads and writes.
Note that with erasure coding, a write needs to fully rewrite the object (so 4MB).
I think that with a key-value store backend (like leveldb), reads/writes are full
too.
Some interesting notes here:
Hi,
I'm reading tiering doc here
http://ceph.com/docs/firefly/dev/cache-pool/
The hit_set_count and hit_set_period define how much time each HitSet should
cover, and how many such HitSets to store. Binning accesses over time allows
Ceph to independently determine whether an object was
backup jobs
running each week, reading all this cold data)
- Original Message -
From: Gregory Farnum g...@inktank.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-users ceph-users@lists.ceph.com
Sent: Wednesday, June 11, 2014 21:56:29
Subject: Re: [ceph-users] tiering
Hi Greg,
So the only way to improve performance would be to not use O_DIRECT (as this
should bypass rbd cache as well, right?).
Yes, indeed, O_DIRECT bypasses the cache.
BTW, do you need to use MySQL with O_DIRECT? The default innodb_flush_method is
fdatasync, so it should work with the cache.
(but you
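For illustration, the MySQL setting in question, as a my.cnf sketch:

[mysqld]
# fdatasync is the default flush method; O_DIRECT would bypass the rbd cache
innodb_flush_method = fdatasync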
Hi,
I would like to know if a CentOS7 repository will be available soon.
Or can I use the current RHEL7 one for the moment?
http://ceph.com/rpm-firefly/rhel7/x86_64/
Cheers,
Alexandre
Hi,
sorry to spam the mailing list,
but there is an Inktank-Mellanox webinar in 10 minutes,
and I haven't received access even though I registered yesterday (same
for my co-worker),
and the webinar's Mellanox contact email (conta...@mellanox.com) does not
exist.
Maybe somebody from
OK, sorry, we finally received the login, a bit late.
Sorry again for spamming the mailing list.
- Original Message -
From: Alexandre DERUMIER aderum...@odiso.com
To: ceph-users ceph-us...@ceph.com
Sent: Thursday, July 10, 2014 16:55:22
Subject: [ceph-users] inktank-mellanox webinar access
Hi,
But in reality (with yum update or by using ceph-deploy install nodename),
the package manager does restart ALL Ceph services on that node on its own...
Debian packages don't restart Ceph services on package update; maybe it's a bug
in the RPM packaging?
- Original Message -
From:
Same question here.
I'm a contributor to Proxmox, and we don't know if we can safely upgrade librbd
for users with Dumpling clusters.
Also, for Ceph Enterprise, does Inktank support Dumpling enterprise +
Firefly librbd?
- Original Message -
From: Nigel Williams
Hi,
for RHEL5, I'm not sure;
barrier support may not be implemented in virtio devices, LVM, DM RAID,
and some filesystems,
depending on the kernel version.
Not sure what is backported in the RHEL5 kernel.
See
http://monolight.cc/2011/06/barriers-caches-filesystems/
- Original Message -
/3
- Original Message -
From: Yufang yufang521...@gmail.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-users@lists.ceph.com
Sent: Friday, August 22, 2014 18:05:32
Subject: Re: [ceph-users] Is it safe to enable rbd cache with qemu?
Thanks, Alexandre. But what about Windows? Does
bug.)
Does someone have experience with Supermicro, and can you advise me on a good
motherboard model?
Best Regards,
Alexandre Derumier
applications. The journal is purely sequential (small
sequential blocks; IIRC Stephan mentioned 370k blocks).
I will instead use an SSD with good large-sequential-write capabilities, like the
525 series 120GB.
OK, thanks!
- Original Message -
From: Sebastien Han sebastien@enovance.com
To: Alexandre DERUMIER
It seems that the S3700 has a supercapacitor too:
http://www.thessdreview.com/our-reviews/s3700/
The S3700 has power loss protection to keep a sudden outage from corrupting
data, but if the system detects a fault in the two capacitors powering the
system, it will voluntarily disable the volatile cache
I didn't know that Supermicro had a 2U Twin with 2x12 disks; that seems really interesting!
- Original Message -
From: Robert van Leeuwen robert.vanleeu...@spilgames.com
To: Alexandre DERUMIER aderum...@odiso.com, ceph-users@lists.ceph.com
Sent: Wednesday, January 15, 2014 14:36:32
Subject: RE: [ceph-users
)
On 01/15/2014 08:04 AM, Alexandre DERUMIER wrote:
We are using Supermicro 2U Twin nodes.
These have 2 nodes in 2U, each with 12 disks.
We use X9DRT-HF+ mainboards, 2x Intel DC S3500 SSDs and 10x 2.5 1TB 7.2k
HDD Seagate Constellation.2.
They have SAS2008 controllers on board, which can
Hi Derek,
thanks for the information about the r720xd.
It seems that a 24-drive chassis is also available.
What is the advantage of using the flexbay for SSDs? Bypassing the backplane?
- Original Message -
From: Derek Yarnell de...@umiacs.umd.edu
To: ceph-users@lists.ceph.com
Sent: Wednesday, January 15
recommendation
On 15/01/2014 17:34, Alexandre DERUMIER wrote:
Hi Derek,
thanks for the information about the r720xd.
It seems that a 24-drive chassis is also available.
What is the advantage of using the flexbay for SSDs? Bypassing the backplane?
From what I understand the flexbay are inside
Thanks.
Do I need to rebuild the whole set of Ceph packages with libleveldb-dev?
Or can I simply backport libleveldb1 and use the Ceph packages from the Inktank
repository?
- Original Message -
From: Sylvain Munaut s.mun...@whatever-company.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: Mark
/Packages
So, I think Inktank should add libleveldb1_1.9.0-1~bpo70+1_amd64.deb to the wheezy
repo.
- Original Message -
From: Stefan Priebe s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com, Sylvain Munaut
s.mun...@whatever-company.com
Cc: ceph-users@lists.ceph.com
Sent: Thursday
Any comments and feedback are welcome!
I think it would be great to add some OSD statistics (IO/s, ...); I think it's
possible through the Ceph API.
Also maybe an email alerting system for when an OSD's state changes (up/down/...)
- Original Message -
From: Martin Maurer mar...@proxmox.com
To:
Hi Loic,
do you know if a Ceph meetup is planned soon in France or Belgium?
I missed FOSDEM this year, and I'd be very happy to meet some Ceph
users/devs.
Regards,
Alexandre
- Original Message -
From: Loic Dachary l...@dachary.org
To: ceph-users ceph-users@lists.ceph.com
Sent: