Hi,
the next Berlin Ceph meetup is scheduled for September 22.
http://www.meetup.com/Ceph-Berlin/events/198884162/
Our host Christian will present the Ceph cluster they use for education
at the Berlin College of Further Education for Information Technology
and Medical Equipment Technology.
Hi John,
On 02/09/2014 05:29, Jakes John wrote: Hi,
I have some general questions regarding the crush map. It would be helpful
if someone can help me out by clarifying them.
1. I saw that a bucket 'host' is always created for the crush maps which are
automatically generated by ceph.
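(For anyone who wants to look at those generated buckets directly, the compiled crush
map can be dumped and decompiled with the standard tools; the file names below are
just placeholders.)

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # crushmap.txt then shows one "host <hostname>" bucket per OSD node,
  # each listing the osd.N items that live on that host.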
Hi,
We have 4 NIC controllers on our ceph servers. Each server has a few OSDs
and one monitor installed. How should we set up networking on these hosts,
split into a frontend network (10.20.8.0/22) and a backend network
(10.20.4.0/22)?
At this time we are using this configuration of network:
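(The poster's actual config is cut off here. For what it's worth, the usual way to
express such a frontend/backend split in ceph.conf is roughly the following sketch,
reusing the subnets from the question above; bonding the 4 NICs into two pairs is a
separate, optional step.)

  [global]
  # client-facing / frontend traffic
  public network = 10.20.8.0/22
  # OSD replication / backend traffic
  cluster network = 10.20.4.0/22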
Hi Sebastien,
I got 6340 IOPS on a single OSD SSD. (journal and data on the same
partition).
Wouldn't it be better to have 2 partitions, one for the journal and one for data?
(I'm thinking about filesystem write syncs.)
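(If you do want to try a separate journal partition, ceph-deploy of that era could
prepare an OSD with the journal on another partition; host and device names below
are only examples, and I'm assuming partitions are accepted for both arguments.)

  # two partitions on the same SSD: sdb1 for data, sdb2 for the journal
  ceph-deploy osd prepare node1:/dev/sdb1:/dev/sdb2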
----- Original Message -----
From: Sebastien Han sebastien@enovance.com
To:
Hey,
Well I ran an fio job that simulates (more or less) what ceph is doing
(journal writes with dsync and o_direct) and the ssd gave me 29K IOPS too.
I could do this, but for me it definitely looks like a major waste since we
don’t even get a third of the ssd performance.
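(The exact job file isn't quoted in the thread; a minimal sketch in the same spirit,
small synchronous direct writes like the OSD journal does, would be something like
the line below. /dev/sdX is a placeholder and this writes to the raw device, so only
point it at a disk you can wipe.)

  fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=journal-test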
On 02 Sep 2014,
Hi Sebastien,
That sounds promising. Did you enable the sharded ops to get this result?
Cheers, Dan
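(For anyone following along: "sharded ops" refers to the sharded OSD op worker queue
that appeared around Giant, tuned with the osd_op_num_shards and
osd_op_num_threads_per_shard options. A sketch of the ceph.conf section; the values
are only illustrative, not a recommendation.)

  [osd]
  osd op num shards = 10
  osd op num threads per shard = 2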
On 02 Sep 2014, at 02:19, Sebastien Han sebastien@enovance.com wrote:
Mark and all, Ceph IOPS performance has definitely improved with Giant.
With this version: ceph version
On 02/09/14 19:38, Alexandre DERUMIER wrote:
Hi Sebastien,
I got 6340 IOPS on a single OSD SSD. (journal and data on the same partition).
Wouldn't it be better to have 2 partitions, one for the journal and one for data?
(I'm thinking about filesystem write syncs.)
Oddly enough, it does not seem
Hi Sebastien,
On 2 Sep 2014, at 10:41, Sebastien Han sebastien@enovance.com wrote:
Hey,
Well I ran an fio job that simulates (more or less) what ceph is doing
(journal writes with dsync and o_direct) and the ssd gave me 29K IOPS too.
I could do this, but for me it definitely
@Dan, oops, my bad, I forgot to use these settings; I'll try again and see how
much I can get on the read performance side.
@Mark, thanks again, and yes, I believe that due to some hardware variance we
have different results. I won't say that the deviation is negligible, but the results
are close enough to
Do you get the same results if you launch 2 fio benchmarks in parallel on 2
different rbd volumes?
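(One way to do that is fio's rbd ioengine, which was fairly new at the time; the pool,
image and client names below are placeholders and the two images must already exist.)

  fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg1 \
      --direct=1 --rw=randwrite --bs=4k --iodepth=32 --runtime=60 \
      --time_based --name=bench1 &
  fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg2 \
      --direct=1 --rw=randwrite --bs=4k --iodepth=32 --runtime=60 \
      --time_based --name=bench2 &
  wait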
----- Original Message -----
From: Sebastien Han sebastien@enovance.com
To: Cédric Lemarchand c.lemarch...@yipikai.org
Cc: Alexandre DERUMIER aderum...@odiso.com, ceph-users@lists.ceph.com
Sent:
FYI it is a known issue : http://tracker.ceph.com/issues/6109
On 01/09/2014 00:02, Loic Dachary wrote:
Hi Ceph,
In a mixed dumpling / emperor cluster, because osd 2 has been removed but is
still in
might_have_unfound: [
{ osd: 2,
status: osd
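(The fragment above is from "ceph pg <pgid> query" output and is cut off here. For
anyone hitting the same thing, the usual way to inspect it; the pgid is a placeholder.)

  ceph health detail      # lists the PGs that report unfound objects
  ceph pg 2.5 query       # "2.5" is a placeholder pgid; see the
                          # "might_have_unfound" section of the output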
We've chosen to use the gitbuilder site to make sure we get the same version
when we rebuild nodes, etc.
http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/
So our sources list looks like:
deb http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/v0.80.5 precise main
Warren
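(One detail worth adding: the gitbuilder packages are signed with the autobuild key
rather than the release key. If memory serves, the docs of that era suggested adding
it roughly like this; treat the exact URL as an assumption.)

  wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc' \
    | sudo apt-key add -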
Thanks Loic.
On Mon, Sep 1, 2014 at 11:31 PM, Loic Dachary l...@dachary.org wrote:
Hi John,
On 02/09/2014 05:29, Jakes John wrote: Hi,
I have some general questions regarding the crush map. It would be
helpful if someone can help me out by clarifying them.
1. I saw that a bucket
I'm going to install a small 3-node test SSD cluster next week;
I have some Intel S3500 and Crucial M550 drives.
I'll try to bench them with firefly and master.
Is a Debian wheezy gitbuilder repository available? (I'm a bit lazy to compile
all the packages.)
- Mail original -
De: Sebastien
It would be nice if you could post the results :)
Yup gitbuilder is available on debian 7.6 wheezy.
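(By analogy with the precise line quoted earlier in this digest, the wheezy sources
entry should look something like the following; the ref is whatever branch or tag you
want to track.)

  deb http://gitbuilder.ceph.com/ceph-deb-wheezy-x86_64-basic/ref/master wheezy main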
On 02 Sep 2014, at 17:55, Alexandre DERUMIER aderum...@odiso.com wrote:
I'm going to install a small 3-node test SSD cluster next week;
I have some Intel S3500 and Crucial M550 drives.
I'll try to
Hi Sebastien,
Something I didn't see in the thread so far, did you secure erase the SSDs
before they got used? I assume these were probably repurposed for this test. We
have seen some pretty significant garbage collection issues on various SSDs and
other forms of solid state storage, to the point
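(For reference, an ATA secure erase with hdparm goes roughly like this; the password
is throwaway, /dev/sdX is a placeholder, the drive must not be frozen, and everything
on it is destroyed.)

  hdparm --user-master u --security-set-pass p /dev/sdX
  hdparm --user-master u --security-erase p /dev/sdX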
On Sat, Aug 30, 2014 at 11:35 PM, Christian Balzer ch...@gol.com wrote:
Hello,
On Sat, 30 Aug 2014 20:24:00 -0400 J David wrote:
While adding some nodes to a ceph emperor cluster using ceph-deploy,
the new nodes somehow wound up with 0.80.1, which I think is a Firefly
release.
This was
On Tue, Sep 2, 2014 at 1:00 PM, Alfredo Deza alfredo.d...@inktank.com wrote:
correct, if you don't specify what release you want/need, ceph-deploy
will use the latest stable release (firefly as of this writing)
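(For reference, ceph-deploy can be told explicitly which release to install instead
of defaulting to the latest stable one; the hostnames below are placeholders.)

  ceph-deploy install --release emperor node1 node2 node3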
So, ceph-deploy set up emperor repositories in
/etc/apt/sources.list.d/ceph.list and
Hi,
You need to set a higher priority for the ceph repo; check the "ceph-deploy with
--release (--stable) for dumpling?" thread.
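(A typical way to give the ceph.com repository a higher priority than the distro
archive is an apt preferences file; a sketch, with the origin and priority adjusted
to taste.)

  # /etc/apt/preferences.d/ceph.pref
  Package: *
  Pin: origin ceph.com
  Pin-Priority: 1001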
On 02.09.2014 at 19:18, J David j.david.li...@gmail.com wrote:
On Tue, Sep 2, 2014 at 1:00 PM, Alfredo Deza alfredo.d...@inktank.com
wrote:
correct, if you don't specify
On Tue, Sep 2, 2014 at 2:50 PM, Konrad Gutkowski
konrad.gutkow...@ffs.pl wrote:
You need to set a higher priority for the ceph repo; check the "ceph-deploy with
--release (--stable) for dumpling?" thread.
Right, this is the same issue as that. It looks like the 0.80.1
packages are coming from Ubuntu; this
On Tue, Sep 2, 2014 at 3:47 PM, Alfredo Deza alfredo.d...@inktank.com wrote:
This is an actual issue, so I created:
http://tracker.ceph.com/issues/9319
and I should be fixing it soon.
Thank you!