Can I ask what xio and simple messenger are, and what the differences between them are?
Kind regards
Kevin Walker
+968 9765 1742
On 1 Mar 2015, at 18:38, Alexandre DERUMIER aderum...@odiso.com wrote:
Hi Mark,
I found a previous benchmark from Vu Pham (it was about simplemessenger vs
xiomessenger)
Again, ultimately you will need to sit down, compile and compare the
numbers.
Start with this:
http://ark.intel.com/products/family/83425/Data-Center-SSDs
Pay close attention to the 3610 SSDs; while slightly more expensive, they
offer 10 times the endurance.
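To put that endurance gap in numbers, here is a back-of-the-envelope TBW comparison. The DWPD values below are assumed class figures, not quoted specs; check the ark pages for the actual per-model ratings.

```python
# Rough endurance comparison. TBW = DWPD * capacity * warranty period.
# The DWPD values are assumptions, NOT official specs -- check the
# ark.intel.com pages above for the real ratings per model.
def tbw(dwpd, capacity_gb, years=5):
    """Terabytes written over the warranty period."""
    return dwpd * (capacity_gb / 1000.0) * 365 * years

s3500_tbw = tbw(0.3, 200)  # assumed ~0.3 drive-writes-per-day class
s3610_tbw = tbw(3.0, 200)  # assumed ~3 drive-writes-per-day class
print(s3500_tbw, s3610_tbw, s3610_tbw / s3500_tbw)
```

On those assumed figures the ratio comes out to the "10 times the endurance" mentioned above.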
Guesstimate the amount of data
Now, I've never set up a journal on a separate disk. I assume you'd have 4
partitions at 10GB per partition; I noticed the docs refer to 10GB as a good
starting point. Would it be better to have 4 partitions @ 10GB each, or
4 @ 20GB?
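For sizing, the rule of thumb in the Ceph docs is journal size >= 2 x (expected throughput x filestore max sync interval). A quick sketch, assuming a spinner sustaining ~100MB/s behind the journal and the default 5s sync interval:

```python
# Journal sizing rule of thumb from the Ceph docs:
#   journal size >= 2 * expected_throughput * filestore_max_sync_interval
# 100 MB/s is an assumed per-disk throughput; 5s is the filestore default.
def min_journal_mb(throughput_mb_s, sync_interval_s=5):
    return 2 * throughput_mb_s * sync_interval_s

print(min_journal_mb(100))  # 1000 MB -- so a 10GB journal is comfortable
```

On that math, 10GB per journal already carries roughly 10x headroom for a single spinner; 20GB only starts to matter if the backing disk or network can sustain much more.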
I know I'll take a speed hit, but unless I can get my work to buy
On 01/03/2015, at 06.03, Sudarshan Pathak sushan@gmail.com wrote:
The mail landed in Spam.
Here is the message from Google:
Why is this message in Spam? It has a from address in yahoo.com but has
failed yahoo.com's required tests for authentication. Learn more
Maybe Tony didn't send
Hi Christian,
I didn't create the partitions beforehand, which is probably the reason why we
had differing outcomes.
The actual partitions were fine; I could mount the root partition from a live
CD. However, when the server tried to boot it never loaded GRUB, it just sat
there with a blinking
Hi Mark,
I found a previous benchmark from Vu Pham (it was about simplemessenger vs
xiomessenger)
http://www.spinics.net/lists/ceph-devel/msg22414.html
and with 1 OSD, he was able to reach ~105k IOPS with simple messenger
(4K random read, 20 cores used, numjobs=8, iodepth=32)
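As a sanity check on those numbers, Little's law (outstanding IOs = IOPS x mean latency) gives the implied per-IO latency for that fio configuration:

```python
# Little's law: outstanding_ios = iops * mean_latency
# => mean_latency = (numjobs * iodepth) / iops
numjobs, iodepth, iops = 8, 32, 105_000
latency_ms = (numjobs * iodepth) / iops * 1000
print(round(latency_ms, 2))  # ~2.44 ms mean latency per 4K read
```

~2.4ms of mean latency at 256 outstanding IOs is mostly queueing, which is why the per-messenger overhead matters at this scale.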
On Sun, Mar 1, 2015 at 10:18 PM, Christian Balzer ch...@gol.com wrote:
On Sun, 1 Mar 2015 21:26:16 -0600 Tony Harris wrote:
On Sun, Mar 1, 2015 at 6:32 PM, Christian Balzer ch...@gol.com wrote:
Again, ultimately you will need to sit down, compile and compare the
numbers.
Can I ask what xio and simple messenger are and the differences?
simple messenger is the classic messenger protocol, used since the beginning of
Ceph.
xio messenger is for RDMA (InfiniBand, or RoCE over Ethernet).
There is also a new async messenger.
They should help to reduce latencies (and also
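For reference, messenger selection is a ceph.conf setting. A hypothetical fragment is below; the exact option name and which implementations are available depend on your build and release (async was still experimental around this time, and xio needs an RDMA-enabled build), so check the release notes before relying on it:

```ini
[global]
# Choose the messenger implementation. "simple" is the long-standing
# default; "async" was experimental at this point; "xio" requires an
# RDMA-enabled build. Option availability varies by release.
ms_type = async
```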
When I create a bucket, why does RGW create 2 objects in the domain root pool?
One object stores struct RGWBucketInfo and the other object stores struct
RGWBucketEntryPoint.
And when I delete the bucket, why does RGW delete only one object?
On Sun, Mar 1, 2015 at 6:32 PM, Christian Balzer ch...@gol.com wrote:
Again, ultimately you will need to sit down, compile and compare the
numbers.
Start with this:
http://ark.intel.com/products/family/83425/Data-Center-SSDs
Pay close attention to the 3610 SSDs, while slightly more
Hello
I've got several OSDs that are crashing. I migrated last week from Ubuntu 14.10
(don't know the Ceph version) to 14.04, and since then several OSDs have been
flapping. Here is a log file from one of the flapping OSDs. I would appreciate
any help in getting back to a stable Ceph cluster.
There are 3
I don't know if this email got delivered some days ago; it seems it may not have been.
Re-sending it again, hoping someone can add something :)
Thank you,
Gian
On 26 Feb 2015, at 19:20, ceph-users ceph-us...@pinguozzo.com wrote:
Hi All,
I've been provided with this hardware:
4x HP G8 servers
18x 1TB HDDs per server
Well, although I have 7 now per node, you make a good point, and I'm in a
position where I can either increase to 8 and split 4/4 and have 2 SSDs, or
reduce to 5 and use a single SSD per node (the system is not in production
yet).
Do all the DC lines have caps (power-loss capacitors) in them, or just the DC S line?
-Tony
I would not use a single SSD for 5 OSDs. I would recommend 3-4 OSDs max per
SSD, or you will hit a bottleneck on the SSD side.
I've had a reasonable experience with Intel 520 SSDs (which are not produced
anymore). I've found the Samsung 840 Pro to be horrible!
Otherwise, it seems that
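The bottleneck argument is easy to put in numbers; the throughput figures below are assumptions for a SATA SSD and a 7200rpm disk, not measurements:

```python
# Every OSD write hits its journal first, so one journal SSD's sequential
# write speed caps all the OSDs behind it. Figures below are assumptions.
ssd_seq_write_mb_s = 450   # assumed SATA journal SSD
osd_disk_write_mb_s = 120  # assumed 7200rpm data disk
osds_before_bottleneck = ssd_seq_write_mb_s // osd_disk_write_mb_s
print(osds_before_bottleneck)  # 3 -- consistent with the 3-4 OSDs/SSD advice
```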
OK, any size suggestion? Can I get a 120GB one and be OK? I see I can get the
DC S3500 120GB for around $120/drive, so it's possible to get 6 of them...
-Tony
On Sun, Mar 1, 2015 at 12:46 PM, Andrei Mikhailovsky and...@arhont.com
wrote:
I would not use a single ssd for 5 osds. I would recommend the 3-4
I am not sure about enterprise-grade drives and underprovisioning, but for the
Intel 520s I've got the 240GB ones (the speed of the 240GB is a bit better than
the 120GB), and I've left 50% underprovisioned. I've got 10GB per journal and
I am using 4 OSDs per SSD.
Andrei
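The capacity math for that layout, using the sizes stated above:

```python
# Sizes as stated in the thread: a 240GB drive, 50% underprovisioned,
# 4 OSD journals of 10GB each on the remaining space.
capacity_gb = 240
provisioned_gb = capacity_gb * 0.5   # 50% left untouched for wear leveling
journal_gb = 4 * 10
spare_gb = provisioned_gb - journal_gb
print(provisioned_gb, spare_gb)  # 120.0 80.0 -> plenty of slack after journals
```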