Hi,
What type of clients do you have?
- Are they physical Linux machines or VMs, mounting Ceph RBD or CephFS?
- Or are they simply OpenStack / cloud instances using Ceph as Cinder volumes,
or something like that?
- Karan -
On 28 Jul 2015, at 11:53, Shneur Zalman Mattern shz...@eimsys.co.il wrote:
We've built a Ceph cluster:
3 mon nodes (one of them combined with the mds)
3 osd nodes (each one has 10 OSDs + 2 SSDs for journaling)
switch: 24 ports x 10G
10 gigabit - public network
20 gigabit bond - between the OSDs
Ubuntu 12.04.5
Ceph 0.87.2
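For reference, a quick sketch of the line-rate ceilings this hardware implies (a simple unit conversion in Python; it ignores protocol overhead):

    # Line-rate ceilings of the networks described above.
    # Editorial sketch; raw line rates only, no protocol overhead.

    def gbit_to_mb_per_s(gbit: float) -> float:
        """Convert a line rate in Gbit/s to MB/s (1 MB = 10^6 bytes)."""
        return gbit * 1000 / 8

    public_net = gbit_to_mb_per_s(10)    # public network
    cluster_net = gbit_to_mb_per_s(20)   # bonded OSD network

    print(f"public:  {public_net:.0f} MB/s")   # ~1250 MB/s
    print(f"cluster: {cluster_net:.0f} MB/s")  # ~2500 MB/s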
From: Johannes Formann mlm...@formann.de
Sent: Tuesday, July 28, 2015 12:46 PM
To: Shneur Zalman Mattern
Subject: Re: [ceph-users] Did maximum performance reached?
Hi,
size=3 would decrease your performance. But with size=2 your results are not
bad either:
Math:
size=2 means each write is written 4 times (2 copies, first to the journal,
later to disk). Calculating with 1,300 MB/s of client bandwidth, that means:
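To make the size=2 arithmetic concrete, a small sketch using the same formula Johannes applies to size=3 below; the 1,300 MB/s client figure and the device counts (6 journal SSDs, 30 data disks) come from this thread:

    # Per-device write load for size=2: each client byte is written
    # 2*size times in total (journal + data disk, per replica).

    size = 2
    client_bw = 1300            # MB/s aggregate client writes (from thread)
    n_ssd, n_disk = 6, 30       # journal SSDs and data disks (from thread)

    per_ssd = size * client_bw / n_ssd    # journal writes per SSD
    per_disk = size * client_bw / n_disk  # data writes per disk

    print(f"per SSD:  {per_ssd:.0f} MB/s")   # ~433 MB/s
    print(f"per disk: {per_disk:.0f} MB/s")  # ~87 MB/s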
Hi, Karan!
They are physical CentOS clients mounting CephFS via the kernel module (kernel 4.1.3).
Thanks
Oh, now I have to cry :-)
Not because it's not SSDs... it's SAS2 HDDs.
Because I need to build something for 140 clients... 4200 OSDs
:-(
It looks like I can pick up performance with SSDs, but I need a huge capacity,
~2PB.
Perhaps a cache tiering pool can save my money, but I've read here that it is
replicated automatically.
OK, I'll check it.
Regards, Shneur
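An aside on the capacity side of that trade-off: with replicated pools, usable capacity is raw capacity divided by the pool's size parameter, which is where SSD capacity gets expensive. A minimal sketch:

    # Raw capacity needed for ~2 PB usable, per replica count.
    # Editorial sketch; the 2 PB figure comes from this thread.

    usable_pb = 2
    for size in (2, 3):
        raw_pb = usable_pb * size
        print(f"size={size}: ~{raw_pb} PB raw for {usable_pb} PB usable")
    # size=2: ~4 PB raw, size=3: ~6 PB raw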
From: Johannes Formann mlm...@formann.de
Sent: Tuesday, July 28, 2015 12:09 PM
To: Shneur Zalman Mattern
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Did maximum performance reached?
Hello,
What is the "size" parameter of your pool?
Some math to show the impact:
size=3 means each write is written 6 times (3 copies, first to the journal, later
to disk). Calculating with 1,300 MB/s of client bandwidth, that means:
3 (size) * 1300 MB/s / 6 (SSD) = 650 MB/s per SSD
3 (size) * 1300 MB/s / 30 (disks) = 130 MB/s per disk
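Inverting that formula gives the write ceiling the backend devices imply. In the sketch below, the per-device write speeds are assumptions for illustration, not measurements from this cluster; with them, the ceiling lands right at the ~1,300 MB/s reported in this thread, which would point at the journal SSDs as the bottleneck:

    # Aggregate client write ceiling implied by the backend devices:
    #   client_bw <= min(n_ssd * ssd_write, n_disk * disk_write) / size
    # Per-device speeds below are assumptions for illustration.

    size = 3
    n_ssd, ssd_write = 6, 650     # MB/s sustained journal writes (assumed)
    n_disk, disk_write = 30, 150  # MB/s sustained writes, SAS2 HDD (assumed)

    ceiling = min(n_ssd * ssd_write, n_disk * disk_write) / size
    print(f"~{ceiling:.0f} MB/s client write ceiling")  # ~1300 MB/s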
As I understand it now, in this case (30 disks) the 10Gbit network is not the
bottleneck!
With another HW config (+5 OSD nodes = +50 disks) I'd get 3400 MB/s,
and 3 clients could work at full bandwidth, yes?
OK, let's try!
Perhaps somebody has more suggestions for increasing performance?
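A quick sanity check of that estimate, assuming per-disk throughput stays constant so client bandwidth scales linearly with the disk count:

    # Linear scaling estimate: same per-disk throughput, more disks.
    # Current figures (30 disks, ~1300 MB/s) come from this thread.

    old_disks, old_bw = 30, 1300
    new_disks = old_disks + 5 * 10   # +5 OSD nodes of 10 disks each

    new_bw = old_bw * new_disks / old_disks
    print(f"~{new_bw:.0f} MB/s")     # ~3467 MB/s, close to the ~3400 MB/s estimate

Note that each client's own 10GbE link tops out near 1,250 MB/s, so three fully loaded clients would ask for roughly 3,750 MB/s in total.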
Hi,
On 28.07.2015 12:02, Shneur Zalman Mattern wrote:
Hi!
And so, by your math, I need to set size = the number of OSDs, i.e. 30 replicas,
for my cluster of 120TB - to get my demands?
30 replicas is the wrong math! Fewer replicas = more speed (because of
less writing).
More replicas, less speed.
For data ...
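To illustrate "more replicas, less speed" with the thread's own formula, a short sketch; the 3,900 MB/s backend figure is an illustrative assumption (6 journal SSDs at ~650 MB/s each):

    # Client bandwidth falls as 1/size for a fixed backend write capacity,
    # because the backend performs 2*size writes per client byte.

    backend = 3900  # MB/s total journal write capacity (assumed)

    for size in (2, 3, 30):
        client_bw = backend / size
        print(f"size={size:2d}: ~{client_bw:.0f} MB/s aggregate client writes")
    # size= 2: ~1950 MB/s
    # size= 3: ~1300 MB/s
    # size=30: ~130 MB/s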