I accidentally replied to the wrong thread; this was meant for a different one.


________________________________
From: Shinobu Kinjo [[email protected]]
Sent: Thursday, December 29, 2016 4:03 PM
To: David Turner
Cc: Bryan Henderson; [email protected]
Subject: Re: [ceph-users] ceph program uses lots of memory

We may also be interested in your cluster's configuration:

 # ceph --show-config > $(hostname).$(date +%Y%m%d).ceph_conf.txt

On Fri, Dec 30, 2016 at 7:48 AM, David Turner <[email protected]> wrote:

Another thing I need to confirm is that the number of PGs in the pool with 90% of the data is a power of 2 (256, 512, 1024, 2048, etc.).  If that is the case, then I need the following information (a sketch for collecting it all follows the list).

1) Pool replica size
2) The number of the pool with the data
3) A copy of your osdmap (ceph osd getmap -o osd_map.bin)
4) Full output of (ceph osd tree)
5) Full output of (ceph osd df)
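
Something like this would collect all five items (a sketch, assuming a root shell; <pool> is a placeholder for the name of the pool holding your data):

 # ceph osd pool get <pool> size           # 1) pool replica size
 # ceph osd pool get <pool> pg_num         # confirm pg_num is a power of 2
 # ceph osd lspools                        # 2) the number of the pool
 # ceph osd getmap -o osd_map.bin          # 3) binary osdmap
 # ceph osd tree > osd_tree.txt            # 4) full osd tree
 # ceph osd df > osd_df.txt                # 5) per-OSD utilization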

With that I can generate a new crushmap that is balanced for your cluster, equalizing the % used across all of the OSDs.
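
For reference, the generic round trip for installing an edited crushmap (not necessarily the exact tooling I use to compute the weights) looks like this:

 # ceph osd getcrushmap -o crushmap.bin       # export the current map
 # crushtool -d crushmap.bin -o crushmap.txt  # decompile to editable text
   (adjust the item weights in crushmap.txt)
 # crushtool -c crushmap.txt -o crushmap.new  # recompile
 # ceph osd setcrushmap -i crushmap.new       # inject the rebalanced map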

Our clusters have more than 1k OSDs, and the difference between the most used and least used OSD in those clusters is within 2%.  We have 99.9% of our data in 1 pool.


________________________________________
From: ceph-users [[email protected]] on behalf of Bryan Henderson 
[[email protected]]
Sent: Thursday, December 29, 2016 3:31 PM
To: [email protected]<mailto:[email protected]>
Subject: [ceph-users] ceph program uses lots of memory


Does anyone know why the 'ceph' program uses so much memory?  If I run it with
an address space rlimit of less than 300M, it usually dies with messages about
not being able to allocate memory.

I'm curious as to what it could be doing that requires so much address space.

It doesn't matter what specific command I run, and it does this even when
there is no ceph cluster running, so it must be something pretty basic.
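
To illustrate, this is roughly the kind of test I mean (a bash sketch; ulimit -v
takes KiB, so 307200 is about 300M):

 $ (ulimit -v 307200; ceph --version)    # cap the address space at ~300M

Anywhere much below that cap, the command usually dies complaining it cannot
allocate memory.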

--
Bryan Henderson                                   San Jose, California
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
