Re: [ceph-users] CEPH RBD and OpenStack

2015-02-08 Thread Georgios Dimitrakakis
By the way, on the link that John sent I believe there is a typo. At the very beginning of the Open Required Ports section the port range says 6800:7810, whereas below it is mentioned as 6800:7100. I think the former is a typo based on previous documentation where the ports were declared to

Re: [ceph-users] Introducing Learning Ceph : The First ever Book on Ceph

2015-02-08 Thread Vickey Singh
Amazing piece of work Karan, this was something that has been missing for a long time; thanks for filling the gap. I got my book today and just finished reading a couple of pages, an excellent introduction to Ceph. Thanks again, it's worth purchasing this book. Best Regards Vicky On Fri, Feb 6, 2015

Re: [ceph-users] CEPH RBD and OpenStack

2015-02-08 Thread John Spray
Thanks for spotting the inconsistency. The 7100 number is out of date; the upper port bound has been 7300 for some time. The 7810 number does indeed look like a simple typo. The 6810 number is an example rather than the upper bound -- the body of the text explains that it is up to the
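
For reference, a minimal sketch of opening the corrected range with iptables on an OSD host, assuming the 6800:7300 bound discussed above (the monitor port is shown for completeness; adjust chains and interfaces to your environment):

    iptables -A INPUT -p tcp --dport 6789 -j ACCEPT        # monitor
    iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT   # OSD/MDS daemon range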

[ceph-users] Mount CEPH RBD devices into OpenSVC service

2015-02-08 Thread Florent MONTHEL
Hi List, First tutorial to map/unmap RBD devices into an OpenSVC service: http://www.flox-arts.net/article29/monter-un-disque-ceph-dans-service-opensvc-step-1 Sorry, it's in French. Next step: Christophe Varoqui has just integrated CEPH into the core OpenSVC code with snapshot/clone management; I will
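
For readers who cannot follow the French tutorial, the underlying map/unmap steps look roughly like this (pool, image, and mount point are placeholders, not taken from the article):

    rbd map rbd/myimage --id admin     # exposes the image as /dev/rbd0
    mount /dev/rbd0 /srv/myimage
    umount /srv/myimage
    rbd unmap /dev/rbd0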

[ceph-users] ceph Performance vs PG counts

2015-02-08 Thread Sumit Gaur
Hi, I have installed a 6-node ceph cluster and am running a performance benchmark on it using Nova VMs. What I have observed is that FIO random write reports around 250 MBps for 1M block size with 4096 PGs and *650 MBps for 1M block size with 2048 PGs*. Can somebody let me know if I am missing
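
A hedged reconstruction of the kind of fio job being described (only the 1M block size and random-write pattern come from the message; every other parameter here is an assumption):

    fio --name=randwrite --rw=randwrite --bs=1M --ioengine=libaio \
        --direct=1 --size=4G --numjobs=4 --runtime=60 --group_reporting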

[ceph-users] crush tunables : optimal : upgrade from firefly to hammer behaviour ?

2015-02-08 Thread Alexandre DERUMIER
Hi, I currently use the crush tunables optimal value. If I upgrade from firefly to hammer, will the optimal value be upgraded to the optimal values for hammer? And do my clients (qemu-librbd) then also need to be upgraded to hammer to support the new hammer features? If yes, I am thinking of: - change

Re: [ceph-users] crush tunables : optimal : upgrade from firefly to hammer behaviour ?

2015-02-08 Thread Sage Weil
On Mon, 9 Feb 2015, Alexandre DERUMIER wrote: Hi, I currently use the crush tunables optimal value. If I upgrade from firefly to hammer, will the optimal value be upgraded to the optimal values for hammer? The tunables won't change on upgrade, and optimal on firefly != optimal on hammer. In
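
In other words, adopting the new optimal profile stays an explicit, client-impacting step rather than something the upgrade does for you. A sketch of inspecting and opting in (the second command triggers data movement, so run it only when ready):

    ceph osd crush show-tunables      # view the profile currently in effect
    ceph osd crush tunables optimal   # adopt the running release's optimal values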

Re: [ceph-users] crush tunables : optimal : upgrade from firefly to hammer behaviour ?

2015-02-08 Thread Alexandre DERUMIER
Ah ok, great! I was just a bit worried about the upgrade. Thanks for your response, Sage! - Original Message - From: Sage Weil s...@newdream.net To: aderumier aderum...@odiso.com Cc: ceph-users ceph-users@lists.ceph.com Sent: Monday, 9 February 2015 07:11:46 Subject: Re: [ceph-users] crush tunables

[ceph-users] ct_target_max_mem_mb 1000000

2015-02-08 Thread Aquino, Ben O
Hello ceph team, Can anyone provide details or confirm: is ct_target_max_mem_mb the cache-tier pool's maximum memory in MB, i.e. the maximum memory the cache pool can use? Additional details would be appreciated. Regards; _benaquino
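
For comparison, the cache-tier sizing knobs in Ceph itself are set per pool as shown below; whether ct_target_max_mem_mb maps onto these is an assumption, not something confirmed in this thread (pool name is a placeholder):

    ceph osd pool set hot-pool target_max_bytes 1099511627776   # ~1 TiB cap on the cache pool
    ceph osd pool set hot-pool target_max_objects 1000000       # object-count cap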

Re: [ceph-users] ceph Performance vs PG counts

2015-02-08 Thread Gregory Farnum
On Sun, Feb 8, 2015 at 6:00 PM, Sumit Gaur sumitkg...@gmail.com wrote: Hi, I have installed a 6-node ceph cluster and am running a performance benchmark on it using Nova VMs. What I have observed is that FIO random write reports around 250 MBps for 1M block size with 4096 PGs and 650 MBps for 1M

[ceph-users] Applied crush rules to pool but not working.

2015-02-08 Thread Vickie ch
Dear cephers: My cluster (0.87) hit an odd incident. It happened when I marked the default crush rule replicated_ruleset and set a new rule called new_rule1. The content of new_rule1 is just like replicated_ruleset; the only difference is the ruleset number. After applying the new map into crush and then using
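
A sketch of the rule-assignment step being described, using the pre-Luminous crush_ruleset syntax that applies to a 0.87 cluster (pool name and ruleset number are placeholders):

    ceph osd crush rule ls                  # confirm new_rule1 exists
    ceph osd pool set rbd crush_ruleset 1   # point the pool at the new ruleset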

Re: [ceph-users] Ceph Supermicro hardware recommendation

2015-02-08 Thread Scott Laird
Does anyone have a good recommendation for per-OSD memory for EC? My EC test blew up in my face when my OSDs suddenly spiked to 10+ GB per OSD process as soon as any reconstruction was needed, which (of course) caused OSDs to OOM, which meant more reconstruction, which fairly immediately led to
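
One way to watch per-OSD heap usage while reconstruction runs, assuming the OSDs are built against tcmalloc (the osd id is a placeholder):

    ceph tell osd.0 heap stats   # dump tcmalloc heap statistics for one OSD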

[ceph-users] requests are blocked 32 sec woes

2015-02-08 Thread Matthew Monaco
Hello! *** Shameless plug: Sage, I'm working with Dirk Grunwald on this cluster; I believe some of the members of your thesis committee were students of his =) We have a modest cluster at CU Boulder and are frequently plagued by "requests are blocked" issues. I'd greatly appreciate any insight or
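
A first-pass triage for blocked requests on a cluster of this era (the osd id is a placeholder; the second command runs via the admin socket on the OSD's host):

    ceph health detail                     # names the OSDs with blocked requests
    ceph daemon osd.3 dump_ops_in_flight   # inspect the stuck ops on that OSD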