Re: [ceph-users] Assistance in deploying Ceph cluster

2014-07-01 Thread Luc Dumaine
Hello, From the output we can also see that the server fails to install wget. On each of your servers you have to set the proxy environment variables: https_proxy, http_proxy, etc. For RedHat/CentOS you can do it globally in a file in /etc/profile.d. For Ubuntu / Debian you have to define
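
A minimal sketch of such a proxy setup, assuming a hypothetical proxy at proxy.example.com:3128 (adjust host and port to your environment):

    # RHEL/CentOS: place these exports in e.g. /etc/profile.d/proxy.sh
    # Debian/Ubuntu: /etc/environment (or the apt proxy configuration) is the usual place
    export http_proxy=http://proxy.example.com:3128
    export https_proxy=http://proxy.example.com:3128
    export no_proxy=localhost,127.0.0.1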

Re: [ceph-users] external monitoring tools for ceph

2014-07-01 Thread Georgios Dimitrakakis
Hi Craig, I am also interested in the Zabbix templates and scripts if you can publish them. Regards, G. On Mon, 30 Jun 2014 18:15:12 -0700, Craig Lewis wrote: You should check out Calamari (https://github.com/ceph/calamari [3]), Inktank's monitoring and administration tool. I started

Re: [ceph-users] Some OSD and MDS crash

2014-07-01 Thread Pierre BLONDEAU
Hi, I attach: - osd.20 is one of the OSDs that I detected as making other OSDs crash. - osd.23 is one of the OSDs that crashes when I start osd.20. - mds is one of my MDS daemons. I cut the log files because they are too big. Everything is here: https://blondeau.users.greyc.fr/cephlog/ Regards On 30/06/2014 17:35,

Re: [ceph-users] external monitoring tools for ceph

2014-07-01 Thread Pierre BLONDEAU
Hi, Maybe you can use this: https://github.com/thelan/ceph-zabbix, but I am interested in seeing Craig's script and template. Regards On 01/07/2014 10:16, Georgios Dimitrakakis wrote: Hi Craig, I am also interested in the Zabbix templates and scripts if you can publish them. Regards,

[ceph-users] iscsi and cache pool

2014-07-01 Thread Никитенко Виталий
Good day! I have a server with Ubuntu 14.04 and Ceph Firefly installed. I configured main_pool (2 OSDs) and ssd_pool (1 SSD OSD). I want to use ssd_pool as a cache pool for main_pool: ceph osd tier add main_pool ssd_pool ceph osd tier cache-mode ssd_pool writeback ceph osd tier set-overlay
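
The full command sequence being quoted above would look roughly like this; a sketch based on the Firefly cache-tiering commands, with the pool names from the original message and illustrative hit_set values:

    ceph osd tier add main_pool ssd_pool
    ceph osd tier cache-mode ssd_pool writeback
    ceph osd tier set-overlay main_pool ssd_pool
    # a writeback cache pool also needs a hit set to track object usage (values are examples)
    ceph osd pool set ssd_pool hit_set_type bloom
    ceph osd pool set ssd_pool hit_set_count 1
    ceph osd pool set ssd_pool hit_set_period 3600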

Re: [ceph-users] external monitoring tools for ceph

2014-07-01 Thread Christian Eichelmann
Hi all, if it should be Nagios/Icinga rather than Zabbix, there is a remote check of mine that can be found here: https://github.com/Crapworks/check_ceph_dash This one uses ceph-dash to monitor the overall cluster status via HTTP: https://github.com/Crapworks/ceph-dash But it can be easily

[ceph-users] Re: Ask a performance question for the RGW

2014-07-01 Thread baijia...@126.com
I know FileStore.ondisk_finisher handles C_OSD_OpCommit, and going from journaled_completion_queue to op_commit costs 3.6 seconds; the cost may be in the function ReplicatedPG::op_commit. Through OpTracker, I find that ReplicatedPG::op_commit first locks the PG, but that sometimes costs from 0.5 to 1 second
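
To see where individual ops spend their time, the OSD admin socket can dump in-flight and recently completed slow ops; a sketch, assuming the default socket path:

    # ops currently tracked by OpTracker on osd.0
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight
    # slowest recently completed ops, with per-stage timestamps
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops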

[ceph-users] Replacing an OSD

2014-07-01 Thread Sylvain Munaut
Hi, As an exercise, I killed an OSD today: just killed the process and removed its data directory. To recreate it, I recreated an empty data dir, then ran ceph-osd -c /etc/ceph/ceph.conf -i 3 --monmap /tmp/monmap --mkfs (I tried with and without giving the monmap). I then restored the keyring
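
For comparison, a rough sketch of the usual manual steps for re-creating an OSD that keeps its id (here osd.3; paths assume the default layout and cephx enabled, and the CRUSH step is only needed if the entry was removed):

    mkdir -p /var/lib/ceph/osd/ceph-3
    ceph-osd -c /etc/ceph/ceph.conf -i 3 --mkfs --mkkey
    # register the freshly generated key for osd.3 (remove the old key first if one is still registered)
    ceph auth add osd.3 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-3/keyring
    # re-add to CRUSH if necessary (hostname is a placeholder), then start the daemon
    ceph osd crush add osd.3 1.0 host=myhost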

[ceph-users] ceph data replication not even on every osds

2014-07-01 Thread Cao, Buddy
Hi, I set the same weight for all the hosts and the same weight for all the OSDs under the hosts in the crushmap, and set the pool replica size to 3. However, after uploading 1M/4M/400M/900M files to the pool, I found the data distribution is not even across the OSDs and the utilization of the OSDs is not the
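
A few commands that help to see, and cautiously correct, an uneven distribution; a sketch:

    # weights and hierarchy
    ceph osd tree
    # per-OSD space usage and PG counts
    ceph pg dump osds
    # nudge the reweight value of OSDs more than 10% above average utilisation
    ceph osd reweight-by-utilization 110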

Re: [ceph-users] Replacing an OSD

2014-07-01 Thread Loic Dachary
Hi, On 01/07/2014 17:48, Sylvain Munaut wrote: Hi, As an exercise, I killed an OSD today, just killed the process and removed its data directory. To recreate it, I recreated an empty data dir, then ceph-osd -c /etc/ceph/ceph.conf -i 3 --monmap /tmp/monmap --mkfs (I tried with and

Re: [ceph-users] iscsi and cache pool

2014-07-01 Thread Gregory Farnum
It looks like you're using a kernel RBD mount in the second case? I imagine your kernel doesn't support caching pools and you'd need to upgrade for it to work. -Greg On Tuesday, July 1, 2014, Никитенко Виталий v1...@yandex.ru wrote: Good day! I have a server with Ubuntu 14.04 and installed Ceph

Re: [ceph-users] Replacing an OSD

2014-07-01 Thread Sylvain Munaut
Hi, And then I start the process, and it starts fine. http://pastebin.com/TPzNth6P I even see one active TCP connection to a mon from that process. But the OSD never becomes up or does anything ... I suppose there are error messages in logs somewhere regarding the fact that monitors and

Re: [ceph-users] Replacing an OSD

2014-07-01 Thread Loic Dachary
On 01/07/2014 18:21, Sylvain Munaut wrote: Hi, And then I start the process, and it starts fine. http://pastebin.com/TPzNth6P I even see one active TCP connection to a mon from that process. But the OSD never becomes up or does anything ... I suppose there are error messages in logs
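
The usual places to look for that are the OSD's own log and the cluster's view of the daemon; a sketch, assuming default log and socket paths:

    # authentication, clock-skew or version-mismatch errors end up here
    tail -n 100 /var/log/ceph/ceph-osd.3.log
    # what the monitors currently think about osd.3
    ceph osd dump | grep '^osd.3 '
    # ask the daemon itself via its admin socket
    ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok status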

[ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
I'm pulling my hair out with Ceph. I am testing things with a 5-server cluster. I have 3 monitors, and two storage machines each with 4 OSDs. I have started from scratch 4 times now, and can't seem to figure out how to get a clean status. Ceph health reports: HEALTH_WARN 34 pgs degraded; 192

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Gregory Farnum
What's the output of ceph osd map? Your CRUSH map probably isn't trying to segregate properly, with 2 hosts and 4 OSDs each. Software Engineer #42 @ http://inktank.com | http://ceph.com On Tue, Jul 1, 2014 at 11:22 AM, Brian Lovett brian.lov...@prosperent.com wrote: I'm pulling my hair out
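
For anyone following along, the inspection commands being discussed are roughly these (the object name is a placeholder; rbd is simply that era's default pool):

    ceph osd tree                     # hierarchy, weights, up/down state
    ceph osd map rbd some_object      # where an object in pool "rbd" would be placed
    ceph osd dump | grep pool         # pool sizes and crush rulesets in use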

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Brian Lovett brian.lovett@... writes: I restarted all of the OSDs and noticed that ceph shows 2 OSDs up even if the servers are completely powered down: osdmap e95: 8 osds: 2 up, 8 in Why would that be?

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum greg@... writes: What's the output of ceph osd map? Your CRUSH map probably isn't trying to segregate properly, with 2 hosts and 4 OSDs each. Software Engineer #42 at http://inktank.com | http://ceph.com Is this what you are looking for? ceph osd map rbd ceph osdmap

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Gregory Farnum
On Tue, Jul 1, 2014 at 11:33 AM, Brian Lovett brian.lov...@prosperent.com wrote: Brian Lovett brian.lovett@... writes: I restarted all of the osd's and noticed that ceph shows 2 osd's up even if the servers are completely powered down: osdmap e95: 8 osds: 2 up, 8 in Why would that be? The

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Gregory Farnum
On Tue, Jul 1, 2014 at 11:45 AM, Gregory Farnum g...@inktank.com wrote: On Tue, Jul 1, 2014 at 11:33 AM, Brian Lovett brian.lov...@prosperent.com wrote: Brian Lovett brian.lovett@... writes: I restarted all of the osd's and noticed that ceph shows 2 osd's up even if the servers are

Re: [ceph-users] How to improve performance of ceph objcect storage cluster

2014-07-01 Thread Aronesty, Erik
I've never worked enough with RBD to be sure. I know that for files, when I turned on striping, I got far better performance. It seems like for RBD, the default is: Just to see if it helps with RBD, I would try stripe_count=4, stripe_unit=1MB... or something like that. If you tinker with
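
RBD striping has to be chosen at image-creation time on format 2 images; a sketch with the values suggested above, which are only a starting point (pool and image names are placeholders):

    # 1 MB stripe unit, striped across 4 objects at a time, 10 GB image
    rbd create mypool/testimage --size 10240 --image-format 2 \
        --stripe-unit 1048576 --stripe-count 4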

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum greg@... writes: ...and one more time, because apparently my brain's out to lunch today: ceph osd tree *sigh* haha, we all have those days. [root@monitor01 ceph]# ceph osd tree
    # id    weight  type name       up/down reweight
    -1      14.48   root default
    -2      7.24

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Gregory Farnum
On Tue, Jul 1, 2014 at 11:57 AM, Brian Lovett brian.lov...@prosperent.com wrote: Gregory Farnum greg@... writes: ...and one more time, because apparently my brain's out to lunch today: ceph osd tree *sigh* haha, we all have those days. [root@monitor01 ceph]# ceph osd tree # id

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum greg@... writes: So those disks are actually different sizes, in proportion to their weights? It could be having an impact on this, although it *shouldn't* be an issue. And your tree looks like it's correct, which leaves me thinking that something is off about your crush rules.

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Gregory Farnum
On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett brian.lov...@prosperent.com wrote: profile: bobtail, Okay. That's unusual. What's the oldest client you need to support, and what Ceph version are you using? You probably want to set the crush tunables to optimal; the bobtail ones are going to
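
Assuming every client is recent enough, switching the tunables is a single command; note that it can trigger a significant amount of data movement:

    ceph osd crush tunables optimal
    # or pick a named profile matching the oldest client you must support, e.g.
    # ceph osd crush tunables firefly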

Re: [ceph-users] Permissions spontaneously changing in cephfs

2014-07-01 Thread Erik Logtenberg
Hi Zheng, Yes, it was mounted implicitly with ACLs enabled. I disabled it by adding noacl to the mount command, and now the behaviour is correct! No more changing permissions. So it appears to be related to ACLs indeed, even though I didn't actually set any ACLs. Simply mounting with ACLs
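
For reference, the workaround described above with the kernel CephFS client looks roughly like this (monitor address and secret file are placeholders):

    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,noacl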

Re: [ceph-users] Some OSD and MDS crash

2014-07-01 Thread Samuel Just
Can you reproduce with debug osd = 20 debug filestore = 20 debug ms = 1 ? -Sam On Tue, Jul 1, 2014 at 1:21 AM, Pierre BLONDEAU pierre.blond...@unicaen.fr wrote: Hi, I attach: - osd.20 is one of the OSDs that I detected as making other OSDs crash. - osd.23 is one of the OSDs that crashes when I start
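
Those debug levels can either be added to ceph.conf under [osd] before restarting the daemons, or injected into the running ones; a sketch of both:

    # ceph.conf
    [osd]
        debug osd = 20
        debug filestore = 20
        debug ms = 1

    # or, without a restart:
    ceph tell osd.* injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'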

Re: [ceph-users] iscsi and cache pool

2014-07-01 Thread Никитенко Виталий
Thank you, I will try to do it. 02.07.2014, 05:30, Gregory Farnum g...@inktank.com: Yeah, the features are new from January or something, so you need a very new kernel to support them. There are no options to set. But in general I wouldn't use krbd if you can use librbd instead; it's easier to update
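
One way to avoid the kernel client entirely in an iSCSI setup like this is to export the image through a target that links against librbd; a sketch using tgt's rbd backing store (assumes tgt was built with rbd support; target name, pool and image are placeholders):

    tgtadm --lld iscsi --mode target --op new --tid 1 \
        --targetname iqn.2014-07.local.example:rbd
    tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
        --bstype rbd --backing-store main_pool/myimage
    tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL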

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum greg@... writes: On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett brian.lovett@... wrote: profile: bobtail, Okay. That's unusual. What's the oldest client you need to support, and what Ceph version are you using? This is a fresh install (as of today) running the

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Christian Balzer
Hello, Even though you did set the pool default size to 2 in your Ceph configuration, I think this value (and others) is ignored in the initial setup, for the default pools. So either make sure these pools really have a replication size of 2 by deleting and re-creating them, or add a third storage
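
The replication of the default pools can also be checked and adjusted in place, as an alternative to re-creating them; a sketch for the pools a Firefly install creates by default:

    ceph osd dump | grep 'replicated size'
    ceph osd pool set data size 2
    ceph osd pool set metadata size 2
    ceph osd pool set rbd size 2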

Re: [ceph-users] Permissions spontaneously changing in cephfs

2014-07-01 Thread Yan, Zheng
On Wed, Jul 2, 2014 at 5:19 AM, Erik Logtenberg e...@logtenberg.eu wrote: Hi Zheng, Yes, it was mounted implicitly with ACLs enabled. I disabled it by adding noacl to the mount command, and now the behaviour is correct! No more changing permissions. So it appears to be related to ACLs