Hello,
From the output we can also see that the server fails to install wget.
On each of your servers you have to set the proxy environment variables:
https_proxy, http_proxy, etc.
For RedHat/CentOS you can do it globally with a file in /etc/profile.d.
For Ubuntu/Debian you have to define
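For the RedHat/CentOS case, a minimal sketch of such a profile.d file (the proxy host/port proxy.example.com:3128 is a hypothetical placeholder; substitute your real proxy). The snippet writes to a temp file so it runs unprivileged; on a real server, write to /etc/profile.d/proxy.sh instead:

```shell
# Placeholder proxy values -- replace with your own.
# On RHEL/CentOS the target would be /etc/profile.d/proxy.sh.
PROXY_FILE=$(mktemp)
cat > "$PROXY_FILE" <<'EOF'
export http_proxy="http://proxy.example.com:3128"
export https_proxy="http://proxy.example.com:3128"
export no_proxy="localhost,127.0.0.1"
EOF
. "$PROXY_FILE"    # login shells source /etc/profile.d/*.sh automatically
echo "http_proxy=$http_proxy"
```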
Hi Craig,
I am also interested in the Zabbix templates and scripts if you can
publish them.
Regards,
G.
On Mon, 30 Jun 2014 18:15:12 -0700, Craig Lewis wrote:
You should check out Calamari (https://github.com/ceph/calamari),
Inktank's monitoring and administration tool.
I started
Hi,
Attached:
- osd.20 is one of the OSDs that I detected causing other OSDs to crash.
- osd.23 is one of the OSDs that crashes when I start osd.20.
- mds is one of my MDS daemons.
I truncated the log files because they are too big. Everything is here:
https://blondeau.users.greyc.fr/cephlog/
Regards
On 30/06/2014 17:35,
Hi,
Maybe you can use this: https://github.com/thelan/ceph-zabbix, but I
am still interested in seeing Craig's script and template.
Regards
On 01/07/2014 10:16, Georgios Dimitrakakis wrote:
Hi Craig,
I am also interested in the Zabbix templates and scripts if you can
publish them.
Regards,
Good day!
I have a server with Ubuntu 14.04 and Ceph Firefly installed. I configured main_pool
(2 OSDs) and ssd_pool (1 SSD OSD). I want to use ssd_pool as a cache pool for
main_pool:
ceph osd tier add main_pool ssd_pool
ceph osd tier cache-mode ssd_pool writeback
ceph osd tier set-overlay
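For reference, the cache-tiering sequence as documented for Firefly names both pools in the overlay step and also configures a hit set, which writeback mode needs for its eviction decisions. A sketch using the pool names from the message above (not tested against this cluster):

```shell
# Attach ssd_pool as a writeback cache tier in front of main_pool.
ceph osd tier add main_pool ssd_pool
ceph osd tier cache-mode ssd_pool writeback
ceph osd tier set-overlay main_pool ssd_pool

# Writeback caching needs a hit-set to track object access.
ceph osd pool set ssd_pool hit_set_type bloom
```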
Hi all,
if it should be Nagios/Icinga rather than Zabbix, there is a remote check
from me that can be found here:
https://github.com/Crapworks/check_ceph_dash
This one uses ceph-dash to monitor the overall cluster status via http:
https://github.com/Crapworks/ceph-dash
But it can be easily
I know FileStore::ondisk_finisher handles C_OSD_OpCommit, and the path from
journaled_completion_queue to op_commit costs 3.6 seconds; maybe the cost is in
ReplicatedPG::op_commit.
Through OpTracker, I found that ReplicatedPG::op_commit first locks the PG, and that
sometimes costs 0.5 to 1 second
Hi,
As an exercise, I killed an OSD today, just killed the process and
removed its data directory.
To recreate it, I recreated an empty data dir, then
ceph-osd -c /etc/ceph/ceph.conf -i 3 --monmap /tmp/monmap --mkfs
(I tried with and without giving the monmap).
I then restored the keyring
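For reference, the sequence usually documented for re-adding an OSD after --mkfs also registers the key with the monitors and (if the OSD was removed from the map) its CRUSH location. A sketch for osd.3 as in the message; the weight 1.0 and host name myhost are placeholders:

```shell
# Recreate the on-disk state (--mkkey also generates a fresh keyring):
ceph-osd -c /etc/ceph/ceph.conf -i 3 --mkfs --mkkey

# Register the new key, otherwise the OSD authenticates with a stale identity:
ceph auth add osd.3 osd 'allow *' mon 'allow rwx' \
    -i /var/lib/ceph/osd/ceph-3/keyring

# Only needed if the OSD was also removed from the CRUSH map:
ceph osd crush add osd.3 1.0 host=myhost
```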
Hi,
I set the same weight for all the hosts and the same weight for all the OSDs under the
hosts in the crushmap, and set the pool replica size to 3. However, after uploading
1M/4M/400M/900M files to the pool, I found the data replication is not even across
the OSDs, and the utilization of the OSDs is not the
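One way to see how placement actually came out, per OSD (a sketch; assumes access to an admin keyring on the cluster):

```shell
# Weights as the cluster actually sees them:
ceph osd tree

# Per-OSD stats (kb_used, PG counts) to quantify the imbalance:
ceph pg dump osds
```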
Hi,
On 01/07/2014 17:48, Sylvain Munaut wrote:
Hi,
As an exercise, I killed an OSD today, just killed the process and
removed its data directory.
To recreate it, I recreated an empty data dir, then
ceph-osd -c /etc/ceph/ceph.conf -i 3 --monmap /tmp/monmap --mkfs
(I tried with and
It looks like you're using a kernel RBD mount in the second case? I imagine
your kernel doesn't support caching pools and you'd need to upgrade for it
to work.
-Greg
On Tuesday, July 1, 2014, Никитенко Виталий v1...@yandex.ru wrote:
Good day!
I have a server with Ubuntu 14.04 and installed Ceph
Hi,
And then I start the process, and it starts fine.
http://pastebin.com/TPzNth6P
I even see one active tcp connection to a mon from that process.
But the OSD never comes up or does anything ...
I suppose there are error messages in logs somewhere regarding the fact that
monitors and
On 01/07/2014 18:21, Sylvain Munaut wrote:
Hi,
And then I start the process, and it starts fine.
http://pastebin.com/TPzNth6P
I even see one active tcp connection to a mon from that process.
But the osd never becomes up or do anything ...
I suppose there are error messages in logs
I'm pulling my hair out with Ceph. I am testing things with a 5-server
cluster. I have 3 monitors and two storage machines, each with 4 OSDs. I
have started from scratch 4 times now and can't seem to figure out how to
get a clean status. ceph health reports:
HEALTH_WARN 34 pgs degraded; 192
What's the output of ceph osd map?
Your CRUSH map probably isn't trying to segregate properly, with 2
hosts and 4 OSDs each.
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tue, Jul 1, 2014 at 11:22 AM, Brian Lovett
brian.lov...@prosperent.com wrote:
I'm pulling my hair out
Brian Lovett brian.lovett@... writes:
I restarted all of the OSDs and noticed that Ceph shows 2 OSDs up even if
the servers are completely powered down: osdmap e95: 8 osds: 2 up, 8 in
Why would that be?
Gregory Farnum greg@... writes:
What's the output of ceph osd map?
Your CRUSH map probably isn't trying to segregate properly, with 2
hosts and 4 OSDs each.
Is this what you are looking for?
ceph osd map rbd ceph
osdmap
On Tue, Jul 1, 2014 at 11:33 AM, Brian Lovett
brian.lov...@prosperent.com wrote:
Brian Lovett brian.lovett@... writes:
I restarted all of the osd's and noticed that ceph shows 2 osd's up even if
the servers are completely powered down: osdmap e95: 8 osds: 2 up, 8 in
Why would that be?
The
On Tue, Jul 1, 2014 at 11:45 AM, Gregory Farnum g...@inktank.com wrote:
On Tue, Jul 1, 2014 at 11:33 AM, Brian Lovett
brian.lov...@prosperent.com wrote:
Brian Lovett brian.lovett@... writes:
I restarted all of the osd's and noticed that ceph shows 2 osd's up even if
the servers are
I've never worked enough with RBD to be sure. I know for files, when I turned
on striping, I got far better performance. It seems like for RBD, the default
is:
Just to see if it helps with rbd, I would try stripe_count=4,
stripe_unit=1mb... or something like that. If you tinker with
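Striping parameters for an RBD image are fixed at creation time and require a format 2 image. A sketch matching the values suggested above; the pool/image names (rbd/testimg) and the 10 GB size are made up for illustration:

```shell
# stripe-unit is given in bytes here (1048576 = 1 MB);
# fancy striping needs --image-format 2.
rbd create --size 10240 --image-format 2 \
    --stripe-unit 1048576 --stripe-count 4 rbd/testimg
```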
Gregory Farnum greg@... writes:
...and one more time, because apparently my brain's out to lunch today:
ceph osd tree
*sigh*
haha, we all have those days.
[root@monitor01 ceph]# ceph osd tree
# id    weight  type name       up/down reweight
-1 14.48 root default
-2 7.24
On Tue, Jul 1, 2014 at 11:57 AM, Brian Lovett
brian.lov...@prosperent.com wrote:
Gregory Farnum greg@... writes:
...and one more time, because apparently my brain's out to lunch today:
ceph osd tree
*sigh*
haha, we all have those days.
[root@monitor01 ceph]# ceph osd tree
# id
Gregory Farnum greg@... writes:
So those disks are actually different sizes, in proportion to their
weights? It could be having an impact on this, although it *shouldn't*
be an issue. And your tree looks like it's correct, which leaves me
thinking that something is off about your crush rules.
On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett
brian.lov...@prosperent.com wrote:
profile: bobtail,
Okay. That's unusual. What's the oldest client you need to support,
and what Ceph version are you using? You probably want to set the
crush tunables to optimal; the bobtail ones are going to
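Setting the tunables Greg mentions is a single command. Note it can trigger significant data movement, and clients older than the selected profile (e.g. pre-bobtail kernels) will no longer be able to connect:

```shell
# Switch the CRUSH tunables to the recommended profile for this release.
ceph osd crush tunables optimal
```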
Hi Zheng,
Yes, it was mounted implicitly with ACLs enabled. I disabled them by
adding noacl to the mount command, and now the behaviour is correct!
No more changing permissions.
So it appears to be related to ACLs indeed, even though I didn't
actually set any ACLs. Simply mounting with ACLs
Can you reproduce with
debug osd = 20
debug filestore = 20
debug ms = 1
?
-Sam
On Tue, Jul 1, 2014 at 1:21 AM, Pierre BLONDEAU
pierre.blond...@unicaen.fr wrote:
Hi,
I join :
- osd.20 is one of osd that I detect which makes crash other OSD.
- osd.23 is one of osd which crash when i start
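The debug levels Sam asks for can also be applied at runtime, without editing ceph.conf and restarting, using the suspect OSD from the thread (osd.20):

```shell
# Raise logging on the suspect OSD at runtime.
# These levels are very verbose -- revert them once the crash is captured.
ceph tell osd.20 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'
```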
Thank you, I will try it.
02.07.2014, 05:30, Gregory Farnum g...@inktank.com:
Yeah, the features are new from January or something so you need a
very new kernel to support it. There are no options to set.
But in general I wouldn't use krbd if you can use librbd instead; it's
easier to update
Gregory Farnum greg@... writes:
On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett
brian.lovett@... wrote:
profile: bobtail,
Okay. That's unusual. What's the oldest client you need to support,
and what Ceph version are you using?
This is a fresh install (as of today) running the
Hello,
Even though you did set the pool default size to 2 in your Ceph
configuration, I think this value (and others) is ignored during the initial
setup, for the default pools.
So either make sure these pools really have a replication size of 2 by deleting
and re-creating them, or add a third storage
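Alternatively, the replication size of the default pools (data, metadata, and rbd in Firefly) can be changed in place rather than re-creating them; a sketch, assuming those default pool names still exist:

```shell
# Lower replication on the default pools to match the 2-host cluster.
for pool in data metadata rbd; do
    ceph osd pool set "$pool" size 2
    ceph osd pool set "$pool" min_size 1
done
```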
On Wed, Jul 2, 2014 at 5:19 AM, Erik Logtenberg e...@logtenberg.eu wrote:
Hi Zheng,
Yes, it was mounted implicitly with acl's enabled. I disabled it by
adding noacl to the mount command, and now the behaviour is correct!
No more changing permissions.
So it appears to be related to acl's