On 10/27/2015 06:21 PM, Ken Dreyer wrote:
> Thanks, I've deleted it from the download.ceph.com web server.
Thanks a lot!
My mirror is up-to-date now.
Björn Lässig
___
ceph-users mailing list
ceph-users@lists.ceph.com
Sorry for raising this topic from the dead, but I'm having the same
issues with NFS-Ganesha with the wrong user/group information.
Do you maybe have a working ganesha.conf? I'm assuming I might have
mis-configured something in this file. It's also nice to
Hi, experts and supporters,
I am new to Ceph, so the question may look simple and stupid sometimes.
I want to create a Ceph cluster with the Reed-Solomon RAID-6 algorithm;
Jerasure has the plugin "reed_sol_r6_op",
but it seems I can't bind the pool to the OSDs.
The steps:
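Roughly like this (a sketch; the profile and pool names rs_r6/ecpool are just examples, and reed_sol_r6_op requires m=2):

```shell
# Create an erasure-code profile using the RAID-6-optimized technique
ceph osd erasure-code-profile set rs_r6 \
    plugin=jerasure k=4 m=2 technique=reed_sol_r6_op
# Create an erasure-coded pool bound to that profile (128 placement groups)
ceph osd pool create ecpool 128 128 erasure rs_r6
```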
Hi,
During a repo sync, I got:
Package ceph-debuginfo-0.94.5-0.el7.centos.x86_64.rpm is not signed
Indeed:
# rpm -K
http://download.ceph.com/rpm-hammer/el7/x86_64/ceph-debuginfo-0.94.5-0.el7.centos.x86_64.rpm
Hello,
After installing Ceph I tried to watch it with ceph -w:
2015-10-28 14:54:08.035995 mon.0 [INF] pgmap v82: 192 pgs: 104
active+degraded+remapped, 88 creating+incomplete; 0 bytes data, 36775 MB
used, 113 GB / 156 GB avail
2015-10-28 14:54:12.327050 mon.0 [INF] pgmap v83: 192 pgs: 104
On 21-10-15 15:30, Mark Nelson wrote:
>
>
> On 10/21/2015 01:59 AM, Wido den Hollander wrote:
>> On 10/20/2015 07:44 PM, Mark Nelson wrote:
>>> On 10/20/2015 09:00 AM, Wido den Hollander wrote:
Hi,
In the "newstore direction" thread on ceph-devel I wrote that I'm using
Hi all,
I'm trying to get the real disk usage of a Cinder volume by converting these
bash commands to Python:
http://cephnotes.ksperis.com/blog/2013/08/28/rbd-image-real-size
I wrote a small test function which has already worked in many cases, but it
stops with a core dump while trying to calculate
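For comparison, the linked bash trick sums the extent lengths from `rbd diff`; in Python the same accumulation can be done with `rbd.Image.diff_iterate`, whose callback receives `(offset, length, exists)`. Here is a sketch with the accumulation logic isolated so it runs without a cluster (the pool/image names in the commented usage are placeholders):

```python
# Sketch: compute the "real" (allocated) size of an RBD image by summing
# the extents that diff_iterate reports as existing. The callback logic is
# separated out so it can be exercised without a Ceph cluster.

def make_usage_counter():
    """Return (callback, totals); callback matches the
    rbd.Image.diff_iterate callback signature (offset, length, exists)."""
    totals = {"used": 0}

    def callback(offset, length, exists):
        # 'exists' is truthy for allocated extents, falsy for holes.
        if exists:
            totals["used"] += length

    return callback, totals

# Hypothetical usage against a real image (requires the rados/rbd bindings):
#
#   with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
#       with cluster.open_ioctx("volumes") as ioctx:
#           with rbd.Image(ioctx, "volume-xyz", read_only=True) as image:
#               cb, totals = make_usage_counter()
#               image.diff_iterate(0, image.size(), None, cb)
#               print("real size: %d bytes" % totals["used"])

if __name__ == "__main__":
    # Simulated extents, as diff_iterate would report them:
    cb, totals = make_usage_counter()
    for off, ln, exists in [(0, 4194304, True), (4194304, 4194304, False),
                            (8388608, 2097152, True)]:
        cb(off, ln, exists)
    print(totals["used"])  # 6291456
```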
On the RBD performance issue, you may want to look at:
http://tracker.ceph.com/issues/9192
Eric
On Tue, Oct 27, 2015 at 8:59 PM, FaHui Lin wrote:
> Dear Ceph experts,
>
> I found something strange about the performance of my Ceph cluster: Read-out
> much slower than
Hi Dennis,
We're using NFS Ganesha here as well. I can send you my configuration which is
working but we squash users and groups down to a particular uid/gid, so it may
not be super helpful for you.
I think files not being immediately visible is working as intended, due to
directory caching.
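For reference, the squash-related part of ganesha.conf looks roughly like this (a sketch; the export path and the uid/gid values are illustrative placeholders, not our real config):

```
EXPORT {
    Export_Id = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    # Map all users/groups down to one uid/gid:
    Squash = All_Squash;
    Anonymous_Uid = 4000;   # placeholder uid
    Anonymous_Gid = 4000;   # placeholder gid
    FSAL {
        Name = CEPH;
    }
}
```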
Hi, all
As I understand it, the command "ceph daemon osd.x perf dump objecters" should
output the perf data of osdc (librados). But when I use this command,
why are all those values zero except map_epoch and map_inc? The following is
the result (a fio test with the rbd ioengine was running on the cluster):
$ sudo ceph
Hi All,
I am testing a 5 node, 4+1 EC cluster using some simple python code
https://gist.github.com/brynmathias/03c60569499dbf3f6be4
When I run this from an external machine, one of my 5 nodes experiences very
high CPU usage (300-400%) per OSD,
and the others show very low usage.
See here:
Could you try with this kernel and bump the readahead on the RBD device up
to at least 32MB?
http://gitbuilder.ceph.com/kernel-deb-precise-x86_64-basic/ref/ra-bring-back
/
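For reference, readahead is set per device through sysfs, in KB, so 32 MB would be (assuming the device is rbd0):

```shell
# 32 MB = 32768 KB of readahead on /dev/rbd0
echo 32768 > /sys/block/rbd0/queue/read_ahead_kb
cat /sys/block/rbd0/queue/read_ahead_kb   # verify the new value
```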
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Eric Eastman
>
Hi,
On 10/28/2015 03:08 PM, Dennis Kramer (DT) wrote:
Sorry for raising this topic from the dead, but I'm having the same
issues with NFS-Ganesha with the wrong user/group information.
Do you maybe have a working ganesha.conf? I'm assuming I might
Hi,
On 10/26/2015 01:43 PM, Yan, Zheng wrote:
On Thu, Oct 22, 2015 at 2:55 PM, Burkhard Linke
wrote:
Hi,
On 10/22/2015 02:54 AM, Gregory Farnum wrote:
On Sun, Oct 18, 2015 at 8:27 PM, Yan, Zheng wrote:
On Sat, Oct 17,
> Thanks for your reply. Why not rebuild the object map when the object-map
> feature is enabled?
>
> Cheers,
> xinxin
>
My initial motivation was to avoid a potentially lengthy rebuild when enabling
the feature. Perhaps that option could warn you to rebuild the object map
after it's been enabled.
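If the feature was enabled after the fact, the map can be rebuilt by hand; a sketch (pool/image names are placeholders):

```shell
rbd feature enable mypool/myimage object-map
rbd object-map rebuild mypool/myimage
```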
On Thu, Oct 29, 2015 at 1:10 AM, Burkhard Linke
wrote:
> Hi,
>
>
> On 10/26/2015 01:43 PM, Yan, Zheng wrote:
>>
>> On Thu, Oct 22, 2015 at 2:55 PM, Burkhard Linke
>> wrote:
>>>
>>> Hi,
>>>
>>>
>>>
On 29 October 2015 at 11:39, Lindsay Mathieson
wrote:
> Is there a way to benchmark individual OSD's?
nb - Non-destructive :)
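(Something like the built-in per-OSD bench might count, if it qualifies as non-destructive; it writes only to the OSD's own store, not to user data. Sizes and OSD id below are just examples:)

```shell
# 1 GiB total in 4 MiB writes, against osd.3 only
ceph tell osd.3 bench 1073741824 4194304
```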
--
Lindsay
Hello,
this shows the content of the CRUSH map file; what should I change
to select osd instead of host? Thanks in advance.
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable
Is there an existing ceph subcommand for this, instead of changing the config file? :)
On Thursday 2015/10/29 9:24, Li, Chengyuan wrote:
Try " osd crush chooseleaf type = 0" in /etc/ceph/.conf
Regards,
CY.
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
I still see rsync errors due to permissions on the remote side:
rsync: send_files failed to open
"/rpm-dumpling/rhel6/x86_64/ceph-debuginfo-0.67.8-0.el6.x86_64.rpm.iVHKKi" (in
ceph): Permission denied (13)
rsync: send_files failed to open
A Google search should have led you the rest of the way.
Follow this [1], and in the rule section, at the "step chooseleaf" line, change
host to osd. You won't need to change the configuration this way; it is saved
in the CRUSH map.
[1]
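The round trip looks like this (a sketch; the file names are arbitrary):

```shell
# Dump and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# In the rule, change:
#     step chooseleaf firstn 0 type host
# to:
#     step chooseleaf firstn 0 type osd
# then recompile and inject it
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin
```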
Is there a way to benchmark individual OSD's?
--
Lindsay
I have had this issue before, and I don't think I have resolved it. I
have been using the RGW admin API to set quotas based on the docs[0].
But I can't seem to get it to cough up and show me the quota
now. Any ideas? I get a 200 back but no body. I have tested this on a
Firefly
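As a cross-check outside the admin API, the CLI can set and display the same quota (the uid and limits below are placeholders):

```shell
radosgw-admin quota set --uid=johndoe --quota-scope=user \
    --max-objects=1024 --max-size=1073741824
radosgw-admin quota enable --uid=johndoe --quota-scope=user
radosgw-admin user info --uid=johndoe   # the quota block appears in the output
```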
[ Removed ceph-devel ]
On Wednesday, October 28, 2015, Libin Wu wrote:
> Hi, all
>
> As I understand it, the command "ceph daemon osd.x perf dump objecters" should
> output the perf data of osdc (librados). But when I use this command,
> why are all those values zero except
Hello,
$ ceph osd stat
osdmap e18: 2 osds: 2 up, 2 in
This is what it shows.
Does it mean I need to add up to 3 OSDs? I just used the default setup.
Thanks.
On Wednesday 2015/10/28 19:53, Gurjar, Unmesh wrote:
Are all the OSDs being reported as 'up' and 'in'? This can be checked by
executing
On 29 October 2015 at 10:29, Wah Peng wrote:
> $ ceph osd stat
> osdmap e18: 2 osds: 2 up, 2 in
>
> this is what it shows.
> does it mean I need to add up to 3 osds? I just use the default setup.
>
If you went with the defaults then your pool size will be 3, meaning
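(If reducing replication is acceptable instead, the pool can be matched to the hardware; a sketch assuming the pool is named rbd:)

```shell
ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1
```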
Hello,
Just did it, but still no good health. Can you help? Thanks.
ceph@ceph:~/my-cluster$ ceph osd stat
osdmap e24: 3 osds: 3 up, 3 in
ceph@ceph:~/my-cluster$ ceph health
HEALTH_WARN 89 pgs degraded; 67 pgs incomplete; 67 pgs stuck inactive;
192 pgs stuck unclean
On Thursday 2015/10/29
Please paste 'ceph osd tree'.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Oct 28, 2015 6:54 PM, "Wah Peng" wrote:
> Hello,
>
> Just did it, but still no good health. can you help? thanks.
>
> ceph@ceph:~/my-cluster$ ceph osd stat
> osdmap
You need to change the CRUSH map to select osd instead of host.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Oct 28, 2015 7:00 PM, "Wah Peng" wrote:
> $ ceph osd tree
> # id    weight    type name    up/down    reweight
> -1      0.24      root
$ ceph osd tree
# id    weight     type name        up/down    reweight
-1      0.24       root default
-2      0.24           host ceph2
0       0.07999            osd.0    up         1
1       0.07999            osd.1    up         1
2       0.07999            osd.2    up         1
On 2015/10/29
Wow, this sounds hard to me. Can you show the details?
Thanks a lot.
On Thursday 2015/10/29 9:01, Robert LeBlanc wrote:
You need to change the CRUSH map to select osd instead of host.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Oct 28, 2015 7:00 PM, "Wah Peng"
Try " osd crush chooseleaf type = 0" in /etc/ceph/.conf
Regards,
CY.
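That is, a ceph.conf fragment along these lines (as far as I know this setting only shapes the default CRUSH rule at cluster creation; on a running cluster the CRUSH map itself has to be changed):

```
[global]
# failure domain for the default rule: 0 = osd, 1 = host
osd crush chooseleaf type = 0
```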
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wah
Peng
Sent: 2015-10-29 9:14
To: Robert LeBlanc
Cc: Lindsay Mathieson; Gurjar, Unmesh; ceph-users@lists.ceph.com
Subject: