Re: [ceph-users] What file system does ceph use for an individual OSD, is it still EBOFS?

2016-09-18 Thread xxhdx1985126
Thanks, sir:-) At 2016-09-19 13:00:18, "Ian Colle" wrote: Some use xfs, others btrfs, and still others use (gasp) zfs and ext4. Upstream automated testing currently only runs on xfs, if that gives you a sense of the community's comfort level, but there are strong
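For anyone following along, a minimal way to check what a given OSD is actually running on (a sketch assuming a reachable cluster; the OSD id is only an example and field names vary a little between releases):

    # Show the metadata recorded by OSD 0, including the filesystem it was
    # created with and the objectstore backend it is using.
    ceph osd metadata 0 | grep -E 'osd_mkfs_type|osd_objectstore|filestore_backend'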

[ceph-users] What file system does ceph use for an individual OSD, is it still EBOFS?

2016-09-18 Thread xxhdx1985126
Hi, everyone. I'm a newbie to Ceph. According to Sage A. Weil's paper, Ceph was using EBOFS as the file system for its OSDs. However, I looked into the source code of Ceph and could hardly find any EBOFS code. Is Ceph still using EBOFS, or has it opted to use other types of file system for a

Re: [ceph-users] [EXTERNAL] Re: Increase PG number

2016-09-18 Thread Will . Boege
How many PGs do you have - and how many are you increasing to? Increasing PG counts can be disruptive if you are increasing by a large proportion of the initial count because of all the PG peering involved. If you are doubling the number of PGs, it might be good to do it in stages to minimize
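A rough sketch of the staged approach (pool name and target counts below are placeholders, not values from this thread):

    # Raise pg_num in modest steps and let peering settle, then raise
    # pgp_num so data actually starts moving onto the new PGs.
    ceph osd pool set rbd pg_num 1024
    ceph -s                          # wait for the new PGs to go active+clean
    ceph osd pool set rbd pgp_num 1024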

Re: [ceph-users] swiftclient call radosgw, it always response 401 Unauthorized

2016-09-18 Thread Brian Chang-Chien
Has nobody run into this situation? Can somebody help me solve the issue, please!!! THX 2016-09-16 13:02 GMT+08:00 Brian Chang-Chien : > Does anyone know about this problem? Please help me look into it > > On September 13, 2016 at 5:58 PM, "Brian Chang-Chien" wrote: > >>
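In case it helps narrow down the 401, a minimal credential sanity check against radosgw (user, endpoint and auth path are placeholders, assuming the default v1.0 Swift auth):

    # Regenerate the swift secret for the subuser, then try a bare auth request.
    radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
    swift -A http://rgw.example.com/auth/v1.0 -U testuser:swift -K '<swift_secret>' stat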

Re: [ceph-users] cephfs-client Segmentation fault with not-root mount point

2016-09-18 Thread yu2xiangyang
Thank you for your reply. I will recompile the code and test whether it works, and will let you know. At 2016-09-18 19:18:18, "Goncalo Borges" wrote: >Hi... > >I think you are seeing an issue we saw some time ago. Your segfault seems the >same we had but

Re: [ceph-users] problem starting osd ; PGLog.cc: 984: FAILED assert hammer 0.94.9

2016-09-18 Thread Ronny Aasen
I added debug journal = 20 and got some new lines in the log, which I have appended to the end of this email. Can any of you make something out of them? Kind regards, Ronny Aasen. On 18.09.2016 18:59, Kostis Fardelas wrote: If you are aware of the problematic PGs and they are exportable, then
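For completeness, this is roughly how that extra logging gets enabled (a sketch via ceph.conf, since a crashing OSD cannot take injectargs at runtime):

    # In ceph.conf on the affected host, then restart the OSD:
    [osd]
        debug journal = 20
        debug osd = 20
        debug filestore = 20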

Re: [ceph-users] Increase PG number

2016-09-18 Thread Matteo Dacrema
Hi, thanks for your reply. Yes, I don't have any near full OSD. The problem is not the rebalancing process but the process of creating new PGs. I have only 2 hosts running the Ceph Firefly version, with 3 SSDs for journaling each. During the creation of new PGs all the attached volumes stop reading or
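A rough way to check whether the stalls line up with PGs still creating or peering (standard commands, nothing specific to this cluster):

    # Cluster state plus any PGs stuck inactive (creating/peering/etc.).
    ceph -s
    ceph pg dump_stuck inactive
    ceph pg dump | grep -E 'creating|peering' | head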

Re: [ceph-users] problem starting osd ; PGLog.cc: 984: FAILED assert hammer 0.94.9

2016-09-18 Thread Kostis Fardelas
If you are aware of the problematic PGs and they are exportable, then ceph-objectstore-tool is a viable solution. If not, then running gdb and/or collecting higher debug osd level logs may prove useful (to understand more about the problem, or to collect info to ask for more help in ceph-devel). On 13 September 2016
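A sketch of the export path mentioned above (data/journal paths and the PG id are placeholders; the OSD must be stopped first):

    # Export a single PG from a stopped OSD to a file, so it can later be
    # imported into another OSD or kept as a backup before risky surgery.
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --journal-path /var/lib/ceph/osd/ceph-12/journal \
        --pgid 4.1f --op export --file /tmp/pg.4.1f.export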

Re: [ceph-users] Recover pgs from cephfs metadata pool (sharing experience)

2016-09-18 Thread Kostis Fardelas
Hello Goncalo, AFAIK the authoritative shard is determined based on deep-scrub object checksums, which were introduced in Hammer. Is this in line with your experience? If yes, is there any other method of determining the auth shard besides object timestamps for ceph < jewel? Kostis On 13 September
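For reference, the kind of check being discussed (the PG id is a placeholder):

    # Trigger a deep scrub on one PG, then look for inconsistencies once it
    # completes; repair (which picks an authoritative copy) is a separate step.
    ceph pg deep-scrub 4.1f
    ceph health detail | grep 4.1f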

Re: [ceph-users] cephfs-client Segmentation fault with not-root mount point

2016-09-18 Thread Goncalo Borges
Hi... I think you are seeing an issue we saw some time ago. Your segfault seems to be the same one we had, but please confirm against the info in https://github.com/ceph/ceph/pull/10027. We solved it by recompiling Ceph with the patch described above. I think it should be solved in the next bugfix release
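For anyone needing to do the same, roughly how to pull that fix into a local build (a sketch using GitHub's pull-request refs; the branch name and release tag are placeholders, and the rebuild/packaging step depends on your distro and Ceph release):

    # Fetch the pull request as a local branch, then put the fix commit on top
    # of the release you are running (use a commit range if the PR has several
    # commits) and rebuild packages as usual for your platform.
    git clone https://github.com/ceph/ceph.git && cd ceph
    git fetch origin pull/10027/head:client-segfault-fix
    git checkout <your-release-tag> -b rebuild-with-fix
    git cherry-pick client-segfault-fix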

Re: [ceph-users] Increase PG number

2016-09-18 Thread Goncalo Borges
Hi, I am assuming that you do not have any near full OSD (either before or during the pg splitting process) and that your cluster is healthy. To minimize the impact on clients during recovery or operations like pg splitting, it is good to set the following configs. Obviously the whole
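The kind of settings being referred to (a sketch of commonly used recovery/backfill throttles; the values are conservative examples, not recommendations from this thread):

    # Apply live to all OSDs; the same options can also go under [osd] in ceph.conf.
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'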