[ceph-users] changing my cluster network ip

2018-09-26 Thread Joshua Chen
Hello all, I am building my testing cluster with a public_network and a cluster_network interface. For some reason, the testing cluster needs to peer with my colleague's machines, so it's better that I change my original cluster_network from 172.20.x.x to 10.32.67.x. Now if I don't want to
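
A hedged sketch of the ceph.conf change being discussed, assuming a /24 prefix (the message does not give the netmask); the file would need to be distributed to every mon/OSD node and the OSDs restarted afterwards:

    [global]
    cluster_network = 10.32.67.0/24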

Re: [ceph-users] Purge Ceph Node and reuse it for another cluster

2018-09-26 Thread Vasu Kulkarni
You can do that safely; we do it all the time on test clusters. Make sure you zap the disks on all OSD nodes so that any partition data is erased. Also try to use the latest docs from the 'master' branch (I see the link you have is based on 'giant'). On Wed, Sep 26, 2018 at 2:06 PM Marcus Müller
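
A hedged sketch of the purge/zap sequence described above, driven from a ceph-deploy admin node; 'node1' and '/dev/sdb' are illustrative, and the 'disk zap' syntax differs slightly between ceph-deploy 1.x ('node1:sdb') and 2.x:

    ceph-deploy purge node1
    ceph-deploy purgedata node1
    ceph-deploy disk zap node1 /dev/sdb   # repeat for every OSD disk on every node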

[ceph-users] inexplicably slow bucket listing at top level

2018-09-26 Thread Graham Allan
I have one user bucket which, inexplicably (to me), takes an eternity to list, though only at the top level. There are two subfolders, each of which lists individually at a completely normal speed... e.g. (using the minio client): [~] % time ./mc ls fried/friedlab/ [2018-09-26 16:15:48
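
A hedged first step often used when investigating slow listings is to look at the bucket index sharding; the bucket name below is inferred from the path in the message and may not be exact:

    radosgw-admin bucket stats --bucket=friedlab
    radosgw-admin bucket limit check   # shows per-shard object fill for each bucket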

[ceph-users] Purge Ceph Node and reuse it for another cluster

2018-09-26 Thread Marcus Müller
Hi all, Is it safe to purge a Ceph OSD/MON node as described here: http://docs.ceph.com/docs/giant/rados/deployment/ceph-deploy-purge/ and later use this node, with the same OS, again for another production Ceph cluster?

Re: [ceph-users] total_used statistic incorrect

2018-09-26 Thread Mike Cave
I’m sorry, I completely missed the text you wrote at the top of the reply. At first it appeared that you had just quoted a previous reply without adding anything. My mistake! Thank you for the answer, as it completely correlates with what I've found after doing some other digging. Cheers, Mike

[ceph-users] MDS damaged after mimic 13.2.1 to 13.2.2 upgrade

2018-09-26 Thread Sergey Malinin
Hello, I followed the standard upgrade procedure to upgrade from 13.2.1 to 13.2.2. After the upgrade the MDS cluster is down; mds rank 0 and the purge_queue journal are damaged. Resetting the purge_queue does not seem to work well, as the journal still appears to be damaged. Can anybody help? mds log: -789>
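
For reference, a hedged sketch of the purge_queue inspection/reset commands typically involved here, assuming the filesystem is named 'cephfs' and rank 0 (both illustrative); exporting a backup first is prudent, since a reset discards queued purges:

    cephfs-journal-tool --rank=cephfs:0 --journal=purge_queue journal inspect
    cephfs-journal-tool --rank=cephfs:0 --journal=purge_queue journal export purge_queue.bak
    cephfs-journal-tool --rank=cephfs:0 --journal=purge_queue journal reset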

Re: [ceph-users] ACL '+' not shown in 'ls' on kernel cephfs mount

2018-09-26 Thread Eugen Block
Hi, I can confirm this for: ceph --version ceph version 12.2.5-419-g8cbf63d997 (8cbf63d997fb5cdc783fe7bfcd4f5032ee140c0c) luminous (stable). Setting ACLs on a file works as expected (restricting file access to a specific user), and getfacl displays the correct information, but 'ls -la' does not
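
A minimal reproduction sketch of the behaviour described, assuming a CephFS kernel mount at /mnt/cephfs and a local user 'alice' (both illustrative):

    touch /mnt/cephfs/testfile
    setfacl -m u:alice:rw /mnt/cephfs/testfile
    getfacl /mnt/cephfs/testfile    # shows the user:alice:rw- entry correctly
    ls -la /mnt/cephfs/testfile     # expected to show a '+' after the mode bits, but does not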

Re: [ceph-users] ceph-fuse using excessive memory

2018-09-26 Thread Yan, Zheng
Thanks for the log. I think it's caused by http://tracker.ceph.com/issues/36192 Regards Yan, Zheng On Wed, Sep 26, 2018 at 1:51 AM Andras Pataki wrote: > > Hi Zheng, > > Here is a debug dump: > https://users.flatironinstitute.org/apataki/public_www/7f0011f676112cd4/ > I have also included some

Re: [ceph-users] Fwd: [Ceph-community] After Mimic upgrade OSD's stuck at booting.

2018-09-26 Thread KEVIN MICHAEL HRPCEK
Hey, don't lose hope. I just went through two 3-5 day outages after a Mimic upgrade with no data loss. I'd recommend looking through the thread about it to see how close it is to your issue. From my point of view there seem to be some similarities.

Re: [ceph-users] No space left on device

2018-09-26 Thread Zhenshi Zhou
Answering myself: what I got wrong is that the file number is much more than 2. My db shows the directory has 52 files, so it alarmed "no space left". I solved this by increasing "mds_bal_fragment_size_max" to 100. Thanks. Zhenshi Zhou wrote on Wed, Sep 26, 2018 at 4:58 PM: > Hi, > > I
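
A hedged example of how the limit can be raised; the exact value in the message is cut off, so 500000 below is purely illustrative:

    ceph tell mds.* injectargs '--mds_bal_fragment_size_max=500000'
    # on Mimic and later the setting can also be persisted centrally:
    ceph config set mds mds_bal_fragment_size_max 500000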

Re: [ceph-users] Fwd: [Ceph-community] After Mimic upgrade OSD's stuck at booting.

2018-09-26 Thread Eugen Block
Hi, I'm not sure how the recovery "still works" with the "norecover" flag set. Anyway, I think you should unset the norecover and nobackfill flags. Even if not all OSDs come back up, you should allow the cluster to backfill PGs. I'm not sure, but unsetting norebalance could also be useful, but that
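
The flag changes suggested above, for reference (standard commands; whether to unset norebalance depends on the cluster state, as the reply notes):

    ceph osd unset norecover
    ceph osd unset nobackfill
    ceph osd unset norebalance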

[ceph-users] How many objects to expect?

2018-09-26 Thread Thomas Sumpter
Hello, I have two independent but almost identical systems. On one of them (A) the total number of objects stays around 200, while on the other (B) it has been steadily increasing and now seems to have levelled off at around 4000 objects. The total used data remains roughly the same, but this data is
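
A hedged way to see where those object totals come from, using standard commands with no message-specific names assumed:

    ceph df detail   # per-pool OBJECTS column
    rados df         # per-pool object, clone and copy counts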

Re: [ceph-users] Fwd: [Ceph-community] After Mimic upgrade OSD's stuck at booting.

2018-09-26 Thread by morphin
Hello Eugen. Thank you for your answer; I was losing hope of getting an answer here. I have faced losing 2/3 of the mons many times, but I never faced a problem like this on Luminous. The recovery is still running and it has been 30 hours. The last state of my cluster is:

Re: [ceph-users] Bluestore DB showing as ssd

2018-09-26 Thread Eugen Block
Hi, how did you create the OSDs? Were they built from scratch with the respective command options (--block.db /dev/)? You could check what the bluestore tool tells you about the block.db: ceph1:~ # ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-21/block | grep path
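
For comparison, a hedged sketch of how an OSD with a separate block.db is normally created with ceph-volume; the device names are illustrative:

    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1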

[ceph-users] No space left on device

2018-09-26 Thread Zhenshi Zhou
Hi, I encountered an issue similar to bug 19438. I have an attachment directory in CephFS, and one of the subdirectories contains about 20-30 thousand small files of 100k-1M each. When I create a file in this subdirectory it alarms: "cannot touch

Re: [ceph-users] Bluestore DB showing as ssd

2018-09-26 Thread Hervé Ballans
Hi, testing the command on my side, I get the right information (modulo the fact that the disk is an nvme and not an ssd): # ceph osd metadata 1 | grep bluefs_db "bluefs_db_access_mode": "blk", "bluefs_db_block_size": "4096", "bluefs_db_dev": "259:3",
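
A hedged follow-up check on the same metadata, narrowed to the fields that indicate how the DB device is classified (field names as they appear in Luminous/Mimic; adjust the OSD id as needed):

    ceph osd metadata 1 | grep -E 'bluefs_db_(type|rotational|model)'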

[ceph-users] v13.2.2 Mimic released

2018-09-26 Thread Abhishek Lekshmanan
This is the second bugfix release of the v13.2.x long-term stable release series. This release contains many fixes across all components of Ceph, and we recommend that all users upgrade. We thank everyone for contributing towards this release. The release notes are up at

Re: [ceph-users] Fwd: [Ceph-community] After Mimic upgrade OSD's stuck at booting.

2018-09-26 Thread Eugen Block
Hi, could this be related to the other Mimic upgrade thread [1]? Your failing MONs sound a bit like the problem described there; eventually the user reported recovery success. You could try the described steps: - disable cephx auth with 'auth_cluster_required = none' - set the
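
A hedged sketch of the temporary auth change quoted above, as it would appear in ceph.conf on the mons; only auth_cluster_required is mentioned in the message, the other two lines are the usual companions and are an assumption here, and the change should be reverted once the cluster has recovered:

    [global]
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none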