[ceph-users] CephFS damaged and cannot recover

2019-06-19 Thread Wei Jin
There is a lot of data in this cluster (2 PB), please help us, thanks. Before attempting these dangerous operations (http://docs.ceph.com/docs/master/cephfs/disaster-recovery-experts/#disaster-recovery-experts), any suggestions? Ceph version: 12.2.12. ceph fs status: cephfs - 1057 clients ==
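Before going anywhere near the disaster-recovery tools, a minimal read-only diagnostic sketch (assuming a Luminous 12.2.x cluster; the MDS name is a placeholder, and damage ls needs a running MDS):

  ceph -s                                # overall health and MDS/rank state
  ceph fs status                         # per-rank state and client count
  ceph daemon mds.<name> damage ls       # run on the MDS host: recorded metadata damage
  cephfs-journal-tool journal inspect    # read-only integrity check of the rank 0 journal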

Re: [ceph-users] rbd mirror journal data

2018-11-06 Thread Wei Jin
with the IO workload. So it is confusing why there is so much journal data that cannot be trimmed immediately. (The local cluster also has the capacity to handle more IO, including trimming operations.) > On Nov 6, 2018, at 9:25 PM, Jason Dillaman wrote: > > On Tue, Nov 6, 2018 at 1:12 A
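A small sketch for checking how far the journal has actually been trimmed; pool/image names are placeholders, and the exact flags should be confirmed with rbd help journal status:

  rbd journal status --pool <pool> --image <image>   # minimum/active set and each registered client's commit position
  rbd mirror image status <pool>/<image>             # remote peer's replay state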

Re: [ceph-users] rbd mirror journal data

2018-11-05 Thread Wei Jin
] > On Nov 6, 2018, at 3:39 AM, Jason Dillaman wrote: > > On Sun, Nov 4, 2018 at 11:59 PM Wei Jin wrote: >> >> Hi, Jason, >> >> I have a question about rbd mirroring. When enabling mirroring, we observed >> that there are a lot of objects prefixed with journal_d
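To see the journal objects being described, a rough sketch (pool/image names are placeholders); these objects are only deleted once all registered journal clients have committed past them and the journal is trimmed:

  rados -p <pool> ls | grep '^journal_data\.'       # raw journal data objects
  rbd journal info --pool <pool> --image <image>    # journal id, object prefix, and layout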

Re: [ceph-users] No more Luminous packages for Debian Jessie ??

2018-03-07 Thread Wei Jin
Same issue here. Will the Ceph community support Debian Jessie in the future? On Mon, Mar 5, 2018 at 6:33 PM, Florent B wrote: > Jessie is no longer supported?? > https://download.ceph.com/debian-luminous/dists/jessie/main/binary-amd64/Packages > only contains the ceph-deploy package

Re: [ceph-users] cephfs miss data for 15s when master mds rebooting

2017-12-17 Thread Wei Jin
On Fri, Dec 15, 2017 at 6:08 PM, John Spray wrote: > On Fri, Dec 15, 2017 at 1:45 AM, 13605702...@163.com > <13605702...@163.com> wrote: >> Hi >> >> I used 3 nodes to deploy MDS (each node also has a mon on it) >> >> my config: >> [mds.ceph-node-10-101-4-17] >> mds_standby_replay
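For context, a sketch of the pre-Mimic standby-replay configuration being discussed (section name and rank are placeholders):

  [mds.ceph-node-a]
  mds_standby_replay = true
  mds_standby_for_rank = 0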

Re: [ceph-users] cephfs mds millions of caps

2017-12-14 Thread Wei Jin
> > So, questions: does that really matter? What are the possible impacts? What > could have caused these 2 hosts to hold so many capabilities? > One of the hosts is for test purposes, traffic is close to zero. The other > host wasn't using cephfs at all. All services stopped. > The reason might be
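A quick sketch for seeing which clients hold the caps (MDS name is a placeholder; run on the MDS host):

  ceph daemon mds.<name> session ls    # per-client num_caps, client hostname, and mount state
  ceph daemon mds.<name> perf dump     # counters for inodes and caps held by the MDS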

Re: [ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-15 Thread Wei Jin
I tried purge/purgedata and then redid the deploy command a few times, and it still fails to start the OSD. There is no error log; does anyone know what the problem is? BTW, my OS is Debian with a 4.4 kernel. Thanks. On Wed, Nov 15, 2017 at 8:24 PM, Wei Jin <wjin...@gmail.com> wrote: >
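When ceph-deploy itself shows no error, the failure is usually visible on the target host instead; a sketch of places to look under the Luminous-era ceph-disk workflow (OSD id is a placeholder):

  ceph-disk list                                  # was the partition prepared and activated?
  journalctl -u 'ceph-disk@*' --no-pager          # udev/systemd activation attempts
  tail -n 100 /var/log/ceph/ceph-osd.<id>.log     # if the daemon started and then died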

[ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-15 Thread Wei Jin
Hi, list, My machine has 12 SSDs. There are some errors from ceph-deploy; it fails randomly. root@n10-075-012:~# ceph-deploy osd create --zap-disk n10-075-094:sdb:sdb [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf [ceph_deploy.cli][INFO ] Invoked (1.5.39):

[ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-15 Thread Wei Jin
Hi, list, My machine has 12 SSD disks, and I use ceph-deploy to deploy them. But for some machines/disks, it fails to start the OSD. I tried many times; some succeeded but others failed, and there is no error info. Following is the ceph-deploy log for one disk: root@n10-075-012:~# ceph-deploy osd create
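For reference, a retry sequence in the spirit of the thread, using the ceph-deploy 1.5.x host:disk syntax shown in the log (same example hostname/device; this is a sketch, not a guaranteed fix):

  ceph-deploy disk zap n10-075-094:sdb
  ceph-deploy osd create --zap-disk n10-075-094:sdb:sdb
  ceph osd tree    # confirm the new OSD came up under n10-075-094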

Re: [ceph-users] pg inconsistent and repair doesn't work

2017-10-25 Thread Wei Jin
": "0x", "data_digest": "0x" }, { "osd": 133, "errors": [ "size_mismatch_oi" ],
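The fragment above looks like part of the inconsistent-object report; a sketch of how such output is produced (pg id is a placeholder):

  rados list-inconsistent-obj <pgid> --format=json-pretty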

[ceph-users] pg inconsistent and repair doesn't work

2017-10-24 Thread Wei Jin
Hi, list, We ran into a pg deep scrub error, and we tried to repair it with `ceph pg repair pgid`, but it didn't work. We also checked the object files and found that all 3 replicas were zero size. What's the problem, is it a bug? And how do we fix the inconsistency? I haven't restarted the osds so
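A rough sequence for this kind of inconsistency, as a sketch only (pg id is a placeholder): repair reconstructs shards from a copy it considers authoritative, so with size_mismatch_oi and every replica zero-sized it may have nothing good to copy from, which would match the behaviour described.

  ceph health detail           # identify the inconsistent pg and scrub error count
  ceph pg deep-scrub <pgid>    # re-scrub after any manual intervention
  ceph pg repair <pgid>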

Re: [ceph-users] clock skew

2017-04-01 Thread Wei Jin
On Sat, Apr 1, 2017 at 5:17 PM, mj wrote: > Hi, > > Despite ntp, we keep getting clock skews that automatically disappear again after a > few minutes. > > To prevent the unnecessary HEALTH_WARNs, I have increased the > mon_clock_drift_allowed to 0.2, as can be seen below: > >>
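A sketch of the setting being discussed, in ceph.conf form and injected at runtime (the 0.2 value is simply the one from the thread; injectargs may report that a restart is needed):

  [mon]
  mon_clock_drift_allowed = 0.2

  ceph time-sync-status                                        # per-mon clock offsets as seen by the leader
  ceph tell mon.* injectargs '--mon_clock_drift_allowed 0.2'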

Re: [ceph-users] OSDs are flapping and marked down wrongly

2016-10-17 Thread Wei Jin
On Mon, Oct 17, 2016 at 3:16 PM, Somnath Roy wrote: > Hi Sage et al., > > I know this issue has been reported a number of times in the community and attributed to > either network issues or unresponsive OSDs. > Recently, we are seeing this issue when our all-SSD cluster (Jewel based)
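A sketch of the usual knobs involved when OSDs are wrongly reported down (values are illustrative only, not recommendations):

  ceph osd set nodown      # temporarily stop marking OSDs down while investigating
  ceph osd unset nodown

  [osd]
  osd_heartbeat_grace = 30          # default 20s before a peer is reported down
  [mon]
  mon_osd_min_down_reporters = 3    # distinct reporters required before marking an OSD down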