There is plenty of data in this cluster (2PB), so please help us, thanks.
Before doing these dangerous operations
(http://docs.ceph.com/docs/master/cephfs/disaster-recovery-experts/#disaster-recovery-experts),
do you have any suggestions?
Ceph version: 12.2.12
ceph fs status:
cephfs - 1057 clients
==
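For reference, before attempting any of those disaster-recovery steps, a rough sketch of the journal backups that could be taken first (the metadata pool name cephfs_metadata and the backup paths are assumptions, not taken from this cluster):

# Export the rank 0 MDS journal so it can be restored if recovery goes wrong
cephfs-journal-tool journal export /root/mds0-journal-backup.bin

# Also keep raw copies of the rank 0 journal objects (inode 0x200) from the metadata pool
mkdir -p /root/journal-objects
rados -p cephfs_metadata ls | grep '^200\.' | while read obj; do
    rados -p cephfs_metadata get "$obj" "/root/journal-objects/$obj"
done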
with the IO workload. So it is confusing why there is so much
journal data that cannot be trimmed immediately. (The local cluster also has
the capacity to do more IO, including trimming operations.)
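For what it's worth, a rough sketch of how the journal replay/trim position can be checked for one image (the pool and image names here are placeholders):

# Shows the journal's minimum/active sets and each registered client's commit position
rbd journal status --pool rbd --image vm-disk-1

# Shows whether the remote rbd-mirror daemon is replaying and how far behind it is
rbd mirror image status rbd/vm-disk-1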
> On Nov 6, 2018, at 9:25 PM, Jason Dillaman wrote:
>
> On Tue, Nov 6, 2018 at 1:12 A
> On Nov 6, 2018, at 3:39 AM, Jason Dillaman wrote:
>
> On Sun, Nov 4, 2018 at 11:59 PM Wei Jin wrote:
>>
>> Hi, Jason,
>>
>> I have a question about rbd mirroring. When enabling mirroring, we observed
>> that there are a lot of objects prefixed with journal_d
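A rough sketch of how such objects can be counted per image (the pool name rbd is a placeholder; journal data objects are typically named journal_data.<pool_id>.<image_id>.<object_number>):

rados -p rbd ls | grep '^journal_data\.' | wc -l
# rough per-image breakdown
rados -p rbd ls | grep '^journal_data\.' | cut -d. -f1-3 | sort | uniq -c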
Same issue here.
Will the Ceph community support Debian Jessie in the future?
On Mon, Mar 5, 2018 at 6:33 PM, Florent B wrote:
> Jessie is no longer supported??
> https://download.ceph.com/debian-luminous/dists/jessie/main/binary-amd64/Packages
> only contains ceph-deploy package
On Fri, Dec 15, 2017 at 6:08 PM, John Spray wrote:
> On Fri, Dec 15, 2017 at 1:45 AM, 13605702...@163.com
> <13605702...@163.com> wrote:
>> Hi,
>>
>> I used 3 nodes to deploy MDS (each node also has a mon on it)
>>
>> my config:
>> [mds.ceph-node-10-101-4-17]
>> mds_standby_replay
>
> So, questions: does that really matter? What are the possible impacts? What
> could have caused these 2 hosts to hold so many capabilities?
> One of the hosts is for test purposes; traffic is close to zero. The other
> host wasn't using cephfs at all. All services were stopped.
>
The reason might be
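A rough sketch of how per-client capability counts can be inspected on the active MDS (the daemon name below is taken from the config snippet above and may not match your active MDS):

# Each session entry includes num_caps and the client metadata
ceph daemon mds.ceph-node-10-101-4-17 session ls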
I tried to do purge/purgedata and then redo the deploy command a
few times, and it still fails to start the osd.
And there is no error log; does anyone know what the problem is?
BTW, my OS is Debian with a 4.4 kernel.
Thanks.
On Wed, Nov 15, 2017 at 8:24 PM, Wei Jin <wjin...@gmail.com> wrote:
>
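When the osd does not come up and ceph-deploy itself shows nothing, a rough sketch of where to look next (the osd id 12 and paths are placeholders):

# Did ceph-disk prepare/activate the disk, and which osd id did it get?
ceph-disk list

# systemd unit state and logs for that osd id
systemctl status ceph-osd@12
journalctl -u ceph-osd@12 --no-pager | tail -n 50
tail -n 50 /var/log/ceph/ceph-osd.12.log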
Hi, List,
My machine has 12 SSDs.
There are some errors from ceph-deploy; it fails randomly.
root@n10-075-012:~# ceph-deploy osd create --zap-disk n10-075-094:sdb:sdb
[ceph_deploy.conf][DEBUG ] found configuration file at:
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39):
Hi, List,
My machine has 12 SSD disks, and I use ceph-deploy to deploy them. But for some
machines/disks it fails to start the osd.
I tried many times; some succeed but others fail, and there is no error info.
The following is the ceph-deploy log for one disk:
root@n10-075-012:~# ceph-deploy osd create
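A rough sketch of what can be tried by hand for one such disk before re-running ceph-deploy (the device /dev/sdb is a placeholder):

# Wipe stale partition tables/GPT structures that can make activation fail silently
ceph-disk zap /dev/sdb

# Run the prepare/activate steps directly so their output is visible
ceph-disk prepare /dev/sdb
ceph-disk activate /dev/sdb1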
t;: "0x",
"data_digest": "0x"
},
{
"osd": 133,
"errors": [
"size_mismatch_oi"
],
Hi, list,
We ran into a pg deep-scrub error, and we tried to repair it with `ceph pg
repair <pgid>`, but it didn't work. We also verified the object files, and
found that all 3 replicas were zero size. What is the problem; is it
a bug? And how can we fix the inconsistency? I haven't restarted the
osds so
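A rough sketch of how the inconsistency can be inspected in more detail before deciding how to repair it (the pool name and pg id below are placeholders):

# Which pgs in the pool are inconsistent?
rados list-inconsistent-pg <pool-name>

# What exactly is wrong in this pg (per-shard sizes, digests, errors)?
rados list-inconsistent-obj <pgid> --format=json-pretty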
On Sat, Apr 1, 2017 at 5:17 PM, mj wrote:
> Hi,
>
> Despite ntp, we keep getting clock skews that disappear again on their own
> after a few minutes.
>
> To prevent the unnecessary HEALTH_WARNs, I have increased
> mon_clock_drift_allowed to 0.2, as can be seen below:
>
>>
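For reference, a rough sketch of the setting being discussed and of how the monitors' measured offsets can be checked (the 0.2 value is just the one quoted above):

# ceph.conf on the monitor hosts
[mon]
mon_clock_drift_allowed = 0.2

# Show each monitor's time offset as seen by the cluster
ceph time-sync-status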
On Mon, Oct 17, 2016 at 3:16 PM, Somnath Roy wrote:
> Hi Sage et al.,
>
> I know this issue has been reported a number of times in the community and
> attributed to either network issues or unresponsive OSDs.
> Recently, we are seeing this issue when our all-SSD cluster (Jewel based)