On Tue, 21 May 2019 at 02:12, mr. non non wrote:
> Has anyone had this issue before? From what I have researched, many people
> have issues with rgw.index related to too small a number of index shards
> (too many objects per index shard).
> I also checked this thread…
>
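As context for the sharding issue above (my addition, not from the original mail): a common rule of thumb is to keep on the order of 100,000 objects per bucket index shard. A hedged sketch of sizing the shard count, with the usual `radosgw-admin` reshard commands shown as comments (the bucket name and object count are made up for illustration):

```shell
# Rule of thumb (assumption): aim for ~100k objects per index shard.
objects=3000000                 # hypothetical object count for the bucket
per_shard=100000
shards=$(( (objects + per_shard - 1) / per_shard ))   # round up
echo "$shards"                  # prints 30

# With that number, resharding would look roughly like:
#   radosgw-admin bucket stats --bucket=mybucket      # check current counts
#   radosgw-admin reshard add --bucket=mybucket --num-shards=$shards
#   radosgw-admin reshard process
```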
On Fri, 12 Apr 2019 at 16:37, Jason Dillaman wrote:
> On Fri, Apr 12, 2019 at 9:52 AM Magnus Grönlund
> wrote:
> >
> > Hi Jason,
> >
> > Tried to follow the instructions, and setting the debug level to 15
> > worked OK, but the daemon appeared to silently ignore…
> > "leader_instance_id": "889074",
> > "leader": true,
> > "instances": [],
> > "local_cluster_admin_socket": "/var/run/ceph/client.backup.1936211.backup.94225678548048.asok",
> > "remote_cluster_admin_socket": "/var/run/ceph/client.productionbackup.1936211.production.94225679621728.asok",
> > "sync_throttler": {
> >     "max_parallel_syncs": 5,
> >     "running_syncs": 0,
> >     "waiting_syncs": …
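The `"max_parallel_syncs": 5` above corresponds to the `rbd_mirror_concurrent_image_syncs` option (default 5). As a hedged sketch (my addition; the socket path is the reconstructed one from the output above and the value 10 is illustrative), raising it on the running daemon via its admin socket might look like:

```shell
# Illustrative only: query and raise the concurrent image-sync limit on a
# running rbd-mirror daemon through its admin socket (adjust the path to
# your environment).
ASOK=/var/run/ceph/client.backup.1936211.backup.94225678548048.asok

ceph --admin-daemon "$ASOK" config get rbd_mirror_concurrent_image_syncs
ceph --admin-daemon "$ASOK" config set rbd_mirror_concurrent_image_syncs 10
```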
On Tue, 9 Apr 2019 at 17:14, Jason Dillaman wrote:
> On Tue, Apr 9, 2019 at 11:08 AM Magnus Grönlund
> wrote:
> >
> > > On Tue, Apr 9, 2019 at 10:40 AM Magnus Grönlund wrote:
> > >>
> > >> Hi,
> > >> We have configured one-way replication of pools between a production
> > >> cluster and a backup cluster. But unfortunately the rbd-mirror or the
> > >> backup cluster is unable to keep up with the production cluster so the
> > >> replication fails to reach replaying state.
> > >> And the journals on the rbd volumes keep…
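To see how far behind the mirror has fallen, the standard journal and mirroring status commands are the usual starting point. A hedged sketch (my addition; pool and image names are placeholders):

```shell
# Show mirroring status for every image in the pool, including how far the
# remote replay position lags behind the local journal.
rbd mirror pool status --verbose mypool

# Inspect the journal of a single image (placeholder names); the gap between
# the master and client positions indicates the replication backlog.
rbd journal info --pool mypool --image myimage
rbd journal status --pool mypool --image myimage
```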
at 15:24, Magnus Grönlund wrote:
> Hi,
>
> I’m trying to set up one-way rbd-mirroring for a ceph-cluster used by an
> openstack cloud, but the rbd-mirror is unable to “catch up” with the
> changes. However, it appears to me as if it's not due to the ceph-clusters
> or the network, but due to the server running the rbd-mirror process…
Hi Jocelyn,
I'm in the process of setting up rbd-mirroring myself and stumbled on the
same problem. But I think the "trick" here is to _not_ colocate the
rbd-mirror daemon with any other part of the cluster(s); it should be run
on a separate host. That way you can change the CLUSTER_NAME…
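For what it's worth, a common layout for a dedicated rbd-mirror host is to give each cluster its own name, config file, and keyring, so the daemon can reach both sides. A hedged sketch (my addition; cluster names, the client id, and file names are illustrative):

```shell
# On the dedicated mirror host: one conf + keyring per cluster, following
# the $cluster.conf / $cluster.client.$id.keyring naming convention.
ls /etc/ceph/
#   production.conf  production.client.mirror.keyring
#   backup.conf      backup.client.mirror.keyring

# Run the daemon against the backup (local) cluster; it pulls from the
# production (remote) cluster configured as a pool peer.
rbd-mirror --cluster backup --id mirror
```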
> AFAIK (correct me if I'm wrong, but I think we ran into this once) if your
> mons and osds are different versions.
>
> // david
>
> On Jul 12, 2018, at 11:45 am, Magnus Grönlund wrote:
>
>
> Hi list,
>
> Things went from bad to worse, tried to upgrade some OSDs…
…on the disks at
least, but maybe it is too late?
/Magnus
2018-07-11 21:10 GMT+02:00 Magnus Grönlund :
> Hi Paul,
>
> No, all OSDs are still Jewel; the issue started before I had even started
> to upgrade the first OSD, and they don't appear to be flapping.
> ceph -w shows a lot of s…
…up restarting the OSDs which were stuck in that state and they
> immediately fixed themselves.
> It should also work to just "out" the problem OSDs and immediately "in"
> them again to fix it.
>
> - Kevin
>
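The out/in trick quoted above maps onto the standard OSD commands; a hedged sketch (my addition; the OSD id is a placeholder):

```shell
# Placeholder OSD id: mark it out and immediately back in to nudge it
# out of the stuck state, as suggested above.
ceph osd out 12
ceph osd in 12

# Or simply restart the stuck daemon (systemd deployments):
systemctl restart ceph-osd@12
```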
> 2018-07-11 20:30 GMT+02:00 Magnus Grönlund :
>
>> Hi,
>>
> Paul
>
> 2018-07-11 20:30 GMT+02:00 Magnus Grönlund :
>
>> Hi,
>>
>> Started to upgrade a ceph-cluster from Jewel (10.2.10) to Luminous
>> (12.2.6)
>>
>> After upgrading and restarting the mons everything looked OK, the mons
>> had quorum, al…
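After upgrading the mons (as above), the running version of each daemon can be checked before touching the OSDs; a hedged sketch (my addition; note `ceph versions` only exists once the mons themselves run Luminous or later):

```shell
# Once the mons are on Luminous, summarize running versions per daemon type.
ceph versions

# On Jewel-era clusters, a per-daemon query works instead:
ceph tell osd.* version
```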