Hi,
I am facing an issue with a cephadm cluster setup. Whenever I try to add remote devices as OSDs, the command just hangs.
The steps I have followed:
sudo ceph orch daemon add osd node1:device
For the setup I followed the steps mentioned in: https://ralph.blog.imixs.com/2020/04/14/cep
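Not a fix, but a few checks that may narrow it down (treat /dev/sdb and the host name as placeholders for your actual device and node):

# Confirm cephadm sees the disk on the remote host and reports it as available
# (no partitions, no LVM signatures, not already an OSD):
ceph orch device ls

# Watch the cephadm module log in a second terminal while re-running the add:
ceph -W cephadm

# If the disk carries leftover metadata, wipe it first (this destroys its data),
# then retry with an explicit device path:
ceph orch device zap node1 /dev/sdb --force
ceph orch daemon add osd node1:/dev/sdb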
Hi all,
Is there any Telegram group for communicating with Ceph users?
I encountered the same issue. I found that I had missed the restart step,
and after restarting the RGW I could commit the period.
What's more, I renamed the default zone as well as the zonegroup.
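Roughly, the sequence that worked for me looked like this (the systemd target below restarts every RGW instance on the host, so adjust to however your gateways are deployed; the new zone/zonegroup names are only examples):

# Rename the defaults left over from the initial single-site setup:
radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name <zonegroup>
radosgw-admin zone rename --rgw-zone default --zone-new-name <zone> --rgw-zonegroup <zonegroup>

# Restart the gateways on the master zone so they pick up the new configuration:
systemctl restart ceph-radosgw.target

# Then commit the period:
radosgw-admin period update --commit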
Sailaja Yedugundla wrote on Tue, May 26, 2020 at 11:06 AM:
> Yes. I restarted the rgw service on master zone before committing
Yes. I restarted the rgw service on the master zone before committing the period.
Still facing the issue.
Did you restart the rgw service on the master zone?
Sailaja Yedugundla wrote on Tue, May 26, 2020 at 1:09 AM:
> Hi,
> I am trying to setup multisite cluster with 2 sites. I created master
> zonegroup and zone by following the instructions given in the
> documentation. On the secondary zone cluster I could pull the mas
Hi,
Just want to post an update here: the object count has decreased to 4 now.
I don't know whether it was simply a matter of time or whether a system reboot
brought it back to normal. All nodes were rebooted after scheduled system updates,
but I forgot to jot down the object counts before the maintenance.
Anyway, the issue is fixed.
Thanks e
Hi Sailaja,
Maybe you can try restarting the rgw on the master zone before committing the period on
the secondary zone.
Sailaja Yedugundla wrote on Mon, May 25, 2020 at 11:24 PM:
> I am also facing the same problem. Did you find any solution?
Whether it is quick and easy depends on your network infrastructure. Sometimes it is
difficult or impossible to retrofit a live cluster without disruption.
> On May 25, 2020, at 1:03 AM, Marc Roos wrote:
>
>
> I am interested. I am always setting mtu to 9000. To be honest I cannot
> imagine there is n
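If you do go for jumbo frames, a quick end-to-end sanity check is worth it (interface name and peer address below are placeholders; every switch port and host in the path has to carry the larger MTU):

# Set the MTU on the cluster/public interface:
ip link set dev eth0 mtu 9000

# Verify the full path: 8972 = 9000 minus 20 bytes IP and 8 bytes ICMP header;
# -M do forbids fragmentation, so the ping only succeeds if jumbo frames pass end to end:
ping -M do -s 8972 <peer-ip>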
Hello,
I didn't find any information about the replication factor in the zonegroup.
Assume I have three Ceph clusters with a RADOS Gateway in one zonegroup, each
with replica size 3. How many replicas of an object will I get in total?
Is it possible to define several regions, each with several
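For what it's worth, my understanding (assuming the object is synced to every zone in the zonegroup and each zone's data pool uses size 3) is that the counts simply multiply:

3 zones x 3 replicas per zone = 9 copies of the object in total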
Hi Mark,
Thank you! This is 14.2.8, on Ubuntu Bionic. Some with kernel 4.15, some
with 5.3, but that does not seem to make a difference here. Transparent
Huge Pages are not used according to
grep -i AnonHugePages /proc/meminfo
Workload is a mix of OpenStack volumes (replicated) and RGW on EC 8
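In case it helps, the commands I usually use to see where OSD memory goes on Nautilus (osd.0 is only an example ID, and both commands need the admin socket on the node hosting that OSD):

# The per-OSD memory target driving BlueStore cache autotuning:
ceph daemon osd.0 config get osd_memory_target

# Breakdown of memory by allocator pool for a single OSD:
ceph daemon osd.0 dump_mempools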
Hi Sebastian,
Thank you for the reply.
When I ran that command I got:
[17:09] [root] [vx-rg23-rk65-u43-130 ~] # ceph mon ok-to-stop
mon.vx-rg23-rk65-u43-130
quorum should be preserved (vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1) after
stopping [mon.vx-rg23-rk65-u43-130]
Does this mean upgrad
Hi,
I am trying to set up a multisite cluster with 2 sites. I created the master zonegroup
and zone by following the instructions given in the documentation. On the
secondary zone cluster I could pull the master zone. I created the secondary zone.
When I tried to commit the period I am getting the follo
I understand now.
Thank you very much for your input.
On 5/25/2020 3:28 PM, lin yunfan wrote:
I think the shard number recommendation is 100K objects/per shard/per
bucket. If you have many objects but they are spread in many
buckets/containers and each bucket/container have less than 1.6M
obje
On 22.05.20 at 19:28, Gencer W. Genç wrote:
> Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130
please make sure that
ceph mon ok-to-stop mon.vx-rg23-rk65-u43-130
returns OK
--
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg). Geschäfts
Hi,
Does this help?
https://github.com/cernceph/ceph-scripts/blob/master/tools/scrubbing/autorepair.sh
Cheers, Dan
On Mon, May 25, 2020 at 5:18 PM Daniel Aberger - Profihost AG
wrote:
>
> Hello,
>
> we are currently experiencing problems with ceph pg repair not working
> on Ceph Nautilus 14.2.
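The script essentially automates the usual manual loop; a hand-run sketch of the same idea (2.7 stands in for whatever PG ID ceph health detail reports) would be:

# Identify the inconsistent PG and inspect which object/shard is damaged:
ceph health detail
rados list-inconsistent-obj 2.7 --format=json-pretty

# Ask the primary OSD to repair the PG:
ceph pg repair 2.7

# Optionally let deep scrub repair such errors automatically in the future:
ceph config set osd osd_scrub_auto_repair true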
I am also facing the same problem. Did you find any solution?
Hello,
we are currently experiencing problems with ceph pg repair not working
on Ceph Nautilus 14.2.8.
ceph health detail is showing us an inconsistent pg:
[ax- ~]# ceph health detail
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 1 scrub errors
PG_DA
Hello,
> Sorry for the late reply.
> I have pasted the crush map at the following URL: https://pastebin.com/ASPpY2VB
> and this is my osd tree output; the issue only occurs when I use it with a
> file layout.
Could you send the output of "ceph osd pool ls detail", please?
Yoann
> ID CLASS WEIGHT TYPE NAME
Hi,
you didn't really clear things up, so I'll just summarize what I
understood so far. Please also share 'ceph osd pool ls detail' and
'ceph fs status'.
One of the pools is configured with min_size 2 and size 2; this will
pause IO if one node goes down, as it's very likely that this node
Hello everyone
I've got a fresh Ceph Octopus installation and I'm trying to set up a CephFS
with an erasure-coded data pool.
The metadata pool was set up as default.
The erasure-coded pool was set up with this command:
-> ceph osd pool create ec-data_fs 128 erasure default
Enabled overwrites:
->
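For reference, the way I would wire an erasure-coded pool into CephFS on Octopus looks roughly like this (every pool name other than ec-data_fs is made up here; the usual recommendation is a replicated default data pool with the EC pool attached on top):

# Allow partial overwrites on the EC pool (required before CephFS can use it):
ceph osd pool set ec-data_fs allow_ec_overwrites true

# Replicated pools for metadata and for the default data pool:
ceph osd pool create cephfs-metadata 32
ceph osd pool create cephfs-data-rep 32

# Create the filesystem, then attach the EC pool as an additional data pool:
ceph fs new ec_fs cephfs-metadata cephfs-data-rep
ceph fs add_data_pool ec_fs ec-data_fs

# Direct a directory's files to the EC pool via a file layout:
setfattr -n ceph.dir.layout.pool -v ec-data_fs /mnt/ec_fs/some-dir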
Hi Kamil,
We got a similar setup, and that's our config:
osd  advanced  osd_max_scrubs             1
osd  advanced  osd_recovery_max_active    4
osd  advanced  osd_recovery
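Those values can also be applied at runtime through the config database instead of ceph.conf (same option names as above; this is just how we set them):

ceph config set osd osd_max_scrubs 1
ceph config set osd osd_recovery_max_active 4
# verify what ended up in the config database:
ceph config dump | grep -E 'osd_max_scrubs|osd_recovery'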
Hello,
We will be having a Ceph science/research/big cluster call on Wednesday
May 27th. If anyone wants to discuss something specific they can add it
to the pad linked below. If you have questions or comments you can
contact me.
This is an informal open call of community members mostly from
I think the shard number recommendation is 100K objects per shard per
bucket. If you have many objects but they are spread across many
buckets/containers, and each bucket/container has fewer than 1.6M
objects (16 shards x 100K = 1.6M with max_shards=16), then you should be OK.
linyunfan
Adrian Nicolae wrote on Mon, May 25, 2020 at 3:04 PM:
>
> I
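A quick way to check whether any bucket is getting close to that per-shard limit (the bucket name is a placeholder):

# Object count and shard count for a single bucket:
radosgw-admin bucket stats --bucket=<bucket-name>

# Objects-per-shard versus the configured maximum, for every bucket:
radosgw-admin bucket limit check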
Hi
We have some clusters which are rbd only. Each time someone uses
radosgw-admin by mistake on those clusters, rgw pools are auto created.
Is there a way to disable that? I mean the part:
"When radosgw first tries to operate on a zone pool that does not exist, it
will create that pool with t
Hi,
I've got a 4-node cluster with 13 x 15TB 7.2k OSDs per node and around 300TB of data inside.
I'm having issues with deep scrubs/scrubs not finishing in time; any tips on
handling these operations with large disks like this?
osd pool default size = 2
osd deep scrub interval = 2592000
osd scrub begin hour = 23
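Not a recipe, but the first things I would look at: how far behind the PGs actually are, and whether the scrub window and per-OSD limits are the bottleneck (the values below only mirror the ones quoted above and are examples):

# See which PGs are overdue for (deep-)scrubbing:
ceph health detail | grep -i scrub

# Widen the allowed window and/or let more scrubs run per OSD:
ceph config set osd osd_scrub_begin_hour 23
ceph config set osd osd_scrub_end_hour 7
ceph config set osd osd_max_scrubs 2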
In our lab setup, I'm simulating the future migration of CentOS 7 + Ceph
14.2.x to CentOS 8 + Ceph 15.2.x.
At the moment, I have upgraded one of the nodes, which is a combined
mon+mgr+mds+osd, to EL8 + 15.2.2. The other node (also a combined one) is
still on EL7 + 14.2.9.
The osd was detected and re-ad
I'm using only Swift, not S3. We have a container for every customer.
Right now there are thousands of containers.
On 5/25/2020 9:02 AM, lin yunfan wrote:
Can you store your data in different buckets?
linyunfan
Adrian Nicolae wrote on Tue, May 19, 2020 at 3:32 PM:
Hi,
I have the following Ceph Mimic
On Mon, May 25, 2020 at 10:03, Marc Roos wrote:
>
> I am interested. I am always setting mtu to 9000. To be honest I cannot
> imagine there is no optimization since you have less interrupt requests,
> and you are able x times as much data. Every time there something
> written about optimizing the f
Hi all,
I have a Nautilus cluster mostly used for RBD (openstack) and CephFS.
I have been using rbd perf command from time to time but it doesn't
work anymore. I have tried several images in different pools but
there's no output at all except for
client:~ $ rbd perf image iostat --format
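One thing worth checking: as far as I know, rbd perf image iostat gets its counters from the active mgr's rbd_support module, so make sure the mgr is healthy and the module is not erroring (the pool name is a placeholder):

# rbd_support is an always-on mgr module; confirm it is listed and error-free:
ceph mgr module ls

# Then retry against a specific pool:
rbd perf image iostat <pool-name>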
I am interested. I am always setting the MTU to 9000. To be honest, I cannot
imagine there is no optimization, since you have fewer interrupt requests
and you can move x times as much data. Every time something is
written about optimizing, the first thing mentioned is changing to MTU
9000. Beca