[ceph-users] Re: Ceph Snapshot Children not exists / children relation broken

2020-08-20 Thread Konstantin Shalygin
On 8/3/20 2:07 PM, Torsten Ennenbach wrote: Hi Jason. Well, I haven't tried that, because I am afraid to break something :/ I don't really understand what you are doing there :( Thanks anyway. Maybe you hit this bug [1]? I have a how-to solution [2] to resolve it; please try again.

[ceph-users] Re: 1 pg inconsistent

2020-07-14 Thread Konstantin Shalygin
On 7/14/20 4:13 PM, Abhimnyu Dhobale wrote: Ceph is showing below error frequently. every time after pg repair it is resolved. [root@vpsapohmcs01 ~]# ceph health detail HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent OSD_SCRUB_ERRORS 1 scrub errors PG_DAMAGED Possible data

[ceph-users] Re: Ceph Zabbix Monitoring : No such file or directory

2020-07-06 Thread Konstantin Shalygin
On 7/6/20 7:54 PM, etiennem...@gmail.com wrote: We are trying to make the Zabbix module of our Ceph cluster work, but I'm encountering an issue that has me stuck. Configuration of the module looks OK and we manage to send data using zabbix_sender to the host that is configured in Zabbix. We can

[ceph-users] Re: Lifecycle message on logs

2020-07-02 Thread Konstantin Shalygin
On 6/25/20 7:53 PM, Marcelo Miziara wrote: Hello... it's the first time I need to use the lifecycle, and I created a bucket and set it to expire in one day with s3cmd: s3cmd expire --expiry-days=1 s3://bucket The rgw_lifecycle_work_time is set to the default value (00:00-06:00). But I

[ceph-users] Re: changing acces vlan for all the OSDs - potential downtime ?

2020-06-04 Thread Konstantin Shalygin
On 6/4/20 4:26 PM, Adrian Nicolae wrote: Hi all, I have a Ceph cluster with a standard setup : - the public network : MONs and OSDs conected in the same agg switch with ports in the same access vlan - private network :  OSDs connected in another switch with a second eth connected in

[ceph-users] Re: Nautilus to Octopus Upgrade mds without downtime

2020-05-27 Thread Konstantin Shalygin
On 5/27/20 8:43 PM, Andreas Schiefer wrote: if I understand correctly: if we upgrade from a running Nautilus cluster to Octopus we have a downtime on an update of MDS. Is this correct? This is always the case when upgrading the MDS across a major or minor version; it only hangs for the restart, actually clients

[ceph-users] Re: looking for telegram group in English or Chinese

2020-05-26 Thread Konstantin Shalygin
On 5/26/20 1:13 PM, Zhenshi Zhou wrote: Is there any telegram group for communicating with ceph users? AFAIK there is only a Russian (CIS) group [1], but feel free to join in English! [1] https://t.me/ceph_ru k

[ceph-users] Re: Multisite RADOS Gateway replication factor in zonegroup

2020-05-26 Thread Konstantin Shalygin
On 5/25/20 9:50 PM, alexander.vysoc...@megafon.ru wrote: I didn't find any information about the replication factor in the zone group. Assume I have three Ceph clusters with RADOS Gateway in one zone group, each with replica size 3. How many replicas of an object will I get in total? Is it

[ceph-users] Re: Ceph modules

2020-05-15 Thread Konstantin Shalygin
Hi, On 5/15/20 2:37 PM, Alfredo De Luca wrote: Just a quick one. Are there any ansible modules for ceph around? https://github.com/ceph/ceph-ansible k

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Konstantin Shalygin
On 5/14/20 1:27 PM, Kees Meijs wrote: Thank you very much. That's a good question. The implementations of OpenStack and Ceph and "the other" OpenStack and Ceph are, apart from networking, completely separate. Actually I was thinking you would perform an OpenStack and Ceph upgrade, not a migration to

[ceph-users] Re: Migrating clusters (and versions)

2020-05-13 Thread Konstantin Shalygin
On 5/8/20 2:32 AM, Kees Meijs wrote: I'm in the middle of an OpenStack migration (obviously Ceph backed) and stumble into some huge virtual machines. To ensure downtime is kept to a minimum, I'm thinking of using Ceph's snapshot features using rbd export-diff and import-diff. However, is it

[ceph-users] Re: Check if upmap is supported by client?

2020-04-24 Thread Konstantin Shalygin
On 4/13/20 4:52 PM, Frank Schilder wrote: Is there a way to check if a client supports upmap? Yes, and it's actually not hard; for example: # echo 0x27018fb86aa42ada | python detect_upmap.py Upmap is supported The gist: https://gist.github.com/k0ste/96905ebd1c73c5411dd8d03a9c14b0ea k
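The hex value piped into the script is a client feature mask; a minimal sketch of where such masks come from, assuming a mon admin socket named mon.a and the gist's detect_upmap.py in the current directory:

    # summary of feature releases/masks reported by connected clients and daemons
    ceph features
    # per-session feature masks from a monitor admin socket (mon.a is an example name)
    ceph daemon mon.a sessions
    # decode one mask with the script from the gist above
    echo 0x27018fb86aa42ada | python detect_upmap.py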

[ceph-users] Re: Ceph pool quotas

2020-03-20 Thread Konstantin Shalygin
On 3/18/20 10:09 PM, Stolte, Felix wrote: a short question about pool quotas. Do they apply to the stats attributes "stored" or "bytes_used" (is replication counted or not)? Quotas apply to the total used space of the pool on the OSDs, so this is a threshold on bytes_used. k
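For context, pool quotas are configured with `ceph osd pool set-quota`; a minimal sketch, assuming a pool named mypool:

    # cap the pool at 10 TiB of used space (bytes_used) and 10M objects
    ceph osd pool set-quota mypool max_bytes 10995116277760
    ceph osd pool set-quota mypool max_objects 10000000
    # show the configured quotas
    ceph osd pool get-quota mypool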

[ceph-users] Re: ceph objecy storage client gui

2020-03-20 Thread Konstantin Shalygin
On 3/18/20 7:06 PM, Ignazio Cassano wrote: Hello All, I am looking for a free/open-source object storage client GUI (Linux and Windows) for end users. I tried SwiftStack but it is only for personal use. Help, please? Ignazio You should try S3 Browser [1]. AFAIK, this is Windows

[ceph-users] Re: Is there a better way to make a samba/nfs gateway?

2020-03-16 Thread Konstantin Shalygin
On 3/16/20 4:10 PM, mj wrote: Just out of curiosity: We are currently running a samba server with RBD disks as a VM on our proxmox/ceph cluster. I see the advantage of having vfs_ceph_snapshots of the samba user-data. But then again: re-sharing data using samba vfs_ceph adds a layer of

[ceph-users] Re: Is there a better way to make a samba/nfs gateway?

2020-03-15 Thread Konstantin Shalygin
On 3/14/20 3:08 AM, Seth Galitzer wrote: Thanks to all who have offered advice on this. I have been looking at using vfs_ceph in Samba, but I'm unsure how to get it on CentOS 7. As I understand it, it's optional at compile time. When searching for a package for it, I see one glusterfs

[ceph-users] Re: Is there a better way to make a samba/nfs gateway?

2020-03-15 Thread Konstantin Shalygin
On 3/13/20 8:49 PM, Marc Roos wrote: Can you also create snapshots via the vfs_ceph solution? Yes! Since Samba 4.11 this is supported via the vfs_ceph_snapshots module. k

[ceph-users] Re: Is there a better way to make a samba/nfs gateway?

2020-03-14 Thread Konstantin Shalygin
On 3/13/20 10:47 PM, Chip Cox wrote: Konstantin - in your Windows environment, would it be beneficial to have the ability to have NTFS data land as S3 (object store) on a Ceph storage appliance?  Or does it have to be NFS? Thanks and look forward to hearing back. Nope, for windows we use

[ceph-users] Re: Is there a better way to make a samba/nfs gateway?

2020-03-12 Thread Konstantin Shalygin
On 3/11/20 11:16 PM, Seth Galitzer wrote: I have a hybrid environment and need to share with both Linux and Windows clients. For my previous iterations of file storage, I exported nfs and samba shares directly from my monolithic file server. All Linux clients used nfs and all Windows clients

[ceph-users] Re: MGRs failing once per day and generally slow response times

2020-03-12 Thread Konstantin Shalygin
On 3/13/20 12:57 AM, Janek Bevendorff wrote: NTPd is running, all the nodes have the same time to the second. I don't think that is the problem. As always in such cases: try switching from ntpd to the default EL7 daemon, chronyd. k

[ceph-users] Re: reset pgs not deep-scrubbed in time

2020-03-11 Thread Konstantin Shalygin
On 3/10/20 5:33 PM, Stefan Priebe - Profihost AG wrote: is there any way to reset deep-scrubbed time for pgs? The cluster was accidently in state nodeep-scrub and is now unable to deep scrub fast enough. Is there any way to force mark all pgs as deep scrubbed to start from 0 again? you can
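The quoted reply is cut off; one hedged workaround (not necessarily the one suggested) is simply to kick off a deep scrub of every PG and let them complete over time:

    # force a deep scrub of all PGs; this generates real scrub I/O,
    # so consider throttling it or iterating per pool instead
    for pg in $(ceph pg ls | awk '{print $1}' | grep -E '^[0-9]+\.[0-9a-f]+$'); do
        ceph pg deep-scrub "$pg"
    done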

[ceph-users] Re: Question about ceph-balancer and OSD reweights

2020-02-26 Thread Konstantin Shalygin
On 2/26/20 3:40 AM, shubjero wrote: I'm running a Ceph Mimic cluster 13.2.6 and we use the ceph-balancer in upmap mode. This cluster is fairly old and pre-Mimic we used to set osd reweights to balance the standard deviation of the cluster. Since moving to Mimic about 9 months ago I enabled the

[ceph-users] Re: Running MDS server on a newer version than monitoring nodes

2020-02-26 Thread Konstantin Shalygin
On 2/26/20 12:49 AM, Martin Palma wrote: is it possible to run MDS on a newer version than the monitoring nodes? I mean we run monitoring nodes on 12.2.10 and would like to upgrade the MDS to 12.2.13 is this possible? Just upgrade your cluster to 12.2.13. Luminous is safe and very stable.

[ceph-users] Re: RGW do not show up in 'ceph status'

2020-02-21 Thread Konstantin Shalygin
On 2/21/20 3:04 PM, Andreas Haupt wrote: As you can see, only the first, old RGW (ceph-s3) is listed. Is there any place where the RGWs need to get "announced"? Any idea how to debug this? Have you tried restarting the active mgr? k

[ceph-users] Re: centos7 / nautilus where to get kernel 5.5 from?

2020-02-17 Thread Konstantin Shalygin
On 2/14/20 9:18 PM, Marc Roos wrote: I have default centos7 setup with nautilus. I have been asked to install 5.5 to check a 'bug'. Where should I get this from? I read that the elrepo kernel is not compiled like rhel. http://elrepo.org/tiki/kernel-ml k

[ceph-users] Re: ceph fs dir-layouts and sub-directory mounts

2020-01-29 Thread Konstantin Shalygin
On 1/29/20 6:03 PM, Frank Schilder wrote: I would like to (in this order) - set the data pool for the root "/" of a ceph-fs to a custom value, say "P" (not the initial data pool used in fs new) - create a sub-directory of "/", for example "/a" - mount the sub-directory "/a" with a client key

[ceph-users] Re: Benchmark results for Seagate Exos2X14 Dual Actuator HDDs

2020-01-16 Thread Konstantin Shalygin
On 1/15/20 11:58 PM, Paul Emmerich wrote: we ran some benchmarks with a few samples of Seagate's new HDDs that some of you might find interesting: Blog post: https://croit.io/2020/01/06/2020-01-06-benchmark-mach2 GitHub repo with scripts and raw data:

[ceph-users] Re: Changing failure domain

2020-01-13 Thread Konstantin Shalygin
On 1/6/20 5:50 PM, Francois Legrand wrote: I still have few questions before going on. It seems that some metadata should remains on the original data pool, preventing it's deletion (http://ceph.com/geen-categorie/ceph-pool-migration/ and

[ceph-users] Re: rgw logs

2019-12-24 Thread Konstantin Shalygin
On 12/24/19 6:26 AM, Frank R wrote: I have about 1TB of data in the pool default.rgw.logs. What logs are stored in this pool and can they be safely deleted? It can be trimmed via `radosgw-admin usage trim`. k
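A hedged example of the trim, assuming the usage log is what fills the pool (dates and uid are placeholders):

    # trim usage-log entries in a date range
    radosgw-admin usage trim --start-date=2019-01-01 --end-date=2019-11-30
    # or trim the usage log of a single user
    radosgw-admin usage trim --uid=johndoe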

[ceph-users] Re: Changing failure domain

2019-12-23 Thread Konstantin Shalygin
On 12/19/19 10:22 PM, Francois Legrand wrote: Thus my question is *how can I migrate a data pool in EC of a cephfs to another EC pool?* I suggest this: # create your new EC pool # `ceph osd pool application enable ec_new cephfs` # `ceph fs add_data_pool cephfs ec_new` # `setfattr -n
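The last command is truncated; a minimal sketch of the whole sequence, assuming the new pool is called ec_new and the filesystem is mounted at /mnt/cephfs:

    # create the EC pool beforehand, then tag it and attach it to the fs
    ceph osd pool application enable ec_new cephfs
    ceph fs add_data_pool cephfs ec_new
    # point the directory layout at the new pool; only newly written files
    # land in ec_new, existing files must be copied/rewritten to move
    setfattr -n ceph.dir.layout.pool -v ec_new /mnt/cephfs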

[ceph-users] Re: Balancing PGs across OSDs

2019-12-22 Thread Konstantin Shalygin
On 12/18/19 2:16 PM, Lars Täuber wrote: the situation after moving the PGs with osdmaptool is not really better than without: $ ceph osd df class hdd […] MIN/MAX VAR: 0.86/1.08 STDDEV: 2.04 The OSD with the fewest PGs has 66 of them, the one with the most has 83. Is this the expected

[ceph-users] Re: Balancing PGs across OSDs

2019-12-16 Thread Konstantin Shalygin
On 12/16/19 3:25 PM, Lars Täuber wrote: Here it comes. Maybe it's a bug in osdmaptool: when fewer than one pool is defined, do_upmap is not actually executed. Try it like this: `osdmaptool osdmap.om --upmap upmap.sh --upmap-pool=cephfs_data --upmap-pool=cephfs_metadata --upmap-deviation=0
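Put in context, the surrounding workflow looks roughly like this (file names are examples):

    # export the current osdmap and compute upmap entries for both pools
    ceph osd getmap -o osdmap.om
    osdmaptool osdmap.om --upmap upmap.sh \
        --upmap-pool=cephfs_data --upmap-pool=cephfs_metadata \
        --upmap-deviation=0
    # review the generated "ceph osd pg-upmap-items ..." commands, then apply
    less upmap.sh
    bash upmap.sh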

[ceph-users] Re: Balancing PGs across OSDs

2019-12-16 Thread Konstantin Shalygin
On 12/16/19 2:42 PM, Lars Täuber wrote: There seems to be a bug in nautilus. I think about increasing the number of PG's for the data pool again, because the average number of PG's per OSD now is 76.8. What do you say? Maybe it's a bug in Nautilus, maybe in osdmaptool. Please upload your binary

[ceph-users] Re: Balancing PGs across OSDs

2019-12-04 Thread Konstantin Shalygin
On 12/4/19 4:04 PM, Lars Täuber wrote: So I just wait for the remapping and merging being done and see what happens. Thanks so far! Also don't forget to call `ceph osd crush weight-set rm-compat`, and stop the mgr balancer with `ceph balancer off`. After your rebalance is complete you can try: `ceph
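The last command is cut off; a hedged sketch of the overall sequence these commands usually form:

    # stop the mgr balancer and drop the legacy compat weight-set
    ceph balancer off
    ceph osd crush weight-set rm-compat
    # ...wait for the resulting rebalance to finish...
    # then balance with upmap (either via osdmaptool or the mgr module)
    ceph balancer mode upmap
    ceph balancer on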

[ceph-users] Re: Building a petabyte cluster from scratch

2019-12-04 Thread Konstantin Shalygin
On 12/4/19 3:06 AM, Fabien Sirjean wrote: * ZFS on RBD, exposed via samba shares (cluster with failover) Why not use samba vfs_ceph instead? It's scalable direct access. * What about CephFS ? We'd like to use RBD diff for backups but it looks impossible to use snapshot diff with

[ceph-users] Re: Changing failure domain

2019-12-02 Thread Konstantin Shalygin
On 12/2/19 5:56 PM, Francois Legrand wrote: For replica, what is the best way to change the crush profile? Is it to create a new replica profile and set this profile as the crush ruleset for the pool (something like ceph osd pool set {pool-name} crush_ruleset my_new_rule)? Indeed. Then you can

[ceph-users] Re: Balancing PGs across OSDs

2019-12-02 Thread Konstantin Shalygin
On 12/2/19 5:55 PM, Lars Täuber wrote: Here we have a similar situation. After adding some OSDs to the cluster the PGs are not equally distributed over the OSDs. The balancing mode is set to upmap. The docs https://docs.ceph.com/docs/master/rados/operations/balancer/#modes say: "This CRUSH

[ceph-users] Re: Dual network board setup info

2019-11-27 Thread Konstantin Shalygin
On 11/27/19 8:04 PM, Rodrigo Severo - Fábrica wrote: I have a CephFS instance and I am also planning on also deploying an Object Storage interface. My servers have 2 network boards each. I would like to use the current local one to talk to Cephs clients (both CephFS and Object Storage) and use

[ceph-users] Re: ceph user list respone

2019-11-25 Thread Konstantin Shalygin
On 11/26/19 4:10 AM, Frank R wrote: Do you mean the block.db size should be 3, 30 or 300GB and nothing else? Yes; if not, you will get spillover of your RocksDB data to the slow device at compaction rounds. k
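Whether an OSD has already spilled over can be read from its BlueFS perf counters; a minimal check, assuming osd.0 and jq available on the OSD's host:

    # slow_used_bytes > 0 means RocksDB data has spilled to the slow device
    ceph daemon osd.0 perf dump bluefs | \
        jq '.bluefs | {db_total_bytes, db_used_bytes, slow_used_bytes}'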

[ceph-users] Re: Single mount X multiple mounts

2019-11-25 Thread Konstantin Shalygin
On 11/25/19 7:41 PM, Rodrigo Severo - Fábrica wrote: I would like to know the impacts of having one single CephFS mount X having several. If I have several subdirectories in my CephFS that should be accessible to different users, with users needing access to different sets of mounts, would it

[ceph-users] Re: EC pool used space high

2019-11-25 Thread Konstantin Shalygin
On 11/25/19 6:05 PM, Erdem Agaoglu wrote: What I can't find is the 138,509 G difference between the ceph_cluster_total_used_bytes and ceph_pool_stored_raw. This is not static BTW, checking the same data historically shows we have about 1.12x of what we expect. This seems to make our 1.5x EC

[ceph-users] Re: PG in state: creating+down

2019-11-17 Thread Konstantin Shalygin
On 11/15/19 5:22 PM, Thomas Schneider wrote: root@ld3955:~# ceph pg dump_stuck inactive ok PG_STAT STATE UP    UP_PRIMARY ACTING    ACTING_PRIMARY 59.1c   creating+down [426,438]    426 [426,438]    426 I think this is classic PG OD [1] [1]

[ceph-users] Re: rgw recovering shards

2019-10-30 Thread Konstantin Shalygin
On 10/29/19 10:56 PM, Frank R wrote: oldest incremental change not applied: 2019-10-22 00:24:09.0.720448s Maybe the zone period is not the same on both sides? k

[ceph-users] Re: Correct Migration Workflow Replicated -> Erasure Code

2019-10-29 Thread Konstantin Shalygin
On 10/29/19 1:40 AM, Mac Wynkoop wrote: So, I'm in the process of trying to migrate our rgw.buckets.data pool from a replicated rule pool to an erasure coded pool. I've gotten the EC pool set up, good EC profile and crush ruleset, pool created successfully, but when I go to "rados cppool

[ceph-users] Re: rgw recovering shards

2019-10-28 Thread Konstantin Shalygin
On 10/27/19 6:01 AM, Frank R wrote: I hate to be a pain but I have one more question. After I run radosgw-admin reshard stale-instances rm if I run radosgw-admin reshard stale-instances list some new entries appear for a bucket that no longer exists. Is there a way to cancel the operation

[ceph-users] Re: Unbalanced data distribution

2019-10-24 Thread Konstantin Shalygin
On 10/24/19 6:54 PM, Thomas Schneider wrote: this is understood. I needed to start reweighting specific OSDs because rebalancing was not working and I got a warning in Ceph that some OSDs are running out of space. Still, your main issue is that your buckets are uneven, 350TB vs 79TB, more

[ceph-users] Re: rgw recovering shards

2019-10-24 Thread Konstantin Shalygin
On 10/24/19 11:00 PM, Frank R wrote: After an RGW upgrade from 12.2.7 to 12.2.12 for RGW multisite a few days ago the "sync status" has constantly shown a few "recovering shards", ie: - #  radosgw-admin sync status           realm 8f7fd3fd-f72d-411d-b06b-7b4b579f5f2f (prod)      

[ceph-users] Re: Unbalanced data distribution

2019-10-23 Thread Konstantin Shalygin
On 10/23/19 2:46 PM, Thomas Schneider wrote: Sure, here's the pastebin. Some of your 1.6TB OSDs are reweighted, e.g. osd.89 is 0.8, osd.100 is 0.7, etc. For this reason these OSDs get fewer PGs than the others. k

[ceph-users] Re: Unbalanced data distribution

2019-10-23 Thread Konstantin Shalygin
On 10/23/19 1:14 PM, Thomas Schneider wrote: My understanding is that Ceph's algorithm should be smart enough to determine which object should be placed where and ensure balanced utilisation. I agree that I have a major impact if a node with 7.2TB disks goes down, though. Ceph doesn't care

[ceph-users] Re: Unbalanced data distribution

2019-10-22 Thread Konstantin Shalygin
On 10/22/19 7:52 PM, Thomas wrote: Node 1 48x 1.6TB Node 2 48x 1.6TB Node 3 48x 1.6TB Node 4 48x 1.6TB Node 5 48x 7.2TB Node 6 48x 7.2TB Node 7 48x 7.2TB I suggest balancing the disks across hosts, e.g. ~28x 1.6TB + 20x 7.2TB per host. Why is the data distribution on the 1.6TB disks unequal? How can

[ceph-users] Re: mds log showing msg with HANGUP

2019-10-20 Thread Konstantin Shalygin
On 10/18/19 8:43 PM, Amudhan P wrote: I am getting below error msg in ceph nautilus cluster, do I need to worry about this? Oct 14 06:25:02 mon01 ceph-mds[35067]: 2019-10-14 06:25:02.209 7f55a4c48700 -1 received  signal: Hangup from killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse

[ceph-users] Re: CephFS exposing public storage network

2019-10-08 Thread Konstantin Shalygin
On 10/7/19 6:06 PM, Jaan Vaks wrote: I'm evaluating CephFS to serve our business as a file share that spans our 3 datacenters. One concern that I have is that when using CephFS and OpenStack Manila, all guest VMs need access to the public storage net. This to me feels like a

[ceph-users] Re: Nautilus: BlueFS spillover

2019-10-02 Thread Konstantin Shalygin
On 9/27/19 3:54 PM, Eugen Block wrote: Update: I expanded all rocksDB devices, but the warnings still appear: After expanding you should issue a compact command to each OSD, like `ceph tell osd.0 compact`. k
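A small hedged loop to compact every OSD after the expansion (compaction is I/O heavy, so consider staggering it):

    for id in $(ceph osd ls); do
        ceph tell osd."$id" compact
    done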

[ceph-users] Re: Nautilus: BlueFS spillover

2019-09-26 Thread Konstantin Shalygin
On 9/26/19 9:45 PM, Eugen Block wrote: I'm following the discussion for a tracker issue [1] about spillover warnings that affects our upgraded Nautilus cluster. Just to clarify, would a resize of the rocksDB volume (and expanding with 'ceph-bluestore-tool bluefs-bdev-expand...') resolve that or

[ceph-users] Re: upmap supported in SLES 12SPx

2019-09-16 Thread Konstantin Shalygin
On 9/16/19 3:59 PM, Thomas wrote: I tried to run this command and it failed: root@ld3955:/mnt/rbd# ceph osd set-require-min-compat-client luminous Error EPERM: cannot set require_min_compat_client to luminous: 6 connected client(s) look like jewel (missing 0xa20); 19 connected
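A hedged sketch of how to proceed from that error: identify the sessions that report jewel, upgrade or remount those clients, and only force the setting if you are certain they actually support upmap despite what they advertise:

    # feature summary of connected clients and daemons
    ceph features
    # per-session detail from a monitor (mon.a is an example name)
    ceph daemon mon.a sessions
    # only if the "jewel" clients are known to support upmap anyway:
    ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it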

[ceph-users] Re: Delete objects on large bucket very slow

2019-09-12 Thread Konstantin Shalygin
On 9/13/19 10:46 AM, tuan dung wrote: thanks for your answer. Can you explain for me: - How does Ceph's object deletion work (i.e. what is the delete-object flow in Ceph)? - Using the s3cmd tool installed on a client to delete objects, how to improve speed of delete

[ceph-users] Re: vfs_ceph and permissions

2019-09-11 Thread Konstantin Shalygin
On 9/12/19 2:04 AM, ceph-us...@dxps31.33mail.com wrote: Thanks both for the pointer! Even with the vfs objects on the same line I get the same result. This is the testparm for the share (I'm logged in as SAMDOM\Administrator): [data] acl group control = Yes admin users =

[ceph-users] Re: vfs_ceph and permissions

2019-09-09 Thread Konstantin Shalygin
On 9/7/19 8:59 PM, ceph-us...@dxps31.33mail.com wrote: [data2] browseable = yes force create mode = 0660 force directory mode = 0660 valid users = @"Domain Users", @"Domain Admins", @"Domain Admins" read list = write list = @"Domain Users", @"Domain Admins"

[ceph-users] Re: Out of memory

2019-09-09 Thread Konstantin Shalygin
On 9/2/19 5:32 PM, Sylvain PORTIER wrote: Hi, Thank you for your response. I am using the Nautilus version. Sylvain PORTIER. You should decrease OSD memory usage via the `osd_memory_target` option. The default is 4GB. k
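With 32 GB of RAM and 11 OSDs the 4 GB default clearly oversubscribes the host; a minimal sketch of lowering it via the Nautilus centralized config (the value is an example):

    # target roughly 2 GiB per OSD instead of the 4 GiB default
    ceph config set osd osd_memory_target 2147483648
    # verify what a given OSD resolves it to
    ceph config get osd.0 osd_memory_target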

[ceph-users] Re: removing/flattening a bucket without data movement?

2019-08-31 Thread Konstantin Shalygin
On 8/31/19 4:14 PM, Zoltan Arnold Nagy wrote: Could you elaborate a bit more? upmap is used to map specific PGs to specific OSDs in order to deal with CRUSH inefficiencies. Why would I want to add a layer of indirection when the goal is to remove the bucket entirely? As I understood you

[ceph-users] Re: removing/flattening a bucket without data movement?

2019-08-30 Thread Konstantin Shalygin
On 8/31/19 3:42 AM, Zoltan Arnold Nagy wrote: Originally our osd tree looked like this: ID  CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF  -1   2073.15186 root default -14    176.63100 rack s01-rack -19    176.63100 host s01 -15   

[ceph-users] Re: Out of memory

2019-08-30 Thread Konstantin Shalygin
On 8/30/19 9:20 PM, Sylvain PORTIER wrote: On my ceph osd servers I have lots of "out of memory" messages. My servers are configured with: - 32 G of memory - 11 HDD (3.5 T each) (+ 2 HDD for the system) And the error messages are: [101292.017968] Out of memory: Kill process 2597

[ceph-users] Re: Safe to reboot host?

2019-08-30 Thread Konstantin Shalygin
On 8/31/19 12:33 AM, Brett Chancellor wrote: Before I write something that's already been done, are there any built in utilities or tools that can tell me if it's safe to reboot a host? I'm looking for something better than just checking the health status, but rather checking pg status and

[ceph-users] Re: Howto define OSD weight in Crush map

2019-08-30 Thread Konstantin Shalygin
On 8/30/19 5:01 PM, 74cmo...@gmail.com wrote: Hi, after adding an OSD to Ceph it is advisable to create a relevant entry in the CRUSH map using a weight that depends on the disk size. Example: ceph osd crush set osd. root=default host= Question: How is the weight defined depending on disk size?
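By convention the CRUSH weight is the device capacity expressed in TiB, which is what ceph-volume assigns automatically; a hedged example for a 4 TB disk (OSD id and host are placeholders):

    # 4 TB = 4,000,000,000,000 B / 2^40 B per TiB ≈ 3.64
    ceph osd crush set osd.12 3.64 root=default host=node1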
