[ceph-users] Undo ceph osd destroy?

2020-08-24 Thread Michael Fladischer
Hi, I accidentally destroyed the wrong OSD in my cluster. It is now marked as "destroyed" but the HDD is still there and the data was not touched, AFAICT. I was able to activate it again using ceph-volume lvm activate and I can mark the OSD as "in", but its status is not changing from "destroyed".
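A minimal sketch of the re-activation steps described above, with a hypothetical OSD id and fsid (read the real values from "ceph-volume lvm list"); note that "destroyed" is cluster-side state, so activating the volume alone may not clear it:

  ceph-volume lvm list                                             # find the OSD id and fsid of the surviving LVM volume
  ceph-volume lvm activate 7 0d4f9c8e-1111-4a2b-9c3d-5e6f7a8b9c0d  # hypothetical id/fsid
  ceph osd in 7                                                    # mark the OSD back in
  ceph osd tree | grep osd.7                                       # check whether the destroyed flag is still set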

[ceph-users] Re: Adding OSD

2020-08-24 Thread jcharles
Hello, Yes, the cache tiers are replicated with size 3. JC

[ceph-users] Re: [doc] drivegroups advanced case

2020-08-24 Thread Jan Fajerski
On Fri, Aug 21, 2020 at 06:52:19PM +, Tony Liu wrote: Hi, Regarding the YAML in this section, https://ceph.readthedocs.io/en/latest/cephadm/drivegroups/#the-advanced-case Is the "rotational" supposed to be "1", meaning a spinning HDD? Correct. This corresponds to
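For illustration, a minimal drivegroup spec along those lines, applied via cephadm; the spec file name, service id and host pattern are hypothetical. "rotational: 1" selects spinning HDDs for data, "rotational: 0" selects SSDs for the DB:

  cat > osd_spec.yml <<'EOF'
  service_type: osd
  service_id: hdd_data_ssd_db
  placement:
    host_pattern: '*'
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  EOF
  ceph orch apply osd -i osd_spec.yml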

[ceph-users] Re: Add OSD with primary on HDD, WAL and DB on SSD

2020-08-24 Thread Eugen Block
Hi, if you shared your drivegroup config we might be able to help identify your issue. ;-) The last example in [1] shows the "wal_devices" filter for splitting wal and db. Regards, Eugen [1] https://docs.ceph.com/docs/master/cephadm/drivegroups/#dedicated-wal-db Zitat von Tony Liu :
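A sketch of the dedicated WAL/DB split from [1], again hedged; the model filters below are hypothetical placeholders and must be adjusted to the actual hardware reported by "ceph orch device ls":

  cat > osd_split_wal_db.yml <<'EOF'
  service_type: osd
  service_id: split_wal_db
  placement:
    host_pattern: '*'
  data_devices:
    rotational: 1
  db_devices:
    model: HYPOTHETICAL-SATA-SSD
  wal_devices:
    model: HYPOTHETICAL-NVME
  EOF
  ceph orch apply osd -i osd_split_wal_db.yml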

[ceph-users] Re: How to change wal block in bluestore?

2020-08-24 Thread Eugen Block
Hi, taking a quick look at the script I noticed that you're trying to dd from the old to the new device and then additionally run 'ceph-bluestore-tool bluefs-bdev-migrate', which seems redundant to me; I believe 'ceph-bluestore-tool bluefs-bdev-migrate' is supposed to migrate the data itself.
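For reference, a hedged sketch of the migrate-only path (no dd), with a hypothetical OSD id and target LV; the OSD has to be stopped while the migration runs:

  systemctl stop ceph-osd@12
  ceph-bluestore-tool bluefs-bdev-migrate \
      --path /var/lib/ceph/osd/ceph-12 \
      --devs-source /var/lib/ceph/osd/ceph-12/block.wal \
      --dev-target /dev/ceph-wal-vg/wal-12     # hypothetical new WAL LV
  systemctl start ceph-osd@12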

[ceph-users] Persistent problem with slow metadata

2020-08-24 Thread Momčilo Medić
Hi friends, Since the deployment of our Ceph cluster we've been plagued by slow metadata errors. Namely, the cluster goes into HEALTH_WARN with a message similar to this one: 2 MDSs report slow metadata IOs 1 MDSs report slow requests 1 slow ops, oldest one blocked for 32 sec, daemons [osd.22,osd.4] have

[ceph-users] Re: Persistent problem with slow metadata

2020-08-24 Thread Momčilo Medić
Hi Eugen, On Mon, 2020-08-24 at 14:26 +, Eugen Block wrote: > Hi, > > there have been several threads about this topic [1], most likely > it's > the metadata operation during the cleanup that saturates your disks. > > The recommended settings seem to be: > > [osd] > osd op queue = wpq >

[ceph-users] Re: Persistent problem with slow metadata

2020-08-24 Thread Eugen Block
Hi, there have been several threads about this topic [1], most likely it's the metadata operation during the cleanup that saturates your disks. The recommended settings seem to be: [osd] osd op queue = wpq osd op queue cut off = high This helped us a lot, the number of slow requests has
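A minimal sketch of applying those settings at runtime with 'ceph config' (the option names are osd_op_queue and osd_op_queue_cut_off), assuming a Mimic-or-newer cluster; on older releases put them under [osd] in ceph.conf as quoted above. osd_op_queue only takes effect after the OSDs are restarted:

  ceph config set osd osd_op_queue wpq
  ceph config set osd osd_op_queue_cut_off high
  systemctl restart ceph-osd.target    # per host, one host at a time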

[ceph-users] Cluster experiencing complete operational failure, various cephx authentication errors

2020-08-24 Thread Mathijs Smit
Hi everyone, I have a serious problem: my entire Ceph cluster is currently unable to provide service. As of yesterday I added 10 OSDs, 2 per node; the rebalance started and took some IO but seemed to be doing its work. This morning the cluster was still processing the

[ceph-users] Add OSD host with not clean disks

2020-08-24 Thread Tony Liu
Hi, I am trying to add an OSD host whose disks are not clean. I added the host: ceph orch host add storage-3 10.6.50.82 The host is added, but the disks on that host are not listed. I assume that is because they are not clean? I zapped all the disks: ceph orch device zap storage-3 /dev/sdb .. All disks are
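A hedged version of that sequence, reusing the host and device names from the message above; '--force' may be needed if cephadm still considers a device in use, and the inventory can be re-scanned afterwards:

  ceph orch device zap storage-3 /dev/sdb --force
  ceph orch device ls --refresh    # disks are only picked up for OSDs once they are reported "available"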

[ceph-users] Re: Upgrade options and *request for comment

2020-08-24 Thread Reed Dier
Your options while staying on Xenial go only as far as Nautilus. In the chart below, X means the packages come from the Ceph repos, U from the Ubuntu repos.

rel      jewel  luminous  mimic  nautilus  octopus
trusty   X      X         -      -         -
xenial   XU     X         X      X         -
bionic   -      U         X      X         X
focal    -      -         -      -         XU

Octopus is only supported on bionic and focal. Xenial

[ceph-users] rgw.none vs quota

2020-08-24 Thread Jean-Sebastien Landry
Hi everyone, a bucket was over quota (default quota of 300k objects per bucket), so I enabled the object quota for this bucket and set a quota of 600k objects. We are on Luminous (12.2.12) and dynamic resharding is disabled, so I manually did the resharding from 3 to 6 shards. Since then,
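For context, hedged Luminous-era commands for the two operations mentioned, with a hypothetical bucket name; 'bucket stats' is where the rgw.none and rgw.main usage categories show up:

  radosgw-admin quota set --quota-scope=bucket --bucket=mybucket --max-objects=600000
  radosgw-admin quota enable --quota-scope=bucket --bucket=mybucket
  radosgw-admin bucket reshard --bucket=mybucket --num-shards=6
  radosgw-admin bucket stats --bucket=mybucket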

[ceph-users] Re: Add OSD with primary on HDD, WAL and DB on SSD

2020-08-24 Thread Lindsay Mathieson
On 25/08/2020 6:07 am, Tony Liu wrote: I don't need to create a WAL device, just primary on HDD and DB on SSD, and the WAL will use the DB device because it's faster. Is that correct? Yes. But be aware that the DB sizes are limited to 3GB, 30GB and 300GB. Anything less than those sizes will have

[ceph-users] Re: Cluster experiencing complete operational failure, various cephx authentication errors

2020-08-24 Thread Stefan Kooman
On 2020-08-24 20:35, Mathijs Smit wrote: > Hi everyone, > > I have a serious problem: my entire Ceph cluster is currently unable to > provide service. As of yesterday I added 10 OSDs, 2 per node; the rebalance > started and took some IO but seemed to be doing its work.

[ceph-users] Re: Add OSD with primary on HDD, WAL and DB on SSD

2020-08-24 Thread Tony Liu
> -Original Message- > From: Anthony D'Atri > Sent: Monday, August 24, 2020 7:30 PM > To: Tony Liu > Subject: Re: [ceph-users] Re: Add OSD with primary on HDD, WAL and DB on > SSD > > Why such small HDDs? Kinda not worth the drive bays and power, instead > of the complexity of putting

[ceph-users] rgw-orphan-list

2020-08-24 Thread Andrei Mikhailovsky
While continuing my saga with the rgw orphans and dozens of terabytes of wasted space, I have used the rgw-orphan-list tool. After about 45 mins the tool crashed ((( # time rgw-orphan-list .rgw.buckets Pool is ".rgw.buckets". Note: output files produced will be tagged with the current

[ceph-users] Re: Add OSD with primary on HDD, WAL and DB on SSD

2020-08-24 Thread ceph
Hi, you could try to use ceph-volume lvm create --data DEV --db DEV and inspect the output to learn what is being done. I am not sure about the right syntax right now, but you should find related information via search ... Hth Mehmet On 23 August 2020 at 05:52:29 CEST, Tony Liu wrote: >Hi, > >I
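The current flag name is '--block.db' rather than '--db'; a minimal hedged example with hypothetical devices (an HDD for data, an LV on the shared SSD for the DB):

  ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db/db-sdb
  # the command prints each "Running command: ..." step it performs, which answers the "what is being done" part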

[ceph-users] Re: Add OSD with primary on HDD, WAL and DB on SSD

2020-08-24 Thread Tony Liu
Thanks Eugen for pointing it out. I reread this link. https://ceph.readthedocs.io/en/latest/rados/configuration/bluestore-config-ref/ It seems that, for the mix of HDD and SSD, I don't need to create a WAL device, just primary on HDD and DB on SSD, and the WAL will use the DB device because it's faster.

[ceph-users] Re: Adding OSD

2020-08-24 Thread jcharles
Thanks for your advice, adding one OSD on another host does the trick. But I didn't understand why 2 OSDs are enough? I would have guessed 3 or 5, since I have replicated pools with size 3 and 5, and EC with size 5.

[ceph-users] Re: Add OSD with primary on HDD, WAL and DB on SSD

2020-08-24 Thread Tony Liu
> > I don't need to create a > > WAL device, just primary on HDD and DB on SSD, and the WAL will be using > > the DB device because it's faster. Is that correct? > > Yes. > > > But be aware that the DB sizes are limited to 3GB, 30GB and 300GB. > Anything less than those sizes will have a lot of unutilised