Yes, thanks. This helped.
Regards,
Lars
Tue, 28 May 2019 11:50:01 -0700
Gregory Farnum ==> Lars Täuber :
> You’re the second report I’ve seen of this, and while it’s confusing, you
> should be able to resolve it by restarting your active manager daemon.
>
> On Sun, May 26, 2019 at 11:52 PM
Dear All,
Quick question regarding SSD sizing for a DB/WAL...
I understand 4% is generally recommended for a DB/WAL.
Does this 4% continue for "large" 12TB drives, or can we economise and
use a smaller DB/WAL?
Ideally I'd fit a smaller drive providing a 266GB DB/WAL per 12TB OSD,
rather than
Hi Jake,
I have the same question about the size of the DB/WAL for an OSD. My situation: 12 OSDs
per OSD node, 8 TB (maybe 12 TB later) per OSD, one Intel NVMe SSD (Optane
P4800X, 375 GB) per OSD node, which means the DB/WAL can use about 30 GB per
OSD (8 TB). I mainly use CephFS to serve the HPC cluster for ML.
(plan to
Thanks for everyone's suggestions, which have now helped me to fix the free
space problem.
The newbie mistake was not knowing anything about rebalancing. Turning on the
balancer and using upmap, I have gone from 7TB free to 50TB free on my cephfs.
Seeing that the object store is saying 180TB free
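For the archives, a minimal sketch of the balancer/upmap setup described above
(assuming all clients are Luminous or newer; check with `ceph features` first):

ceph features                                    # confirm no pre-luminous clients
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
ceph balancer status                             # shows the active mode and plans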
I enabled the balancer plugin and even tried to invoke it manually, but it
won't allow any changes. Looking at ceph osd df, the usage is not even at all.
Thoughts?
root@hostadmin:~# ceph osd df
ID CLASS WEIGHT   REWEIGHT SIZE USE AVAIL %USE VAR PGS
 1  hdd  0.00980           0 B  0 B
I am working to develop some monitoring for our file clusters, and as part of
the check I inspect `ceph mds stat` for damaged, failed, or stopped MDS ranks.
Initially I set my check to alarm if any of these states was discovered, but as
I distributed it out I noticed that one of our clusters had the
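A rough sketch of such a check (my own illustration, not the poster's actual
script; it leans on the standard health-check codes rather than parsing
`ceph mds stat`, and the exact codes should be verified against your release):

#!/bin/sh
# Alarm if any filesystem reports damaged, failed or down MDS ranks.
if ceph health detail | grep -Eq 'MDS_DAMAGE|FS_DEGRADED|MDS_ALL_DOWN'; then
    echo "CRITICAL: MDS/rank problem detected"
    exit 2
fi
echo "OK: no damaged/failed MDS ranks reported"
exit 0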
You’re the second report I’ve seen of this, and while it’s confusing, you
should be able to resolve it by restarting your active manager daemon.
On Sun, May 26, 2019 at 11:52 PM Lars Täuber wrote:
> Fri, 24 May 2019 21:41:33 +0200
> Michel Raabe ==> Lars Täuber ,
> ceph-users@lists.ceph.com :
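Regarding the manager restart suggested above, a minimal sketch of the two
usual ways to do it (the mgr name below is a placeholder):

# on the host currently running the active mgr:
systemctl restart ceph-mgr@$(hostname -s)
# or fail over to a standby from anywhere:
ceph mgr fail <active-mgr-name>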
I suggest having a look at this thread, which suggests that DB sizes 'in
between' the requirements of the different RocksDB levels give no net
benefit, and sizing accordingly.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030740.html
My impression is that 28GB is good (L0+L1+L3), or 280
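For context, the back-of-the-envelope arithmetic behind those numbers
(assuming RocksDB defaults of a 256MB level base and a 10x multiplier;
actual tunings vary):

# approximate space RocksDB can use on the fast device, per OSD:
#   L1            ~ 0.25 GB
#   L1+L2         ~ 0.25 + 2.5        ~  3 GB
#   L1+L2+L3      ~ 0.25 + 2.5 + 25   ~ 30 GB  (plus a few GB of WAL/slack)
#   L1+L2+L3+L4   ~ ... + 250         ~ 300 GB
# A DB partition sized between two of those steps (e.g. 266 GB) only gets used
# up to the lower step; the level that does not fit spills to the slow device.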
Hi Jake,
just my 2 cents - I'd suggest using LVM for DB/WAL so you can seamlessly
extend their sizes if needed.
Once you've configured it this way, and if you're able to add more NVMe
later, you're almost free to select any size at the initial stage.
Thanks,
Igor
On 5/28/2019 4:13 PM, Jake
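A minimal sketch of the LVM layout Igor describes (device names, VG/LV names
and sizes are placeholders; check the ceph-volume syntax for your release):

# carve a DB LV per OSD out of the NVMe device
pvcreate /dev/nvme0n1
vgcreate ceph-db /dev/nvme0n1
lvcreate -L 60G -n db-osd0 ceph-db
# create the OSD with block.db (and implicitly the WAL) on that LV
ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db/db-osd0
# if more NVMe is added later, the LV can be grown with lvextend and the OSD
# expanded with ceph-bluestore-tool bluefs-bdev-expand (see further down-thread)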
Hello Jake,
you can use 2.2% as well, and performance will most of the time be better
than without having a DB/WAL. However, if the DB/WAL fills up, a spillover to
the regular drive occurs and the performance will drop roughly to what it
would be without a DB/WAL drive.
I believe that you could use
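As an aside, a sketch of how one might spot such a spillover on a running OSD
(run on the OSD's host; the BlueFS counters exist in current releases and the
health code is a Nautilus-era warning, so adjust for your version):

# how much BlueFS data sits on the DB device vs. the slow device
ceph daemon osd.0 perf dump bluefs | grep -E 'db_used_bytes|slow_used_bytes'
# cluster-wide spillover warning (Nautilus and later)
ceph health detail | grep BLUESTORE_SPILLOVER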
Hi Martin,
thanks for your reply :)
We already have a separate NVMe SSD pool for cephfs metadata.
I agree it's much simpler & more robust not using a separate DB/WAL, but
as we have enough money for a 1.6TB SSD for every 6 HDDs, it's
tempting to go down that route. If people think a 2.2%
On 5/28/19 11:17 AM, Scheurer François wrote:
Hi Casey
I greatly appreciate your quick and helpful answer :-)
It's unlikely that we'll do that, but if we do it would be announced with a
long deprecation period and migration strategy.
Fine, just the answer we wanted to hear ;-)
Hi Casey
I greatly appreciate your quick and helpful answer :-)
>It's unlikely that we'll do that, but if we do it would be announced with a
>long deprecation period and migration strategy.
Fine, just the answer we wanted to hear ;-)
>However, I would still caution against using either as
Hello Jake,
do you have any latency requirements that require the DB/WAL at all?
If not, CephFS with EC on SATA HDD works quite well as long as you have the
metadata on a separate SSD pool.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat:
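A sketch of the kind of layout Martin describes - EC data on the HDDs,
replicated metadata on the SSDs (pool names, PG counts and the 4+2 profile
are placeholders):

# erasure-coded data pool on the HDD class
ceph osd erasure-code-profile set ec-42 k=4 m=2 crush-device-class=hdd
ceph osd pool create cephfs_data 512 512 erasure ec-42
ceph osd pool set cephfs_data allow_ec_overwrites true   # required for CephFS on EC
# replicated metadata pool pinned to the SSD class
ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd pool create cephfs_metadata 64 64 replicated ssd-rule
# note: making an EC pool the default data pool of a new fs needs --force;
# many setups keep a small replicated default pool and add the EC pool
# with "ceph fs add_data_pool" plus a file layout instead.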
Hi François,
Removing support for either of rgw_crypt_default_encryption_key or
rgw_crypt_s3_kms_encryption_keys would mean that objects encrypted with
those keys would no longer be accessible. It's unlikely that we'll do
that, but if we do it would be announced with a long deprecation
On 27.05.19 09:08, Stefan Kooman wrote:
> Quoting Robert Ruge (robert.r...@deakin.edu.au):
>> Ceph newbie question.
>>
>> I have a disparity between the free space that my cephfs file system
>> is showing and what ceph df is showing. As you can see below my
>> cephfs file system says there is
On 5/28/19 5:16 PM, Igor Fedotov wrote:
LVM volume and raw file resizing is quite simple, while a partition
might need manual data movement to another target via dd or something.
This is also possible and tested; a how-to is here https://bit.ly/2UFVO9Z
k
On 28.05.19 at 03:24, Yan, Zheng wrote:
On Mon, May 27, 2019 at 6:54 PM Oliver Freyermuth
wrote:
On 27.05.19 at 12:48, Oliver Freyermuth wrote:
On 27.05.19 at 11:57, Dan van der Ster wrote:
On Mon, May 27, 2019 at 11:54 AM Oliver Freyermuth
wrote:
Dear Dan,
thanks for the quick
Hi,
We have six storage nodes, and added three new SSD-only storage nodes.
I started increasing the weight to fill the freshly added OSDs on the new
storage nodes; the command was:
ceph osd crush reweight osd.126 0.2
The cluster started to rebalance:
2019-05-22 11:00:00.000253 mon.ceph-mon-01 mon.0
Dear Casey, Dear Ceph Users
The following is written in the radosgw documentation
(http://docs.ceph.com/docs/luminous/radosgw/encryption/):
rgw crypt default encryption key =
4YSmvJtBv0aZ7geVgAsdpRnLBEwWSWlMIGnRS8a9TSA=
Important: This mode is for diagnostic purposes only! The ceph
I switched on the first of May and did not notice too much difference in memory
usage. After the restart of the OSDs on the node I see the memory
consumption gradually getting back to where it was before.
I can't say anything about latency.
-Original Message-
From: Konstantin Shalygin
Sent:
Konstantin,
one should resize the device before using the bluefs-bdev-expand command.
So the first question is: what's the backend for block.db - a simple
device partition, an LVM volume, or a raw file?
LVM volume and raw file resizing is quite simple, while a partition
might need manual data
Hi Casey
Thank you for your help. We fixed the problem on the same day but then I forgot
to post back the solution here...
So basically we had 2 problems:
- the Barbican secret key payload needs to be exactly 32 bytes
- ceph.conf needs a user ID (a username is not OK): rgw keystone barbican user =
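For completeness, a sketch of the rgw/Barbican options involved (values are
placeholders, not the actual settings; option names as in the radosgw Barbican
docs):

[client.rgw.gateway]
rgw barbican url = http://barbican.example.com:9311
rgw keystone barbican user = 1a2b3c4d5e6f7890abcd   # Keystone user *ID*, not the name
rgw keystone barbican password = secret
rgw keystone barbican project = service
rgw keystone barbican domain = default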
Hi,
With the release of 12.2.12 the bitmap allocator for BlueStore is now
available under Mimic and Luminous.
[osd]
bluestore_allocator = bitmap
bluefs_allocator = bitmap
Before setting this in production: What might the
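A sketch of a cautious rollout and verification, assuming the [osd] settings
above are in ceph.conf (run the daemon command on the OSD's host):

# restart one OSD at a time after deploying the config change
systemctl restart ceph-osd@12
# confirm the allocator actually in use on that OSD
ceph daemon osd.12 config get bluestore_allocator
ceph daemon osd.12 config get bluefs_allocator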
Hello - I have created an OSD with a 20G block.db, and now I want to change the
block.db size to 100G.
Please let us know if there is a process for the same.
PS: Ceph version 12.2.4 with bluestore backend.
You should upgrade to 12.2.11+ first! Expand your block.db via
`ceph-bluestore-tool
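A minimal sketch of that expansion, assuming block.db lives on an LVM volume
and the OSD is stopped during the expand (IDs and sizes are placeholders):

# grow the underlying LV first (e.g. +80G to go from 20G to 100G)
lvextend -L +80G ceph-db/db-osd3
systemctl stop ceph-osd@3
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-3
systemctl start ceph-osd@3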
Hi All,
I’ve configured a multisite deployment on Ceph Nautilus 14.2.1 with one zone
group “eu", one master zone and two secondary zones.
If I upload (on the master zone) 200 objects of 80MB each and delete
all of them without waiting for the replication to finish, I end up with one
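When chasing leftovers like this, the usual first checks are along these lines
(a sketch; the bucket name is a placeholder):

radosgw-admin sync status                           # overall zone sync state
radosgw-admin bucket sync status --bucket=mybucket  # per-bucket replication state
radosgw-admin gc list                               # objects still pending garbage collection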
On Tue., 28 May 2019 at 10:20, Wido den Hollander wrote:
>
>
> On 5/28/19 10:04 AM, Kevin Olbrich wrote:
> > Hi Wido,
> >
> > thanks for your reply!
> >
> > For CentOS 7, this means I can switch over to the "rpm-nautilus/el7"
> > repository and Qemu uses a nautilus compatible client?
> > I
On 5/28/19 10:04 AM, Kevin Olbrich wrote:
> Hi Wido,
>
> thanks for your reply!
>
> For CentOS 7, this means I can switch over to the "rpm-nautilus/el7"
> repository and Qemu uses a nautilus compatible client?
> I just want to make sure I understand correctly.
>
Yes, that is correct. Keep
Hi Wido,
thanks for your reply!
For CentOS 7, this means I can switch over to the "rpm-nautilus/el7"
repository and Qemu uses a nautilus compatible client?
I just want to make sure I understand correctly.
Thank you very much!
Kevin
On Tue., 28 May 2019 at 09:46, Wido den Hollander wrote
What is your experience?
Does it make sense to use it -- is it solid enough, or rather beta quality
(both in terms of stability and performance)?
I've read it was more or less packaged to work with RHEL. Does that still
hold true?
What's the best way to install it on, say, CentOS or Debian/Ubuntu?
On 5/28/19 7:52 AM, Kevin Olbrich wrote:
> Hi!
>
> How can I determine which client compatibility level (luminous, mimic,
> nautilus, etc.) is supported in Qemu/KVM?
> Does it depend on the version of ceph packages on the system? Or do I
> need a recent version Qemu/KVM?
This is mainly
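A quick sketch of how to see which client Qemu actually uses on an RPM-based
host (package and binary names may differ per distribution):

# which librbd the qemu binary is linked against
ldd "$(command -v qemu-system-x86_64)" | grep librbd
# installed client packages; this is what determines the compatibility level
rpm -q librbd1 ceph-common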