Our use case is not OpenStack, but we have a cluster of a similar size to
what you are looking at. Our cluster has 540 OSDs with 4 PB of raw storage
spread across 9 nodes at this point, in 2 pools (see the sketch after this
list):
- 512 PGs - 3 way redundancy
- 32768 PGs - RS(6,3) erasure coding (99.9% of data in this pool)
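A rough sketch of how a similar layout could be recreated (the profile and
pool names here are made up, and crush-failure-domain=host just assumes one
failure domain per node), not the exact commands we ran:

  # RS(6,3) erasure-code profile and the large EC data pool
  ceph osd erasure-code-profile set rs63 k=6 m=3 crush-failure-domain=host
  ceph osd pool create data_ec 32768 32768 erasure rs63
  # small 3-way replicated pool (size 3 is the default for replicated pools)
  ceph osd pool create meta_rep 512 512 replicated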
I believe there is a command in radosgw-admin to change the owner of a
bucket, which might be able to resolve the incorrect quota issue. I don't
know whether that will work, since the bucket doesn't think it exists. Perhaps
creating a new bucket of the same name and trying to run commands against
that
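For what it's worth, the ownership change I have in mind would look roughly
like this (bucket and user names are placeholders, and some versions also
want --bucket-id on the link step); I haven't verified it against a bucket in
this broken state:

  radosgw-admin bucket unlink --bucket=mybucket --uid=olduser
  radosgw-admin bucket link --bucket=mybucket --uid=newuser
  radosgw-admin bucket stats --bucket=mybucket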
The proper way to prevent this is to set your full ratios to safe values and
monitor your disk usage. That will allow you to either clean up old data or
add new storage before you get to 95% full on any OSD. What I mean by setting
your full ratios to safe values is that if your use case can fill 20% of your disk
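As a rough illustration only (the right thresholds depend on how quickly you
can add capacity, so treat these numbers as placeholders), lowering the
ratios and watching per-OSD usage looks something like:

  ceph osd set-nearfull-ratio 0.75
  ceph osd set-backfillfull-ratio 0.80
  ceph osd set-full-ratio 0.85
  ceph osd df    # keep an eye on per-OSD utilisation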
I have this on an RBD pool with images/snapshots that were created
in Luminous
> Hi Stefan, Mehmet,
>
> Are these clusters that were upgraded from prior versions, or fresh
> luminous installs?
>
>
> This message indicates that there is a stray clone object with no
> associated head or
I have found one image, but how do I know which snapshot version to delete? I
have multiple
-----Original Message-----
From: c...@elchaka.de [mailto:c...@elchaka.de]
Sent: Sunday, 8 April 2018 13:30
To: ceph-users
Subject: Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for
$object?
Am
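In case it helps with narrowing this down: if the object named in the
_scan_snaps line is an rbd_data.* object, its middle part should match the
image's block_name_prefix, so (pool/image names below are placeholders)
something like this can map the log entry to an image and its snapshots:

  # compare the prefix from the log line against the image
  rbd info rbd/myimage | grep block_name_prefix
  # then list that image's snapshots to see which ones exist
  rbd snap ls rbd/myimage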
The disk controller seems fine.
Any other suggestions would be really appreciated.
megacli -AdpBbuCmd -aAll
BBU status for Adapter: 0
BatteryType: BBU
Voltage: 3925 mV
Current: 0 mA
Temperature: 17 C
Battery State: Optimal
BBU Firmware Status:
Charging Status : None
Voltage
On 04/09/2018 04:01 PM, Fulvio Galeazzi wrote:
> Hallo,
>
> I am wondering whether I could have the admin socket functionality
> enabled on a server which is a pure Ceph client (no MDS/MON/OSD/whatever
> running on such server). Is this at all possible? How should ceph.conf
> be configured?
Hallo,
I am wondering whether I could have the admin socket functionality
enabled on a server which is a pure Ceph client (no MDS/MON/OSD/whatever
running on such server). Is this at all possible? How should ceph.conf
be configured? Documentation pages led me to write something like this:
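For context, a minimal sketch of the kind of stanza I mean (not necessarily
exactly what I ended up with; the path and metavariables are just the usual
ones, and the directory has to be writable by the client process):

  [client]
      admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

  # then, on the client host:
  ceph --admin-daemon /var/run/ceph/<name-of-created-socket>.asok config show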
Hi list,
we were wondering if and how the consistency of OSD journals
(BlueStore) is checked.
Our cluster runs on Luminous (12.2.2) and we migrated all our
FileStore OSDs to BlueStore a couple of months ago. During that
process we placed each RocksDB on a separate partition on a
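So far the closest thing we have found is an offline fsck with
ceph-bluestore-tool (the OSD id below is just an example, and the OSD has to
be stopped first), but we are not sure whether anything equivalent runs
automatically:

  systemctl stop ceph-osd@12
  ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-12
  systemctl start ceph-osd@12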
Hi all,
a month has passed since the Dashboard v2 was merged into the master
branch, so I thought it might be helpful to write a summary/update (with
screenshots) of what we've been up to since then:
https://www.openattic.org/posts/ceph-dashboard-v2-update/
Let us know what you think!
Hallo Jason,
thanks again for your time, and apologies for the long silence, but I was
busy upgrading to Luminous and converting FileStore->BlueStore.
In the meantime, the staging cluster where I was running tests was
upgraded to both Ceph Luminous and OpenStack Pike: the good news
is
Hi,
I have 2 questions.
I want to use Ceph as OpenStack's volume backend by creating 2 Ceph pools.
One pool consists of OSDs on SSDs, and the other consists of OSDs on HDDs.
The storage capacity of the SSD pool is much smaller than that of the HDD pool,
so I want a configuration that does not stop all IO
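As a sketch of what I have in mind (pool names, PG counts and the quota value
are placeholders): one CRUSH rule per device class so the pools cannot spill
into each other, plus a quota on the SSD pool so that only writes to that
pool stop when it fills up, instead of the whole cluster hitting the OSD full
ratio:

  ceph osd crush rule create-replicated ssd-rule default host ssd
  ceph osd crush rule create-replicated hdd-rule default host hdd
  ceph osd pool create volumes-ssd 512 512 replicated ssd-rule
  ceph osd pool create volumes-hdd 2048 2048 replicated hdd-rule
  ceph osd pool set-quota volumes-ssd max_bytes 3000000000000

Would that be the right direction?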
Hi,
Just a little question regarding this operation:
[root@osdhost osd]# ceph-volume lvm prepare --bluestore --data /dev/sdc
--block.wal /dev/sda2 --block.db /dev/sda1
In a previous post, I understood that if both the WAL and DB are stored on
the same separate device, then we could use a
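If I understood that post correctly, when the WAL and the DB share the same
device it should be enough to point only --block.db at it, since the WAL is
then co-located with the DB by default; something like (same device names as
in the command above):

  ceph-volume lvm prepare --bluestore --data /dev/sdc --block.db /dev/sda1

Is that right, or is there still a reason to keep a separate --block.wal
partition?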
Hello,
We used one server for deployment (called ceph-admin-node) for 3 MON and 4
OSD nodes.
We created a folder called *ceph-deploy* to deploy all node members.
May we move this folder to another server?
This folder contains the following files:
total 1408
-rw------- 1 root root 113 Oct
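What we are planning, roughly, is to copy this working directory to the new
admin host and run ceph-deploy from inside it there (host name and path below
are placeholders, and ceph-deploy would need to be installed on the new
server first):

  # on the new admin host
  rsync -a ceph-admin-node:/root/ceph-deploy/ /root/ceph-deploy/
  cd /root/ceph-deploy
  ceph-deploy config push ceph-mon1   # sanity check that the conf/keys still work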