On Tue, Jan 8, 2019 at 16:05, Yoann Moulin wrote:
> The best thing you can do here is to add two disks to pf-us1-dfs3.
After that, get a fourth host with 4 OSDs on it and add it to the cluster.
If you have 3 replicas (which is good!), then any downtime will mean the cluster is kept in a degraded state.
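A rough sketch of what adding the disks could look like, assuming the new drives show up as /dev/sdd and /dev/sde on pf-us1-dfs3 and that OSDs are deployed with ceph-volume (the device names are only placeholders, adjust to your own tooling):

# On pf-us1-dfs3: create one BlueStore OSD per new disk (device names are examples)
ceph-volume lvm create --data /dev/sdd
ceph-volume lvm create --data /dev/sde
# From an admin node: confirm the new OSDs appear under host pf-us1-dfs3
ceph osd tree
# Watch the backfill that moves data onto the new OSDs
ceph -s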
> root@pf-us1-dfs3:/home/rodrigo# ceph osd crush rule dump
> [
>     {
>         "rule_id": 0,
>         "rule_name": "replicated_rule",
>         "ruleset": 0,
>         "type": 1,
>         "min_size": 1,
>         "max_size": 10,
>         "steps": [
>             {
>                 "op": "take",
>
Hi Yoann, thanks a lot for your help.
root@pf-us1-dfs3:/home/rodrigo# ceph osd crush tree
ID CLASS WEIGHT   TYPE NAME
-1       72.77390 root default
-3       29.10956     host pf-us1-dfs1
 0   hdd  7.27739         osd.0
 5   hdd  7.27739         osd.5
 6   hdd  7.27739         osd.6
 8   hdd
It would, but you should not:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/014846.html
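For reference, a sketch of the setting in question (inspection only; as the linked thread explains, actually dropping to size 2 is discouraged):

# Check the current replication settings of the CephFS pools
ceph osd pool get cephfs_data size
ceph osd pool get cephfs_data min_size
# This is the command the question refers to - discouraged, see the link above:
# ceph osd pool set cephfs_data size 2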
Kevin
On Tue, Jan 8, 2019 at 15:35, Rodrigo Embeita wrote:
>
> Thanks again Kevin.
> If I reduce the size flag to a value of 2, would that fix the problem?
>
> Regards
>
> On Tue,
Thanks again Kevin.
If I reduce the size flag to a value of 2, would that fix the problem?
Regards
On Tue, Jan 8, 2019 at 11:28 AM Kevin Olbrich wrote:
> You use replication 3 with failure domain "host".
> OSDs 2 and 4 are full, that's why your pool is also full.
> You need to add two disks to
Hello,
> Hi Yoann, thanks for your response.
> Here are the results of the commands.
>
> root@pf-us1-dfs2:/var/log/ceph# ceph osd df
> ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
> 0 hdd 7.27739 1.0 7.3 TiB 6.7 TiB 571 GiB 92.33 1.74 310
> 5 hdd 7.27739 1.0
Hi Kevin, thanks for your answer.
How can I check the (re-)weights?
On Tue, Jan 8, 2019 at 10:36 AM Kevin Olbrich wrote:
> Looks like the same problem as mine:
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-January/032054.html
>
> The free space is total while Ceph uses the
You use replication 3 with failure domain "host".
OSDs 2 and 4 are full; that's why your pool is also full.
You need to add two disks to pf-us1-dfs3 or swap one from the larger nodes to this one.
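A quick sketch of how to confirm that on your side (commands only; the output will of course differ on your cluster):

# Per-OSD utilisation grouped by host - the fullest OSD caps the pool's usable space
ceph osd df tree
# The thresholds at which Ceph marks OSDs nearfull/backfillfull/full
ceph osd dump | grep ratio
# Which OSDs are currently reported as nearfull or full
ceph health detail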
Kevin
On Tue, Jan 8, 2019 at 15:20, Rodrigo Embeita wrote:
>
> Hi Yoann, thanks for your response.
>
Hi Yoann, thanks for your response.
Here are the results of the commands.
root@pf-us1-dfs2:/var/log/ceph# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
0 hdd 7.27739 1.0 7.3 TiB 6.7 TiB 571 GiB 92.33 1.74 310
5 hdd 7.27739 1.0 7.3 TiB 5.6 TiB 1.7 TiB
I believe I found something but I don't know how to fix it.
I run "ceph df" and I'm seeing that cephfs_data and cephfs_metadata is at
100% USED.
How can I increase the cephfs_data and cephfs_metadata pool.
Sorry I'm new with Ceph.
root@pf-us1-dfs1:/etc/ceph# ceph df
GLOBAL:
SIZE AVAIL
Hello,
> Hi guys, I need your help.
> I'm new to CephFS and we started using it as file storage.
> Today we are getting "no space left on device" errors, but I'm seeing that we have
> plenty of space on the filesystem.
>
Looks like the same problem as mine:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-January/032054.html
The reported free space is the cluster total, while Ceph is effectively limited by the smallest free space (the worst OSD).
Please check your (re-)weights.
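A minimal sketch for checking (and, if you decide to, adjusting) the reweights; the threshold of 120 is just an example value:

# The REWEIGHT and %USE/VAR columns show the per-OSD override weight and fill level
ceph osd df
# Optionally let Ceph lower the reweight of over-full OSDs automatically
# (dry-run first; 120 means only touch OSDs above 120% of average utilisation)
ceph osd test-reweight-by-utilization 120
ceph osd reweight-by-utilization 120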
Kevin
On Tue, Jan 8, 2019 at 14:32, Rodrigo Embeita wrote:
>
>
Hi guys, I need your help.
I'm new to CephFS and we started using it as file storage.
Today we are getting "no space left on device" errors, but I'm seeing that we have plenty of space on the filesystem.
Filesystem Size Used Avail Use% Mounted on