From: Kevin Weiler kevin.wei...@imc-chicago.com
Subject: [ceph-users] near full osd

Hi guys,

I have an OSD in my cluster that is near full at 90%, but we're using a
little less than half the available storage in the cluster. Shouldn't this
be balanced out?

--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 |
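A quick way to see the per-OSD imbalance (a sketch only; "ceph pg dump osds"
is one of several commands that report per-OSD usage, and the output format
varies by release):

    ceph health detail   # lists any OSDs flagged near full, with thresholds
    ceph osd tree        # each OSD's CRUSH weight and reweight value
    ceph pg dump osds    # per-OSD used/available space and PG counts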
From: Greg Chavez greg.cha...@gmail.com
Sent: Tuesday, November 05, 2013 11:20 AM
To: Kevin Weiler
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] near full osd

Kevin, in my experience that usually indicates a bad or underperforming
disk, or a too-high priority. Try running ceph osd crush reweight
osd.## 1.0. If that doesn't do the trick, you may want to just out that
guy. I don't think the crush algorithm guarantees balancing things out in
the way you might expect.
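For concreteness, with a hypothetical OSD id of 12, the two suggestions
above look like this:

    # lower the CRUSH weight so less data maps onto this OSD
    ceph osd crush reweight osd.12 1.0

    # or mark it out entirely so its placement groups migrate elsewhere
    ceph osd out 12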
From: Aronesty, Erik earone...@expressionanalysis.com
Date: Tuesday, November 5, 2013 10:27 AM
To: Greg Chavez greg.cha...@gmail.com, Kevin Weiler kevin.wei...@imc-chicago.com
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] near full osd

If there’s an underperforming disk, why on earth would more data be put on
it? You’d think it would be less…. I would think an overperforming disk
should (desirably) cause that case, right?
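A sketch of how to check whether a disk is actually underperforming rather
than just overweighted (command availability depends on your Ceph release):

    ceph osd perf    # per-OSD commit/apply latency; a slow disk stands out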
Subject: Re: [ceph-users] near full osd

It's not a hard value; you should adjust based on the size of your pools
(many of them are quite small when used with RGW, for instance). But in
general it is better to have more than fewer.
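Assuming the value under discussion is the per-pool placement-group count,
a sketch with a purely illustrative pool name and numbers:

    # create a pool with 256 PGs, then raise the count later if needed
    ceph osd pool create mypool 256 256
    ceph osd pool set mypool pg_num 512
    ceph osd pool set mypool pgp_num 512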