I would set min_size back to 2 for general running, but put it down to 1
during planned maintenance. There are a lot of threads on the ML talking
about why you shouldn't run with min_size of 1.
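As a sketch of that switch (using the pool name 'volumes' from the output quoted below in this thread):

```shell
# Before planned maintenance: allow I/O to continue with a single surviving replica
ceph osd pool set volumes min_size 1

# ... perform the maintenance, wait for recovery to complete ...

# Afterwards, restore the safer setting
ceph osd pool set volumes min_size 2
```

Leaving min_size at 1 permanently risks data loss if the last remaining copy fails, which is why the ML threads advise against it.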
On Thu, Aug 10, 2017, 11:36 PM Hyun Ha wrote:
Thanks for the reply.
In my case, it was an issue with the min_size of the pool.
# ceph osd pool ls detail
pool 5 'volumes' replicated size 2 min_size 2 crush_ruleset 0 object_hash
rjenkins pg_num 512 pgp_num 512 last_change 844 flags hashpspool
stripe_width 0
removed_snaps [1~23]
When replicated size and min_size are both 2, client I/O to a PG blocks as soon as one of its replicas goes down.
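To see which pools have this configuration and which PGs are actually blocking I/O, something like the following should work (standard ceph CLI commands):

```shell
# Show every pool with its size/min_size settings, as in the output above
ceph osd pool ls detail

# List PGs that are stuck inactive and therefore blocking client I/O
ceph pg dump_stuck inactive
```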
When the node reboots, are the OSDs being marked down immediately? If the
node were to reboot without its OSDs being marked down, then all requests to those
OSDs would block until they got marked down.
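One way to check this, and to handle a planned reboot cleanly (a sketch with standard ceph commands; adjust to your setup):

```shell
# Check whether the OSDs on the rebooted node show as down
ceph osd tree

# For a planned reboot, stop the cluster from rebalancing while the node is away
ceph osd set noout
# ... reboot the node ...
ceph osd unset noout
```

With noout set, the OSDs are still marked down (so requests fail over quickly) but their data is not re-replicated while the node is briefly offline.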
On Thu, Aug 10, 2017, 5:46 AM Hyun Ha wrote:
Hi, Ramirez
I have exactly the same problem as yours.
Did you solve that issue?
Do you have any experience or solutions?
Thank you.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi Chris,
Yes, all pools have size=3 and min_size=2. The clients are only RBD.
I did a shutdown to make a firmware upgrade.
Kr.
Luis
On 15/07/16 09:05, Christian Balzer wrote:
Hello,
On Fri, 15 Jul 2016 00:28:37 +0200 Luis Ramirez wrote:
Hi,
I have a cluster with 3 MON nodes and 5 OSD nodes. If I reboot
one of the OSD nodes I get slow requests waiting for active.
2016-07-14 19:39:07.996942 osd.33 10.255.128.32:6824/7404 888 : cluster
[WRN] slow request 60.627789 seconds old, received at 2016-07-14
19:38:07.369009: o