Could someone clarify what the impact of this bug is?
We did increase pg_num/pgp_num and we are on dumpling (0.67.12 unofficial
snapshot).
Most of our clients have likely been restarted already, but not all. Should we
be worried?
Thanks
Jan
> On 11 Aug 2015, at 17:31, Dan van der Ster wrote:
On Tue, Aug 4, 2015 at 9:48 PM, Stefan Priebe wrote:
> Hi,
>
> On 04.08.2015 at 21:16, Ketor D wrote:
>>
>> Hi Stefan,
>> Could you describe more about the linger ops bug?
>> I'm running Firefly, which as you say still has this bug.
>
>
> It will be fixed in the next firefly release.
>
> This one:
>
I started with 7 and expanded to 14, with PGs going from 512 to 4096, as
recommended.
Unfortunately I can't tell you the exact I/O impact, as I made my changes
during off hours when the impact didn't matter. I could see a reduction in
performance, but since it had no effect on me I didn't
Hi,
comments inline.
> On 05 Aug 2015, at 05:45, Jevon Qiao wrote:
>
> Hi Jan,
>
> Thank you for the detailed suggestion. Please see my reply in-line.
> On 5/8/15 01:23, Jan Schermer wrote:
>> I think I wrote about my experience with this about 3 months ago, including
>> what techniques I used
Hi,
On 04.08.2015 at 21:16, Ketor D wrote:
Hi Stefan,
Could you describe more about the linger ops bug?
I'm running Firefly, which as you say still has this bug.
It will be fixed in the next firefly release.
This one:
http://tracker.ceph.com/issues/9806
Stefan
Thanks!
I think I wrote about my experience with this about 3 months ago, including
what techniques I used to minimize impact on production.
Basically we had to
1) increase pg_num in small increments only, because creating the placement
groups themselves caused slow requests on the OSDs
2) increase pgp_num in small increments as well (see the sketch below)
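For concreteness, here is a minimal sketch of that incremental approach. The
pool name, step size, and target below are assumptions for illustration, not
values from this thread; adapt them to your cluster:

    #!/bin/sh
    # Sketch: raise pg_num in small steps, letting the cluster settle
    # after each step, then walk pgp_num up the same way.
    POOL=rbd      # assumed pool name
    TARGET=4096   # assumed final value
    STEP=128     # assumed increment
    for PARAM in pg_num pgp_num; do
        CUR=$(ceph osd pool get "$POOL" "$PARAM" | awk '{print $2}')
        while [ "$CUR" -lt "$TARGET" ]; do
            CUR=$((CUR + STEP))
            [ "$CUR" -gt "$TARGET" ] && CUR=$TARGET
            ceph osd pool set "$POOL" "$PARAM" "$CUR"
            # wait for PG creation/peering to finish before the next step
            until ceph health | grep -q HEALTH_OK; do sleep 60; done
        done
    done

Waiting for HEALTH_OK between steps is conservative; on a busy cluster you may
prefer to watch for slow requests and throttle recovery instead.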
I have done this not that long ago. My original PG estimates were wrong and I
had to increase them.
After increasing the PG numbers, Ceph rebalanced, and that took a while. To
be honest, in my case the slowdown wasn't really visible, it just took a while.
My strong suggestion to you would
We've done the splitting several times. The most important thing is to
run a Ceph version which does not have the linger ops bug.
That means the latest dumpling release, giant, or hammer. The latest firefly
release still has this bug, which results in wrong watchers and no
working snapshots.
Stefan
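As an aside, a quick way to check what each daemon is actually running
(standard ceph CLI; note that wildcard support in `ceph tell` can vary
between releases):

    ceph --version             # version of the local binary
    ceph tell 'osd.*' version  # ask every OSD what it is running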
It will cause a large amount of data movement. Each new PG created by the
split will relocate. It might be OK if you do it slowly. Experiment
on a test cluster.
-Sam
On Mon, Aug 3, 2015 at 12:57 AM, 乔建峰 (Jevon Qiao) wrote:
> Hi Cephers,
>
> This is a greeting from Jevon. Currently, I'm experiencing an issue which
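If you do experiment on a test cluster first, a couple of stock commands are
enough to watch the movement while the split proceeds (the 30-second interval
is an arbitrary choice):

    watch -n 30 ceph -s   # recovery/backfill progress over time
    ceph pg stat          # one-line summary of PG states
    ceph -w               # follow the cluster log live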
Hi Cephers,
This is a greeting from Jevon. Currently, I'm experiencing an issue which
troubles me a lot, so I'm writing to ask for your comments/help/suggestions.
More details are provided below.
Issue:
I set up a cluster having 24 OSDs and created one pool with 1024 placement
groups on it for a
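For reference, 1024 matches the usual rule of thumb from the Ceph docs of
that era (the replica count of 3 below is an assumption):

    # total_pgs ~= (num_osds * 100) / pool_size, rounded up to a power of two
    # (24 * 100) / 3 = 800  ->  next power of two = 1024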