On 04/11/2014 02:45 PM, Greg Poirier wrote:
So... our storage problems persisted for about 45 minutes. I gave an entire
hypervisor worth of VMs time to recover (approx. 30 VMs), and none of them
recovered on their own. In the end, we had to stop and start every VM
(easily done, it was just alarming). Once rebooted, the VMs of course were
fine.
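(For anyone hitting the same hang: a quick way to bounce every guest on a
hypervisor, assuming the guests are managed with libvirt/virsh and a hard
stop is acceptable, is something along these lines -- only a sketch, the
actual environment here may have used a different tool:)

    # Hard-stop and restart every running libvirt guest on this hypervisor.
    for dom in $(virsh list --name); do
        virsh destroy "$dom"   # hard stop (equivalent to pulling the power)
        virsh start "$dom"
    done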
So, setting pgp_num to 2048 to match pg_num had a more serious impact than
I expected. The cluster is rebalancing quite substantially (8.5% of objects
being rebalanced)... which makes sense... Disk utilization is evening out
fairly well, which is encouraging.
We are a little stumped as to why a few ...
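(For reference, the commands in question are the pool settings; assuming the
pool is called "rbd" -- the pool name and the 2048 value are just taken from
the discussion above, adjust for your cluster:)

    # Check the current placement group counts for the pool.
    ceph osd pool get rbd pg_num
    ceph osd pool get rbd pgp_num

    # Bump pgp_num to match pg_num so the split PGs are actually placed.
    ceph osd pool set rbd pgp_num 2048

Raising pgp_num is what triggers the actual data movement, which lines up
with the ~8.5% of objects being rebalanced mentioned above.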
On 04/11/2014 09:23 AM, Josef Johansson wrote:
On 11/04/14 09:07, Wido den Hollander wrote:
On 11 April 2014 at 8:50, Josef Johansson wrote:
Hi,
On 11/04/14 07:29, Wido den Hollander wrote:
On 11 April 2014 at 7:13, Greg Poirier wrote:
One thing to note:
All of our KVM VMs have to be rebooted. This is something I wasn't
expecting. Tried waiting for them to recover on their own, but that's not
happening. Rebooting them restores service immediately. :/ Not ideal.
On Thu, Apr 10, 2014 at 10:12 PM, Greg Poirier wrote:
Going to try increasing the full ratio. Disk utilization wasn't really
growing at an unreasonable pace. I'm going to keep an eye on it for the
next couple of hours and down/out the OSDs if necessary.
We have four more machines that we're in the process of adding (which
doubles the number of OSDs), ...
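(For reference, on a Firefly-era cluster the knobs being discussed are
roughly the following; the 0.97 ratio and the OSD id are only illustrative
values, and the exact command for the full ratio differs between releases:)

    # Temporarily raise the full ratio so the cluster unblocks while the
    # new OSDs are added (don't leave it this high permanently).
    ceph pg set_full_ratio 0.97

    # Mark a specific OSD out so its data rebalances onto the other OSDs.
    ceph osd out 12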
On Thu, 10 Apr 2014, Greg Poirier wrote:
Hi,
I have about 200 VMs with a common RBD volume as their root filesystem and
a number of additional filesystems on Ceph.
All of them have stopped responding. One of the OSDs in my cluster is
marked full. I tried stopping that OSD to force things to rebalance or at
least go to degraded mode, but ...
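(For context, the state described above is the kind of thing these commands
surface; this is just the usual way to inspect it, not what was necessarily
run here:)

    # Overall health, including "full osd(s)" warnings and blocked requests.
    ceph health detail

    # Cluster-wide and per-pool usage, to see how close you are to the full ratio.
    ceph df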