On 2/25/19 3:49 AM, Sahina Bose wrote:
On Thu, Feb 21, 2019 at 11:11 PM Jason P. Thomas wrote:
On 2/20/19 5:33 PM, Darrell Budic wrote:
I was just helping Tristam on #ovirt with a similar problem, we found that his
two upgraded nodes were running multiple glusterfsd processes per brick (but
not all bricks). His volume & brick files in /var/lib/gluster looked normal,
but starting glusterd would often spawn extra fsd processes
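A quick way to check for the condition Darrell describes is to count the running glusterfsd processes per brick. The short Python sketch below parses ps output; it assumes each glusterfsd command line carries a "--brick-name <path>" argument, which is typical but worth verifying against your own ps output:

#!/usr/bin/env python3
# Count running glusterfsd processes per brick to spot duplicates.
# Assumes each glusterfsd command line includes "--brick-name <path>";
# verify against your own ps output before trusting the result.
import collections
import subprocess

ps = subprocess.run(["ps", "-C", "glusterfsd", "-o", "args="],
                    capture_output=True, text=True)
counts = collections.Counter()
for line in ps.stdout.splitlines():
    args = line.split()
    if "--brick-name" in args:
        i = args.index("--brick-name")
        if i + 1 < len(args):
            counts[args[i + 1]] += 1

for brick, n in sorted(counts.items()):
    note = "  <-- more than one glusterfsd" if n > 1 else ""
    print(f"{n}  {brick}{note}")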
On Thu, Feb 14, 2019 at 2:39 AM Ron Jerome wrote:
> > Can you be more specific? What things did you see, and did you report bugs?
>
> I've got this one: https://bugzilla.redhat.com/show_bug.cgi?id=1649054
> and this one: https://bugzilla.redhat.com/show_bug.cgi?id=1651246
> and I've got
Ron, well it looks like you're not wrong. Less than 24 hours after
upgrading my cluster I have a Gluster brick down...
On Wed, Feb 13, 2019 at 5:58 PM Jayme wrote:
Ron, sorry to hear about the troubles. I haven't seen any gluster crashes
yet *knock on wood*. I will monitor closely. Thanks for the heads up!
On Wed, Feb 13, 2019 at 5:09 PM Ron Jerome wrote:
> Can you be more specific? What things did you see, and did you report bugs?
I've got this one: https://bugzilla.redhat.com/show_bug.cgi?id=1649054
and this one: https://bugzilla.redhat.com/show_bug.cgi?id=1651246
and I've got bricks randomly going offline and getting out of sync with the
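For anyone who wants to watch for bricks dropping offline after the upgrade, something like the following Python sketch can be run periodically; it shells out to the gluster CLI and reads the plain-text "Online" column, whose position is assumed from the usual gluster volume status layout and may differ between releases:

#!/usr/bin/env python3
# Print bricks whose "Online" column is not Y in `gluster volume status`.
# Column positions are assumed from typical Gluster output; check them
# against your own installation before relying on this.
import subprocess

out = subprocess.run(["gluster", "volume", "status"],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if line.startswith("Brick "):
        fields = line.split()
        brick = fields[1]      # host:/path/to/brick
        online = fields[-2]    # Y or N, second-to-last column before Pid
        if online != "Y":
            print(f"OFFLINE: {brick}")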
On Wed, Feb 13, 2019 at 3:06 PM Ron Jerome wrote:
> I can confirm that this worked. I had to shut down every single VM then
> change ownership to vdsm:kvm of the image file then start VM back up.
>
Not to rain on your parade, but you should keep a close eye on your gluster
file system after the upgrade. The stability of my gluster file system
I can confirm that this worked. I had to shut down every single VM then
change ownership to vdsm:kvm of the image file then start VM back up.
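For reference, that ownership workaround can be scripted along these lines; the images path below is only a placeholder for your storage domain's mount point, and it should only be run while the affected VMs are shut down:

#!/usr/bin/env python3
# Reset disk image ownership to vdsm:kvm (the workaround discussed for
# https://bugzilla.redhat.com/show_bug.cgi?id=1666795).
# IMAGES_DIR is a placeholder; point it at your storage domain's images
# directory and only run this while the affected VMs are powered off.
import grp
import os
import pwd

IMAGES_DIR = "/rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-uuid>/images"
uid = pwd.getpwnam("vdsm").pw_uid
gid = grp.getgrnam("kvm").gr_gid

for root, _dirs, files in os.walk(IMAGES_DIR):
    for name in files:
        path = os.path.join(root, name)
        if os.path.islink(path):
            continue
        st = os.stat(path)
        if (st.st_uid, st.st_gid) != (uid, gid):
            print(f"fixing {path}")
            os.chown(path, uid, gid)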
On Wed, Feb 13, 2019 at 3:08 PM Simone Tiraboschi wrote:
On Wed, Feb 13, 2019 at 8:06 PM Jayme wrote:
>
> I might be hitting this bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1666795
>
Yes, you definitely are.
Fixing files ownership on file system side is a valid workaround.
On Wed, Feb 13, 2019 at 1:35 PM Jayme wrote:
> This may be happening because I changed cluster compatibility to 4.3 then
> immediately after changed data center compatibility to 4.3 (before
> restarting VMs after cluster compatibility change). If this is the case I
> can't fix by downgrading the data center compatibility to 4.2 as it won't
I might be hitting this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1666795
This may be happening because I changed cluster compatibility to 4.3 then
immediately after changed data center compatibility to 4.3 (before
restarting VMs after cluster compatibility change). If this is the case I
can't fix by downgrading the data center compatibility to 4.2 as it won't
allow me
I may have made matters worse. So I changed to 4.3 compatible cluster then
4.3 compatible data center. All VMs were marked as requiring a reboot. I
restarted a couple of them and none of them will start up; they are saying
"bad volume specification". The ones running that I did not yet restart
I think I just figured out what I was doing wrong. On the edit cluster screen
I was changing both the CPU type and the cluster level to 4.3. I tried it
again by switching to the new CPU type first (leaving the cluster on 4.2),
then saving, then going back in and switching the compat level to 4.3. It
appears that
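For anyone who would rather make the same two-step change through the API instead of the edit cluster dialog, a rough sketch with the Python SDK (ovirtsdk4) follows; the URL, credentials, cluster name and CPU type string are placeholders, and the point is only that the CPU type and the compatibility version go in two separate updates:

#!/usr/bin/env python3
# Sketch: set the new CPU type first (cluster stays at 4.2), then raise the
# cluster compatibility level to 4.3 in a second update.
# URL, credentials, cluster name and CPU type are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

conn = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="secret",
    ca_file="ca.pem",
)
clusters = conn.system_service().clusters_service()
cluster = clusters.list(search="name=Default")[0]
svc = clusters.cluster_service(cluster.id)

# Pass 1: change only the CPU type, leaving the cluster level at 4.2.
svc.update(types.Cluster(cpu=types.Cpu(type="Intel Skylake Server Family")))
# Pass 2: bump the compatibility version separately.
svc.update(types.Cluster(version=types.Version(major=4, minor=3)))
conn.close()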