Hi,
On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:
> Any chance you can setup gdb[1] so we can find out where it's stuck
> exactly?
Yes, absolutely - but I will need some assistance in getting GDB configured in
the engine, as I am not very familiar with it - or how to enable the correct
On the node in question, the metadata isn't coming across state-wise.
It shows VMs being in an unknown state (some are up and some are down),
some show as migrating, and there are 9 forever-hung migration tasks. We
tried to bring up some of the VMs that had a state of Down, but that
On Mon, 2019-07-08 at 16:49 +0300, Benny Zlotnik wrote:
> Not too useful unfortunately :\
> Can you try py-list instead of py-bt? Perhaps it will provide better
> results
(gdb) py-list
57    if get_errno(ex) != errno.EEXIST:
58        raise
59    return
On Mon, 2019-07-08 at 16:25 +0300, Benny Zlotnik wrote:
> Hi,
>
> You have a typo, it's py-bt and I just tried it myself, I only had to
> install:
> $ yum install -y python-devel
> (in addition to the packages specified in the link)
Thanks - this is what I get:
#3 Frame 0x7f2046b59ad0, for file
I have opened a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1727987 for
this issue.
Best Regards,
Strahil Nikolov
Can you try to create multiple ceph volumes manually via rbd from the
engine machine, so we can simulate what cinderlib does without using it?
This can be done:
$ rbd -c ceph.conf create /vol1 --size 100M
$ rbd -c ceph.conf create /vol2 --size 100M
On Mon, Jul 8, 2019 at 4:58 PM Dan Poltawski
As nobody has replied, I guess it is a new bug and I'm going to report it in
Bugzilla.
A workaround for anyone who hits this is to modify the grub menu as follows:
FIX for RHEL8: vim /boot/efi/EFI/redhat/grub.cfg
### BEGIN /etc/grub.d/30_uefi-firmware ###
menuentry 'Reboot' $menuentry_id_option
Hi,
the oVirt guest agent seems to report DST configuration for the timezone
since version 4.3 (of the guest agent). This results in "Actual timezone
in guest doesn't match configuration" messages in the UI for Windows VMs,
because the timezone field can't be matched with the oVirt configuration
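For illustration, the mismatch can be reproduced with a plain exact-match lookup; the mapping entry below is an assumption for the sketch, not the engine's actual timezone table:

```python
# Hypothetical engine-side table keyed by the Windows "Standard Time" name.
OVIRT_TIMEZONE_MAP = {
    "GMT Standard Time": "Europe/London",  # assumed entry for the example
}

# A DST-aware agent may report the daylight variant of the same zone,
# which an exact string match against the table cannot resolve:
reported = "GMT Daylight Time"
matched = reported in OVIRT_TIMEZONE_MAP
print(matched)  # False -> the "doesn't match configuration" warning appears
```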
On Mon, Jul 8, 2019 at 4:45 PM Strahil Nikolov <
hunter86...@yahoo.com> wrote:
> Hi Sandro,
>
> I'm currently running the latest RC and, despite the issue with the
> snapshot (should be in the mailing list) from this weekend, I think
> everything is fine.
> I have noticed that
Hi Sandro,
I'm currently running the latest RC and, despite the issue with the snapshot
(should be in the mailing list) from this weekend, I think everything is
fine. I have noticed that qemu-guest-agent is now detected properly during
snapshots (I have updated that in the relevant bug).
The
Not too useful unfortunately :\
Can you try py-list instead of py-bt? Perhaps it will provide better results
On Mon, Jul 8, 2019 at 4:41 PM Dan Poltawski
wrote:
> On Mon, 2019-07-08 at 16:25 +0300, Benny Zlotnik wrote:
> > Hi,
> >
> > You have a typo, it's py-bt and I just tried it myself, I
Hi,
You have a typo, it's py-bt and I just tried it myself, I only had to
install:
$ yum install -y python-devel
(in addition to the packages specified in the link)
On Mon, Jul 8, 2019 at 2:40 PM Dan Poltawski
wrote:
> Hi,
>
> On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:
>
> > Any
Hello,
part of updating a DC from 4.2 to 4.3 consists of updating the format of
data storage domains from V4 to V5.
Questions:
- where can I find a list of the differences between V4 and V5? And in
general between previous versions?
- can I force only a particular storage domain to remain in V4 while having
the
Yes,
But I thought that cockpit should prevent creation of the gluster volume
'engine' as it is too small.
@Dev
Do we have such control ?
Best Regards,
Strahil Nikolov
On Jul 8, 2019 09:19, Parth Dhanjal wrote:
>
> Hey!
>
> The other 2 bricks were of 50G each.
> I forgot to check that.
> Sorry
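A pre-flight check of the kind asked about could compare the space available on the engine brick against a required minimum before the volume is created. This is only a sketch; the default threshold below is a placeholder, not a value taken from the oVirt installer:

```python
def check_engine_volume_size(avail_gib, minimum_gib=100):
    """Raise if the engine volume's available space is below the minimum.

    minimum_gib is a placeholder for this sketch; consult the oVirt
    hosted-engine sizing guidance for the real recommended value.
    """
    if avail_gib < minimum_gib:
        raise ValueError(
            f"engine volume has only {avail_gib} GiB available, "
            f"below the required {minimum_gib} GiB"
        )
    return True
```

With a 50G brick such as the one reported in this thread, check_engine_volume_size(47) would raise instead of letting the deployment proceed.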
I don't see any reason not to do it, in case the SD replicas are separate
storage domains.
Just note that for the DR, you should prepare a separate DC with a cluster.
P.S. - I must admit that I didn't try this configuration - please share
your results.
On Mon, Jul 8, 2019 at 10:45 AM Gianluca
Hello,
suppose I want to implement Active-Passive DR between 2 sites.
Sites are SiteA and SiteB.
I have 2 storage domains SD1 and SD2, that I can configure so that SD1 is
active in storage array installed in SiteA with replica in SiteB and SD2
the reverse.
I have 4 hosts: host1 and host2 in SiteA
On Mon, Jul 8, 2019 at 8:05 AM Richard Chan
wrote:
> There was a NIC that was Up + Unplugged. I have removed all NICs and
> recreated them. The NPE remains.
>
Can you please provide the output of the following query:
select type, device, address, is_managed, is_plugged, alias from
Hey!
The other 2 bricks were of 50G each.
I forgot to check that.
Sorry for the confusion.
Thanks!
On Mon, Jul 8, 2019 at 11:42 AM Sahina Bose wrote:
>
>
> On Mon, Jul 8, 2019 at 11:40 AM Parth Dhanjal wrote:
>
>> Hey!
>>
>> I used cockpit to deploy gluster.
>> And the problem seems to be
On Mon, Jul 8, 2019 at 11:40 AM Parth Dhanjal wrote:
> Hey!
>
> I used cockpit to deploy gluster.
> And the problem seems to be with
> 10.xx.xx.xx9:/engine 50G 3.6G 47G 8%
> /rhev/data-center/mnt/glusterSD/10.70.41.139:_engine
>
> Engine volume has 500G available
Hey!
I used cockpit to deploy gluster.
And the problem seems to be with
Filesystem            Size  Used Avail Use%
10.xx.xx.xx9:/engine   50G  3.6G   47G   8%
/rhev/data-center/mnt/glusterSD/10.70.41.139:_engine
Engine volume has 500G available