[ovirt-users] Re: Advice around ovirt 4.3 / gluster 5.x

2019-03-03 Thread Endre Karlson
I have tried bumping to 5.4 now and am still getting a lot of "Failed
Eventhandler" errors in the logs. Any ideas, guys?

On Sun, Mar 3, 2019 at 09:03 Guillaume Pavese <guillaume.pav...@interactiv-group.com> wrote:

> Gluster 5.4 is released but not yet in the official repository.
> If, like me, you cannot wait for the official release of Gluster 5.4 with
> the instability bugfixes (hopefully planned for around March 12), you can
> use the following repository:
>
> For Gluster 5.4-1 :
>
> #/etc/yum.repos.d/Gluster5-Testing.repo
> [Gluster5-Testing]
> name=Gluster5-Testing $basearch
> baseurl=https://cbs.centos.org/repos/storage7-gluster-5-testing/os/$basearch/
> enabled=1
> #metadata_expire=60m
> gpgcheck=0
>
>
> If adventurous ;), Gluster 6-rc0:
>
> #/etc/yum.repos.d/Gluster6-Testing.repo
> [Gluster6-Testing]
> name=Gluster6-Testing $basearch
> baseurl=https://cbs.centos.org/repos/storage7-gluster-6-testing/os/$basearch/
> enabled=1
> #metadata_expire=60m
> gpgcheck=0
>
>
> GLHF
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
>
> On Sun, Mar 3, 2019 at 6:16 AM Endre Karlson wrote:
>
>> Hi, should we downgrade / reinstall our cluster? We have a 4-node cluster
>> that's breaking apart daily due to the issues with GlusterFS after upgrading
>> from 4.2.8, which was rock solid. I am wondering why 4.3 was released as a
>> stable version at all?? **FRUSTRATION**
>>
>> Endre
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3TJKJGGWCANXWZED2WF5ZHTSRS2DVHR2/
>>
>
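
Assuming yum-based CentOS 7 hosts like those above, applying the testing
repository might look like the following (a sketch only; put each host in
maintenance and upgrade one node at a time):

yum clean metadata
yum update 'glusterfs*'
systemctl restart glusterd
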
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PYUE337CBDNJSICY3Z3CRW2OFSGLX2Q2/


[ovirt-users] Advice around ovirt 4.3 / gluster 5.x

2019-03-02 Thread Endre Karlson
Hi, should we downgrade / reinstall our cluster? We have a 4-node cluster
that's breaking apart daily due to the issues with GlusterFS after upgrading
from 4.2.8, which was rock solid. I am wondering why 4.3 was released as a
stable version at all?? **FRUSTRATION**

Endre
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3TJKJGGWCANXWZED2WF5ZHTSRS2DVHR2/


[ovirt-users] Problems with GlusterFS

2019-02-26 Thread Endre Karlson
Hi, we are seeing a high number of errors / failures in the logs and
problems with our oVirt 4.3 cluster. Is there any indication of a possible
fix?

The message "E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler" repeated 13 times between [2019-02-26 13:53:40.653905] and
[2019-02-26 13:54:04.684140]
[2019-02-26 13:54:08.684591] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-vmstore-client-2: changing port to 49153 (from 0)
[2019-02-26 13:54:08.689021] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler

==> /var/log/glusterfs/glustershd.log <==
[2019-02-26 13:54:08.783380] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-vmstore-client-2: changing port to 49153 (from 0)
[2019-02-26 13:54:09.427338] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-engine-client-2: changing port to 49152 (from 0)
[2019-02-26 13:54:10.785533] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-engine-client-2: changing port to 49152 (from 0)
[2019-02-26 13:54:12.432411] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-vmstore-client-2: changing port to 49153 (from 0)

==>
/var/log/glusterfs/rhev-data-center-mnt-glusterSD-ovirt4-stor.creator.local:_engine.log
<==
[2019-02-26 13:54:12.579095] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-engine-client-2: changing port to 49152 (from 0)

==>
/var/log/glusterfs/rhev-data-center-mnt-glusterSD-ovirt4-stor.creator.local:vmstore.log
<==
[2019-02-26 13:54:12.689449] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-vmstore-client-2: changing port to 49153 (from 0)

==> /var/log/glusterfs/glustershd.log <==
[2019-02-26 13:54:12.790471] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-vmstore-client-2: changing port to 49153 (from 0)
[2019-02-26 13:54:13.437351] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-engine-client-2: changing port to 49152 (from 0)
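
A few quick health checks that may help localize this, assuming the volume
names from the logs above:

gluster peer status
gluster volume status vmstore
gluster volume heal engine info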
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NWLGKXCKBSOMMX6AEUZTUULKM5224KVX/


[ovirt-users] Re: Error starting hosted engine

2019-02-12 Thread Endre Karlson
I also tried to run
service vdsmd stop
vdsm-tool configure --force
service vdsmd start

and then restarted the HA agent on all nodes, but it doesn't help; the node
upgraded to 4.3 is still not able to start the engine.

/ E
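
For reference, a fuller restart sequence (a sketch, assuming systemd-managed
hosted-engine HA services) would be:

systemctl stop ovirt-ha-agent ovirt-ha-broker
systemctl stop vdsmd
vdsm-tool configure --force
systemctl start vdsmd
systemctl start ovirt-ha-broker ovirt-ha-agent
hosted-engine --vm-status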

On Tue, Feb 12, 2019 at 22:01 Endre Karlson <endre.karl...@gmail.com> wrote:

> Yes, that seems correct, but is there no way to work around it?
>
> On Tue, Feb 12, 2019 at 06:24 Sahina Bose wrote:
>
>>
>>
>> On Tue, Feb 12, 2019 at 10:51 AM Endre Karlson wrote:
>>
>>> It's an upgrade from 4.2.x (the latest version of the 4.2 series). I
>>> upgraded by adding the 4.3 repo and following the steps on the upgrade
>>> guide page https://www.ovirt.org/release/4.3.0/#centos--rhel
>>>
>>
>> Seems like you're running into
>> https://bugzilla.redhat.com/show_bug.cgi?id=1666795
>>
>>
>>>
>>> On Mon, Feb 11, 2019 at 23:35 Greg Sheremeta <gsher...@redhat.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Is this an upgrade or a fresh installation? What version? What
>>>> installation or upgrade commands / methods did you use?
>>>>
>>>> Best wishes,
>>>> Greg
>>>>
>>>>
>>>>
>>>> On Mon, Feb 11, 2019 at 5:11 PM Endre Karlson wrote:
>>>>
>>>>> https://paste.ubuntu.com/p/BrmPYRKmzT/
>>>>> https://paste.fedoraproject.org/paste/MjfioF9-Pzk02541abKyOw
>>>>>
>>>>> Seems like it's an error with vdsmd and glusterfs?
>>>>>
>>>>> // Endre
>>>>> ___
>>>>> Users mailing list -- users@ovirt.org
>>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct:
>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives:
>>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OWKVL6PWBCPYPKD6QWQJERZDDRYRIUKU/
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> GREG SHEREMETA
>>>>
>>>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>>>
>>>> Red Hat NA
>>>>
>>>> <https://www.redhat.com/>
>>>>
>>>> gsher...@redhat.com   IRC: gshereme
>>>> <https://red.ht/sig>
>>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LYXZVYVJAJFEBITTNQBNNF6WVTPNZJMJ/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P4Z7IQROW7RHX6TMWEQ6FCQ3KHN7X7NP/


[ovirt-users] Re: Error starting hosted engine

2019-02-12 Thread Endre Karlson
Yes, that seems correct, but is there no way to work around it?

On Tue, Feb 12, 2019 at 06:24 Sahina Bose wrote:

>
>
> On Tue, Feb 12, 2019 at 10:51 AM Endre Karlson wrote:
>
>> It's an upgrade from 4.2.x (the latest version of the 4.2 series). I
>> upgraded by adding the 4.3 repo and following the steps on the upgrade
>> guide page https://www.ovirt.org/release/4.3.0/#centos--rhel
>>
>
> Seems like you're running into
> https://bugzilla.redhat.com/show_bug.cgi?id=1666795
>
>
>>
>> On Mon, Feb 11, 2019 at 23:35 Greg Sheremeta <gsher...@redhat.com> wrote:
>>
>>> Hi,
>>>
>>> Is this an upgrade or a fresh installation? What version? What
>>> installation or upgrade commands / methods did you use?
>>>
>>> Best wishes,
>>> Greg
>>>
>>>
>>>
>>> On Mon, Feb 11, 2019 at 5:11 PM Endre Karlson wrote:
>>>
>>>> https://paste.ubuntu.com/p/BrmPYRKmzT/
>>>> https://paste.fedoraproject.org/paste/MjfioF9-Pzk02541abKyOw
>>>>
>>>> Seems like it's an error with vdsmd and glusterfs?
>>>>
>>>> // Endre
>>>> ___
>>>> Users mailing list -- users@ovirt.org
>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OWKVL6PWBCPYPKD6QWQJERZDDRYRIUKU/
>>>>
>>>
>>>
>>> --
>>>
>>> GREG SHEREMETA
>>>
>>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>>
>>> Red Hat NA
>>>
>>> <https://www.redhat.com/>
>>>
>>> gsher...@redhat.com   IRC: gshereme
>>> <https://red.ht/sig>
>>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LYXZVYVJAJFEBITTNQBNNF6WVTPNZJMJ/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GO42HCO5B467TBN3VNXGKIP3O3UUMM32/


[ovirt-users] Re: Error starting hosted engine

2019-02-11 Thread Endre Karlson
It's an upgrade from 4.2.x (the latest version of the 4.2 series). I
upgraded by adding the 4.3 repo and following the steps on the upgrade guide
page https://www.ovirt.org/release/4.3.0/#centos--rhel

On Mon, Feb 11, 2019 at 23:35 Greg Sheremeta wrote:

> Hi,
>
> Is this an upgrade or a fresh installation? What version? What
> installation or upgrade commands / methods did you use?
>
> Best wishes,
> Greg
>
>
>
> On Mon, Feb 11, 2019 at 5:11 PM Endre Karlson wrote:
>
>> https://paste.ubuntu.com/p/BrmPYRKmzT/
>> https://paste.fedoraproject.org/paste/MjfioF9-Pzk02541abKyOw
>>
>> Seems like it's an error with vdsmd and glusterfs?
>>
>> // Endre
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OWKVL6PWBCPYPKD6QWQJERZDDRYRIUKU/
>>
>
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> <https://www.redhat.com/>
>
> gsher...@redhat.com   IRC: gshereme
> <https://red.ht/sig>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LYXZVYVJAJFEBITTNQBNNF6WVTPNZJMJ/


[ovirt-users] Error starting hosted engine

2019-02-11 Thread Endre Karlson
https://paste.ubuntu.com/p/BrmPYRKmzT/
https://paste.fedoraproject.org/paste/MjfioF9-Pzk02541abKyOw

Seems like it's an error with vdsmd and glusterfs?

// Endre
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OWKVL6PWBCPYPKD6QWQJERZDDRYRIUKU/


[ovirt-users] Gluster Issues

2018-09-11 Thread Endre Karlson
Hi, we are seeing some issues where our hosts OOM-kill glusterd after a
while, even though there's plenty of memory.

Running CentOS 7.4.x and oVirt 4.2.x.
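
To confirm it really is the OOM killer and see what glusterd was holding at
the time, something like this might help (the volume name is an assumption):

grep -i 'out of memory' /var/log/messages
gluster volume status data mem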
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U4SRQK74C7DPAB3GWCJQTZZZWU346CAW/


[ovirt-users] Ovirt vm's paused due to storage error

2018-03-15 Thread Endre Karlson
Hi, this issue is here again: we are getting several VMs going into storage
error in our 4-node cluster running on CentOS 7.4 with Gluster and oVirt
4.2.1.

Gluster version: 3.12.6

[root@ovirt3 ~]# gluster volume status
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt0:/gluster/brick3/data           49152     0          Y       9102
Brick ovirt2:/gluster/brick3/data           49152     0          Y       28063
Brick ovirt3:/gluster/brick3/data           49152     0          Y       28379
Brick ovirt0:/gluster/brick4/data           49153     0          Y       9111
Brick ovirt2:/gluster/brick4/data           49153     0          Y       28069
Brick ovirt3:/gluster/brick4/data           49153     0          Y       28388
Brick ovirt0:/gluster/brick5/data           49154     0          Y       9120
Brick ovirt2:/gluster/brick5/data           49154     0          Y       28075
Brick ovirt3:/gluster/brick5/data           49154     0          Y       28397
Brick ovirt0:/gluster/brick6/data           49155     0          Y       9129
Brick ovirt2:/gluster/brick6_1/data         49155     0          Y       28081
Brick ovirt3:/gluster/brick6/data           49155     0          Y       28404
Brick ovirt0:/gluster/brick7/data           49156     0          Y       9138
Brick ovirt2:/gluster/brick7/data           49156     0          Y       28089
Brick ovirt3:/gluster/brick7/data           49156     0          Y       28411
Brick ovirt0:/gluster/brick8/data           49157     0          Y       9145
Brick ovirt2:/gluster/brick8/data           49157     0          Y       28095
Brick ovirt3:/gluster/brick8/data           49157     0          Y       28418
Brick ovirt1:/gluster/brick3/data           49152     0          Y       23139
Brick ovirt1:/gluster/brick4/data           49153     0          Y       23145
Brick ovirt1:/gluster/brick5/data           49154     0          Y       23152
Brick ovirt1:/gluster/brick6/data           49155     0          Y       23159
Brick ovirt1:/gluster/brick7/data           49156     0          Y       23166
Brick ovirt1:/gluster/brick8/data           49157     0          Y       23173
Self-heal Daemon on localhost               N/A       N/A        Y       7757
Bitrot Daemon on localhost                  N/A       N/A        Y       7766
Scrubber Daemon on localhost                N/A       N/A        Y       7785
Self-heal Daemon on ovirt2                  N/A       N/A        Y       8205
Bitrot Daemon on ovirt2                     N/A       N/A        Y       8216
Scrubber Daemon on ovirt2                   N/A       N/A        Y       8227
Self-heal Daemon on ovirt0                  N/A       N/A        Y       32665
Bitrot Daemon on ovirt0                     N/A       N/A        Y       32674
Scrubber Daemon on ovirt0                   N/A       N/A        Y       32712
Self-heal Daemon on ovirt1                  N/A       N/A        Y       31759
Bitrot Daemon on ovirt1                     N/A       N/A        Y       31768
Scrubber Daemon on ovirt1                   N/A       N/A        Y       31790

Task Status of Volume data
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 62942ba3-db9e-4604-aa03-4970767f4d67
Status               : completed

Status of volume: engine
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt0:/gluster/brick1/engine         49158     0          Y       9155
Brick ovirt2:/gluster/brick1/engine         49158     0          Y       28107
Brick ovirt3:/gluster/brick1/engine         49158     0          Y       28427
Self-heal Daemon on localhost               N/A       N/A        Y       7757
Self-heal Daemon on ovirt1                  N/A       N/A        Y       31759
Self-heal Daemon on ovirt0                  N/A       N/A        Y       32665
Self-heal Daemon on ovirt2                  N/A       N/A        Y       8205

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: iso
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt0:/gluster/brick2/iso            49159     0          Y       9164
Brick ovirt2:/gluster/brick2/iso            49159     0          Y       28116
Brick ovirt3:/gluster/brick2/iso            49159     0          Y       28436
NFS Server on localhost                     2049      0          Y       7746
Self-heal Daemon on localhost               N/A       N/A        Y       7757
NFS Server on ovirt1                        2049      0          Y       31748
Self-heal Daemon on ovirt1                  N/A       N/A        Y       31759
NFS Server on ovirt0                        2049      0          Y       32656
Self-heal Daemon on ovirt0                  N/A       N/A        Y       32665
NFS Server on ovir
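
Since all bricks and daemons show online above, checking for pending heals
would be a reasonable next step; a sketch using the volume names above:

gluster volume heal data info
gluster volume heal engine info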

[ovirt-users] CPU queues on ovirt hosts.

2018-02-20 Thread Endre Karlson
Hi guys, is there a way to get the CPU run queue to go down for a Java app
on an oVirt host?

We have an IdM app where the CPU queue is constantly at 2-3 when we are
working with the configuration, but on ESX on a similar host it is much
faster.
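
To see whether the run queue is host-wide or guest-local, comparing vmstat on
the host and inside the VM may help; the first column (r) is the run-queue
depth:

vmstat 1 5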
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problems with some vms

2018-01-19 Thread Endre Karlson
Does anyone have any ideas on this?

2018-01-17 12:07 GMT+01:00 Endre Karlson :

> One brick was down for replacement at one point.
>
> It has been replaced and all volumes are up:
>
> Status of volume: data
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick ovirt0:/gluster/brick3/data           49152     0          Y       22467
> Brick ovirt2:/gluster/brick3/data           49152     0          Y       20736
> Brick ovirt3:/gluster/brick3/data           49152     0          Y       23148
> Brick ovirt0:/gluster/brick4/data           49153     0          Y       22497
> Brick ovirt2:/gluster/brick4/data           49153     0          Y       20742
> Brick ovirt3:/gluster/brick4/data           49153     0          Y       23158
> Brick ovirt0:/gluster/brick5/data           49154     0          Y       22473
> Brick ovirt2:/gluster/brick5/data           49154     0          Y       20748
> Brick ovirt3:/gluster/brick5/data           49154     0          Y       23156
> Brick ovirt0:/gluster/brick6/data           49155     0          Y       22479
> Brick ovirt2:/gluster/brick6_1/data         49161     0          Y       21203
> Brick ovirt3:/gluster/brick6/data           49155     0          Y       23157
> Brick ovirt0:/gluster/brick7/data           49156     0          Y       22485
> Brick ovirt2:/gluster/brick7/data           49156     0          Y       20763
> Brick ovirt3:/gluster/brick7/data           49156     0          Y       23155
> Brick ovirt0:/gluster/brick8/data           49157     0          Y       22491
> Brick ovirt2:/gluster/brick8/data           49157     0          Y       20771
> Brick ovirt3:/gluster/brick8/data           49157     0          Y       23154
> Self-heal Daemon on localhost               N/A       N/A        Y       23238
> Bitrot Daemon on localhost                  N/A       N/A        Y       24870
> Scrubber Daemon on localhost                N/A       N/A        Y       24889
> Self-heal Daemon on ovirt2                  N/A       N/A        Y       24271
> Bitrot Daemon on ovirt2                     N/A       N/A        Y       24856
> Scrubber Daemon on ovirt2                   N/A       N/A        Y       24866
> Self-heal Daemon on ovirt0                  N/A       N/A        Y       29409
> Bitrot Daemon on ovirt0                     N/A       N/A        Y       5457
> Scrubber Daemon on ovirt0                   N/A       N/A        Y       5468
>
> Task Status of Volume data
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: engine
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick ovirt0:/gluster/brick1/engine         49158     0          Y       22511
> Brick ovirt2:/gluster/brick1/engine         49158     0          Y       20780
> Brick ovirt3:/gluster/brick1/engine         49158     0          Y       23199
> Self-heal Daemon on localhost               N/A       N/A        Y       23238
> Self-heal Daemon on ovirt0                  N/A       N/A        Y       29409
> Self-heal Daemon on ovirt2                  N/A       N/A        Y       24271
>
> Task Status of Volume engine
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: iso
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick ovirt0:/gluster/brick2/iso            49159     0          Y       22520
> Brick ovirt2:/gluster/brick2/iso            49159     0          Y       20789
> Brick ovirt3:/gluster/brick2/iso            49159     0          Y       23208
> NFS Server on localhost                     N/A       N/A        N       N/A
> Self-heal Daemon on localhost               N/A       N/A        Y       23238
> NFS Server on ovirt2                        N/A       N/A        N       N/A
> Self-heal Daemon on ovirt2                  N/A       N/A        Y       24271
> NFS Server on ovirt0                        N/A       N/A        N       N/A
> Self-heal Daemon on ovirt0                  N/A       N/A        Y       29409
>
> Task Status of Volume iso
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> 2018-01-17 8:13 GMT+01:00 Gobinda Das :
>
>> Hi,
>> I can see some errors in the log:
>> [2018-01-14 11:19:49.886571] E [socket.c:230

Re: [ovirt-users] Problems with some vms

2018-01-17 Thread Endre Karlson
> Client process will keep trying to connect to glusterd
> until brick's port is available
>
> Can you please check gluster volume status and see if all bricks are up?
>
> On Wed, Jan 17, 2018 at 12:24 PM, Endre Karlson wrote:
>
>> It's there now for each of the hosts. ovirt1 is not in service yet.
>>
>> 2018-01-17 5:52 GMT+01:00 Gobinda Das :
>>
>>> In the above URL only the data and iso mount logs are present, but there
>>> are no engine and vmstore mount logs.
>>>
>>> On Wed, Jan 17, 2018 at 1:26 AM, Endre Karlson wrote:
>>>
>>>> Hi, all logs for the mounts are located here:
>>>> https://www.dropbox.com/sh/3qzmwe76rkt09fk/AABzM9rJKbH5SBPWc31Npxhma?dl=0
>>>>
>>>> additionally we replaced a broken disk that is now resynced.
>>>>
>>>> 2018-01-15 11:17 GMT+01:00 Gobinda Das :
>>>>
>>>>> Hi Endre,
>>>>>  Mount logs will be in below format inside  /var/log/glusterfs :
>>>>>
>>>>>  /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_engine.log
>>>>> /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_data.log
>>>>> /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_vmstore.log
>>>>>
>>>>> On Mon, Jan 15, 2018 at 11:57 AM, Endre Karlson <endre.karl...@gmail.com> wrote:
>>>>>
>>>>>> Hi.
>>>>>>
>>>>>> What are the gluster mount logs ?
>>>>>>
>>>>>> I have these gluster logs.
>>>>>> cli.log  etc-glusterfs-glusterd.vol.log
>>>>>> glfsheal-engine.log  glusterd.lognfs.log
>>>>>>   rhev-data-center-mnt-glusterSD-ovirt0:_engine.log
>>>>>> rhev-data-center-mnt-glusterSD-ovirt3:_iso.log
>>>>>> cmd_history.log  glfsheal-data.log   glfsheal-iso.log
>>>>>>  glustershd.log  rhev-data-center-mnt-glusterSD-ovirt0:_data.log
>>>>>> rhev-data-center-mnt-glusterSD-ovirt0:_iso.log statedump.log
>>>>>>
>>>>>>
>>>>>> I am running version
>>>>>> glusterfs-server-3.12.4-1.el7.x86_64
>>>>>> glusterfs-geo-replication-3.12.4-1.el7.x86_64
>>>>>> libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.7.x86_64
>>>>>> glusterfs-libs-3.12.4-1.el7.x86_64
>>>>>> glusterfs-api-3.12.4-1.el7.x86_64
>>>>>> python2-gluster-3.12.4-1.el7.x86_64
>>>>>> glusterfs-client-xlators-3.12.4-1.el7.x86_64
>>>>>> glusterfs-cli-3.12.4-1.el7.x86_64
>>>>>> glusterfs-events-3.12.4-1.el7.x86_64
>>>>>> glusterfs-rdma-3.12.4-1.el7.x86_64
>>>>>> vdsm-gluster-4.20.9.3-1.el7.centos.noarch
>>>>>> glusterfs-3.12.4-1.el7.x86_64
>>>>>> glusterfs-fuse-3.12.4-1.el7.x86_64
>>>>>>
>>>>>> // Endre
>>>>>>
>>>>>> 2018-01-15 6:11 GMT+01:00 Gobinda Das :
>>>>>>
>>>>>>> Hi Endre,
>>>>>>>  Can you please provide glusterfs mount logs?
>>>>>>>
>>>>>>> On Mon, Jan 15, 2018 at 6:16 AM, Darrell Budic <bu...@onholyground.com> wrote:
>>>>>>>
>>>>>>>> What version of gluster are you running? I’ve seen a few of these
>>>>>>>> since moving my storage cluster to 12.3, but still haven’t been able
>>>>>>>> to determine what’s causing it. Seems to be happening most often on
>>>>>>>> VMs that haven’t been switched over to libgfapi mounts yet, but even
>>>>>>>> one of those has paused once so far. They generally restart fine from
>>>>>>>> the GUI, and nothing seems to need healing.
>>>>>>>>
>>>>>>>> --
>>>>>>>> *From:* Endre Karlson 
>>>>>>>> *Subject:* [ovirt-users] Problems with some vms
>>>>>>>> *Date:* January 14, 2018 at 12:55:45 PM CST
>>>>>>>> *To:* users
>>>>>>>>
>>>>>>>> Hi, we are getting some errors with some of our vms in a 3 node
>>>>>>>> server setup.
>>>>>>>>
>>>>>>>> 2018-01-14 15:01:44,015+0100 INFO  (libvirt/events) [virt.vm]
>>>>>>>> (vmId='2c34f52d-140b-4dbe-a4bd-d2cb467b0b7c') abnormal vm stop
>>>>>>>> device virtio-disk0  error eother (vm:4880)
>>>>>>>>
>>>>>>>> We are running glusterfs for shared storage.
>>>>>>>>
>>>>>>>> I have tried setting global maintenance on the first server and
>>>>>>>> then issuing a 'hosted-engine --vm-start' but that leads to nowhere.
>>>>>>>> ___
>>>>>>>> Users mailing list
>>>>>>>> Users@ovirt.org
>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> ___
>>>>>>>> Users mailing list
>>>>>>>> Users@ovirt.org
>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Thanks,
>>>>>>> Gobinda
>>>>>>> +91-9019047912 <+91%2090190%2047912>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Thanks,
>>>>> Gobinda
>>>>> +91-9019047912 <+91%2090190%2047912>
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Gobinda
>>> +91-9019047912 <+91%2090190%2047912>
>>>
>>
>>
>
>
> --
> Thanks,
> Gobinda
> +91-9019047912 <+91%2090190%2047912>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problems with some vms

2018-01-16 Thread Endre Karlson
It's there now for each of the hosts. ovirt1 is not in service yet.

2018-01-17 5:52 GMT+01:00 Gobinda Das :

> In the above URL only the data and iso mount logs are present, but there
> are no engine and vmstore mount logs.
>
> On Wed, Jan 17, 2018 at 1:26 AM, Endre Karlson wrote:
>
>> Hi, all logs for the mounts are located here:
>> https://www.dropbox.com/sh/3qzmwe76rkt09fk/AABzM9rJKbH5SBPWc31Npxhma?dl=0
>>
>> additionally we replaced a broken disk that is now resynced.
>>
>> 2018-01-15 11:17 GMT+01:00 Gobinda Das :
>>
>>> Hi Endre,
>>>  Mount logs will be in below format inside  /var/log/glusterfs :
>>>
>>>  /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_engine.log
>>> /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_data.log
>>> /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_vmstore.log
>>>
>>> On Mon, Jan 15, 2018 at 11:57 AM, Endre Karlson wrote:
>>>
>>>> Hi.
>>>>
>>>> What are the gluster mount logs ?
>>>>
>>>> I have these gluster logs.
>>>> cli.log  etc-glusterfs-glusterd.vol.log  glfsheal-engine.log
>>>> glusterd.lognfs.log
>>>> rhev-data-center-mnt-glusterSD-ovirt0:_engine.log
>>>> rhev-data-center-mnt-glusterSD-ovirt3:_iso.log
>>>> cmd_history.log  glfsheal-data.log   glfsheal-iso.log
>>>>  glustershd.log  rhev-data-center-mnt-glusterSD-ovirt0:_data.log
>>>> rhev-data-center-mnt-glusterSD-ovirt0:_iso.log statedump.log
>>>>
>>>>
>>>> I am running version
>>>> glusterfs-server-3.12.4-1.el7.x86_64
>>>> glusterfs-geo-replication-3.12.4-1.el7.x86_64
>>>> libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.7.x86_64
>>>> glusterfs-libs-3.12.4-1.el7.x86_64
>>>> glusterfs-api-3.12.4-1.el7.x86_64
>>>> python2-gluster-3.12.4-1.el7.x86_64
>>>> glusterfs-client-xlators-3.12.4-1.el7.x86_64
>>>> glusterfs-cli-3.12.4-1.el7.x86_64
>>>> glusterfs-events-3.12.4-1.el7.x86_64
>>>> glusterfs-rdma-3.12.4-1.el7.x86_64
>>>> vdsm-gluster-4.20.9.3-1.el7.centos.noarch
>>>> glusterfs-3.12.4-1.el7.x86_64
>>>> glusterfs-fuse-3.12.4-1.el7.x86_64
>>>>
>>>> // Endre
>>>>
>>>> 2018-01-15 6:11 GMT+01:00 Gobinda Das :
>>>>
>>>>> Hi Endre,
>>>>>  Can you please provide glusterfs mount logs?
>>>>>
>>>>> On Mon, Jan 15, 2018 at 6:16 AM, Darrell Budic wrote:
>>>>>
>>>>>> What version of gluster are you running? I’ve seen a few of these
>>>>>> since moving my storage cluster to 12.3, but still haven’t been able to
>>>>>> determine what’s causing it. Seems to be happening most often on VMs that
>>>>>> haven’t been switched over to libgfapi mounts yet, but even one of those
>>>>>> has paused once so far. They generally restart fine from the GUI, and
>>>>>> nothing seems to need healing.
>>>>>>
>>>>>> --
>>>>>> *From:* Endre Karlson 
>>>>>> *Subject:* [ovirt-users] Problems with some vms
>>>>>> *Date:* January 14, 2018 at 12:55:45 PM CST
>>>>>> *To:* users
>>>>>>
>>>>>> Hi, we are getting some errors with some of our vms in a 3 node
>>>>>> server setup.
>>>>>>
>>>>>> 2018-01-14 15:01:44,015+0100 INFO  (libvirt/events) [virt.vm]
>>>>>> (vmId='2c34f52d-140b-4dbe-a4bd-d2cb467b0b7c') abnormal vm stop
>>>>>> device virtio-disk0  error eother (vm:4880)
>>>>>>
>>>>>> We are running glusterfs for shared storage.
>>>>>>
>>>>>> I have tried setting global maintenance on the first server and then
>>>>>> issuing a 'hosted-engine --vm-start' but that leads to nowhere.
>>>>>> ___
>>>>>> Users mailing list
>>>>>> Users@ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>>>
>>>>>>
>>>>>> ___
>>>>>> Users mailing list
>>>>>> Users@ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Thanks,
>>>>> Gobinda
>>>>> +91-9019047912 <+91%2090190%2047912>
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Gobinda
>>> +91-9019047912 <+91%2090190%2047912>
>>>
>>
>>
>
>
> --
> Thanks,
> Gobinda
> +91-9019047912 <+91%2090190%2047912>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problems with some vms

2018-01-16 Thread Endre Karlson
Hi, all logs for the mounts are located here:
https://www.dropbox.com/sh/3qzmwe76rkt09fk/AABzM9rJKbH5SBPWc31Npxhma?dl=0

additionally we replaced a broken disk that is now resynced.

2018-01-15 11:17 GMT+01:00 Gobinda Das :

> Hi Endre,
>  Mount logs will be in below format inside  /var/log/glusterfs :
>
>  /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_engine.log
> /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_data.log
> /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_vmstore.log
>
> On Mon, Jan 15, 2018 at 11:57 AM, Endre Karlson wrote:
>
>> Hi.
>>
>> What are the gluster mount logs ?
>>
>> I have these gluster logs.
>> cli.log  etc-glusterfs-glusterd.vol.log  glfsheal-engine.log
>> glusterd.lognfs.log
>> rhev-data-center-mnt-glusterSD-ovirt0:_engine.log
>> rhev-data-center-mnt-glusterSD-ovirt3:_iso.log
>> cmd_history.log  glfsheal-data.log   glfsheal-iso.log
>>  glustershd.log  rhev-data-center-mnt-glusterSD-ovirt0:_data.log
>> rhev-data-center-mnt-glusterSD-ovirt0:_iso.log statedump.log
>>
>>
>> I am running version
>> glusterfs-server-3.12.4-1.el7.x86_64
>> glusterfs-geo-replication-3.12.4-1.el7.x86_64
>> libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.7.x86_64
>> glusterfs-libs-3.12.4-1.el7.x86_64
>> glusterfs-api-3.12.4-1.el7.x86_64
>> python2-gluster-3.12.4-1.el7.x86_64
>> glusterfs-client-xlators-3.12.4-1.el7.x86_64
>> glusterfs-cli-3.12.4-1.el7.x86_64
>> glusterfs-events-3.12.4-1.el7.x86_64
>> glusterfs-rdma-3.12.4-1.el7.x86_64
>> vdsm-gluster-4.20.9.3-1.el7.centos.noarch
>> glusterfs-3.12.4-1.el7.x86_64
>> glusterfs-fuse-3.12.4-1.el7.x86_64
>>
>> // Endre
>>
>> 2018-01-15 6:11 GMT+01:00 Gobinda Das :
>>
>>> Hi Endre,
>>>  Can you please provide glusterfs mount logs?
>>>
>>> On Mon, Jan 15, 2018 at 6:16 AM, Darrell Budic wrote:
>>>
>>>> What version of gluster are you running? I’ve seen a few of these since
>>>> moving my storage cluster to 12.3, but still haven’t been able to determine
>>>> what’s causing it. Seems to be happening most often on VMs that haven’t
>>>> been switched over to libgfapi mounts yet, but even one of those has paused
>>>> once so far. They generally restart fine from the GUI, and nothing seems to
>>>> need healing.
>>>>
>>>> --
>>>> *From:* Endre Karlson 
>>>> *Subject:* [ovirt-users] Problems with some vms
>>>> *Date:* January 14, 2018 at 12:55:45 PM CST
>>>> *To:* users
>>>>
>>>> Hi, we are getting some errors with some of our vms in a 3 node server
>>>> setup.
>>>>
>>>> 2018-01-14 15:01:44,015+0100 INFO  (libvirt/events) [virt.vm]
>>>> (vmId='2c34f52d-140b-4dbe-a4bd-d2cb467b0b7c') abnormal vm stop device
>>>> virtio-disk0  error eother (vm:4880)
>>>>
>>>> We are running glusterfs for shared storage.
>>>>
>>>> I have tried setting global maintenance on the first server and then
>>>> issuing a 'hosted-engine --vm-start' but that leads to nowhere.
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Gobinda
>>> +91-9019047912 <+91%2090190%2047912>
>>>
>>
>>
>
>
> --
> Thanks,
> Gobinda
> +91-9019047912 <+91%2090190%2047912>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problems with some vms

2018-01-14 Thread Endre Karlson
Hi.

What are the gluster mount logs?

I have these gluster logs.
cli.log  etc-glusterfs-glusterd.vol.log  glfsheal-engine.log
glusterd.lognfs.log
rhev-data-center-mnt-glusterSD-ovirt0:_engine.log
rhev-data-center-mnt-glusterSD-ovirt3:_iso.log
cmd_history.log  glfsheal-data.log   glfsheal-iso.log
 glustershd.log  rhev-data-center-mnt-glusterSD-ovirt0:_data.log
rhev-data-center-mnt-glusterSD-ovirt0:_iso.log statedump.log


I am running version
glusterfs-server-3.12.4-1.el7.x86_64
glusterfs-geo-replication-3.12.4-1.el7.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.7.x86_64
glusterfs-libs-3.12.4-1.el7.x86_64
glusterfs-api-3.12.4-1.el7.x86_64
python2-gluster-3.12.4-1.el7.x86_64
glusterfs-client-xlators-3.12.4-1.el7.x86_64
glusterfs-cli-3.12.4-1.el7.x86_64
glusterfs-events-3.12.4-1.el7.x86_64
glusterfs-rdma-3.12.4-1.el7.x86_64
vdsm-gluster-4.20.9.3-1.el7.centos.noarch
glusterfs-3.12.4-1.el7.x86_64
glusterfs-fuse-3.12.4-1.el7.x86_64

// Endre

2018-01-15 6:11 GMT+01:00 Gobinda Das :

> Hi Endre,
>  Can you please provide glusterfs mount logs?
>
> On Mon, Jan 15, 2018 at 6:16 AM, Darrell Budic wrote:
>
>> What version of gluster are you running? I’ve seen a few of these since
>> moving my storage cluster to 12.3, but still haven’t been able to determine
>> what’s causing it. Seems to be happening most often on VMs that haven’t
>> been switched over to libgfapi mounts yet, but even one of those has paused
>> once so far. They generally restart fine from the GUI, and nothing seems to
>> need healing.
>>
>> --
>> *From:* Endre Karlson 
>> *Subject:* [ovirt-users] Problems with some vms
>> *Date:* January 14, 2018 at 12:55:45 PM CST
>> *To:* users
>>
>> Hi, we are getting some errors with some of our vms in a 3 node server
>> setup.
>>
>> 2018-01-14 15:01:44,015+0100 INFO  (libvirt/events) [virt.vm]
>> (vmId='2c34f52d-140b-4dbe-a4bd-d2cb467b0b7c') abnormal vm stop device
>> virtio-disk0  error eother (vm:4880)
>>
>> We are running glusterfs for shared storage.
>>
>> I have tried setting global maintenance on the first server and then
>> issuing a 'hosted-engine --vm-start' but that leads to nowhere.
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
> Thanks,
> Gobinda
> +91-9019047912 <+91%2090190%2047912>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Problems with some vms

2018-01-14 Thread Endre Karlson
Hi, we are getting some errors with some of our vms in a 3 node server
setup.

2018-01-14 15:01:44,015+0100 INFO  (libvirt/events) [virt.vm]
(vmId='2c34f52d-140b-4dbe-a4bd-d2cb467b0b7c') abnormal vm stop device
virtio-disk0  error eother (vm:4880)

We are running glusterfs for shared storage.

I have tried setting global maintenance on the first server and then
issuing a 'hosted-engine --vm-start' but that leads to nowhere.
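
To tie the pauses to a storage-side error, grepping the FUSE mount logs
around those timestamps is a reasonable start (a sketch, assuming the default
oVirt mount-log naming under /var/log/glusterfs):

grep ' E ' /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log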
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] GDeploy, thin provisioning and pools

2017-06-23 Thread Endre Karlson
Hi, I'm trying to get gdeploy working for my servers (3 of them) using the
following configuration, linked below:

https://gist.github.com/ekarlso/9bfa0e0560b84ec286ef34ab790d

But it seems I need to have one pool metadata LV per volume group?

Regards
Endre
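
For reference, gdeploy does expect one thin pool, with its own metadata LV,
per volume group; an illustrative [lv] section (all names and sizes here are
assumptions, not taken from the gist):

[lv1]
action=create
vgname=gluster_vg_sdb
poolname=gluster_thinpool_sdb
lvtype=thinpool
size=500GB
poolmetadatasize=16GB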
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error importing old NFS storage domain

2017-05-03 Thread Endre Karlson
So I have now attempted to import the domain after fixing various other
issues with the installation, and now it says that there's no storage domain
at the specified path, even though 10.2.0.15:/mnt/data/vm should contain a
valid VM storage domain.
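
One way to verify the export really holds a domain is to mount it by hand and
look for the domain UUID directory with dom_md/metadata inside; a sketch
(the mount point is arbitrary):

mount -t nfs 10.2.0.15:/mnt/data/vm /mnt/check
ls /mnt/check                     # expect a UUID-named directory
cat /mnt/check/*/dom_md/metadata  # the domain metadata should be readable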

2017-04-28 9:36 GMT+02:00 Endre Karlson :

> Here I am posting the logs at https://api.creator.no/logs
>
> The host in question that is used for the import and that has some
> glusterTasksList errors on it is ovhost20
>
> 2017-04-27 14:48 GMT+02:00 Elad Ben Aharon :
>
>> Hi,
>>
>> Please provide the logs
>>
>> On Thu, Apr 27, 2017 at 2:12 PM, Endre Karlson wrote:
>>
>>> I have an existing NFS storage domain I would like to import.
>>>
>>> I add the Name attribute and set the path and hit Enter but it gives me
>>> "Error while executing actions Attach Storage Domain: Internal Engine
>>> Error".
>>>
>>> I checked the Engine logs too to see if there's any clue when I do it,
>>> but I can't seem to find anything. Maybe I can attach it here?
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error importing old NFS storage domain

2017-04-28 Thread Endre Karlson
Here I am posting the logs at https://api.creator.no/logs

The host in question that is used for the import and that has some
glusterTasksList errors on it is ovhost20

2017-04-27 14:48 GMT+02:00 Elad Ben Aharon :

> Hi,
>
> Please provide the logs
>
> On Thu, Apr 27, 2017 at 2:12 PM, Endre Karlson wrote:
>
>> I have an existing NFS storage domain I would like to import.
>>
>> I add the Name attribute and set the path and hit Enter but it gives me
>> "Error while executing actions Attach Storage Domain: Internal Engine
>> Error".
>>
>> I checked the Engine logs too to see if there's any clue when I do it, but
>> I can't seem to find anything. Maybe I can attach it here?
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Newly deployed cluster with glusterfs doesn't show engine and gives errors

2017-04-27 Thread Endre Karlson
VDSM ovhost20 command GlusterTaskListVDS failed: 'AutoProxy[instance]
object has no attribute 'glusterTaskList'. Do you guys have any idea on
this?

Also, I cannot select hosts as a hosted-engine host when I add a new host,
nor does the HostedEngine VM show in the VMs pane for the cluster.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Error importing old NFS storage domain

2017-04-27 Thread Endre Karlson
I have an existing NFS storage domain I would like to import.

I add the Name attribute and set the path and hit Enter but it gives me
"Error while executing actions Attach Storage Domain: Internal Engine
Error".

I checked the Engine logs too to see if there's any clue when I do it, but I
can't seem to find anything. Maybe I can attach it here?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cloud init and vms

2017-03-24 Thread Endre Karlson
Yeah, I tried that with Ubuntu, but it looks to just fail when it starts
the VM, because it tries to contact the metadata API instead of using the
cdrom source.
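
In similar cases it has helped to pin cloud-init to the cdrom-based
datasources inside the template before sealing, so it stops probing the EC2
metadata URL; a sketch (the file name is an assumption):

# /etc/cloud/cloud.cfg.d/99-ovirt-datasource.cfg
datasource_list: [ NoCloud, ConfigDrive ]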

On Mar 23, 2017, 4:26 PM, "Artyom Lukianov" wrote:

> Just be sure that the cloud-init service is enabled before you create the
> template; otherwise it will fail to initialize a VM.
> Best Regards
>
>> On Thu, Mar 23, 2017 at 1:06 PM, Endre Karlson wrote:
>
>> Hi, is there any prerequisite setup on an Ubuntu VM that is turned into a
>> template that needs to be done, besides installing the cloud-init packages
>> and sealing the template?
>>
>> Endre
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Cloud init and vms

2017-03-23 Thread Endre Karlson
Hi, is there any prerequisite setup on an Ubuntu VM that is turned into a
template that needs to be done, besides installing the cloud-init packages
and sealing the template?

Endre
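
A minimal sealing sequence for an Ubuntu template, as a rough sketch under
the assumption that only cloud-init state and the machine ID need clearing:

apt-get install -y cloud-init
rm -rf /var/lib/cloud/*          # drop any previous cloud-init instance state
truncate -s 0 /etc/machine-id    # regenerated on next boot
shutdown -h now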
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Cloud init guide

2017-03-22 Thread Endre Karlson
Is there a guide on how to use cloud-init with Ubuntu on oVirt?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users