Re: [ovirt-users] "remove" option greyed out on Permissions tab

2017-07-18 Thread Ian Neilsen
Hey All

I've dug around trying to find a flag that enables the "remove" option on
permissions, but can't tell for sure. On every panel the 'remove' option is
greyed out. I need to stop users other than admin from modifying the disk of
the engine manager VM, and unfortunately I can't do this.

Any ideas?

Thanks in advance
Ian


On 7 July 2017 at 13:53, Ian Neilsen <ian.neil...@gmail.com> wrote:

> Hey guys
>
> I've just noticed that I am unable to choose the "remove" option on any
> "Permissions" tab in oVirt Self-hosted 4.1.
>
> Anyone have a suggestion on how to fix this? I'm logged in as admin, the
> original admin created during installation.
>
> Thanks in Advance
>
> --
> Ian Neilsen
>
> Mobile: 0424 379 762
> Linkedin: http://au.linkedin.com/in/ianneilsen
> Twitter : ineilsen
>



-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] "remove" option greyed out on Permissions tab

2017-07-06 Thread Ian Neilsen
Hey guys

I've just noticed that I am unable to choose the "remove" option on any
"Permissions" tab in oVirt Self-hosted 4.1.

Anyone have a suggestion on how to fix this? I'm logged in as admin, the
original admin created during installation.

Thanks in Advance

-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 - mount recovery iso for failing vm ?

2017-03-23 Thread Ian Neilsen
I should have been a little more specific: a failing hosted engine VM.

On 23 March 2017 at 20:24, Ian Neilsen <ian.neil...@gmail.com> wrote:

> Guys
>
> I've spent the day reading any and all content on recovering a
> failing-to-boot VM under oVirt 4.1, and am finding many commands are
> deprecated and old processes no longer work.
>
> What is the correct way to mount an ISO to perform recovery on a boot
> partition of a failing VM in oVirt 4.1?
>
> vm.conf, virsh, virt-edit, vmhost.xml, vdsClient, a new vda: way too many
> options and no single right way to do this that works well.
>
> --
> Ian Neilsen
>
> Mobile: 0424 379 762
> Linkedin: http://au.linkedin.com/in/ianneilsen
> Twitter : ineilsen
>



-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt 4.1 - mount recovery iso for failing vm ?

2017-03-23 Thread Ian Neilsen
Guys

I've spent the day reading any and all content on recovering a
failing-to-boot VM under oVirt 4.1, and am finding many commands are
deprecated and old processes no longer work.

What is the correct way to mount an ISO to perform recovery on a boot
partition of a failing VM in oVirt 4.1?

vm.conf, virsh, virt-edit, vmhost.xml, vdsClient, a new vda: way too many
options and no single right way to do this that works well.

-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to handle mount options for hosted engine on glusterfs

2017-03-23 Thread Ian Neilsen
I think I am going to have to go the "virsh" path: mount a guest-based
CD-ROM storage device with a live CD and fix the VM disk that way.
Meanwhile I'll fix up the fact that VNC and serial console have both been
wiped out following the 4.1 upgrade.
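
Roughly what I have in mind (an untested sketch; the domain name, cdrom
target and ISO path are guesses, so check `virsh dumpxml` first):

  # authenticate with the SASL credentials hosted-engine setup leaves on each host
  virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf list --all
  # attach the live CD to the engine VM's cdrom (target 'hdc' is an assumption)
  virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf \
    change-media HostedEngine hdc /path/to/livecd.iso --insert --config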

On 22 March 2017 at 11:52, Ian Neilsen <ian.neil...@gmail.com> wrote:

> Good to know. I've been using the one-dash notation, following the RH
> document. Don't think I've seen the two-dash before.
>
> On the original cluster I used IPs; on the second cluster I used FQDNs, but
> made sure I have a hosts file present.
>
> On 22 March 2017 at 10:09, /dev/null <devn...@linuxitil.org> wrote:
>
>> Ian, knara,
>>
>> Success! I got it working using the two-dash notation and IP addresses.
>> Surely this is the most reliable way, even with a local hosts file.
>>
>> In my case, the hosted vm dies and takes some time to be running again.
>> Is it possible to have the vm surviving the switch to the
>> backup-volfile-server?
>>
>> Thanks & regards
>>
>> /dev/null
>>
>> *On Tue, 21 Mar 2017 11:52:32 +0530, knarra wrote*
>> > On 03/21/2017 10:52 AM, Ian Neilsen wrote:
>> >
>>
>>
>> >
>> > knara
>> >
>> > Looks like your conf is incorrect for mnt option.
>> >
>> >
>>
>> Hi Ian,
>> >
>> > mnt_option should be mnt_options=backup-volfile-servers=<server1>:<server2>
>> and this is how we test it.
>> >
>> > Thanks
>> > kasturi.
>> >
>>
>>
>> >
>> > It should be, I believe: mnt_options=backupvolfile-server=<server name>
>> >
>> > not
>> >
>> > mnt_options=backup-volfile-servers=host2
>> >
>> > If your DNS isn't working or your hosts file is incorrect, this will
>> prevent it as well.
>> >
>> >
>> >
>> > On 21 March 2017 at 03:30, /dev/null <devn...@linuxitil.org> wrote:
>> >
>>>
>>>
>>> > Hi kasturi,
>>> >
>>> > Thank you. I tested and it seems not to work; even after rebooting, the
>>> current mount does not show the mnt_options, nor does the switchover work.
>>> >
>>> > [root@host2 ~]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
>>> > ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
>>> > gateway=192.168.2.1
>>> > iqn=
>>> > conf_image_UUID=7bdc29ad-bee6-4a33-8d58-feae9f45d54f
>>> > ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
>>> > sdUUID=1775d440-649c-4921-ba3b-9b6218c27ef3
>>> > connectionUUID=fcf70593-8214-4e8d-b546-63c210a3d5e7
>>> > conf_volume_UUID=06dd17e5-a440-417a-94e8-75929b6f9ed5
>>> > user=
>>> > host_id=2
>>> > bridge=ovirtmgmt
>>> > metadata_image_UUID=6252c21c-227d-4dbd-bb7b-65cf342154b6
>>> > spUUID=----
>>> > mnt_options=backup-volfile-servers=host2
>>> > fqdn=ovirt.test.lab
>>> > portal=
>>> > vm_disk_id=1bb9ea7f-986c-4803-ae82-8d5a47b1c496
>>> > metadata_volume_UUID=426ff2cc-58a2-4b83-b22f-3f7dc99890d4
>>> > vm_disk_vol_id=b57d40d2-e68b-440a-bab7-0a9631f4baa4
>>> > domainType=glusterfs
>>> > port=
>>> > console=qxl
>>> > ca_subject="C=EN, L=Test, O=Test, CN=Test"
>>> > password=
>>> > vmid=272942f3-99b9-48b9-aca4-19ec852f6874
>>> > lockspace_image_UUID=9fbdbfd4-3b31-43ce-80e2-283f0aeead49
>>> > lockspace_volume_UUID=b1e4d3ed-ec78-41cd-9a39-4372f488fb92
>>> > vdsm_use_ssl=true
>>> > storage=host1:/gvol0
>>> > conf=/var/run/ovirt-hosted-engine-ha/vm.conf
>>> >
>>> > [root@host2 ~]# mount |grep gvol0
>>> > host1:/gvol0 on /rhev/data-center/mnt/glusterSD/host1:_gvol0 type
>>> fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>>> >
>>> > Any suggestion?
>>> >
>>> > I will try an answerfile-install as well later, but it was helpful to
>>> know where to set this.
>>> >
>>> > Thanks & best regards
>>> >
>>> * > On Mon, 20 Mar 2017 12:12:25 +0530, knarra wrote*
>>> >
>>> > > On 03/20/2017 05:09 AM, /dev/null wrote:
>>> > >
>>>
>>> Hi,
>>>
>>> how do I make the hosted_storage aware of gluster server failure? In
>>> --deploy I cannot provide backup-volfile-servers.

[ovirt-users] ovirt ENGINE stuck in paused state following new partition

2017-03-22 Thread Ian Neilsen
Hi guys

Bit of a pickle. I expanded my engine manager's disk with a new partition
formatted as ext4; however, I seem to have been hit by a superblock issue on
the new ext4 partition. The engine is now stuck in a paused state.

I rebooted and powered off the engine VM, but it won't come out of the PAUSED
state. I obviously cannot access the console, with it giving me paused
messages. The VNC console is down also.

Any suggestions on restoring the engine VM?
Logs are not showing anything. Storage is up, vdsm seems happy, the broker
seems happy, nothing really jumping out. Obviously the VM is having start
issues.
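
For reference, roughly what I've been running, in case I'm holding it wrong
(assuming the standard hosted-engine commands are the right tool here):

  hosted-engine --set-maintenance --mode=global
  hosted-engine --vm-poweroff   # hard stop; --vm-shutdown presumably can't reach a paused guest
  hosted-engine --vm-start
  hosted-engine --vm-status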


-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to handle mount options for hosted engine on glusterfs

2017-03-21 Thread Ian Neilsen
Good to know. I've been using the one-dash notation, following the RH
document. Don't think I've seen the two-dash before.

On the original cluster I used IPs; on the second cluster I used FQDNs, but
made sure I have a hosts file present.

On 22 March 2017 at 10:09, /dev/null <devn...@linuxitil.org> wrote:

> Ian, knara,
>
> Success! I got it working using the two-dash notation and IP addresses.
> Surely this is the most reliable way, even with a local hosts file.
>
> In my case, the hosted vm dies and takes some time to be running again. Is
> it possible to have the vm surviving the switch to the
> backup-volfile-server?
>
> Thanks & regards
>
> /dev/null
>
> *On Tue, 21 Mar 2017 11:52:32 +0530, knarra wrote*
> > On 03/21/2017 10:52 AM, Ian Neilsen wrote:
> >
>
>
> >
> > knara
> >
> > Looks like your conf is incorrect for mnt option.
> >
> >
>
> Hi Ian,
> >
> > mnt_option should be mnt_options=backup-volfile-servers=<server1>:<server2>
> and this is how we test it.
> >
> > Thanks
> > kasturi.
> >
>
>
> >
> > It should be, I believe: mnt_options=backupvolfile-server=<server name>
> >
> > not
> >
> > mnt_options=backup-volfile-servers=host2
> >
> > If your DNS isn't working or your hosts file is incorrect, this will
> prevent it as well.
> >
> >
> >
> > On 21 March 2017 at 03:30, /dev/null <devn...@linuxitil.org> wrote:
> >
>>
>>
>> > Hi kasturi,
>> >
>> > Thank you. I tested and it seems not to work; even after rebooting, the
>> current mount does not show the mnt_options, nor does the switchover work.
>> >
>> > [root@host2 ~]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
>> > ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
>> > gateway=192.168.2.1
>> > iqn=
>> > conf_image_UUID=7bdc29ad-bee6-4a33-8d58-feae9f45d54f
>> > ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
>> > sdUUID=1775d440-649c-4921-ba3b-9b6218c27ef3
>> > connectionUUID=fcf70593-8214-4e8d-b546-63c210a3d5e7
>> > conf_volume_UUID=06dd17e5-a440-417a-94e8-75929b6f9ed5
>> > user=
>> > host_id=2
>> > bridge=ovirtmgmt
>> > metadata_image_UUID=6252c21c-227d-4dbd-bb7b-65cf342154b6
>> > spUUID=----
>> > mnt_options=backup-volfile-servers=host2
>> > fqdn=ovirt.test.lab
>> > portal=
>> > vm_disk_id=1bb9ea7f-986c-4803-ae82-8d5a47b1c496
>> > metadata_volume_UUID=426ff2cc-58a2-4b83-b22f-3f7dc99890d4
>> > vm_disk_vol_id=b57d40d2-e68b-440a-bab7-0a9631f4baa4
>> > domainType=glusterfs
>> > port=
>> > console=qxl
>> > ca_subject="C=EN, L=Test, O=Test, CN=Test"
>> > password=
>> > vmid=272942f3-99b9-48b9-aca4-19ec852f6874
>> > lockspace_image_UUID=9fbdbfd4-3b31-43ce-80e2-283f0aeead49
>> > lockspace_volume_UUID=b1e4d3ed-ec78-41cd-9a39-4372f488fb92
>> > vdsm_use_ssl=true
>> > storage=host1:/gvol0
>> > conf=/var/run/ovirt-hosted-engine-ha/vm.conf
>> >
>> > [root@host2 ~]# mount |grep gvol0
>> > host1:/gvol0 on /rhev/data-center/mnt/glusterSD/host1:_gvol0 type
>> fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>> >
>> > Any suggestion?
>> >
>> > I will try an answerfile-install as well later, but it was helpful to
>> know where to set this.
>> >
>> > Thanks & best regards
>> >
>> * > On Mon, 20 Mar 2017 12:12:25 +0530, knarra wrote*
>> >
>> > > On 03/20/2017 05:09 AM, /dev/null wrote:
>> > >
>>
>> Hi,
>>
>> how do I make the hosted_storage aware of gluster server failure? In
>> --deploy I cannot provide backup-volfile-servers. In
>> /etc/ovirt-hosted-engine/hosted-engine.conf there is an mnt_options line,
>> but I read
>> (https://github.com/oVirt/ovirt-hosted-engine-setup/commit/995c6a65ab897d804f794306cc3654214f2c29b6)
>> that these settings get lost during deployment on secondary servers.
>>
>> Is there an official way to deal with that? Should this option be set
>> manually on all nodes?
>>
>> Thanks!
>>
>> /dev/null
>>
>> Hi, > > I think in the above patch they are just hiding the query for
>> mount_options, but I think all the code is still present and

Re: [ovirt-users] how to handle mount options for hosted engine on glusterfs

2017-03-20 Thread Ian Neilsen
knara

Looks like your conf is incorrect for mnt option.

It should be, I believe: mnt_options=backupvolfile-server=<server name>

not

mnt_options=backup-volfile-servers=host2

If your DNS isn't working or your hosts file is incorrect, this will prevent
it as well.
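
Side by side, the two spellings in question would look like this in
/etc/ovirt-hosted-engine/hosted-engine.conf (host names are placeholders;
the plural two-dash form takes a colon-separated list of backup servers):

  mnt_options=backupvolfile-server=host2
  mnt_options=backup-volfile-servers=host2:host3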



On 21 March 2017 at 03:30, /dev/null <devn...@linuxitil.org> wrote:

> Hi kasturi,
>
> Thank you. I tested and it seems not to work; even after rebooting, the
> current mount does not show the mnt_options, nor does the switchover work.
>
> [root@host2 ~]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
> ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
> gateway=192.168.2.1
> iqn=
> conf_image_UUID=7bdc29ad-bee6-4a33-8d58-feae9f45d54f
> ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
> sdUUID=1775d440-649c-4921-ba3b-9b6218c27ef3
> connectionUUID=fcf70593-8214-4e8d-b546-63c210a3d5e7
> conf_volume_UUID=06dd17e5-a440-417a-94e8-75929b6f9ed5
> user=
> host_id=2
> bridge=ovirtmgmt
> metadata_image_UUID=6252c21c-227d-4dbd-bb7b-65cf342154b6
> spUUID=----
> mnt_options=backup-volfile-servers=host2
> fqdn=ovirt.test.lab
> portal=
> vm_disk_id=1bb9ea7f-986c-4803-ae82-8d5a47b1c496
> metadata_volume_UUID=426ff2cc-58a2-4b83-b22f-3f7dc99890d4
> vm_disk_vol_id=b57d40d2-e68b-440a-bab7-0a9631f4baa4
> domainType=glusterfs
> port=
> console=qxl
> ca_subject="C=EN, L=Test, O=Test, CN=Test"
> password=
> vmid=272942f3-99b9-48b9-aca4-19ec852f6874
> lockspace_image_UUID=9fbdbfd4-3b31-43ce-80e2-283f0aeead49
> lockspace_volume_UUID=b1e4d3ed-ec78-41cd-9a39-4372f488fb92
> vdsm_use_ssl=true
> storage=host1:/gvol0
> conf=/var/run/ovirt-hosted-engine-ha/vm.conf
>
>
> [root@host2 ~]# mount |grep gvol0
> host1:/gvol0 on /rhev/data-center/mnt/glusterSD/host1:_gvol0 type
> fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,
> allow_other,max_read=131072)
>
> Any suggestion?
>
> I will try an answerfile-install as well later, but it was helpful to
> know where to set this.
>
> Thanks & best regards
>
> * On Mon, 20 Mar 2017 12:12:25 +0530, knarra wrote*
> > On 03/20/2017 05:09 AM, /dev/null wrote:
> >
>
> Hi,
>
> how do I make the hosted_storage aware of gluster server failure? In
> --deploy I cannot provide backup-volfile-servers. In
> /etc/ovirt-hosted-engine/hosted-engine.conf there is an mnt_options line,
> but I read
> (https://github.com/oVirt/ovirt-hosted-engine-setup/commit/995c6a65ab897d804f794306cc3654214f2c29b6)
> that these settings get lost during deployment on secondary servers.
>
> Is there an official way to deal with that? Should this option be set
> manually on all nodes?
>
> Thanks!
>
> /dev/null
>
> Hi,
> >
> > I think in the above patch they are just hiding the query for
> mount_options, but I think all the code is still present and you should not
> lose mount options during additional host deployment. For more info you
> can refer to [1].
> >
> > You can set this option manually on all nodes by editing
> /etc/ovirt-hosted-engine/hosted-engine.conf. Following steps will help
> you to achieve this.
> >
> > 1) Move each host to maintenance, edit the file
> '/etc/ovirt-hosted-engine/hosted-engine.conf'.
> > 2) set mnt_options=backup-volfile-servers=<server1>:<server2>
> > 3) restart the services 'systemctl restart ovirt-ha-agent' ; 'systemctl
> restart ovirt-ha-broker'
> > 4) Activate the node.
> >
> > Repeat the above steps for all the nodes in the cluster.
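> >
> > Condensed into shell, per host (a sketch; server names are placeholders,
> > and HA local maintenance stands in for the UI maintenance move):
> >
> >   hosted-engine --set-maintenance --mode=local
> >   sed -i 's/^mnt_options=.*/mnt_options=backup-volfile-servers=host2:host3/' \
> >       /etc/ovirt-hosted-engine/hosted-engine.conf
> >   systemctl restart ovirt-ha-agent ovirt-ha-broker
> >   hosted-engine --set-maintenance --mode=none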
> >
> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1426517#c2
> >
> > Hope this helps !!
> >
> > Thanks
> > kasturi
> >
>
>
>
> >
> >
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
> >
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirtmgmt bridge creation fails 100% during hosted-engine 4.1 install

2017-03-20 Thread Ian Neilsen
Edy

Ah ok, my fault. I will push up the files a bit later.

Ian

On 20 March 2017 at 16:47, Edward Haas <eh...@redhat.com> wrote:

> Hi Ian,
>
> Please include only the relevant files of the specified date, I could not
> figure out which ones to look at.
> There are also no supervdsm logs (except one for node2, but for different
> dates). Are there such logs at all?
>
> Thanks,
> Edy.
>
> On Mon, Mar 20, 2017 at 2:59 AM, Ian Neilsen <ian.neil...@gmail.com>
> wrote:
>
>> Hi Edward
>>
>> Let me know if this link does not work, I'll get you the logs another
>> way; https://mega.nz/#F!YFsgADAT!C7jOzkoH5BPZD3iChsY_5w
>>
>> Date range of install 13th March to 14th March. Logs should all be there
>> for that date range.
>>
>> cheers
>> Ian
>>
>> On 16 March 2017 at 17:41, Edward Haas <eh...@redhat.com> wrote:
>>
>>> Hello Ian,
>>>
>>> Please share your vdsm.log, supervdsm.log and Engine logs from the time
>>> you had this failure.
>>> (please specify the date/time you started the process so we can know
>>> where to look in the log)
>>>
>>> Thanks,
>>> Edy.
>>>
>>>
>>> On Tue, Mar 14, 2017 at 6:31 AM, Ian Neilsen <ian.neil...@gmail.com>
>>> wrote:
>>>
>>>> re:bug 2 - confirmed - ovirtmgmt bridge is not created during initial
>>>> deploy of second host.
>>>> Workaround - maintenance 2nd host --> re-establish bond network from
>>>> backup files or manually, systemctl restart network --> engine gui -->
>>>> second host --> network --> add in ovirtmgmt bridge
>>>>
>>>> Seems to work and allows the bridge to be created without hassle.
>>>>
>>>>
>>>>
>>>> On 14 March 2017 at 12:48, Ian Neilsen <ian.neil...@gmail.com> wrote:
>>>>
>>>>> Guys
>>>>>
>>>>> Bug 1:
>>>>> I have run the oVirt 4.1 installation many times now, and every time
>>>>> through the deploy the ovirtmgmt bridge fails at ifup when used in
>>>>> conjunction with bonded NICs.
>>>>>
>>>>> The deploy process stops when it cannot bring the bridge up with ifup.
>>>>> To fix it, a systemctl restart network works and I can start the deploy
>>>>> again; ifup alone will not work.
>>>>>
>>>>> 2 questions:
>>>>> Do you want me to raise a bug for this? I'll get my logs to a place
>>>>> where you can access them.
>>>>> Has anyone got a script to create the ovirtmgmt bridge? Is there
>>>>> anything special I need to know if I create it manually on my bonded
>>>>> NICs?
>>>>>
>>>>> Bug 2:
>>>>> During hosted deploy on the second node, via GUI, the ovirtmgmt bridge
>>>>> fails to create the necessary files and deletes other ifcfg files in the
>>>>> process. Bridge creation stops as it seems to be missing the ifcfg files
>>>>> it needs. Again the NICs are bonded; manual creation is needed to fix
>>>>> the deploy mess.
>>>>>
>>>>> Yet to confirm this one, am about to try now. Will let you know.
>>>>>
>>>>> Systems:
>>>>> Centos7.3,
>>>>> ovirt 4.1,
>>>>> gluster 3.10.0
>>>>> vdsm-4.19.4-1.el7.centos.x86_64
>>>>> kernel 3.10.0-514.10.2.el7.x86_64
>>>>>
>>>>> --
>>>>> Ian Neilsen
>>>>>
>>>>> Mobile: 0424 379 762
>>>>> Linkedin: http://au.linkedin.com/in/ianneilsen
>>>>> Twitter : ineilsen
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Ian Neilsen
>>>>
>>>> Mobile: 0424 379 762
>>>> Linkedin: http://au.linkedin.com/in/ianneilsen
>>>> Twitter : ineilsen
>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>>
>>
>> --
>> Ian Neilsen
>>
>> Mobile: 0424 379 762
>> Linkedin: http://au.linkedin.com/in/ianneilsen
>> Twitter : ineilsen
>>
>
>


-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to deploy hosted-engine to the second and later host machines

2017-03-19 Thread Ian Neilsen
Found a working option to get second and subsequent hosts deployed with
ovirt 4.1

1-Set second host into maintenance
2-highlight second host and choose "Installation --> Reinstall", edit
params in popup and click OK
3-Ignore warning that pops up and watch the vdsm.log or look for
"Installing" status in webui
4-It should show you that the install is running, keep watching vdsm.log.
5-Click 'ok' on warning and wait patiently for install to finish.

Second node comes up and is active.

Ian



On 20 March 2017 at 09:13, Ian Neilsen <ian.neil...@gmail.com> wrote:

> Having same issue here also.
>
> hosted-engine --deploy via cli is no longer available and hosted deploy
> via UI does not work.
> Host 2 is present in the engine manager and can be controlled via GUI,
> however engine manager vm cannot be moved to host 2 due to hosted-engine
> --deploy needing to be run.
>
>
>
>
> On 20 March 2017 at 01:41, Tatsuya <sugin...@gmail.com> wrote:
>
>> Hello Didi.
>>
>> Thank you for the information.
>>
>> > There is an option to configure hosted-engine when adding a host from
>> > the web ui.
>> > In 4.1 this is the only way to add a hosted-engine host:
>> >
>> > https://bugzilla.redhat.com/1366183
>>
>> Does web ui include Cockpit UI?
>>
>> I tried to install an additional Hosted Engine host from the Cockpit UI,
>> but I received the same error that I received on the CLI.
>> > [ ERROR ] The selected device already contains a storage domain.
>>
>> I tried from "Start" link, but must I use "Deploy with Gluster *" like
>> following link?
>> http://www.ovirt.org/images/wiki/Deploy-With-Gluster.png?1475063791
>>
>> * In my case, it's "Hosted Engine with Gluster"
>>
>> I also tried "Hosted Engine with Gluster", but when I ran "deploy" after
>> option selection,
>> "Deployment failed" was displayed in a moment and I could not find a log,
>> so I gave up.
>>
>> Also, when running "Hosted Engine with Gluster", many things are done
>> automatically
>> (create LV and create GlusterFS volume etc.), so I believe there is just
>> another way to simply add hosted engine.
>>
>> If you know, could you please tell me which link should be executed to
>> add hosted-engine?
>>
>> Thanks,
>> Tatsuya
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
> Ian Neilsen
>
> Mobile: 0424 379 762
> Linkedin: http://au.linkedin.com/in/ianneilsen
> Twitter : ineilsen
>



-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirtmgmt bridge creation fails 100% during hosted-engine 4.1 install

2017-03-19 Thread Ian Neilsen
Hi Edward

Let me know if this link does not work, I'll get you the logs another way;
https://mega.nz/#F!YFsgADAT!C7jOzkoH5BPZD3iChsY_5w

Date range of install 13th March to 14th March. Logs should all be there
for that date range.

cheers
Ian

On 16 March 2017 at 17:41, Edward Haas <eh...@redhat.com> wrote:

> Hello Ian,
>
> Please share your vdsm.log, supervdsm.log and Engine logs from the time
> you had this failure.
> (please specify the date/time you started the process so we can know where
> to look in the log)
>
> Thanks,
> Edy.
>
>
> On Tue, Mar 14, 2017 at 6:31 AM, Ian Neilsen <ian.neil...@gmail.com>
> wrote:
>
>> re:bug 2 - confirmed - ovirtmgmt bridge is not created during initial
>> deploy of second host.
>> Workaround - maintenance 2nd host --> re-establish bond network from
>> backup files or manually, systemctl restart network --> engine gui -->
>> second host --> network --> add in ovirtmgmt bridge
>>
>> Seems to work and allows the bridge to be created without hassle.
>>
>>
>>
>> On 14 March 2017 at 12:48, Ian Neilsen <ian.neil...@gmail.com> wrote:
>>
>>> Guys
>>>
>>> Bug 1:
>>> I have run the oVirt 4.1 installation many times now, and every time
>>> through the deploy the ovirtmgmt bridge fails at ifup when used in
>>> conjunction with bonded NICs.
>>>
>>> The deploy process stops when it cannot bring the bridge up with ifup. To
>>> fix it, a systemctl restart network works and I can start the deploy
>>> again; ifup alone will not work.
>>>
>>> 2 questions:
>>> Do you want me to raise a bug for this? I'll get my logs to a place
>>> where you can access them.
>>> Has anyone got a script to create the ovirtmgmt bridge? Is there anything
>>> special I need to know if I create it manually on my bonded NICs?
>>>
>>> Bug 2:
>>> During hosted deploy on the second node, via GUI, the ovirtmgmt bridge
>>> fails to create the necessary files and deletes other ifcfg files in the
>>> process. Bridge creation stops as it seems to be missing the ifcfg files
>>> it needs. Again the NICs are bonded; manual creation is needed to fix the
>>> deploy mess.
>>>
>>> Yet to confirm this one, am about to try now. Will let you know.
>>>
>>> Systems:
>>> Centos7.3,
>>> ovirt 4.1,
>>> gluster 3.10.0
>>> vdsm-4.19.4-1.el7.centos.x86_64
>>> kernel 3.10.0-514.10.2.el7.x86_64
>>>
>>> --
>>> Ian Neilsen
>>>
>>> Mobile: 0424 379 762
>>> Linkedin: http://au.linkedin.com/in/ianneilsen
>>> Twitter : ineilsen
>>>
>>
>>
>>
>> --
>> Ian Neilsen
>>
>> Mobile: 0424 379 762
>> Linkedin: http://au.linkedin.com/in/ianneilsen
>> Twitter : ineilsen
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>


-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] host node non-operational

2017-03-19 Thread Ian Neilsen
Make sure nfs-lock, nfs-server and rpcbind are running.
Run rpcinfo -p and make sure the ports are allowed on the firewall. Looks like
you are using firewalld, not plain iptables. Check both the host and client
firewall settings.
Make sure you have the following options on the export in your /etc/exports
file: (rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
Restart the services and check showmount -e from the client end.

Go grab the nfscheck.py file from oVirt's GitHub and run it. It will check
your NFS mounts.
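
Something like this runs the same checks in one pass (a sketch; the export
path and <nfs-server> are placeholders):

  systemctl status rpcbind nfs-server nfs-lock   # all should be active
  rpcinfo -p | grep -E 'portmapper|nfs|mountd'   # note the ports in use
  firewall-cmd --list-services                   # expect nfs, rpc-bind, mountd
  cat /etc/exports                               # e.g. /nfs/data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
  exportfs -ra && showmount -e <nfs-server>      # re-export, then verify from the client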

On 16 March 2017 at 18:22, Yedidyah Bar David <d...@redhat.com> wrote:

> On Wed, Mar 15, 2017 at 5:45 PM, Angel R. Gonzalez
> <angel.gonza...@uam.es> wrote:
> > Hello,
> > I've installed a engine server and 2 host nodes. After install the 2
> nodes
> > I've configured a nfs domain storage in the first node. After few
> minutes,
> > the second host is down and I dont put it online.
> >
> > The log in the engine show:
> >>Host node2 cannot access the Storage Domain(s) nfsexport_node1 attached
> to
> >> the Data Center Labs. Setting Host state to Non-Operational.
> >
> > The nfsexport Storage Domain Format is V4, but in the /etc/nfsmount.conf
> > Defaultproto=tcp
> > Defaultvers=3
> > Nfsvers=3
> >
> > Also I've added the line
> > NFS4_SUPPORT="no"
> > to /etc/sysconfig/nfs file
> >
> > The node1's iptables rules are:
> >
> > ACCEPT all  --  anywhere anywhere state
> RELATED,ESTABLISHED
> > ACCEPT icmp --  anywhere anywhere
> > ACCEPT all  --  anywhere anywhere
> > ACCEPT tcp  --  anywhere anywhere tcp
> dpt:54321
> > ACCEPT tcp  --  anywhere anywhere tcp
> dpt:54322
> > ACCEPT tcp  --  anywhere anywhere tcp
> dpt:sunrpc
> > ACCEPT udp  --  anywhere anywhere udp
> dpt:sunrpc
> > ACCEPT tcp  --  anywhere anywhere tcp dpt:ssh
> > ACCEPT udp  --  anywhere anywhere udp
> dpt:snmp
> > ACCEPT tcp  --  anywhere anywhere tcp
> dpt:websm
> > ACCEPT tcp  --  anywhere anywhere tcp
> dpt:16514
> > ACCEPT tcp  --  anywhere anywhere multiport dports
> > rockwell-csp2
> > ACCEPT tcp  --  anywhere anywhere multiport dports
> rfb:6923
> > ACCEPT tcp  --  anywhere anywhere multiport dports
> > 49152:49216
> > ACCEPT tcp  --  anywhere anywhere tcp dpt:nfs
> > ACCEPT udp  --  anywhere anywhere udp dpt:nfs
> > ACCEPT tcp  --  anywhere anywhere tcp
> dpt:mountd
> > ACCEPT udp  --  anywhere anywhere udp
> dpt:mountd
> > REJECT all  --  anywhere anywhere reject-with
> > icmp-host-prohibited
> >
> > And the output of showmount -e command in a terminal of node1 is:
> >>/nfs/data *
> >
> > But the output of showmount -e node1 in a terminal of node2 is
> >>clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No
> >> route to host)
> >
> > Any help?
>
> Sounds like a general NFS issue, nothing specific to oVirt.
>
> You might want to use normal debugging means - tcpdump, strace, google
> :-), etc.
>
> I think your last error is because you need the portmapper port (111) open.
> You should see this easily with tcpdump or strace.
>
> Also please note that it's not considered a good idea to use one of the
> hosts as an nfs server, although some people happily do that. See also:
>
> https://lwn.net/Articles/595652/
>
> Best,
>
> >
> > Thanks you in advance.
> >
> > Ángel González
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> --
> Didi
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to deploy hosted-engine to the second and later host machines

2017-03-19 Thread Ian Neilsen
Having same issue here also.

hosted-engine --deploy via cli is no longer available and hosted deploy via
UI does not work.
Host 2 is present in the engine manager and can be controlled via GUI,
however engine manager vm cannot be moved to host 2 due to hosted-engine
--deploy needing to be run.




On 20 March 2017 at 01:41, Tatsuya <sugin...@gmail.com> wrote:

> Hello Didi.
>
> Thank you for the information.
>
> > There is an option to configure hosted-engine when adding a host from
> > the web ui.
> > In 4.1 this is the only way to add a hosted-engine host:
> >
> > https://bugzilla.redhat.com/1366183
>
> Does web ui include Cockpit UI?
>
> I tried to install an additional Hosted Engine host from the Cockpit UI, but
> I received the same error that I received on the CLI.
> > [ ERROR ] The selected device already contains a storage domain.
>
> I tried from "Start" link, but must I use "Deploy with Gluster *" like
> following link?
> http://www.ovirt.org/images/wiki/Deploy-With-Gluster.png?1475063791
>
> * In my case, it's "Hosted Engine with Gluster"
>
> I also tried "Hosted Engine with Gluster", but when I ran "deploy" after
> option selection,
> "Deployment failed" was displayed in a moment and I could not find a log,
> so I gave up.
>
> Also, when running "Hosted Engine with Gluster", many things are done
> automatically
> (create LV and create GlusterFS volume etc.), so I believe there is just
> another way to simply add hosted engine.
>
> If you know, could you please tell me which link should be executed to add
> hosted-engine?
>
> Thanks,
> Tatsuya
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirtmgmt bridge creation fails 100% during hosted-engine 4.1 install

2017-03-13 Thread Ian Neilsen
Re: bug 2 - confirmed: the ovirtmgmt bridge is not created during the initial
deploy of the second host.
Workaround: put the 2nd host in maintenance --> re-establish the bond network
from backup files or manually, systemctl restart network --> engine GUI -->
second host --> network --> add in the ovirtmgmt bridge.

Seems to work and allows the bridge to be created without hassle.



On 14 March 2017 at 12:48, Ian Neilsen <ian.neil...@gmail.com> wrote:

> Guys
>
> Bug 1:
> I have run the oVirt 4.1 installation many times now, and every time through
> the deploy the ovirtmgmt bridge fails at ifup when used in conjunction with
> bonded NICs.
>
> The deploy process stops when it cannot bring the bridge up with ifup. To
> fix it, a systemctl restart network works and I can start the deploy again;
> ifup alone will not work.
>
> 2 questions:
> Do you want me to raise a bug for this? I'll get my logs to a place where
> you can access them.
> Has anyone got a script to create the ovirtmgmt bridge? Is there anything
> special I need to know if I create it manually on my bonded NICs?
>
> Bug 2:
> During hosted deploy on the second node, via GUI, the ovirtmgmt bridge fails
> to create the necessary files and deletes other ifcfg files in the process.
> Bridge creation stops as it seems to be missing the ifcfg files it needs.
> Again the NICs are bonded; manual creation is needed to fix the deploy mess.
>
> Yet to confirm this one, am about to try now. Will let you know.
>
> Systems:
> Centos7.3,
> ovirt 4.1,
> gluster 3.10.0
> vdsm-4.19.4-1.el7.centos.x86_64
> kernel 3.10.0-514.10.2.el7.x86_64
>
> --
> Ian Neilsen
>
> Mobile: 0424 379 762
> Linkedin: http://au.linkedin.com/in/ianneilsen
> Twitter : ineilsen
>



-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirtmgmt bridge creation fails 100% during hosted-engine 4.1 install

2017-03-13 Thread Ian Neilsen
Guys

Bug 1:
I have run the oVirt 4.1 installation many times now, and every time through
the deploy the ovirtmgmt bridge fails at ifup when used in conjunction with
bonded NICs.

The deploy process stops when it cannot bring the bridge up with ifup. To fix
it, a systemctl restart network works and I can start the deploy again; ifup
alone will not work.

2 questions:
Do you want me to raise a bug for this? I'll get my logs to a place where
you can access them.
Has anyone got a script to create the ovirtmgmt bridge? Is there anything
special I need to know if I create it manually on my bonded NICs? (A sketch
of what I have in mind is below.)
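
In case someone has a known-good recipe, here's roughly the manual setup I
have in mind (an untested sketch; device names, addressing and bond options
are placeholders for my environment):

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  TYPE=Bond
  BONDING_MASTER=yes
  BONDING_OPTS="mode=802.3ad miimon=100"
  BRIDGE=ovirtmgmt
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
  DEVICE=ovirtmgmt
  TYPE=Bridge
  BOOTPROTO=static
  IPADDR=192.168.3.11
  NETMASK=255.255.255.0
  GATEWAY=192.168.3.1
  ONBOOT=yes
  DELAY=0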

Bug 2:
During hosted deploy on the second node, via GUI, the ovirtmgmt bridge fails
to create the necessary files and deletes other ifcfg files in the process.
Bridge creation stops as it seems to be missing the ifcfg files it needs.
Again the NICs are bonded; manual creation is needed to fix the deploy mess.

Yet to confirm this one, am about to try now. Will let you know.

Systems:
Centos7.3,
ovirt 4.1,
gluster 3.10.0
vdsm-4.19.4-1.el7.centos.x86_64
kernel 3.10.0-514.10.2.el7.x86_64

-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HE in bad stauts, will not start following storage issue - HELP

2017-03-12 Thread Ian Neilsen
I've checked the ids file in /rhev/data-center/mnt/glusterSD/*/dom_md/

# -rw-rw----. 1 vdsm kvm  1048576 Mar 12 05:14 ids

seems ok

sanlock.log showing;
---
r14 acquire_token open error -13
r14 cmd_acquire 2,11,89283 acquire_token -13

Now I'm not quite sure which direction to take.
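
Error -13 is EACCES, so before anything else I'd sanity-check permissions and
SELinux around the lockspace (a rough checklist; paths as above):

  ls -lZ /rhev/data-center/mnt/glusterSD/*/dom_md/ids   # expect vdsm:kvm, mode 0660
  getenforce                                            # if enforcing, look for denials:
  ausearch -m avc -ts recent
  systemctl status sanlock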

Lockspace
---
"hosted-engine --reinitialize-lockspace" is throwing an exception;

Exception("Lockfile reset cannot be performed with"
Exception: Lockfile reset cannot be performed with an active agent.
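
If I read that right, the agent has to be down first; something like this (a
guess, and I'm not sure whether it must be stopped on every host or just one):

  systemctl stop ovirt-ha-agent
  hosted-engine --reinitialize-lockspace
  systemctl start ovirt-ha-agent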


@didi - I am in "Global Maintenance".
I just noticed that host 1 now shows:
Engine status: unknown stale-data
state= AgentStopped

I'm pretty sure I've been able to start the Engine VM while in Global
Maintenance. But you raise a good question. I don't see why you would be
restricted from running the engine while in Global, or even from starting the
VM. If so, this is a little backwards.






On 12 March 2017 at 16:28, Yedidyah Bar David <d...@redhat.com> wrote:

> On Fri, Mar 10, 2017 at 12:39 PM, Martin Sivak <msi...@redhat.com> wrote:
> > Hi Ian,
> >
> > it is normal that VDSMs are competing for the lock, one should win
> > though. If that is not the case then the lockspace might be corrupted
> > or the sanlock daemons can't reach it.
> >
> > I would recommend putting the cluster to global maintenance and
> > attempting a manual start using:
> >
> > # hosted-engine --set-maintenance --mode=global
> > # hosted-engine --vm-start
>
> Is that possible? See also:
>
> http://lists.ovirt.org/pipermail/users/2016-January/036993.html
>
> >
> > You will need to check your storage connectivity and sanlock status on
> > all hosts if that does not work.
> >
> > # sanlock client status
> >
> > There are couple of locks I would expect to be there (ha_agent, spm),
> > but no lock for hosted engine disk should be visible.
> >
> > Next steps depend on whether you have important VMs running on the
> > cluster and on the Gluster status (I can't help you there
> > unfortunately).
> >
> > Best regards
> >
> > --
> > Martin Sivak
> > SLA / oVirt
> >
> >
> > On Fri, Mar 10, 2017 at 7:37 AM, Ian Neilsen <ian.neil...@gmail.com>
> wrote:
> >> I just noticed this in the vdsm.logs.  The agent looks like it is
> trying to
> >> start hosted engine on both machines??
> >>
> >> <on_poweroff>destroy</on_poweroff><on_reboot>destroy</on_reboot><on_crash>destroy</on_crash>
> >> Thread-7517::ERROR::2017-03-10
> >> 01:26:13,053::vm::773::virt.vm::(_startUnderlyingVm)
> >> vmId=`2419f9fe-4998-4b7a-9fe9-151571d20379`::The vm start process
> failed
> >> Traceback (most recent call last):
> >>   File "/usr/share/vdsm/virt/vm.py", line 714, in _startUnderlyingVm
> >> self._run()
> >>   File "/usr/share/vdsm/virt/vm.py", line 2026, in _run
> >> self._connection.createXML(domxml, flags),
> >>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
> line
> >> 123, in wrapper ret = f(*args, **kwargs)
> >>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 917, in
> >> wrapper return func(inst, *args, **kwargs)
> >>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in
> >> createXML if ret is None:raise libvirtError('virDomainCreateXML()
> failed',
> >> conn=self)
> >>
> >> libvirtError: Failed to acquire lock: Permission denied
> >>
> >> INFO::2017-03-10 01:26:13,054::vm::1330::virt.vm::(setDownStatus)
> >> vmId=`2419f9fe-4998-4b7a-9fe9-151571d20379`::Changed state to Down:
> Failed
> >> to acquire lock: Permission denied (code=1)
> >> INFO::2017-03-10 01:26:13,054::guestagent::430::virt.vm::(stop)
> >> vmId=`2419f9fe-4998-4b7a-9fe9-151571d20379`::Stopping connection
> >>
> >> DEBUG::2017-03-10 01:26:13,054::vmchannels::238::vds::(unregister)
> Delete
> >> fileno 56 from listener.
> >> DEBUG::2017-03-10 01:26:13,055::vmchannels::66::vds::(_unregister_fd)
> Failed
> >> to unregister FD from epoll (ENOENT): 56
> >> DEBUG::2017-03-10 01:26:13,055::__init__::209::
> jsonrpc.Notification::(emit)
> >> Sending event {"params": {"2419f9fe-4998-4b7a-9fe9-151571d20379":
> {"status":
> >> "Down", "exitReason": 1, "exitMessage": "Failed to acquire lock:
> Permission
> >> denied", "exitCode": 1}, "notify_time": 4308740560}, "jsonrpc": "2.0",
> >> "method": "|virt|VM_status|2419f9fe-4

[ovirt-users] HE in bad stauts, will not start following storage issue - HELP

2017-03-10 Thread Ian Neilsen
Hi All

I had a storage issue with my gluster volumes running under ovirt hosted.
I now cannot start the hosted engine manager vm from "hosted-engine
--vm-start".
I've scoured the net to find a way, but can't seem to find anything
concrete.

Running Centos7, ovirt 4.0 and gluster 3.8.9

How do I recover the engine manager? I'm at a loss!

Engine Status = score between nodes was 0 for all, now node 1 is reading
3400, but all others are 0

{"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}


Logs from agent.log
==

INFO::2017-03-09
19:32:52,600::state_decorators::51::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Global maintenance detected
INFO::2017-03-09
19:32:52,603::hosted_engine::612::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Initializing VDSM
INFO::2017-03-09
19:32:54,820::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Connecting the storage
INFO::2017-03-09
19:32:54,821::storage_server::219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Connecting storage server
INFO::2017-03-09
19:32:59,194::storage_server::226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Connecting storage server
INFO::2017-03-09
19:32:59,211::storage_server::233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Refreshing the storage domain
INFO::2017-03-09
19:32:59,328::hosted_engine::666::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Preparing images
INFO::2017-03-09
19:32:59,328::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
Preparing images
INFO::2017-03-09
19:33:01,748::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Reloading vm.conf from the shared storage domain
INFO::2017-03-09
19:33:01,748::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
Trying to get a fresher copy of vm configuration from the OVF_STORE
WARNING::2017-03-09
19:33:04,056::ovf_store::107::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
Unable to find OVF_STORE
ERROR::2017-03-09
19:33:04,058::config::235::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf

ovirt-ha-agent logs


ovirt-ha-agent
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR Unable
to get vm.conf from OVF_STORE, falling back to initial vm.conf

vdsm
==

vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof

ovirt-ha-broker
====

ovirt-ha-broker cpu_load_no_engine.EngineHealth ERROR Failed to getVmStats:
'pid'

-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Passing VLAN trunk to VM

2017-03-10 Thread Ian Neilsen
Vote 1 for this. Interested also

On 10 March 2017 at 05:40, Rogério Ceni Coelho <rogeriocenicoe...@gmail.com>
wrote:

> Hi,
>
> The oVirt user interface does not allow entering 4095 as a VLAN tag number ...
> only values between 0 and 4094.
>
> This is useful to me too. Maybe any other way ?
>
> Em qui, 9 de mar de 2017 às 16:15, FERNANDO FREDIANI <
> fernando.fredi...@upx.com> escreveu:
>
>> Have you tried using VLAN 4095? On VMware it used to be the way to pass
>> all VLANs from a vSwitch to a VM in a single port. And yes, I have used it
>> also for pfSense.
>>
>> Fernando
>>
>> On 09/03/2017 16:09, Simon Vincent wrote:
>>
>> Is it possible to pass multiple VLANs to a VM (pfSense) using a single
>> virtual NIC? All my existing oVirt networks are setup as a single tagged
>> VLAN. I know this didn't used to be supported but wondered if this has
>> changed. My other option is to pass each VLAN as a separate NIC to the VM
>> however if I needed to add a new VLAN I would have to add a new interface
>> and reboot the VM as hot-add of NICs is not supported by pfSense.
>>
>>
>>
>>
>> ___
>> Users mailing 
>> listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HE in bad stauts, will not start following storage issue - HELP

2017-03-10 Thread Ian Neilsen
I just noticed this in the vdsm logs. The agent looks like it is trying to
start the hosted engine on both machines??

<on_poweroff>destroy</on_poweroff><on_reboot>destroy</on_reboot><on_crash>destroy</on_crash>
Thread-7517::ERROR::2017-03-10
01:26:13,053::vm::773::virt.vm::(_startUnderlyingVm)
vmId=`2419f9fe-4998-4b7a-9fe9-151571d20379`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 714, in _startUnderlyingVm
self._run()
  File "/usr/share/vdsm/virt/vm.py", line 2026, in _run
self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line
123, in wrapper ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 917, in
wrapper return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in
createXML if ret is None:raise libvirtError('virDomainCreateXML() failed',
conn=self)

libvirtError: Failed to acquire lock: Permission denied

INFO::2017-03-10 01:26:13,054::vm::1330::virt.vm::(setDownStatus)
vmId=`2419f9fe-4998-4b7a-9fe9-151571d20379`::Changed state to Down: Failed
to acquire lock: Permission denied (code=1)
INFO::2017-03-10 01:26:13,054::guestagent::430::virt.vm::(stop)
vmId=`2419f9fe-4998-4b7a-9fe9-151571d20379`::Stopping connection

DEBUG::2017-03-10 01:26:13,054::vmchannels::238::vds::(unregister) Delete
fileno 56 from listener.
DEBUG::2017-03-10 01:26:13,055::vmchannels::66::vds::(_unregister_fd)
Failed to unregister FD from epoll (ENOENT): 56
DEBUG::2017-03-10 01:26:13,055::__init__::209::jsonrpc.Notification::(emit)
Sending event {"params": {"2419f9fe-4998-4b7a-9fe9-151571d20379":
{"status": "Down", "exitReason": 1, "exitMessage": "Failed to acquire lock:
Permission denied", "exitCode": 1}, "notify_time": 4308740560}, "jsonrpc":
"2.0", "method": "|virt|VM_status|2419f9fe-4998-4b7a-9fe9-151571d20379"}
VM Channels Listener::DEBUG::2017-03-10
01:26:13,475::vmchannels::142::vds::(_do_del_channels) fileno 56 was
removed from listener.
DEBUG::2017-03-10 01:26:14,430::check::296::storage.check::(_start_process)
START check 
u'/rhev/data-center/mnt/glusterSD/192.168.3.10:_data/a08822ec-3f5b-4dba-ac2d-5510f0b4b6a2/dom_md/metadata'
cmd=['/usr/bin/taskset', '--cpu-list', '0-39', '/usr/bin/dd',
u'if=/rhev/data-center/mnt/glusterSD/192.168.3.10:_data/a08822ec-3f5b-4dba-ac2d-5510f0b4b6a2/dom_md/metadata',
'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
DEBUG::2017-03-10 01:26:14,481::asyncevent::564::storage.asyncevent::(reap)
Process  terminated (count=1)
DEBUG::2017-03-10
01:26:14,481::check::327::storage.check::(_check_completed) FINISH check
u'/rhev/data-center/mnt/glusterSD/192.168.3.10:_data/a08822ec-3f5b-4dba-ac2d-5510f0b4b6a2/dom_md/metadata'
rc=0 err=bytearray(b'0+1 records in\n0+1 records out\n300 bytes (300 B)
copied, 8.7603e-05 s, 3.4 MB/s\n') elapsed=0.06


On 10 March 2017 at 10:40, Ian Neilsen <ian.neil...@gmail.com> wrote:

> Hi All
>
> I had a storage issue with my gluster volumes running under ovirt hosted.
> I now cannot start the hosted engine manager vm from "hosted-engine
> --vm-start".
> I've scoured the net to find a way, but can't seem to find anything
> concrete.
>
> Running Centos7, ovirt 4.0 and gluster 3.8.9
>
> How do I recover the engine manager? I'm at a loss!
>
> Engine Status = score between nodes was 0 for all, now node 1 is reading
> 3400, but all others are 0
>
> {"reason": "bad vm status", "health": "bad", "vm": "down", "detail":
> "down"}
>
>
> Logs from agent.log
> ==
>
> INFO::2017-03-09 19:32:52,600::state_decorators::51::ovirt_hosted_
> engine_ha.agent.hosted_engine.HostedEngine::(check) Global maintenance
> detected
> INFO::2017-03-09 19:32:52,603::hosted_engine::612::ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine::(_initialize_vdsm) Initializing VDSM
> INFO::2017-03-09 19:32:54,820::hosted_engine::639::ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine::(_initialize_storage_images) Connecting
> the storage
> INFO::2017-03-09 19:32:54,821::storage_server::
> 219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Connecting storage server
> INFO::2017-03-09 19:32:59,194::storage_server::
> 226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Connecting storage server
> INFO::2017-03-09 19:32:59,211::storage_server::
> 233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Refreshing the storage domain
> INFO::2017-03-09 19:32:59,328::hosted_engine::666::ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngin

Re: [ovirt-users] Shrink gluster logical volume under Ovirt 4.0

2017-03-09 Thread Ian Neilsen
Hi Sahina

I was shrinking the underlying LV/filesystem of the bricks associated with
the gluster volume. I'm successfully all the way through doing this now.

I placed the cluster into global maintenance and went about resizing the
LV/filesystem on each server.
I think it's working. :-)

Thanks for the reply
Ian


On 9 March 2017 at 16:44, Sahina Bose <sab...@redhat.com> wrote:

> Are you shrinking the gluster volume by removing bricks, or are you
> shrinking the underlying LV/filesystem of the bricks associated with
> gluster volume?
> If latter, you need to move storage domain to maintenance and umount from
> all hosts.
>
> On Thu, Mar 9, 2017 at 4:56 AM, Ian Neilsen <ian.neil...@gmail.com> wrote:
>
>>
>> Hi all
>>
>> I need to shrink/reduce the size of a gluster logical volume under Ovirt
>> 4.0.
>>
>> Is there anything I should be aware of before reducing the file system
>> and logical volume on my servers?
>>
>> I ask because I see under "df -h" the following;
>> 192.168.3.10:data   5.7T  3.4G  5.7T   1% /rhev/data-center/mnt/glusterSD/192.168.3.10:data
>>
>> The storage domain is not connected yet via the oVirt manager; well, at
>> least it isn't showing. I'm running in hosted converged mode.
>>
>>
>> Thanks in advance
>> --
>> Ian Neilsen
>>
>> Mobile: 0424 379 762
>> Linkedin: http://au.linkedin.com/in/ianneilsen
>> Twitter : ineilsen
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>


-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Shrink gluster logical volume under Ovirt 4.0

2017-03-08 Thread Ian Neilsen
Hi all

I need to shrink/reduce the size of a gluster logical volume under Ovirt
4.0.

Is there anything I should be aware of before reducing the file system and
logical volume on my servers?

I ask because I see under "df -h" the following;
192.168.3.10:data   5.7T  3.4G  5.7T   1%
/rhev/data-center/mnt/glusterSD/192.168.3.10:data

The storage domain is not connected yet via the oVirt manager; well, at least
it isn't showing. I'm running in hosted converged mode.


Thanks in advance
-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Second node ovirt 4.0 no ovirtmgmt bridge, non-operational issue

2017-03-08 Thread Ian Neilsen
Hi all

I currently have a non-operational second node in my "ovirt 4.0
hosted-engine" installation running on Centos7, kernel
3.10.0-514.10.2.el7.x86_64.

When I ran "hosted-engine --deploy" on the second node, it produce the
following error(see bottom of email).

I now cannot set the second node into operational mode, no matter what I
try. I did notice a few things though.

#1 - No ovirtmgmt bridge was created during the hosted-engine deploy; should I
create this bridge and retry?
#2 - vdsmd kept throwing start errors before I ran hosted-engine deploy; I ran
vdsm-tool configure --force and this seemed to fix it.
#3 - The second node complained of no fencing agent, so I added a DRAC fence
agent and it worked/tested fine. But the second node is still not able to go
operational.
#4 - I've tried restarting vdsmd, ovirt-ha-agent and ovirt-ha-broker. Both the
agent and broker throw error messages (see bottom of email for the errors).

Any advice on where to look next, or what to fix?
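
Concretely, the loop I keep cycling through on the second node (from #2 and
#4 above), in case someone spots a missing step:

  vdsm-tool configure --force
  systemctl restart vdsmd
  systemctl restart ovirt-ha-broker ovirt-ha-agent
  hosted-engine --vm-status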


***Hosted-engine --deploy message***

[WARNING] Host left in non-operational state
  To finish deploying, please:
  - activate it
  - restart the hosted-engine high availability services by running
on this machine:
# service ovirt-ha-agent restart
# service ovirt-ha-broker restart
[ INFO  ] Enabling and starting HA services
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20170307062116.conf'
[ INFO  ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Hosted Engine successfully deployed

***ovirt-ha-agent error message***

ovirt-ha-agent
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR Unable
to get vm.conf from OVF_STORE, falling back to initial vm.conf

***ovirt-ha-broker error message***

ovirt-ha-broker ovirt_hosted_engine_ha.broker.notifications.Notifications
ERROR Connection unexpectedly closed

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/notifications.py", line 34, in send_email
    message.as_string())
  File "/usr/lib64/python2.7/smtplib.py", line 749, in sendmail
    self.rset()
  File "/usr/lib64/python2.7/smtplib.py", line 468, in rset
    return self.docmd("rset")
  File "/usr/lib64/python2.7/smtplib.py", line 393, in docmd
    return self.getreply()
  File "/usr/lib64/python2.7/smtplib.py", line 367, in getreply
    raise SMTPServerDisconnected("Connection unexpectedly closed")
SMTPServerDisconnected: Connection unexpectedly closed



-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users