Re: [ovirt-users] iscsi data domain when engine is down

2017-03-11 Thread Marcin Kruk
OK, but all your script does is add paths via the iscsiadm command; the
question is whether hosted-engine can actually see them.
I do not know how to add an extra path. For example, when I configured
hosted-engine during installation there was only one path in the target,
but now there are four. So how can I verify how many paths there are
now, and how can I eventually change that?
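(I can at least list what is configured with iscsiadm, e.g.:

  # show active iSCSI sessions - one line per logged-in path
  iscsiadm -m session
  # show all configured node records (target/portal pairs)
  iscsiadm -m node

but I am not sure how to tell which of these paths hosted-engine
actually uses.)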

2017-03-12 0:32 GMT+01:00 Chris Adams :

> Once upon a time, Devin A. Bougie  said:
> > On Mar 11, 2017, at 10:59 AM, Chris Adams  wrote:
> > > Hosted engine runs fine on iSCSI since oVirt 3.5.  It needs a separate
> > > target from VM storage, but then that access is managed by the hosted
> > > engine HA system.
> >
> > Thanks so much, Chris.  It sounds like that is exactly what I was
> missing.
> >
> > It would be great to know how to add multiple paths to the hosted
> engine's iSCSI target, but hopefully I can figure that out once I have
> things up and running.
>
> oVirt doesn't currently support adding paths to an existing storage
> domain; they all have to be selected when the domain is created.  Since
> the hosted engine setup doesn't support that, there's no way to add
> additional paths after the fact.  I think adding paths to a regular
> storage domain is something that is being looked at (believe I read that
> on the list), so maybe if that gets added, support for adding to the
> hosted engine domain will be added as well.
>
> I have a script that gets run out of cron periodically that looks at
> sessions and configured paths and tries to add them if necessary.  I
> manually created the config for the additional paths with iscsiadm.  It
> works, although I'm not sure that the hosted engine HA is really "happy"
> with what I did. :)
>
> --
> Chris Adams 


Re: [ovirt-users] migrate to hosted engine

2017-03-11 Thread Yedidyah Bar David
On Fri, Mar 10, 2017 at 3:43 PM, Devin A. Bougie
 wrote:
> Hi, All.  We have an oVirt 4.1 cluster set up using multiple paths to a single 
> iSCSI LUN for the data storage domain.  I would now like to migrate to a 
> hosted engine.
>
> I setup the new engine VM, shutdown and backed-up the old VM, and restored to 
> the new VM using engine-backup.  After updating DNS to change our engine's 
> FQDN to point to the hosted engine, everything seems to work properly.  
> However, when rebooting the entire cluster, the engine VM doesn't come up 
> automatically.
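> (Roughly, assuming the standard engine-backup flow - the file names here
> are just placeholders:
>
>   # on the old engine VM
>   engine-backup --mode=backup --file=engine.bck --log=backup.log
>   # on the new engine VM, after installing the same engine version
>   engine-backup --mode=restore --file=engine.bck --log=restore.log --provision-db --restore-permissions
>   engine-setup
> )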

I am aware that the above flow is technically possible - nothing stops
you from doing it - but it's not the standard/supported/expected way
to deploy a hosted-engine. You must use 'hosted-engine --deploy' for
that.
Nice try, though :-) It would be an interesting exercise to try to
convert this to a fully-functioning hosted-engine, but I'd personally
not try that on a production system.

>
> Is there anything that now needs to be done to tell the cluster that it's now 
> using a hosted engine?

Yes, but it's currently not possible to do this after the fact.

>
> I started with a  standard engine setup, as I didn't see a way to specify 
> multiple paths to a single iSCSI LUN when using "hosted-engine --deploy."

I think it's indeed not fully-supported yet [1], but can be done
manually [2][3].

Simone - are there updates for this? Perhaps we should add this to the
hosted-engine howto page.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1193961
[2] http://lists.ovirt.org/pipermail/users/2015-March/032034.html
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1193961#c2
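(If you go the manual route, the general shape is to add node records
for the extra portals and log in to them - a sketch, where the portal
address and target IQN below are placeholders:

  # discover targets on the additional portal
  iscsiadm -m discovery -t sendtargets -p 192.0.2.11
  # log in to the same target over the new portal
  iscsiadm -m node -T iqn.2017-03.example:engine -p 192.0.2.11:3260 --login

multipath should then pick up the extra path to the same LUN.)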

>
> Any tips would be greatly appreciated.
>
> Many thanks,
> Devin



-- 
Didi


Re: [ovirt-users] Error on Node upgrade 2

2017-03-11 Thread Yedidyah Bar David
On Fri, Mar 10, 2017 at 2:37 PM, FERNANDO FREDIANI
 wrote:
> I am not sure if another email I sent went through, but has anyone had
> problems when upgrading a running oVirt-node-ng from 4.1.0 to 4.1.1?

What kind of problems?

>
> Is the only solution a complete reinstall of the node?

No, this should work.

Best,

>
> Thanks
>
> Fernando
>



-- 
Didi


Re: [ovirt-users] HE in bad status, will not start following storage issue - HELP

2017-03-11 Thread Yedidyah Bar David
On Fri, Mar 10, 2017 at 12:39 PM, Martin Sivak  wrote:
> Hi Ian,
>
> it is normal that the VDSMs compete for the lock; one should win,
> though. If that is not the case, then the lockspace might be corrupted
> or the sanlock daemons cannot reach it.
>
> I would recommend putting the cluster into global maintenance and
> attempting a manual start using:
>
> # hosted-engine --set-maintenance --mode=global
> # hosted-engine --vm-start

Is that possible? See also:

http://lists.ovirt.org/pipermail/users/2016-January/036993.html

>
> You will need to check your storage connectivity and sanlock status on
> all hosts if that does not work.
>
> # sanlock client status
>
> There are a couple of locks I would expect to be there (ha_agent, spm),
> but no lock for the hosted engine disk should be visible.
>
> Next steps depend on whether you have important VMs running on the
> cluster and on the Gluster status (I can't help you there
> unfortunately).
>
> Best regards
>
> --
> Martin Sivak
> SLA / oVirt
>
>
> On Fri, Mar 10, 2017 at 7:37 AM, Ian Neilsen  wrote:
>> I just noticed this in the vdsm logs.  It looks like the agent is trying
>> to start the hosted engine on both machines??
>>
>> Thread-7517::ERROR::2017-03-10
>> 01:26:13,053::vm::773::virt.vm::(_startUnderlyingVm)
>> vmId=`2419f9fe-4998-4b7a-9fe9-151571d20379`::The vm start process failed
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/virt/vm.py", line 714, in _startUnderlyingVm
>> self._run()
>>   File "/usr/share/vdsm/virt/vm.py", line 2026, in _run
>> self._connection.createXML(domxml, flags),
>>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line
>> 123, in wrapper ret = f(*args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 917, in
>> wrapper return func(inst, *args, **kwargs)
>>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in
>> createXML if ret is None:raise libvirtError('virDomainCreateXML() failed',
>> conn=self)
>>
>> libvirtError: Failed to acquire lock: Permission denied
>>
>> INFO::2017-03-10 01:26:13,054::vm::1330::virt.vm::(setDownStatus)
>> vmId=`2419f9fe-4998-4b7a-9fe9-151571d20379`::Changed state to Down: Failed
>> to acquire lock: Permission denied (code=1)
>> INFO::2017-03-10 01:26:13,054::guestagent::430::virt.vm::(stop)
>> vmId=`2419f9fe-4998-4b7a-9fe9-151571d20379`::Stopping connection
>>
>> DEBUG::2017-03-10 01:26:13,054::vmchannels::238::vds::(unregister) Delete
>> fileno 56 from listener.
>> DEBUG::2017-03-10 01:26:13,055::vmchannels::66::vds::(_unregister_fd) Failed
>> to unregister FD from epoll (ENOENT): 56
>> DEBUG::2017-03-10 01:26:13,055::__init__::209::jsonrpc.Notification::(emit)
>> Sending event {"params": {"2419f9fe-4998-4b7a-9fe9-151571d20379": {"status":
>> "Down", "exitReason": 1, "exitMessage": "Failed to acquire lock: Permission
>> denied", "exitCode": 1}, "notify_time": 4308740560}, "jsonrpc": "2.0",
>> "method": "|virt|VM_status|2419f9fe-4998-4b7a-9fe9-151571d20379"}
>> VM Channels Listener::DEBUG::2017-03-10
>> 01:26:13,475::vmchannels::142::vds::(_do_del_channels) fileno 56 was removed
>> from listener.
>> DEBUG::2017-03-10 01:26:14,430::check::296::storage.check::(_start_process)
>> START check
>> u'/rhev/data-center/mnt/glusterSD/192.168.3.10:_data/a08822ec-3f5b-4dba-ac2d-5510f0b4b6a2/dom_md/metadata'
>> cmd=['/usr/bin/taskset', '--cpu-list', '0-39', '/usr/bin/dd',
>> u'if=/rhev/data-center/mnt/glusterSD/192.168.3.10:_data/a08822ec-3f5b-4dba-ac2d-5510f0b4b6a2/dom_md/metadata',
>> 'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
>> DEBUG::2017-03-10 01:26:14,481::asyncevent::564::storage.asyncevent::(reap)
>> Process  terminated (count=1)
>> DEBUG::2017-03-10
>> 01:26:14,481::check::327::storage.check::(_check_completed) FINISH check
>> u'/rhev/data-center/mnt/glusterSD/192.168.3.10:_data/a08822ec-3f5b-4dba-ac2d-5510f0b4b6a2/dom_md/metadata'
>> rc=0 err=bytearray(b'0+1 records in\n0+1 records out\n300 bytes (300 B)
>> copied, 8.7603e-05 s, 3.4 MB/s\n') elapsed=0.06
>>
>>
>> On 10 March 2017 at 10:40, Ian Neilsen  wrote:
>>>
>>> Hi All
>>>
>>> I had a storage issue with my gluster volumes running under oVirt hosted
>>> engine. I now cannot start the hosted engine VM with "hosted-engine
>>> --vm-start".
>>> I've scoured the net to find a way, but can't seem to find anything
>>> concrete.
>>>
>>> Running Centos7, ovirt 4.0 and gluster 3.8.9
>>>
>>> How do I recover the engine manager? I'm at a loss!
>>>
>>> Engine status: the score was 0 on all nodes; now node 1 reports 3400,
>>> but all the others are still 0.
>>>
>>> {"reason": "bad vm status", "health": "bad", "vm": "down", "detail":
>>> "down"}
>>>
>>>
>>> Logs from agent.log
>>> ==
>>>
>>> INFO::2017-03-09
>>> 19:32:52,600::state_decorators::51::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
>>> Global maintenance detected
>>> INFO::2017-03-09
>>> 

Re: [ovirt-users] host-only network

2017-03-11 Thread Dan Kenigsberg
On Thu, Mar 9, 2017 at 9:31 AM, qinglong.d...@horebdata.cn
 wrote:
> Hi, all
> I have noticed that KVM supports a host-only network mode, so I want
> to know how to create a host-only vNIC for a virtual machine in oVirt.
> Can anyone help? Thanks!


You can have that if you create a dummy device on host boot. Then you
can attach a VM network to it.
Read a bit more in
http://lists.ovirt.org/pipermail/users/2015-December/036897.html
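A minimal sketch of the dummy-device part, assuming you create it at
boot (e.g. from rc.local or a small systemd unit):

  # load the dummy driver and bring up a dummy NIC
  modprobe dummy
  ip link add dummy0 type dummy
  ip link set dummy0 up

You can then attach the host-only VM network to dummy0 as if it were a
physical NIC.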

I don't know why you want to use a host-only network, but maybe you'd
like to check out
https://www.ovirt.org/blog/2016/11/ovirt-provider-ovn/ which provides
VM isolation from external networks while still allowing live
migration.


Re: [ovirt-users] iscsi data domain when engine is down

2017-03-11 Thread Chris Adams
Once upon a time, Devin A. Bougie  said:
> On Mar 11, 2017, at 10:59 AM, Chris Adams  wrote:
> > Hosted engine runs fine on iSCSI since oVirt 3.5.  It needs a separate
> > target from VM storage, but then that access is managed by the hosted
> > engine HA system.
> 
> Thanks so much, Chris.  It sounds like that is exactly what I was missing.  
> 
> It would be great to know how to add multiple paths to the hosted engine's 
> iSCSI target, but hopefully I can figure that out once I have things up and 
> running.

oVirt doesn't currently support adding paths to an existing storage
domain; they all have to be selected when the domain is created.  Since
the hosted engine setup doesn't support that, there's no way to add
additional paths after the fact.  I think adding paths to a regular
storage domain is something that is being looked at (believe I read that
on the list), so maybe if that gets added, support for adding to the
hosted engine domain will be added as well.

I have a script that gets run out of cron periodically that looks at
sessions and configured paths and tries to add them if necessary.  I
manually created the config for the additional paths with iscsiadm.  It
works, although I'm not sure that the hosted engine HA is really "happy"
with what I did. :)
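A stripped-down, untested sketch of the idea - the target IQN and
portal addresses below are placeholders:

  #!/bin/sh
  # log in to any configured path that has no active session yet
  TARGET="iqn.2017-03.example:engine"
  for PORTAL in 192.0.2.10:3260 192.0.2.11:3260; do
      iscsiadm -m session | grep -q "$PORTAL.*$TARGET" || \
          iscsiadm -m node -T "$TARGET" -p "$PORTAL" --login
  done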

-- 
Chris Adams 


Re: [ovirt-users] Passing VLAN trunk to VM

2017-03-11 Thread Edward Haas
Passing a trunk to the vNIC has been supported for a long time.
Just create a network over a NIC/bond that is connected to a trunk port
and do not define any VLAN on it (we call it a non-VLAN network).
In oVirt, a non-VLAN network ignores the VLAN tags and forwards the
packets onward as-is.
It is up to the VM vNIC to define VLANs, or to use promiscuous mode to
see everything.
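(Inside the guest this is then plain Linux VLAN configuration, e.g. -
the interface name and VLAN ID are just examples:

  # create a tagged sub-interface on the trunked vNIC
  ip link add link eth0 name eth0.100 type vlan id 100
  ip link set eth0.100 up
)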

OVS could add a layer of security on top of this by defining explicitly
which VLANs are allowed for a specific vNIC, but that is not currently
available.


On Thu, Mar 9, 2017 at 11:40 PM, Simon Vincent  wrote:

> I was wondering if open vswitch will get round this problem. Has anyone
> tried it?
>
> On 9 Mar 2017 7:41 pm, "Rogério Ceni Coelho" 
> wrote:
>
>> Hi,
>>
>> The oVirt user interface does not allow entering 4095 as a VLAN tag
>> number; only values between 0 and 4094 are accepted.
>>
>> This would be useful to me too. Maybe there is another way?
>>
>> Em qui, 9 de mar de 2017 às 16:15, FERNANDO FREDIANI <
>> fernando.fredi...@upx.com> escreveu:
>>
>>> Have you tried using VLAN 4095? On VMware it used to be the way to pass
>>> all VLANs from a vSwitch to a VM through a single port. And yes, I have
>>> also used it for pfSense.
>>>
>>> Fernando
>>>
>>> On 09/03/2017 16:09, Simon Vincent wrote:
>>>
>>> Is it possible to pass multiple VLANs to a VM (pfSense) using a single
>>> virtual NIC? All my existing oVirt networks are set up as a single tagged
>>> VLAN. I know this didn't use to be supported, but wondered if this has
>>> changed. My other option is to pass each VLAN as a separate NIC to the VM;
>>> however, if I needed to add a new VLAN I would have to add a new interface
>>> and reboot the VM, as hot-add of NICs is not supported by pfSense.
>>>
>>>
>>>
>>>


Re: [ovirt-users] iscsi data domain when engine is down

2017-03-11 Thread Devin A. Bougie
On Mar 11, 2017, at 10:59 AM, Chris Adams  wrote:
> Hosted engine runs fine on iSCSI since oVirt 3.5.  It needs a separate
> target from VM storage, but then that access is managed by the hosted
> engine HA system.

Thanks so much, Chris.  It sounds like that is exactly what I was missing.  

It would be great to know how to add multiple paths to the hosted engine's 
iSCSI target, but hopefully I can figure that out once I have things up and 
running.

Thanks again,
Devin

> 
> If all the engine hosts are shut down together, it will take a bit after
> boot for the HA system to converge and try to bring the engine back
> online (including logging in to the engine iSCSI LUN).  You can force
> this on one host by running "hosted-engine --vm-start".
> 
> -- 
> Chris Adams 



Re: [ovirt-users] iscsi data domain when engine is down

2017-03-11 Thread Marcin Kruk
"Hosted engine runs fine on iSCSI since oVirt 3.5.", I have similar
configuration and also I have got problem with resolve the hosted-engine
storage by vdsm but the cause is that I do know how to edit iSCSI settings.
I suspect that there is only one path to target not four :(

Devin, what is the output of "hosted-engine --vm-status"?

2017-03-11 16:59 GMT+01:00 Chris Adams :

> Once upon a time, Devin A. Bougie  said:
> > Thanks for replying, Juan.  I was under the impression that the hosted
> engine would run on an iSCSI data domain, based on
> http://www.ovirt.org/develop/release-management/features/
> engine/self-hosted-engine-iscsi-support/ and the fact that "hosted-engine
> --deploy" does give you the option to choose iscsi storage (but only one
> path, as far as I can tell).
>
> Hosted engine runs fine on iSCSI since oVirt 3.5.  It needs a separate
> target from VM storage, but then that access is managed by the hosted
> engine HA system.
>
> If all the engine hosts are shut down together, it will take a bit after
> boot for the HA system to converge and try to bring the engine back
> online (including logging in to the engine iSCSI LUN).  You can force
> this on one host by running "hosted-engine --vm-start".
>
> --
> Chris Adams 


Re: [ovirt-users] iscsi data domain when engine is down

2017-03-11 Thread Chris Adams
Once upon a time, Devin A. Bougie  said:
> Thanks for replying, Juan.  I was under the impression that the hosted engine 
> would run on an iSCSI data domain, based on 
> http://www.ovirt.org/develop/release-management/features/engine/self-hosted-engine-iscsi-support/
>  and the fact that "hosted-engine --deploy" does give you the option to 
> choose iscsi storage (but only one path, as far as I can tell).

Hosted engine runs fine on iSCSI since oVirt 3.5.  It needs a separate
target from VM storage, but then that access is managed by the hosted
engine HA system.

If all the engine hosts are shut down together, it will take a bit after
boot for the HA system to converge and try to bring the engine back
online (including logging in to the engine iSCSI LUN).  You can force
this on one host by running "hosted-engine --vm-start".

-- 
Chris Adams 


Re: [ovirt-users] iscsi data domain when engine is down

2017-03-11 Thread Devin A. Bougie
On Mar 10, 2017, at 1:28 PM, Juan Pablo  wrote:
> Hi, what kind of setup do you have? Hosted engine only runs on NFS or
> Gluster, AFAIK.

Thanks for replying, Juan.  I was under the impression that the hosted engine 
would run on an iSCSI data domain, based on 
http://www.ovirt.org/develop/release-management/features/engine/self-hosted-engine-iscsi-support/
 and the fact that "hosted-engine --deploy" does give you the option to choose 
iscsi storage (but only one path, as far as I can tell).

I certainly could manage the iSCSI sessions outside of ovirt / vdsm, but wasn't 
sure if that would cause problems or if that was all that's needed to allow the 
hosted engine to boot automatically on an iSCSI data domain.

Thanks again,
Devin

> 2017-03-10 15:22 GMT-03:00 Devin A. Bougie :
> We have an ovirt 4.1 cluster with an iSCSI data domain.  If I shut down the 
> entire cluster and just boot the hosts, none of the hosts login to their 
> iSCSI sessions until the engine comes up.  Without logging into the sessions, 
> sanlock doesn't obtain any leases and obviously none of the VMs start.
> 
> I'm sure there's something I'm missing, as it looks like it should be 
> possible to run a hosted engine on a cluster using an iSCSI data domain.
> 
> Any pointers or suggestions would be greatly appreciated.
> 
> Many thanks,
> Devin