Re: [ovirt-users] Ovirt remove network from hosted engine

2017-06-27 Thread knarra

On 06/27/2017 09:49 PM, Abi Askushi wrote:

Hi all,

Just in case one needs it, in order to remove the secondary network
interface from the engine, you can go to:
Virtual Machines -> Hostedengine -> Network Interfaces -> edit -> 
unplug it -> confirm -> remove it.
Cool. But in your previous mail you did mention that it fails for you
since the engine is running. Instead of remove, you tried unplug here?


It was simple...


On Tue, Jun 27, 2017 at 4:54 PM, Abi Askushi wrote:


Hi Knarra,

Then I had already enabled NFS on ISO gluster volume.
Maybe i had some networking issue then. I need to remove the
secondary interface in order to test that again.



On Tue, Jun 27, 2017 at 4:25 PM, knarra wrote:

On 06/27/2017 06:34 PM, Abi Askushi wrote:

Hi Knarra,

The ISO domain is of type gluster though I had nfs enabled on
that volume.

you need to have nfs enabled on the volume. what i meant is
nfs.disable off which means nfs is on.

For more info please refer to bug
https://bugzilla.redhat.com/show_bug.cgi?id=1437799


I will disable the nfs and try. Though in order to try I need
first to remove that second interface from engine.
Is there a way I can remove the secondary storage network
interface from the engine?

I am not sure how to do that, but you may shut down the VM
using the command hosted-engine --vm-shutdown, which will power
off the VM, and try to remove the networks using vdsclient
(not sure if this is right, but suggesting a way).
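
A rough, untested sketch of that sequence (the vdsclient part is left out
here, since the exact call would depend on your setup):

# on the host currently running the engine VM
hosted-engine --vm-shutdown   # gracefully power off the engine VM
hosted-engine --vm-status     # wait until the VM is reported as down
# ... remove the extra network from the VM definition here ...
hosted-engine --vm-start      # bring the engine VM back up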


Thanx




On Tue, Jun 27, 2017 at 3:32 PM, knarra wrote:

On 06/27/2017 05:41 PM, Abi Askushi wrote:

Hi all,

When setting up hosted engine setup on top gluster with
3 nodes, I had gluster configured on a separate network
interface, as recommended. When I tried later to upload
ISO from engine to ISO domain, the engine was not able
to upload it since the VM did not have access to the
separate storage network. I then added the storage
network interface to the hosted engine and ISO upload
succeeded.

May i know what was the volume type created and added as
ISO domain ?

If you plan to use a glusterfs volume below is the
procedure :

1) Create a glusterfs volume.
2) While adding storage domain select Domain Function as
'ISO' and Storage Type as 'glusterfs' .
3) You can either use 'use managed gluster volume' check
box and select the gluster volume which you have created
for storing ISO's or you can type the full path of the
volume.
4) Once this is added please make sure to set the option
nfs.disable off.
5) Now you can go to HE engine and run the command
engine-iso-uploader upload -i 


Iso gets uploaded successfully.



1st question: do I need to add the network interface to
engine in order to upload ISOs? does there exist any
alternate way?

AFAIK, this is not required when glusterfs volume is used.

Attached is the screenshot where i have only one network
attached to my HE which is ovirtmgmt.


Then I proceeded to configure bonding for the storage
domain, bonding 2 NICs at each server. When trying to
set a custom bond of mode=6 (as recommended from
gluster) I received a warning that mode0, 5 and 6 cannot
be configured since the interface is used from VMs. I
also understood that having the storage network assigned
to VMs makes it a bridge which decreases performance of
networking. When trying to remove the network interface
from engine it cannot be done, since the engine is running.

2nd question: Is there a way I can remove the secondary
storage network interface from the engine?

Many thanx


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vm shutdown long delay is a problem for users of pools

2017-06-27 Thread Sharon Gratch
Hi,

Please see comments below.

On Thu, Jun 22, 2017 at 7:15 PM, Paul  wrote:

> Hi,
>
> Shutting down VMs in the portal with the red downfacing arrow takes quite
> some time (about 90 seconds). I read this is mainly due to a 60 second
> delay in the ovirt-guest-agent. I got used to right-click and use “power
> off” instead of “shutdown”, which is fine.
>
>
>
> My users make use of VMs in a VM pool. They get assigned a VM and after
> console disconnect the VM shuts down (default recommended behavior). My
> issue is that the user stays assigned to this VM for the full 90 seconds
> and cannot do “power off”. Suppose he disconnected by accident, he has to
> wait 90 seconds until he is returned to the pool and can connect
> to another VM.
>
>
>
> My questions are:
>
> -  Is it possible to decrease the time delay of a VM shutdown? 90
> seconds is quite a lot, 10 seconds should be enough
>

Is ovirt-guest-agent installed on all the pool's VMs? Consider installing
ovirt-guest-agent in all VMs in your pool to decrease the time taken for
the VM shutdown.
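
For example, on an EL7 guest that would be roughly the following (package
and service names are an assumption on my side and differ per distro; on
CentOS 7 the package comes from EPEL):

yum install -y ovirt-guest-agent-common
systemctl enable ovirt-guest-agent
systemctl start ovirt-guest-agent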

-  Is it possible for normal users to use “power off”?
>
There is no option in the User Portal to power off a VM, but you can
try to click twice (sequential clicks) on the 'shutdown' button. Two
sequential shutdown requests are handled in oVirt as "power off".

> -  Is it possible to “unallocate” the user from a VM if it is
> powering down? So he can allocate another VM
>
You can consider assigning two VMs per user, if possible of course
(via WebAdmin -> edit Pool -> set the "Maximum number of VMs per user" field
to "2"), so that while one VM is still shutting down, the user can
switch and connect to a second VM without waiting.

Another option is to create a pool with a different policy for console
disconnection, so that the VM won't shut down each time the user closes the
console (via WebAdmin -> Pool -> Console tab -> "Console Disconnect Action").
Consider changing this field to "Lock screen" or "Logout user" instead of
"shutdown virtual machine".
This policy avoids the wait after an accidental console disconnection each
time... but on the other hand the VM state will remain as is, since no
shutdown occurs, so it really depends on your requirements.

Regards,
Sharon

>
>
> Kind regards,
>
>
>
> Paul
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HostedEngine VM not visible, but running

2017-06-27 Thread cmc
On the host that has the Hosted Engine VM, the sanlock.log reports:

2017-06-27 17:30:20+0100 1043742 [7307]: add_lockspace
207221b2-959b-426b-b945-18e1adfed62f:3:/dev/207221b2-959b-426b-b945-18e1adfed62f/ids:0
conflicts with name of list1 s5
207221b2-959b-426b-b945-18e1adfed62f:1:/dev/207221b2-959b-426b-b945-18e1adfed62f/ids:0

Again, I'm not sure what has happened here.
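
A couple of basic checks that might help narrow it down (a minimal sketch,
just to see what sanlock currently holds on each host, not a diagnosis):

sanlock client status      # lockspaces/resources this host currently holds
hosted-engine --vm-status  # what the HA agents report for each host id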

On Tue, Jun 27, 2017 at 5:26 PM, cmc  wrote:
> I see this on the host it is trying to migrate in /var/log/sanlock:
>
> 2017-06-27 17:10:40+0100 527703 [2407]: s3528 lockspace
> 207221b2-959b-426b-b945-18e1adfed62f:1:/dev/207221b2-959b-426b-b945-18e1adfed62f/ids:0
> 2017-06-27 17:13:00+0100 527843 [27446]: s3528 delta_acquire host_id 1
> busy1 1 2 1042692 3d4ec963-8486-43a2-a7d9-afa82508f89f.kvm-ldn-03
> 2017-06-27 17:13:01+0100 527844 [2407]: s3528 add_lockspace fail result -262
>
> The sanlock service is running. Why would this occur?
>
> Thanks,
>
> C
>
> On Tue, Jun 27, 2017 at 5:21 PM, cmc  wrote:
>> Hi Martin,
>>
>> Thanks for the reply. I have done this, and the deployment completed
>> without error. However, it still will not allow the Hosted Engine
>> migrate to another host. The
>> /etc/ovirt-hosted-engine/hosted-engine.conf got created ok on the host
>> I re-installed, but the ovirt-ha-broker.service, though it starts,
>> reports:
>>
>> 8<---
>>
>> Jun 27 14:58:26 kvm-ldn-01 systemd[1]: Starting oVirt Hosted Engine
>> High Availability Communications Broker...
>> Jun 27 14:58:27 kvm-ldn-01 ovirt-ha-broker[6101]: ovirt-ha-broker
>> ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker ERROR
>> Failed to read metadata from
>> /rhev/data-center/mnt/blockSD/207221b2-959b-426b-b945-18e1adfed62f/ha_agent/hosted-engine.metadata
>>   Traceback (most
>> recent call last):
>> File
>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
>> line 129, in get_raw_stats_for_service_type
>>   f =
>> os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
>>   OSError: [Errno 2]
>> No such file or directory:
>> '/rhev/data-center/mnt/blockSD/207221b2-959b-426b-b945-18e1adfed62f/ha_agent/hosted-engine.metadata'
>>
>> 8<---
>>
>> I checked the path, and it exists. I can run 'less -f' on it fine. The
>> perms are slightly different on the host that is running the VM vs the
>> one that is reporting errors (600 vs 660), ownership is vdsm:qemu. Is
>> this a san locking issue?
>>
>> Thanks for any help,
>>
>> Cam
>>
>> On Tue, Jun 27, 2017 at 1:41 PM, Martin Sivak  wrote:
 Should it be? It was not in the instructions for the migration from
 bare-metal to Hosted VM
>>>
>>> The hosted engine will only migrate to hosts that have the services
>>> running. Please put one other host to maintenance and select Hosted
>>> engine action: DEPLOY in the reinstall dialog.
>>>
>>> Best regards
>>>
>>> Martin Sivak
>>>
>>> On Tue, Jun 27, 2017 at 1:23 PM, cmc  wrote:
 I changed the 'os.other.devices.display.protocols.value.3.6 =
 spice/qxl,vnc/cirrus,vnc/qxl' line to have the same display protocols
 as 4 and the hosted engine now appears in the list of VMs. I am
 guessing the compatibility version was causing it to use the 3.6
 version. However, I am still unable to migrate the engine VM to
 another host. When I try putting the host it is currently on into
 maintenance, it reports:

 Error while executing action: Cannot switch the Host(s) to Maintenance 
 mode.
 There are no available hosts capable of running the engine VM.

 Running 'hosted-engine --vm-status' still shows 'Engine status:
 unknown stale-data'.

 The ovirt-ha-broker service is only running on one host. It was set to
 'disabled' in systemd. It won't start as there is no
 /etc/ovirt-hosted-engine/hosted-engine.conf on the other two hosts.
 Should it be? It was not in the instructions for the migration from
 bare-metal to Hosted VM

 Thanks,

 Cam

 On Thu, Jun 22, 2017 at 1:07 PM, cmc  wrote:
> Hi Tomas,
>
> So in my /usr/share/ovirt-engine/conf/osinfo-defaults.properties on my
> engine VM, I have:
>
> os.other.devices.display.protocols.value = 
> spice/qxl,vnc/vga,vnc/qxl,vnc/cirrus
> os.other.devices.display.protocols.value.3.6 = 
> spice/qxl,vnc/cirrus,vnc/qxl
>
> That seems to match - I assume since this is 4.1, the 3.6 should not apply
>
> Is there somewhere else I should be looking?
>
> Thanks,
>
> Cam
>
> On Thu, Jun 22, 2017 at 11:40 AM, Tomas Jelinek  
> wrote:
>>
>>
>> On Thu, 

Re: [ovirt-users] HostedEngine VM not visible, but running

2017-06-27 Thread cmc
I see this on the host it is trying to migrate in /var/log/sanlock:

2017-06-27 17:10:40+0100 527703 [2407]: s3528 lockspace
207221b2-959b-426b-b945-18e1adfed62f:1:/dev/207221b2-959b-426b-b945-18e1adfed62f/ids:0
2017-06-27 17:13:00+0100 527843 [27446]: s3528 delta_acquire host_id 1
busy1 1 2 1042692 3d4ec963-8486-43a2-a7d9-afa82508f89f.kvm-ldn-03
2017-06-27 17:13:01+0100 527844 [2407]: s3528 add_lockspace fail result -262

The sanlock service is running. Why would this occur?

Thanks,

C

On Tue, Jun 27, 2017 at 5:21 PM, cmc  wrote:
> Hi Martin,
>
> Thanks for the reply. I have done this, and the deployment completed
> without error. However, it still will not allow the Hosted Engine
> migrate to another host. The
> /etc/ovirt-hosted-engine/hosted-engine.conf got created ok on the host
> I re-installed, but the ovirt-ha-broker.service, though it starts,
> reports:
>
> 8<---
>
> Jun 27 14:58:26 kvm-ldn-01 systemd[1]: Starting oVirt Hosted Engine
> High Availability Communications Broker...
> Jun 27 14:58:27 kvm-ldn-01 ovirt-ha-broker[6101]: ovirt-ha-broker
> ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker ERROR
> Failed to read metadata from
> /rhev/data-center/mnt/blockSD/207221b2-959b-426b-b945-18e1adfed62f/ha_agent/hosted-engine.metadata
>   Traceback (most
> recent call last):
> File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> line 129, in get_raw_stats_for_service_type
>   f =
> os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
>   OSError: [Errno 2]
> No such file or directory:
> '/rhev/data-center/mnt/blockSD/207221b2-959b-426b-b945-18e1adfed62f/ha_agent/hosted-engine.metadata'
>
> 8<---
>
> I checked the path, and it exists. I can run 'less -f' on it fine. The
> perms are slightly different on the host that is running the VM vs the
> one that is reporting errors (600 vs 660), ownership is vdsm:qemu. Is
> this a san locking issue?
>
> Thanks for any help,
>
> Cam
>
> On Tue, Jun 27, 2017 at 1:41 PM, Martin Sivak  wrote:
>>> Should it be? It was not in the instructions for the migration from
>>> bare-metal to Hosted VM
>>
>> The hosted engine will only migrate to hosts that have the services
>> running. Please put one other host to maintenance and select Hosted
>> engine action: DEPLOY in the reinstall dialog.
>>
>> Best regards
>>
>> Martin Sivak
>>
>> On Tue, Jun 27, 2017 at 1:23 PM, cmc  wrote:
>>> I changed the 'os.other.devices.display.protocols.value.3.6 =
>>> spice/qxl,vnc/cirrus,vnc/qxl' line to have the same display protocols
>>> as 4 and the hosted engine now appears in the list of VMs. I am
>>> guessing the compatibility version was causing it to use the 3.6
>>> version. However, I am still unable to migrate the engine VM to
>>> another host. When I try putting the host it is currently on into
>>> maintenance, it reports:
>>>
>>> Error while executing action: Cannot switch the Host(s) to Maintenance mode.
>>> There are no available hosts capable of running the engine VM.
>>>
>>> Running 'hosted-engine --vm-status' still shows 'Engine status:
>>> unknown stale-data'.
>>>
>>> The ovirt-ha-broker service is only running on one host. It was set to
>>> 'disabled' in systemd. It won't start as there is no
>>> /etc/ovirt-hosted-engine/hosted-engine.conf on the other two hosts.
>>> Should it be? It was not in the instructions for the migration from
>>> bare-metal to Hosted VM
>>>
>>> Thanks,
>>>
>>> Cam
>>>
>>> On Thu, Jun 22, 2017 at 1:07 PM, cmc  wrote:
 Hi Tomas,

 So in my /usr/share/ovirt-engine/conf/osinfo-defaults.properties on my
 engine VM, I have:

 os.other.devices.display.protocols.value = 
 spice/qxl,vnc/vga,vnc/qxl,vnc/cirrus
 os.other.devices.display.protocols.value.3.6 = spice/qxl,vnc/cirrus,vnc/qxl

 That seems to match - I assume since this is 4.1, the 3.6 should not apply

 Is there somewhere else I should be looking?

 Thanks,

 Cam

 On Thu, Jun 22, 2017 at 11:40 AM, Tomas Jelinek  
 wrote:
>
>
> On Thu, Jun 22, 2017 at 12:38 PM, Michal Skrivanek
>  wrote:
>>
>>
>> > On 22 Jun 2017, at 12:31, Martin Sivak  wrote:
>> >
>> > Tomas, what fields are needed in a VM to pass the check that causes
>> > the following error?
>> >
>> > WARN  [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
>> > (org.ovirt.thread.pool-6-thread-23) [] Validation of action
>> > 'ImportVm'
>> > failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT
>> >
>> > 

Re: [ovirt-users] HostedEngine VM not visible, but running

2017-06-27 Thread cmc
Hi Martin,

Thanks for the reply. I have done this, and the deployment completed
without error. However, it still will not allow the Hosted Engine
migrate to another host. The
/etc/ovirt-hosted-engine/hosted-engine.conf got created ok on the host
I re-installed, but the ovirt-ha-broker.service, though it starts,
reports:

8<---

Jun 27 14:58:26 kvm-ldn-01 systemd[1]: Starting oVirt Hosted Engine
High Availability Communications Broker...
Jun 27 14:58:27 kvm-ldn-01 ovirt-ha-broker[6101]: ovirt-ha-broker
ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker ERROR
Failed to read metadata from
/rhev/data-center/mnt/blockSD/207221b2-959b-426b-b945-18e1adfed62f/ha_agent/hosted-engine.metadata
  Traceback (most
recent call last):
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
line 129, in get_raw_stats_for_service_type
  f =
os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
  OSError: [Errno 2]
No such file or directory:
'/rhev/data-center/mnt/blockSD/207221b2-959b-426b-b945-18e1adfed62f/ha_agent/hosted-engine.metadata'

8<---

I checked the path, and it exists. I can run 'less -f' on it fine. The
perms are slightly different on the host that is running the VM vs the
one that is reporting errors (600 vs 660), ownership is vdsm:qemu. Is
this a san locking issue?
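
(For reference, some basic checks on the failing host, using the path from
the error above; a hedged sketch, not a diagnosis:)

ls -lZ /rhev/data-center/mnt/blockSD/207221b2-959b-426b-b945-18e1adfed62f/ha_agent/
systemctl status ovirt-ha-broker ovirt-ha-agent
journalctl -u ovirt-ha-broker --since "1 hour ago"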

Thanks for any help,

Cam

On Tue, Jun 27, 2017 at 1:41 PM, Martin Sivak  wrote:
>> Should it be? It was not in the instructions for the migration from
>> bare-metal to Hosted VM
>
> The hosted engine will only migrate to hosts that have the services
> running. Please put one other host to maintenance and select Hosted
> engine action: DEPLOY in the reinstall dialog.
>
> Best regards
>
> Martin Sivak
>
> On Tue, Jun 27, 2017 at 1:23 PM, cmc  wrote:
>> I changed the 'os.other.devices.display.protocols.value.3.6 =
>> spice/qxl,vnc/cirrus,vnc/qxl' line to have the same display protocols
>> as 4 and the hosted engine now appears in the list of VMs. I am
>> guessing the compatibility version was causing it to use the 3.6
>> version. However, I am still unable to migrate the engine VM to
>> another host. When I try putting the host it is currently on into
>> maintenance, it reports:
>>
>> Error while executing action: Cannot switch the Host(s) to Maintenance mode.
>> There are no available hosts capable of running the engine VM.
>>
>> Running 'hosted-engine --vm-status' still shows 'Engine status:
>> unknown stale-data'.
>>
>> The ovirt-ha-broker service is only running on one host. It was set to
>> 'disabled' in systemd. It won't start as there is no
>> /etc/ovirt-hosted-engine/hosted-engine.conf on the other two hosts.
>> Should it be? It was not in the instructions for the migration from
>> bare-metal to Hosted VM
>>
>> Thanks,
>>
>> Cam
>>
>> On Thu, Jun 22, 2017 at 1:07 PM, cmc  wrote:
>>> Hi Tomas,
>>>
>>> So in my /usr/share/ovirt-engine/conf/osinfo-defaults.properties on my
>>> engine VM, I have:
>>>
>>> os.other.devices.display.protocols.value = 
>>> spice/qxl,vnc/vga,vnc/qxl,vnc/cirrus
>>> os.other.devices.display.protocols.value.3.6 = spice/qxl,vnc/cirrus,vnc/qxl
>>>
>>> That seems to match - I assume since this is 4.1, the 3.6 should not apply
>>>
>>> Is there somewhere else I should be looking?
>>>
>>> Thanks,
>>>
>>> Cam
>>>
>>> On Thu, Jun 22, 2017 at 11:40 AM, Tomas Jelinek  wrote:


 On Thu, Jun 22, 2017 at 12:38 PM, Michal Skrivanek
  wrote:
>
>
> > On 22 Jun 2017, at 12:31, Martin Sivak  wrote:
> >
> > Tomas, what fields are needed in a VM to pass the check that causes
> > the following error?
> >
> > WARN  [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
> > (org.ovirt.thread.pool-6-thread-23) [] Validation of action
> > 'ImportVm'
> > failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT
> >
> > ,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_VM_DISPLAY_TYPE_IS_NOT_SUPPORTED_BY_OS
>
> to match the OS and VM Display type;-)
> Configuration is in osinfo….e.g. if that is import from older releases on
> Linux this is typically caused by the change of cirrus to vga for 
> non-SPICE
> VMs


 yep, the default supported combinations for 4.0+ is this:
 os.other.devices.display.protocols.value =
 spice/qxl,vnc/vga,vnc/qxl,vnc/cirrus

>
>
> >
> > Thanks.
> >
> > On Thu, Jun 22, 2017 at 12:19 PM, cmc  wrote:
> >> Hi Martin,
> >>
> >>>
> >>> just as a random comment, do you still have the database backup from
> 

Re: [ovirt-users] Ovirt remove network from hosted engine

2017-06-27 Thread Abi Askushi
Hi all,

Just in case one needs it, in order to remove the secondary network
interface from the engine, you can go to:
Virtual Machines -> Hostedengine -> Network Interfaces -> edit -> unplug it
-> confirm -> remove it.

It was simple...


On Tue, Jun 27, 2017 at 4:54 PM, Abi Askushi 
wrote:

> Hi Knarra,
>
> Then I had already enabled NFS on ISO gluster volume.
> Maybe i had some networking issue then. I need to remove the secondary
> interface in order to test that again.
>
>
>
> On Tue, Jun 27, 2017 at 4:25 PM, knarra  wrote:
>
>> On 06/27/2017 06:34 PM, Abi Askushi wrote:
>>
>> Hi Knarra,
>>
>> The ISO domain is of type gluster though I had nfs enabled on that
>> volume.
>>
>> you need to have nfs enabled on the volume. what i meant is nfs.disable
>> off which means nfs is on.
>>
>> For more info please refer to bug https://bugzilla.redhat.com/sh
>> ow_bug.cgi?id=1437799
>>
>> I will disable the nfs and try. Though in order to try I need first to
>> remove that second interface from engine.
>> Is there a way I can remove the secondary storage network interface from
>> the engine?
>>
>> I am not sure how to do that, but   you may shutdown the vm using the
>> command hosted-engine --vm-shutdown which will power off the vm and try to
>> remove the networks using vdsclient. (not sure if this is right, but
>> suggesting a way)
>>
>>
>> Thanx
>>
>>
>>
>> On Tue, Jun 27, 2017 at 3:32 PM, knarra  wrote:
>>
>>> On 06/27/2017 05:41 PM, Abi Askushi wrote:
>>>
>>> Hi all,
>>>
>>> When setting up hosted engine setup on top gluster with 3 nodes, I had
>>> gluster configured on a separate network interface, as recommended. When I
>>> tried later to upload ISO from engine to ISO domain, the engine was not
>>> able to upload it since the VM did not have access to the separate storage
>>> network. I then added the storage network interface to the hosted engine
>>> and ISO upload succeeded.
>>>
>>> May i know what was the volume type created and added as ISO domain ?
>>>
>>> If you plan to use a glusterfs volume below is the procedure :
>>>
>>> 1) Create a glusterfs volume.
>>> 2) While adding storage domain select Domain Function as 'ISO' and
>>> Storage Type as 'glusterfs' .
>>> 3) You can either use 'use managed gluster volume' check box and select
>>> the gluster volume which you have created for storing ISO's or you can type
>>> the full path of the volume.
>>> 4) Once this is added please make sure to set the option nfs.disable off.
>>> 5) Now you can go to HE engine and run the command engine-iso-uploader
>>> upload -i  
>>>
>>> Iso gets uploaded successfully.
>>>
>>>
>>> 1st question: do I need to add the network interface to engine in order
>>> to upload ISOs? does there exist any alternate way?
>>>
>>> AFAIK, this is not required when glusterfs volume is used.
>>>
>>> Attached is the screenshot where i have only one network attached to my
>>> HE which is ovirtmgmt.
>>>
>>>
>>> Then I proceeded to configure bonding for the storage domain, bonding 2
>>> NICs at each server. When trying to set a custom bond of mode=6 (as
>>> recommended from gluster) I received a warning that mode0, 5 and 6 cannot
>>> be configured since the interface is used from VMs. I also understood that
>>> having the storage network assigned to VMs makes it a bridge which
>>> decreases performance of networking. When trying to remove the network
>>> interface from engine it cannot be done, since the engine is running.
>>>
>>> 2nd question: Is there a way I can remove the secondary storage network
>>> interface from the engine?
>>>
>>> Many thanx
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Moving thin provisioned disks question

2017-06-27 Thread Gianluca Cecchi
On Tue, Jun 27, 2017 at 3:09 PM, InterNetX - Juergen Gotteswinter <
j...@internetx.com> wrote:

>
> > >
> > > Suppose I have one 500Gb thin provisioned disk
> > > Why can I indirectly see that the actual size is 300Gb only in
> Snapshots
> > > tab --> Disks of its VM ?
> >
> > if you are using live storage migration, ovirt creates a qcow/lvm
> > snapshot of the vm block device. but for whatever reason, it does NOT
> > remove the snapshot after the migration has finished. you have to
> remove
> > it yourself, otherwise disk usage will grow more and more.
> >
> >
> > I believe you are referring to the "Auto-generated" snapshot created
> > during live storage migration. This behavior is reported
> > in https://bugzilla.redhat.com/1317434 and fixed since 4.0.0.
>
> yep, thats what i meant. i just wasnt aware of the fact that this isnt
> the case anymore for 4.x and above. sorry for confusion
>
>
I confirm that the snapshot of the VM, named "Auto-generated for Live
Storage Migration" has been removed after the disk moving completion.
Also, for a preallocated disk the "qemu-img convert" has format options
raw/raw:

[root@ov300 ~]# ps -ef|grep qemu-img
vdsm 18343  3585  1 12:07 ?00:00:04 /usr/bin/qemu-img convert
-p -t none -T none -f raw
/rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-9dd450a3b53b/images/303287ad-b7ee-40b4-b303-108a5b07c54d/fd408b9c-fdd5-4f72-a73c-332f47868b3c
-O raw
/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/303287ad-b7ee-40b4-b303-108a5b07c54d/fd408b9c-fdd5-4f72-a73c-332f47868b3c


for a thin provisioned disk instead it is of the form qcow2/qcow2
[root@ov300 ~]# ps -ef|grep qemu-img
vdsm 28545  3585  3 12:49 ?00:00:01 /usr/bin/qemu-img convert
-p -t none -T none -f qcow2
/rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-9dd450a3b53b/images/9302dca6-285e-49f7-a64c-68c5c95bdf91/0f3f927d-bb42-479f-ba86-cbd7d4c0fb51
-O qcow2 -o compat=1.1
/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/9302dca6-285e-49f7-a64c-68c5c95bdf91/0f3f927d-bb42-479f-ba86-cbd7d4c0fb51


BTW: the "-p" option should give

   -p  display progress bar (compare, convert and rebase commands only).
       If the -p option is not used for a command that supports it, the
       progress is reported when the process receives a "SIGUSR1" signal.

Does it have any meaning? I don't see any progress bar/information inside the
GUI. Is it perhaps in some other file on the filesystem?
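
(A guess at how to poke it from the host, untested: qemu-img should print
its progress to stderr when it receives SIGUSR1, so where the output ends
up depends on where vdsm redirects stderr, possibly only the vdsm logs and
not the GUI. The PID below is the one from the ps output above.)

kill -USR1 28545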

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine network

2017-06-27 Thread Arsène Gschwind

Would re-deploying the hosted-engine and restoring the backup be an option?

Thanks a lot for your help

Rgds,
Arsène


On 06/27/2017 03:15 PM, Evgenia Tokar wrote:

The set-shared-config does not allow editing of the vm.conf or the ovf.

The original vm.conf that the hosted engine vm was started with 
contains the correct network, so it must have been removed at a later 
stage.


To get a working environment I don't see any option other than fixing
the network in the db.

What do the following queries return:
  1. select vm_interface.vm_guid, vnic_profile_id from vm_interface, 
vm_static where vm_name='HostedEngine';

  2. select id from vnic_profiles where name='ovirtmgmt';
The vnic profile id that is returned from the first query should be 
the same as the one that has the ovirtmgmt name in the second query.


Thanks,
Jenny



Jenny Tokar


On Thu, Jun 22, 2017 at 2:53 PM, Yedidyah Bar David wrote:


On Thu, Jun 22, 2017 at 2:32 PM, Yanir Quinn wrote:
> Adding Didi,
> Didi, do you happen to know of scenarios where we use the
migration scripts
> and the nic in the OVF is gone missing ?  (maybe after the
restore stage)

I do not think I understand the question.

Migration from standalone to hosted-engine does not involve a nic
in an ovf, AFAIU.

To get/set stuff on the shared storage, hosted-engine has options
'--{get,set}-shared-config', see also:

https://bugzilla.redhat.com/show_bug.cgi?id=1301681


For the engine vm.conf, indeed it's updated from data in the engine
db, but I do not know well the details. Adding Simone.

>
> Regards,
> Yanir
>
> On Wed, Jun 21, 2017 at 12:02 PM, Arsène Gschwind
> >
wrote:
>>
>> Hi Yanir,
>>
>> We had our oVirt Engine running on a HW server so we decided to
move it to
>> hosted-engine. For this I've followed the Howto at
>>

http://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/

.
>>
>> The Hosted-Storage is located on a FC SAN LUN.
>>
>> Please find attached the setup log.
>>
>> Thanks a lot.
>>
>> Regards,
>> Arsène
>>
>>
>> On 06/21/2017 10:14 AM, Yanir Quinn wrote:
>>
>> HI Arsene
>>
>> Just to be clear, can you write down the steps to reproduce ? (the
>> migration procedure . and if possible the state before and after)
>>
>> Thanks
>>
>> On Mon, Jun 19, 2017 at 8:34 PM, Arsène Gschwind
>> >
wrote:
>>>
>>> Hi Jenny,
>>>
>>> Thanks for the explanations..
>>>
>>> Please find vm.conf attached, it looks like the ovirtmgmt
network is
>>> defined
>>>
>>> Regards,
>>> Arsène
>>>
>>>
>>> On 06/19/2017 01:46 PM, Evgenia Tokar wrote:
>>>
>>> Hi,
>>>
>>> It should be in one of the directories on your storage domain:
>>>
/cd1f6775-61e9-4d04-b41c-c64925d5a905/images//
>>>
>>> To see which one you can run the following command:
>>>
>>> vdsm-client Volume getInfo volumeID= imageID=
>>> storagedomainID= storagepoolID=
>>>
>>> the storage domain id is: cd1f6775-61e9-4d04-b41c-c64925d5a905
>>> the storage pool id can be found using: vdsm-client
StorageDomain getInfo
>>> storagedomainID=cd1f6775-61e9-4d04-b41c-c64925d5a905
>>>
>>> The volume that has "description":
"HostedEngineConfigurationImage" is
>>> the one you are looking for.
>>> Untar it and it should contain the original vm.conf which was
used to
>>> start the hosted engine.
>>>
>>> Jenny Tokar
>>>
>>>
>>> On Mon, Jun 19, 2017 at 12:59 PM, Arsène Gschwind
>>> >
wrote:

 Hi Jenny,

 1. I couldn't locate any tar file containing vm.conf, do you
know the
 exact place where it is stored?

 2. The ovirtmgmt appears in the network dropdown but I'm not
able to
 change since it complains about locked values.

 Thanks a lot for your help.

 Regards,
 Arsène



 On 06/14/2017 01:26 PM, Evgenia Tokar wrote:

 Hi Arseny,

 Looking at the log the ovf doesn't contain the ovirtmgmt network.

 1. Can you provide the original vm.conf file the engine was
started
 with? It is located in a tar archive on your storage domain.
 2. It's unclear from 

Re: [ovirt-users] Ovirt remove network from hosted engine

2017-06-27 Thread Abi Askushi
Hi Knarra,

Then I had already enabled NFS on ISO gluster volume.
Maybe i had some networking issue then. I need to remove the secondary
interface in order to test that again.



On Tue, Jun 27, 2017 at 4:25 PM, knarra  wrote:

> On 06/27/2017 06:34 PM, Abi Askushi wrote:
>
> Hi Knarra,
>
> The ISO domain is of type gluster though I had nfs enabled on that volume.
>
> you need to have nfs enabled on the volume. what i meant is nfs.disable
> off which means nfs is on.
>
> For more info please refer to bug https://bugzilla.redhat.com/
> show_bug.cgi?id=1437799
>
> I will disable the nfs and try. Though in order to try I need first to
> remove that second interface from engine.
> Is there a way I can remove the secondary storage network interface from
> the engine?
>
> I am not sure how to do that, but   you may shutdown the vm using the
> command hosted-engine --vm-shutdown which will power off the vm and try to
> remove the networks using vdsclient. (not sure if this is right, but
> suggesting a way)
>
>
> Thanx
>
>
>
> On Tue, Jun 27, 2017 at 3:32 PM, knarra  wrote:
>
>> On 06/27/2017 05:41 PM, Abi Askushi wrote:
>>
>> Hi all,
>>
>> When setting up hosted engine setup on top gluster with 3 nodes, I had
>> gluster configured on a separate network interface, as recommended. When I
>> tried later to upload ISO from engine to ISO domain, the engine was not
>> able to upload it since the VM did not have access to the separate storage
>> network. I then added the storage network interface to the hosted engine
>> and ISO upload succeeded.
>>
>> May i know what was the volume type created and added as ISO domain ?
>>
>> If you plan to use a glusterfs volume below is the procedure :
>>
>> 1) Create a glusterfs volume.
>> 2) While adding storage domain select Domain Function as 'ISO' and
>> Storage Type as 'glusterfs' .
>> 3) You can either use 'use managed gluster volume' check box and select
>> the gluster volume which you have created for storing ISO's or you can type
>> the full path of the volume.
>> 4) Once this is added please make sure to set the option nfs.disable off.
>> 5) Now you can go to HE engine and run the command engine-iso-uploader
>> upload -i  
>>
>> Iso gets uploaded successfully.
>>
>>
>> 1st question: do I need to add the network interface to engine in order
>> to upload ISOs? does there exist any alternate way?
>>
>> AFAIK, this is not required when glusterfs volume is used.
>>
>> Attached is the screenshot where i have only one network attached to my
>> HE which is ovirtmgmt.
>>
>>
>> Then I proceeded to configure bonding for the storage domain, bonding 2
>> NICs at each server. When trying to set a custom bond of mode=6 (as
>> recommended from gluster) I received a warning that mode0, 5 and 6 cannot
>> be configured since the interface is used from VMs. I also understood that
>> having the storage network assigned to VMs makes it a bridge which
>> decreases performance of networking. When trying to remove the network
>> interface from engine it cannot be done, since the engine is running.
>>
>> 2nd question: Is there a way I can remove the secondary storage network
>> interface from the engine?
>>
>> Many thanx
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Windows USB Redirection

2017-06-27 Thread Konstantinos Bonaros
Hi All,

I have Ovirt 4.1 with 3 nodes on top glusterfs, all are running smoothly
except USB redirection.

I Have 2 VMs:

   - Windows 2016 64bit
   - Windows 10 64bit

Although both are able to see USB, I have the attached issue.

In the past, when testing on a Windows 7 VM, USB redirection was running smoothly.

Can I have your suggestions on this issue?

Thank you in advance.

Regards,

Konstantinos
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt remove network from hosted engine

2017-06-27 Thread knarra

On 06/27/2017 06:34 PM, Abi Askushi wrote:

Hi Knarra,

The ISO domain is of type gluster though I had nfs enabled on that 
volume.
you need to have nfs enabled on the volume. what i meant is nfs.disable 
off which means nfs is on.


For more info please refer to bug 
https://bugzilla.redhat.com/show_bug.cgi?id=1437799
I will disable the nfs and try. Though in order to try I need first to 
remove that second interface from engine.
Is there a way I can remove the secondary storage network interface 
from the engine?
I am not sure how to do that, but   you may shutdown the vm using the 
command hosted-engine --vm-shutdown which will power off the vm and try 
to remove the networks using vdsclient. (not sure if this is right, but 
suggesting a way)


Thanx




On Tue, Jun 27, 2017 at 3:32 PM, knarra wrote:


On 06/27/2017 05:41 PM, Abi Askushi wrote:

Hi all,

When setting up hosted engine setup on top gluster with 3 nodes,
I had gluster configured on a separate network interface, as
recommended. When I tried later to upload ISO from engine to ISO
domain, the engine was not able to upload it since the VM did not
have access to the separate storage network. I then added the
storage network interface to the hosted engine and ISO upload
succeeded.

May i know what was the volume type created and added as ISO domain ?

If you plan to use a glusterfs volume below is the procedure :

1) Create a glusterfs volume.
2) While adding storage domain select Domain Function as 'ISO' and
Storage Type as 'glusterfs' .
3) You can either use 'use managed gluster volume' check box and
select the gluster volume which you have created for storing ISO's
or you can type the full path of the volume.
4) Once this is added please make sure to set the option
nfs.disable off.
5) Now you can go to HE engine and run the command
engine-iso-uploader upload -i  

Iso gets uploaded successfully.



1st question: do I need to add the network interface to engine in
order to upload ISOs? does there exist any alternate way?

AFAIK, this is not required when glusterfs volume is used.

Attached is the screenshot where i have only one network attached
to my HE which is ovirtmgmt.


Then I proceeded to configure bonding for the storage domain,
bonding 2 NICs at each server. When trying to set a custom bond
of mode=6 (as recommended from gluster) I received a warning that
mode0, 5 and 6 cannot be configured since the interface is used
from VMs. I also understood that having the storage network
assigned to VMs makes it a bridge which decreases performance of
networking. When trying to remove the network interface from
engine it cannot be done, since the engine is running.

2nd question: Is there a way I can remove the secondary storage
network interface from the engine?

Many thanx


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine network

2017-06-27 Thread Evgenia Tokar
The set-shared-config does not allow editing of the vm.conf or the ovf.

The original vm.conf that the hosted engine vm was started with contains
the correct network, so it must have been removed at a later stage.

To get a working environment I don't see any option other than fixing the
network in the db.
What do the following queries return:
  1. select vm_interface.vm_guid, vnic_profile_id from vm_interface,
vm_static where vm_name='HostedEngine';
  2. select id from vnic_profiles where name='ovirtmgmt';
The vnic profile id that is returned from the first query should be the
same as the one that has the ovirtmgmt name in the second query.
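
If they do not match, the fix implied here would presumably be an update of
that column. A hedged sketch only, not verified: the vm_static.vm_guid join
is my assumption, and the engine database should be backed up before
touching it.

update vm_interface
   set vnic_profile_id = (select id from vnic_profiles where name='ovirtmgmt')
 where vm_guid = (select vm_guid from vm_static where vm_name='HostedEngine');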

Thanks,
Jenny



Jenny Tokar


On Thu, Jun 22, 2017 at 2:53 PM, Yedidyah Bar David  wrote:

> On Thu, Jun 22, 2017 at 2:32 PM, Yanir Quinn  wrote:
> > Adding Didi,
> > Didi, do you happen to know of scenarios where we use the migration
> scripts
> > and the nic in the OVF is gone missing ?  (maybe after the restore stage)
>
> I do not think I understand the question.
>
> Migration from standalone to hosted-engine does not involve a nic
> in an ovf, AFAIU.
>
> To get/set stuff on the shared storage, hosted-engine has options
> '--{get,set}-shared-config', see also:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1301681
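
(As an illustration only; the key name below is a guess on my side, check
hosted-engine --help for the exact keys and syntax on your version:)

hosted-engine --get-shared-config gateway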
>
> For the engine vm.conf, indeed it's updated from data in the engine
> db, but I do not know well the details. Adding Simone.
>
> >
> > Regards,
> > Yanir
> >
> > On Wed, Jun 21, 2017 at 12:02 PM, Arsène Gschwind
> >  wrote:
> >>
> >> Hi Yanir,
> >>
> >> We had our oVirt Engine running on a HW server so we decided to move it
> to
> >> hosted-engine. For this I've followed the Howto at
> >> http://www.ovirt.org/documentation/self-hosted/
> chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/.
> >>
> >> The Hosted-Storage is located on a FC SAN LUN.
> >>
> >> Please find attached the setup log.
> >>
> >> Thanks a lot.
> >>
> >> Regards,
> >> Arsène
> >>
> >>
> >> On 06/21/2017 10:14 AM, Yanir Quinn wrote:
> >>
> >> HI Arsene
> >>
> >> Just to be clear, can you write down the steps to reproduce ? (the
> >> migration procedure . and if possible the state before and after)
> >>
> >> Thanks
> >>
> >> On Mon, Jun 19, 2017 at 8:34 PM, Arsène Gschwind
> >>  wrote:
> >>>
> >>> Hi Jenny,
> >>>
> >>> Thanks for the explanations..
> >>>
> >>> Please find vm.conf attached, it looks like the ovirtmgmt network is
> >>> defined
> >>>
> >>> Regards,
> >>> Arsène
> >>>
> >>>
> >>> On 06/19/2017 01:46 PM, Evgenia Tokar wrote:
> >>>
> >>> Hi,
> >>>
> >>> It should be in one of the directories on your storage domain:
> >>> /cd1f6775-61e9-4d04-b41c-c64925d5a905/images//
> >>>
> >>> To see which one you can run the following command:
> >>>
> >>> vdsm-client Volume getInfo volumeID= imageID=
> >>> storagedomainID= storagepoolID=
> >>>
> >>> the storage domain id is: cd1f6775-61e9-4d04-b41c-c64925d5a905
> >>> the storage pool id can be found using: vdsm-client StorageDomain
> getInfo
> >>> storagedomainID=cd1f6775-61e9-4d04-b41c-c64925d5a905
> >>>
> >>> The volume that has "description": "HostedEngineConfigurationImage" is
> >>> the one you are looking for.
> >>> Untar it and it should contain the original vm.conf which was used to
> >>> start the hosted engine.
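
(A rough sketch of that extraction step, assuming the configuration volume
is a raw image holding a tar archive; the path placeholder stands for the
volume located via the getInfo commands above:)

dd if=<path_to_the_configuration_volume> bs=1M | tar -xvf -
# vm.conf should be among the extracted files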
> >>>
> >>> Jenny Tokar
> >>>
> >>>
> >>> On Mon, Jun 19, 2017 at 12:59 PM, Arsène Gschwind
> >>>  wrote:
> 
>  Hi Jenny,
> 
>  1. I couldn't locate any tar file containing vm.conf, do you know the
>  exact place where it is stored?
> 
>  2. The ovirtmgmt appears in the network dropdown but I'm not able to
>  change since it complains about locked values.
> 
>  Thanks a lot for your help.
> 
>  Regards,
>  Arsène
> 
> 
> 
>  On 06/14/2017 01:26 PM, Evgenia Tokar wrote:
> 
>  Hi Arseny,
> 
>  Looking at the log the ovf doesn't contain the ovirtmgmt network.
> 
>  1. Can you provide the original vm.conf file the engine was started
>  with? It is located in a tar archive on your storage domain.
 2. It's unclear from the screenshot: in the network dropdown, do you
> have
 an option to add an ovirtmgmt network?
> 
>  Thanks,
>  Jenny
> 
> 
>  On Tue, Jun 13, 2017 at 11:19 AM, Arsène Gschwind
>   wrote:
> >
> > Sorry for that, I haven't checked.
> >
> > I've replaced the log file with a new version which should work i
> hope.
> >
> > Many Thanks.
> >
> > Regards,
> > Arsène
> >
> >
> > On 06/12/2017 02:33 PM, Martin Sivak wrote:
> >
> > I am sorry to say so, but it seems the log archive is corrupted. I
> > can't open it.
> >
> > Regards
> >
> > Martin Sivak
> >
> > On Mon, Jun 12, 2017 at 12:47 PM, Arsène Gschwind
> >  

Re: [ovirt-users] Moving thin provisioned disks question

2017-06-27 Thread InterNetX - Juergen Gotteswinter

> >
> > Suppose I have one 500Gb thin provisioned disk
> > Why can I indirectly see that the actual size is 300Gb only in Snapshots
> > tab --> Disks of its VM ?
> 
> if you are using live storage migration, ovirt creates a qcow/lvm
> snapshot of the vm block device. but for whatever reason, it does NOT
> remove the snapshot after the migration has finished. you have to remove
> it yourself, otherwise disk usage will grow more and more.
> 
> 
> I believe you are referring to the "Auto-generated" snapshot created
> during live storage migration. This behavior is reported
> in https://bugzilla.redhat.com/1317434 and fixed since 4.0.0.

Yep, that's what I meant. I just wasn't aware of the fact that this isn't
the case anymore for 4.x and above. Sorry for the confusion.

> 
> 
> >
> > Thanks,
> > Gianluca
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org 
> > http://lists.ovirt.org/mailman/listinfo/users
> 
> >
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt remove network from hosted engine

2017-06-27 Thread Abi Askushi
Hi Knarra,

The ISO domain is of type gluster though I had nfs enabled on that volume.
I will disable the nfs and try. Though in order to try I need first to
remove that second interface from engine.
Is there a way I can remove the secondary storage network interface from
the engine?

Thanx

On Tue, Jun 27, 2017 at 3:32 PM, knarra  wrote:

> On 06/27/2017 05:41 PM, Abi Askushi wrote:
>
> Hi all,
>
> When setting up hosted engine setup on top gluster with 3 nodes, I had
> gluster configured on a separate network interface, as recommended. When I
> tried later to upload ISO from engine to ISO domain, the engine was not
> able to upload it since the VM did not have access to the separate storage
> network. I then added the storage network interface to the hosted engine
> and ISO upload succeeded.
>
> May i know what was the volume type created and added as ISO domain ?
>
> If you plan to use a glusterfs volume below is the procedure :
>
> 1) Create a glusterfs volume.
> 2) While adding storage domain select Domain Function as 'ISO' and Storage
> Type as 'glusterfs' .
> 3) You can either use 'use managed gluster volume' check box and select
> the gluster volume which you have created for storing ISO's or you can type
> the full path of the volume.
> 4) Once this is added please make sure to set the option nfs.disable off.
> 5) Now you can go to HE engine and run the command engine-iso-uploader
> upload -i  
>
> Iso gets uploaded successfully.
>
>
> 1st question: do I need to add the network interface to engine in order to
> upload ISOs? does there exist any alternate way?
>
> AFAIK, this is not required when glusterfs volume is used.
>
> Attached is the screenshot where i have only one network attached to my HE
> which is ovirtmgmt.
>
>
> Then I proceeded to configure bonding for the storage domain, bonding 2
> NICs at each server. When trying to set a custom bond of mode=6 (as
> recommended from gluster) I received a warning that mode0, 5 and 6 cannot
> be configured since the interface is used from VMs. I also understood that
> having the storage network assigned to VMs makes it a bridge which
> decreases performance of networking. When trying to remove the network
> interface from engine it cannot be done, since the engine is running.
>
> 2nd question: Is there a way I can remove the secondary storage network
> interface from the engine?
>
> Many thanx
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HostedEngine VM not visible, but running

2017-06-27 Thread Martin Sivak
> Should it be? It was not in the instructions for the migration from
> bare-metal to Hosted VM

The hosted engine will only migrate to hosts that have the services
running. Please put one other host to maintenance and select Hosted
engine action: DEPLOY in the reinstall dialog.

Best regards

Martin Sivak

On Tue, Jun 27, 2017 at 1:23 PM, cmc  wrote:
> I changed the 'os.other.devices.display.protocols.value.3.6 =
> spice/qxl,vnc/cirrus,vnc/qxl' line to have the same display protocols
> as 4 and the hosted engine now appears in the list of VMs. I am
> guessing the compatibility version was causing it to use the 3.6
> version. However, I am still unable to migrate the engine VM to
> another host. When I try putting the host it is currently on into
> maintenance, it reports:
>
> Error while executing action: Cannot switch the Host(s) to Maintenance mode.
> There are no available hosts capable of running the engine VM.
>
> Running 'hosted-engine --vm-status' still shows 'Engine status:
> unknown stale-data'.
>
> The ovirt-ha-broker service is only running on one host. It was set to
> 'disabled' in systemd. It won't start as there is no
> /etc/ovirt-hosted-engine/hosted-engine.conf on the other two hosts.
> Should it be? It was not in the instructions for the migration from
> bare-metal to Hosted VM
>
> Thanks,
>
> Cam
>
> On Thu, Jun 22, 2017 at 1:07 PM, cmc  wrote:
>> Hi Tomas,
>>
>> So in my /usr/share/ovirt-engine/conf/osinfo-defaults.properties on my
>> engine VM, I have:
>>
>> os.other.devices.display.protocols.value = 
>> spice/qxl,vnc/vga,vnc/qxl,vnc/cirrus
>> os.other.devices.display.protocols.value.3.6 = spice/qxl,vnc/cirrus,vnc/qxl
>>
>> That seems to match - I assume since this is 4.1, the 3.6 should not apply
>>
>> Is there somewhere else I should be looking?
>>
>> Thanks,
>>
>> Cam
>>
>> On Thu, Jun 22, 2017 at 11:40 AM, Tomas Jelinek  wrote:
>>>
>>>
>>> On Thu, Jun 22, 2017 at 12:38 PM, Michal Skrivanek
>>>  wrote:


 > On 22 Jun 2017, at 12:31, Martin Sivak  wrote:
 >
 > Tomas, what fields are needed in a VM to pass the check that causes
 > the following error?
 >
 > WARN  [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
 > (org.ovirt.thread.pool-6-thread-23) [] Validation of action
 > 'ImportVm'
 > failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT
 >
 > ,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_VM_DISPLAY_TYPE_IS_NOT_SUPPORTED_BY_OS

 to match the OS and VM Display type;-)
 Configuration is in osinfo….e.g. if that is import from older releases on
 Linux this is typically caused by the change of cirrus to vga for non-SPICE
 VMs
>>>
>>>
>>> yep, the default supported combinations for 4.0+ is this:
>>> os.other.devices.display.protocols.value =
>>> spice/qxl,vnc/vga,vnc/qxl,vnc/cirrus
>>>


 >
 > Thanks.
 >
 > On Thu, Jun 22, 2017 at 12:19 PM, cmc  wrote:
 >> Hi Martin,
 >>
 >>>
 >>> just as a random comment, do you still have the database backup from
 >>> the bare metal -> VM attempt? It might be possible to just try again
 >>> using it. Or in the worst case.. update the offending value there
 >>> before restoring it to the new engine instance.
 >>
 >> I still have the backup. I'd rather do the latter, as re-running the
 >> HE deployment is quite lengthy and involved (I have to re-initialise
 >> the FC storage each time). Do you know what the offending value(s)
 >> would be? Would it be in the Postgres DB or in a config file
 >> somewhere?
 >>
 >> Cheers,
 >>
 >> Cam
 >>
 >>> Regards
 >>>
 >>> Martin Sivak
 >>>
 >>> On Thu, Jun 22, 2017 at 11:39 AM, cmc  wrote:
  Hi Yanir,
 
  Thanks for the reply.
 
 > First of all, maybe a chain reaction of :
 > WARN  [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
 > (org.ovirt.thread.pool-6-thread-23) [] Validation of action
 > 'ImportVm'
 > failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT
 >
 > ,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_VM_DISPLAY_TYPE_IS_NOT_SUPPORTED_BY_OS
 > is causing the hosted engine vm not to be set up correctly  and
 > further
 > actions were made when the hosted engine vm wasnt in a stable state.
 >
 > As for now, are you trying to revert back to a previous/initial
 > state ?
 
  I'm not trying to revert it to a previous state for now. This was a
  migration from a bare metal engine, and it didn't report any error
  during the migration. I'd had some problems on my first attempts at
  this migration, whereby it never completed (due to a proxy issue) but
  I managed to resolve 

Re: [ovirt-users] Ovirt remove network from hosted engine

2017-06-27 Thread knarra

On 06/27/2017 05:41 PM, Abi Askushi wrote:

Hi all,

When setting up hosted engine setup on top gluster with 3 nodes, I had 
gluster configured on a separate network interface, as recommended. 
When I tried later to upload ISO from engine to ISO domain, the engine 
was not able to upload it since the VM did not have access to the 
separate storage network. I then added the storage network interface 
to the hosted engine and ISO upload succeeded.

May i know what was the volume type created and added as ISO domain ?

If you plan to use a glusterfs volume below is the procedure :

1) Create a glusterfs volume.
2) While adding storage domain select Domain Function as 'ISO' and 
Storage Type as 'glusterfs' .
3) You can either use 'use managed gluster volume' check box and select 
the gluster volume which you have created for storing ISO's or you can 
type the full path of the volume.

4) Once this is added please make sure to set the option nfs.disable off.
5) Now you can go to HE engine and run the command engine-iso-uploader 
upload -i  


Iso gets uploaded successfully.
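
As a rough illustration of steps 4 and 5 above (the volume, domain and file
names here are only placeholders, adjust them to your setup):

gluster volume set <isovolume> nfs.disable off        # step 4: gluster NFS on
gluster volume info <isovolume> | grep nfs.disable    # verify the setting
engine-iso-uploader upload -i <iso_domain> /path/to/image.iso   # step 5, on the engine VM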



1st question: do I need to add the network interface to engine in 
order to upload ISOs? does there exist any alternate way?

AFAIK, this is not required when glusterfs volume is used.

Attached is the screenshot where i have only one network attached to my 
HE which is ovirtmgmt.


Then I proceeded to configure bonding for the storage domain, bonding 
2 NICs at each server. When trying to set a custom bond of mode=6 (as 
recommended from gluster) I received a warning that mode0, 5 and 6 
cannot be configured since the interface is used from VMs. I also 
understood that having the storage network assigned to VMs makes it a 
bridge which decreases performance of networking. When trying to 
remove the network interface from engine it cannot be done, since the 
engine is running.


2nd question: Is there a way I can remove the secondary storage 
network interface from the engine?


Many thanx


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt remove network from hosted engine

2017-06-27 Thread Abi Askushi
Hi all,

When setting up hosted engine setup on top gluster with 3 nodes, I had
gluster configured on a separate network interface, as recommended. When I
tried later to upload ISO from engine to ISO domain, the engine was not
able to upload it since the VM did not have access to the separate storage
network. I then added the storage network interface to the hosted engine
and ISO upload succeeded.

1st question: do I need to add the network interface to engine in order to
upload ISOs? does there exist any alternate way?

Then I proceeded to configure bonding for the storage domain, bonding 2
NICs at each server. When trying to set a custom bond of mode=6 (as
recommended from gluster) I received a warning that modes 0, 5 and 6 cannot
be configured since the interface is used by VMs. I also understood that
having the storage network assigned to VMs makes it a bridge which
decreases performance of networking. When trying to remove the network
interface from engine it cannot be done, since the engine is running.

2nd question: Is there a way I can remove the secondary storage network
interface from the engine?

Many thanx
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt split brain resolution

2017-06-27 Thread Abi Askushi
Hi Satheesaran,

gluster volume info engine

Volume Name: engine
Type: Replicate
Volume ID: 3caae601-74dd-40d1-8629-9a61072bec0f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster0:/gluster/engine/brick
Brick2: gluster1:/gluster/engine/brick
Brick3: gluster2:/gluster/engine/brick (arbiter)
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
nfs.export-volumes: on

As per my previous mail, I have resolved this by following the steps described below.
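
For anyone hitting the same thing, a minimal sketch of the checks that go
with those steps (run before and after the manual cleanup):

gluster volume heal engine info               # pending heal entries per brick
gluster volume heal engine info split-brain   # entries gluster itself flags as split-brain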


On Tue, Jun 27, 2017 at 1:42 PM, Satheesaran Sundaramoorthi <
sasun...@redhat.com> wrote:

> On Sat, Jun 24, 2017 at 3:17 PM, Abi Askushi 
> wrote:
>
>> Hi all,
>>
>> For the records, I had to remove manually the conflicting directory and
>> ts respective gfid from the arbiter volume:
>>
>>  getfattr -m . -d -e hex e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
>>
>> That gave me the gfid: 0x277c9caa9dce4a17a2a93775357befd5
>>
>> Then cd .glusterfs/27/7c
>>
>> rm -rf 277c9caa-9dce-4a17-a2a9-3775357befd5 (or move it out of there)
>>
>> Triggered heal: gluster volume heal engine
>>
>> Then all ok:
>>
>> gluster volume heal engine info
>> Brick gluster0:/gluster/engine/brick
>> Status: Connected
>> Number of entries: 0
>>
>> Brick gluster1:/gluster/engine/brick
>> Status: Connected
>> Number of entries: 0
>>
>> Brick gluster2:/gluster/engine/brick
>> Status: Connected
>> Number of entries: 0
>>
>> Thanx.
>>
>
> ​Hi Abi,
>
> What is the volume type of 'engine' volume ?
> Could you also provide the output of 'gluster volume info engine' to get
> a closer look at the problem?
>
> ​-- sas​
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HostedEngine VM not visible, but running

2017-06-27 Thread cmc
I changed the 'os.other.devices.display.protocols.value.3.6 =
spice/qxl,vnc/cirrus,vnc/qxl' line to have the same display protocols
as 4 and the hosted engine now appears in the list of VMs. I am
guessing the compatibility version was causing it to use the 3.6
version. However, I am still unable to migrate the engine VM to
another host. When I try putting the host it is currently on into
maintenance, it reports:

Error while executing action: Cannot switch the Host(s) to Maintenance mode.
There are no available hosts capable of running the engine VM.

Running 'hosted-engine --vm-status' still shows 'Engine status:
unknown stale-data'.
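Regarding the display-protocol change above: instead of editing
/usr/share/ovirt-engine/conf/osinfo-defaults.properties in place, the osinfo
override directory can be used so the shipped defaults survive upgrades. A
minimal sketch, with the file name chosen only as an example:

  # on the engine VM; files in /etc/ovirt-engine/osinfo.conf.d/ override the defaults
  echo 'os.other.devices.display.protocols.value.3.6 = spice/qxl,vnc/vga,vnc/qxl,vnc/cirrus' \
    > /etc/ovirt-engine/osinfo.conf.d/99-display-protocols.properties
  systemctl restart ovirt-engine   # osinfo values are only re-read at engine startup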

The ovirt-ha-broker service is only running on one host. It was set to
'disabled' in systemd. It won't start as there is no
/etc/ovirt-hosted-engine/hosted-engine.conf on the other two hosts.
Should it be? It was not in the instructions for the migration from
bare-metal to Hosted VM.
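For completeness, a quick way to check the hosted-engine HA pieces on each host
that is expected to run the engine VM (a sketch; only hosts that were actually
deployed as hosted-engine hosts will have the config file and the services):

  systemctl status ovirt-ha-agent ovirt-ha-broker
  ls -l /etc/ovirt-hosted-engine/hosted-engine.conf
  hosted-engine --vm-status    # score and engine state as seen from that host

If the other two hosts were never deployed as hosted-engine hosts, the engine
VM will not be able to migrate to them until they are (re)installed with the
hosted-engine deploy action (or hosted-engine --deploy on older setups).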

Thanks,

Cam

On Thu, Jun 22, 2017 at 1:07 PM, cmc  wrote:
> Hi Tomas,
>
> So in my /usr/share/ovirt-engine/conf/osinfo-defaults.properties on my
> engine VM, I have:
>
> os.other.devices.display.protocols.value = 
> spice/qxl,vnc/vga,vnc/qxl,vnc/cirrus
> os.other.devices.display.protocols.value.3.6 = spice/qxl,vnc/cirrus,vnc/qxl
>
> That seems to match - I assume since this is 4.1, the 3.6 should not apply
>
> Is there somewhere else I should be looking?
>
> Thanks,
>
> Cam
>
> On Thu, Jun 22, 2017 at 11:40 AM, Tomas Jelinek  wrote:
>>
>>
>> On Thu, Jun 22, 2017 at 12:38 PM, Michal Skrivanek
>>  wrote:
>>>
>>>
>>> > On 22 Jun 2017, at 12:31, Martin Sivak  wrote:
>>> >
>>> > Tomas, what fields are needed in a VM to pass the check that causes
>>> > the following error?
>>> >
>>> > WARN  [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
>>> > (org.ovirt.thread.pool-6-thread-23) [] Validation of action
>>> > 'ImportVm'
>>> > failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT
>>> >
>>> > ,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_VM_DISPLAY_TYPE_IS_NOT_SUPPORTED_BY_OS
>>>
>>> to match the OS and VM Display type;-)
>>> Configuration is in osinfo….e.g. if that is import from older releases on
>>> Linux this is typically caused by the cahgen of cirrus to vga for non-SPICE
>>> VMs
>>
>>
>> yep, the default supported combinations for 4.0+ is this:
>> os.other.devices.display.protocols.value =
>> spice/qxl,vnc/vga,vnc/qxl,vnc/cirrus
>>
>>>
>>>
>>> >
>>> > Thanks.
>>> >
>>> > On Thu, Jun 22, 2017 at 12:19 PM, cmc  wrote:
>>> >> Hi Martin,
>>> >>
>>> >>>
>>> >>> just as a random comment, do you still have the database backup from
>>> >>> the bare metal -> VM attempt? It might be possible to just try again
>>> >>> using it. Or in the worst case.. update the offending value there
>>> >>> before restoring it to the new engine instance.
>>> >>
>>> >> I still have the backup. I'd rather do the latter, as re-running the
>>> >> HE deployment is quite lengthy and involved (I have to re-initialise
>>> >> the FC storage each time). Do you know what the offending value(s)
>>> >> would be? Would it be in the Postgres DB or in a config file
>>> >> somewhere?
>>> >>
>>> >> Cheers,
>>> >>
>>> >> Cam
>>> >>
>>> >>> Regards
>>> >>>
>>> >>> Martin Sivak
>>> >>>
>>> >>> On Thu, Jun 22, 2017 at 11:39 AM, cmc  wrote:
>>>  Hi Yanir,
>>> 
>>>  Thanks for the reply.
>>> 
>>> > First of all, maybe a chain reaction of :
>>> > WARN  [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
>>> > (org.ovirt.thread.pool-6-thread-23) [] Validation of action
>>> > 'ImportVm'
>>> > failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT
>>> >
>>> > ,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_VM_DISPLAY_TYPE_IS_NOT_SUPPORTED_BY_OS
>>> > is causing the hosted engine vm not to be set up correctly  and
>>> > further
>>> > actions were made when the hosted engine vm wasnt in a stable state.
>>> >
>>> > As for now, are you trying to revert back to a previous/initial
>>> > state ?
>>> 
>>>  I'm not trying to revert it to a previous state for now. This was a
>>>  migration from a bare metal engine, and it didn't report any error
>>>  during the migration. I'd had some problems on my first attempts at
>>>  this migration, whereby it never completed (due to a proxy issue) but
>>>  I managed to resolve this. Do you know of a way to get the Hosted
>>>  Engine VM into a stable state, without rebuilding the entire cluster
>>>  from scratch (since I have a lot of VMs on it)?
>>> 
>>>  Thanks for any help.
>>> 
>>>  Regards,
>>> 
>>>  Cam
>>> 
>>> > Regards,
>>> > Yanir
>>> >
>>> > On Wed, Jun 21, 2017 at 4:32 PM, cmc  wrote:
>>> >>
>>> >> Hi Jenny/Martin,
>>> >>
>>> >> Any idea what I can do here? The hosted engine VM has no 

Re: [ovirt-users] Moving thin provisioned disks question

2017-06-27 Thread Ala Hino
On Tue, Jun 27, 2017 at 1:37 PM, InterNetX - Juergen Gotteswinter <
j...@internetx.com> wrote:

>
>
> Am 27.06.2017 um 11:27 schrieb Gianluca Cecchi:
> > Hello,
> > I have a storage domain that I have to empty, moving its disks to
> > another storage domain,
> >
> > Both source and target domains are iSCSI
> > What is the behavior in case of preallocated and thin provisioned disk?
> > Are they preserved with their initial configuration?
>
> yes, they stay within their initial configuration
>
> >
> > Suppose I have one 500Gb thin provisioned disk
> > Why can I indirectly see that the actual size is 300Gb only in Snapshots
> > tab --> Disks of its VM ?
>
> if you are using live storage migration, ovirt creates a qcow/lvm
> snapshot of the vm block device. but for whatever reason, it does NOT
> remove the snapshot after the migration has finished. you have to remove
> it yourself, otherwise disk usage will grow more and more.
>

I believe you are referring to the "Auto-generated" snapshot created during
live storage migration. This behavior is reported in
https://bugzilla.redhat.com/1317434 and fixed since 4.0.0.

>
> >
> > Thanks,
> > Gianluca
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


Re: [ovirt-users] Moving thin provisioned disks question

2017-06-27 Thread Gianluca Cecchi
On Tue, Jun 27, 2017 at 12:37 PM, InterNetX - Juergen Gotteswinter <
j...@internetx.com> wrote:

>
>
> Am 27.06.2017 um 11:27 schrieb Gianluca Cecchi:
> > Hello,
> > I have a storage domain that I have to empty, moving its disks to
> > another storage domain,
> >
> > Both source and target domains are iSCSI
> > What is the behavior in case of preallocated and thin provisioned disk?
> > Are they preserved with their initial configuration?
>
> yes, they stay within their initial configuration
>

Thanks. I'll try and report in case of problems


>
> >
> > Suppose I have one 500Gb thin provisioned disk
> > Why can I indirectly see that the actual size is 300Gb only in Snapshots
> > tab --> Disks of its VM ?
>
> if you are using live storage migration, ovirt creates a qcow/lvm
> snapshot of the vm block device. but for whatever reason, it does NOT
> remove the snapshot after the migration has finished. you have to remove
> it yourself, otherwise disk usage will grow more and more.
>
>
Do you mean the snapshot of the original one? In my case I'm moving from
storage domain to storage domain and I would expect nothing remaining at the
source storage...
How can I verify the snapshot is still there?
Is there any bugzilla entry tracking this? It would be strange if not...
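A rough way to check from the hosts, assuming an iSCSI (block) domain where
every volume in a disk's qcow chain is a separate LV (the VG name below is a
placeholder for the storage domain UUID):

  # in the UI, Virtual Machines -> <vm> -> Snapshots would also show the
  # "Auto-generated" snapshot mentioned above if one was left behind
  lvs -o lv_name,lv_size,lv_tags <storage-domain-vg-uuid>
  # vdsm tags each volume LV with the disk image id (IU_...), as far as I know;
  # more than one LV carrying the same IU_ tag means the disk still has a
  # snapshot chain rather than a single volume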

Gianluca


Re: [ovirt-users] oVirt split brain resolution

2017-06-27 Thread Satheesaran Sundaramoorthi
On Sat, Jun 24, 2017 at 3:17 PM, Abi Askushi 
wrote:

> Hi all,
>
> For the record, I had to manually remove the conflicting directory and its
> respective gfid from the arbiter volume:
>
>  getfattr -m . -d -e hex e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
>
> That gave me the gfid: 0x277c9caa9dce4a17a2a93775357befd5
>
> Then cd .glusterfs/27/7c
>
> rm -rf 277c9caa-9dce-4a17-a2a9-3775357befd5 (or move it out of there)
>
> Triggered heal: gluster volume heal engine
>
> Then all ok:
>
> gluster volume heal engine info
> Brick gluster0:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster1:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster2:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
> Thanx.
>

​Hi Abi,

What is the volume type of 'engine' volume ?
Could you also provide the output of 'gluster volume info engine' to get a
closer look at the problem?

​-- sas​


Re: [ovirt-users] Moving thin provisioned disks question

2017-06-27 Thread InterNetX - Juergen Gotteswinter


Am 27.06.2017 um 11:27 schrieb Gianluca Cecchi:
> Hello,
> I have a storage domain that I have to empty, moving its disks to
> another storage domain,
> 
> Both source and target domains are iSCSI
> What is the behavior in case of preallocated and thin provisioned disk?
> Are they preserved with their initial configuration?

yes, they stay within their initial configuration

> 
> Suppose I have one 500Gb thin provisioned disk
> Why can I indirectly see that the actual size is 300Gb only in Snapshots
> tab --> Disks of its VM ?

if you are using live storage migration, ovirt creates a qcow/lvm
snapshot of the vm block device. but for whatever reason, it does NOT
remove the snapshot after the migration has finished. you have to remove
it yourself, otherwise disk usage will grow more and more.

> 
> Thanks,
> Gianluca
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


Re: [ovirt-users] Access VM Console on a Smart Phone with User Permission

2017-06-27 Thread Tomas Jelinek
On Tue, Jun 27, 2017 at 12:08 PM, Jerome R  wrote:

> I tried this workaround: I logged the user account on to moVirt with admin
> permission on one of the resources. It works - I can access the assigned VM -
> however I'm able to see everything the admin can see in the portal, though not
> able to perform actions. So far that's one of my concerns: the user should be
> able to see just his/her assigned VMs.
>

yes, this is a consequence of using the admin API - you can see all the
entities and do actions only on the ones you have explicit rights to.

Unfortunately, until the https://github.com/oVirt/moVirt/issues/282 is
done, there is nothing better I can offer you.

We can try to give that item a priority, just need to get the current RC
out of the door (hopefully soon).


>
> Thanks,
> Jerome
>
> On Tue, Jun 27, 2017 at 3:20 PM, Tomas Jelinek 
> wrote:
>
>>
>>
>> On Tue, Jun 27, 2017 at 10:13 AM, Jerome Roque 
>> wrote:
>>
>>> Hi Tomas,
>>>
>>> Thanks for your response. What do you mean by "removing the support for
>>> user permissions"? I'm using
>>>
>>
>> The oVirt permission model expects to be told explicitly by one header if
>> the logged in user has some admin permissions or not. In the past the API
>> behaved differently in these two cases so we needed to remove the option to
>> use it without admin permissions.
>>
>> Now the situation is better so we may be able to bring this support back,
>> but it will require some testing.
>>
>>
>>> the latest version of moVirt 1.7.1, and ovirt-engine 4.1.
>>> Has anyone tried running a user role in moVirt?
>>>
>>
>> you will get permission denied from the API if you try to log in with a
>> user which has no admin permission. If you give him any admin permission on
>> any resource, it might work as a workaround.
>>
>>
>>>
>>> Best Regards,
>>> Jerome
>>>
>>> On Tue, Jun 20, 2017 at 5:14 PM, Tomas Jelinek 
>>> wrote:
>>>


 On Fri, Jun 16, 2017 at 6:14 AM, Jerome Roque 
 wrote:

> Good day oVirt Users,
>
> I need some little help. I have a KVM and used oVirt for the
> management of VMs. What I want is that my client will log on to their
> account and access their virtual machine using their Smart phone. I tried
> to install mOvirt and yes can connect to the console of my machine, but it
> is only accessible for admin console.
>

 moVirt originally worked both with admin and user permissions. We had
 to remove the support for user permissions since the oVirt API did not
 provide all features moVirt needed for user permissions (search for
 example). But the API moved significantly since then (the search works also
 for users now for one) so we can move it back. I have opened an issue about
 it: https://github.com/oVirt/moVirt/issues/282 - we can try to do it
 in next version.


> Tried to use web console, it downloaded console.vv but can't open it.
> By any chance could make this thing possible?
>

 If you want to use a web console for users, I would suggest to try the
 new ovirt-web-ui [1] - you have a link to it from oVirt landing page and
 since 4.1 it is installed by default with oVirt.

 The .vv file can not be opened using aSPICE AFAIK - adding Iordan as
 the author of aSPICE to comment on this.

 [1]: https://github.com/oVirt/ovirt-web-ui


>
> Thank you,
> Jerome
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>

>>>
>>
>


Re: [ovirt-users] Access VM Console on a Smart Phone with User Permission

2017-06-27 Thread Jerome Roque
Hi Tomas,

Thanks for your response. What do you mean by "removing the support for
user permissions"? I'm using the latest version of moVirt 1.7.1, and
ovirt-engine 4.1.
Has anyone tried running a user role in moVirt?

Best Regards,
Jerome

On Tue, Jun 20, 2017 at 5:14 PM, Tomas Jelinek  wrote:

>
>
> On Fri, Jun 16, 2017 at 6:14 AM, Jerome Roque 
> wrote:
>
>> Good day oVirt Users,
>>
>> I need some little help. I have a KVM and used oVirt for the management
>> of VMs. What I want is that my client will log on to their account and
>> access their virtual machine using their Smart phone. I tried to install
>> mOvirt and yes can connect to the console of my machine, but it is only
>> accessible for admin console.
>>
>
> moVirt originally worked both with admin and user permissions. We had to
> remove the support for user permissions since the oVirt API did not provide
> all features moVirt needed for user permissions (search for example). But
> the API moved significantly since then (the search works also for users now
> for one) so we can move it back. I have opened an issue about it:
> https://github.com/oVirt/moVirt/issues/282 - we can try to do it in next
> version.
>
>
>> Tried to use web console, it downloaded console.vv but can't open it. By
>> any chance could make this thing possible?
>>
>
> If you want to use a web console for users, I would suggest to try the new
> ovirt-web-ui [1] - you have a link to it from oVirt landing page and since
> 4.1 it is installed by default with oVirt.
>
> The .vv file can not be opened using aSPICE AFAIK - adding Iordan as the
> author of aSPICE to comment on this.
>
> [1]: https://github.com/oVirt/ovirt-web-ui
>
>
>>
>> Thank you,
>> Jerome
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>


[ovirt-users] Moving thin provisioned disks question

2017-06-27 Thread Gianluca Cecchi
Hello,
I have a storage domain that I have to empty, moving its disks to another
storage domain,

Both source and target domains are iSCSI
What is the behavior in case of preallocated and thin provisioned disk? Are
they preserved with their initial configuration?

Suppose I have one 500Gb thin provisioned disk.
Why can I see that its actual size is 300Gb only indirectly, in the Snapshots
tab --> Disks of its VM?

Thanks,
Gianluca


Re: [ovirt-users] iSCSI storage domain and multipath when adding node

2017-06-27 Thread Gianluca Cecchi
On Mon, Apr 10, 2017 at 3:06 PM, Gianluca Cecchi 
wrote:

>
>
> On Mon, Apr 10, 2017 at 2:44 PM, Ondrej Svoboda 
> wrote:
>
>> Yes, this is what struck me about your situation. Will you be able to
>> find relevant logs regarding multipath configuration, in which we would see
>> when (or even why) the third connection was created on the first node, and
>> only one connection on the second?
>>
>> On Mon, Apr 10, 2017 at 2:17 PM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> On Mon, Apr 10, 2017 at 2:12 PM, Ondrej Svoboda 
>>> wrote:
>>>
 Gianluca,

 I can see that the workaround you describe here (to complete multipath
 configuration in CLI) fixes an inconsistency in observed iSCSI sessions. I
 think it is a shortcoming in oVirt that you had to resort to manual
 configuration. Could you file a bug about this? Ideally, following the bug
 template presented to you by Bugzilla, i.e. "Expected: two iSCSI sessions",
 "Got: one the first node ... one the second node".

 Edy, Martin, do you think you could help out here?

 Thanks,
 Ondra

>>>
>>> Ok, this evening I'm going to open a bugzilla for that.
>>> Please keep in mind that on the already configured node (where before
>>> node addition there were two connections in place with multipath), actually
>>> the node addition generates a third connection, added to the existing two,
>>> using "default" as iSCSI interface (clearly seen if I run "iscsiadm -m
>>> session -P1") 
>>>
>>> Gianluca
>>>
>>>
>>
>>
> vdsm log of the already configured host is here for that day:
> https://drive.google.com/file/d/0BwoPbcrMv8mvQzdCUmtIT1NOT2c/
> view?usp=sharing
>
> Installation / configuration of the second node happened between 11:30 AM
> and 01:30 PM of 6th of April.
>
> Aound 12:29 you will find:
>
> 2017-04-06 12:29:05,832+0200 INFO  (jsonrpc/7) [dispatcher] Run and
> protect: getVGInfo, Return response: {'info': {'state': 'OK', 'vgsize':
> '1099108974592', 'name': '5ed04196-87f1-480e-9fee-9dd450a3b53b',
> 'vgfree': '182536110080', 'vgUUID': 'rIENae-3NLj-o4t8-GVuJ-ZKKb-ksTk-qBkMrE',
> 'pvlist': [{'vendorID': 'EQLOGIC', 'capacity': '1099108974592', 'fwrev':
> '', 'pe_alloc_count': '6829', 'vgUUID': 
> 'rIENae-3NLj-o4t8-GVuJ-ZKKb-ksTk-qBkMrE',
> 'pathlist': [{'connection': '10.10.100.9', 'iqn':
> 'iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910',
> 'portal': '1', 'port': '3260', 'initiatorname': 'p1p1.100'}, {'connection':
> '10.10.100.9', 'iqn': 
> 'iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910',
> 'portal': '1', 'port': '3260', 'initiatorname': 'p1p2'}, {'connection':
> '10.10.100.9', 'iqn': 
> 'iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910',
> 'portal': '1', 'port': '3260', 'initiatorname': 'default'}], 'pe_count':
> '8189', 'discard_max_bytes': 15728640, 'pathstatus': [{'type': 'iSCSI',
> 'physdev': 'sde', 'capacity': '1099526307840', 'state': 'active', 'lun':
> '0'}, {'type': 'iSCSI', 'physdev': 'sdf', 'capacity': '1099526307840',
> 'state': 'active', 'lun': '0'}, {'type': 'iSCSI', 'physdev': 'sdg',
> 'capacity': '1099526307840', 'state': 'active', 'lun': '0'}], 'devtype':
> 'iSCSI', 'discard_zeroes_data': 1, 'pvUUID': 
> 'g9pjI0-oifQ-kz2O-0Afy-xdnx-THYD-eTWgqB',
> 'serial': 'SEQLOGIC_100E-00_64817197B5DFD0E5538D959702249B1C', 'GUID': '
> 364817197b5dfd0e5538d959702249b1c', 'devcapacity': '1099526307840',
> 'productID': '100E-00'}], 'type': 3, 'attr': {'allocation': 'n', 'partial':
> '-', 'exported': '-', 'permission': 'w', 'clustered': '-', 'resizeable':
> 'z'}}} (logUtils:54)
>
> and around 12:39 you will find
>
> 2017-04-06 12:39:11,003+0200 ERROR (check/loop) [storage.Monitor] Error
> checking path /dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/metadata
> (monitor:485)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/monitor.py", line 483, in _pathChecked
> delay = result.delay()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/check.py", line
> 368, in delay
> raise exception.MiscFileReadException(self.path, self.rc, self.err)
> MiscFileReadException: Internal file read failure:
> ('/dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/metadata', 1,
> bytearray(b"/usr/bin/dd: error reading 
> \'/dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/metadata\':
> Input/output error\n0+0 records in\n0+0 records out\n0 bytes (0 B) copied,
> 0.000234164 s, 0.0 kB/s\n"))
> 2017-04-06 12:39:11,020+0200 INFO  (check/loop) [storage.Monitor] Domain
> 5ed04196-87f1-480e-9fee-9dd450a3b53b became INVALID (monitor:456)
>
> that I think corresponds to the moment when I executed "iscsiadm -m
> session -u" and had the automaic remediation of the correctly defined paths
>
> Gianluca
>

So I come back here because I have an "orthogonal" action with the same
effect.

I already have in place the same 2 oVirt hosts using one 4TB iSCSI LUN.
With the 

Re: [ovirt-users] oVirt Live USB3 question

2017-06-27 Thread Lev Veyde
Hi Michael,

I'm glad that I could help.

Yes, while USB3 has an impressive interface (and thus potential) speed of
up to 5 Gbps (or over 600 MB/sec), the actual hardware normally provides
speeds which, for basic DOKs, are in most cases even under the USB 2.0 spec
(under 60 MB/sec).

However, as I also mentioned, it will only affect the time it takes to
load, and from my experience any decent DOK is quite OK; you don't
even have to invest in an ultra-expensive and ultra-fast one, especially if
you don't plan to "re-burn" the DOK very often.

Thanks in advance,

On Mon, Jun 26, 2017 at 8:05 PM, Michael McConachie <
michael.mcconac...@hotmail.com> wrote:

> Lev,
>
>
> That's exactly what I thought; you hit all the points (including slow DOM
> SATA I type performance vs. USB3 speeds).  That's all I needed
> clarification on, before running down the rabbit hole.
>
>
> MM
>
>
>
> --
> *From:* Lev Veyde 
> *Sent:* Monday, June 26, 2017 4:00 PM
> *To:* Michael McConachie
> *Cc:* users@ovirt.org
> *Subject:* Re: [ovirt-users] oVirt Live USB3 question
>
> Hi Michael,
>
> oVirt-Live basically functions as any other LiveCD, so no special
> differences here.
> Generally it means that necessary kernel/kernel modules/programs are
> loaded into RAM, and access to the image is needed if e.g. one wants to run
> some more programs.
> All writes are done into RAM.
>
> Note however that since the whole CD isn't copied into RAM, you still
> need it to be accessible, e.g. keep the USB DOK inserted into the
> computer.
>
> As you already mentioned the oVirt-Live is designed to work in a sandboxed
> environment as far as the network is concerned.
> It was never designed nor tested to work with i.e. external storages, as
> in order to do so the network configuration will have to be modified.
>
> Regarding performance: we haven't tested it, but probably the oVirt-Live
> may take a bit more time to load when compared to normal installation.
> This is because USB DOKs are generally slower than SAS/SATA HDDs/SSDs.
> Once loaded however the performance will be similar and in some cases even
> greater, since all writes are done to the RAM.
> The price of this is of course that once the system is rebooted for
> whatever reason all data is lost.
>
> Hope it helped,
>
> On Sat, Jun 24, 2017 at 5:10 AM, Michael McConachie <
> michael.mcconac...@hotmail.com> wrote:
>
>> Hi all,
>>
>>
>> Potentially stupid question here. Sorry in advance if so.  I have always
>> built out full blown multi rack instances of oVirt, and RHEV for clients,
>> but the following question has me wondering before I go digging and trying
>> it out...
>>
>>
>> I realize that the oVirtLive ISO is for demo purposes, sandboxing, and
>> not production: I have a client, who is in need of a bootable AIO-based USB
>> install with the caveat of being able to connect to the computer's HD and
>> other external storage at that point (for the Storage and ISO domains that
>> I'll create afterwards).  This is because they have one BM to work with and
>> they don't want the extra overhead using an SSD HD slot. They don't want to
>> use a Sata DOM either if possible.
>>
>> In saying that, and concerning the oVirt LiveISO capabilities - I have
>> two questions.
>>
>>
>> - Does the AIO USB install load necessary runtimes into memory, similar
>> to esxi bootable USBs and utilize the base hardware afterwards so that the
>> rest of the operations are run in memory, hitting the disk (USB in this
>> case) like a normal OS load when needed for kernel calls, etc..??
>>
>> - Is there a terrible performance cost if we stay with USB3 (which has
>> a ridiculous theoretical speed in certain hardware matching situations)?
>>
>>
>> Thanks in advance for anyone who might have already crossed this bridge
>> and can provide insight.
>>
>>
>>
>> Michael J. McConachie | keys.fedoraproject.org | PubKey: 0x7BCD88F8
>>
>> *NOTE: The information included and/or attached in this electronic mail
>> transmission may contain confidential or privileged information and is
>> intended solely for the addressee(s). Any unauthorized disclosure,
>> reproduction, distribution or the taking of action in reliance on the 
>> **contents
>> of the information are strictly prohibited. If you have received the
>> message in error, please notify the sender by reply transmission and delete
>> the message without copying, disclosing or forwarding.*
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
>
> Lev Veyde
>
> Software Engineer, RHCE | RHCVA | MCITP
>
> Red Hat Israel
>
> 
>
> l...@redhat.com | lve...@redhat.com
> 
> TRIED. TESTED. TRUSTED. 
>



-- 

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

Re: [ovirt-users] Access VM Console on a Smart Phone with User Permission

2017-06-27 Thread Tomas Jelinek
On Tue, Jun 27, 2017 at 10:13 AM, Jerome Roque 
wrote:

> Hi Tomas,
>
> Thanks for your response. What do you mean by "removing the support for
> user permissions"? I'm using
>

The oVirt permission model expects to be told explicitly by one header if
the logged in user has some admin permissions or not. In the past the API
behaved differently in this two cases so we needed to remove the option to
use it without admin permissions.

Now the situation is better so we may be able to bring this support back,
but it will require some testing.
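For reference, the header in question is, as far as I know, Filter; with it set,
a plain user can query the REST API directly and only sees the objects he has
permissions on. A minimal sketch (engine URL and credentials are placeholders):

  curl -k -u 'someuser@internal:password' \
       -H 'Filter: true' \
       https://engine.example.com/ovirt-engine/api/vms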


> the latest version of moVirt 1.7.1, and ovirt-engine 4.1.
> Has anyone tried running a user role in moVirt?
>

you will get permission denied from the API if you try to log in with a
user which has no admin permission. If you give him any admin permission on
any resource, it might work as a workaround.


>
> Best Regards,
> Jerome
>
> On Tue, Jun 20, 2017 at 5:14 PM, Tomas Jelinek 
> wrote:
>
>>
>>
>> On Fri, Jun 16, 2017 at 6:14 AM, Jerome Roque 
>> wrote:
>>
>>> Good day oVirt Users,
>>>
>>> I need some little help. I have a KVM and used oVirt for the management
>>> of VMs. What I want is that my client will log on to their account and
>>> access their virtual machine using their Smart phone. I tried to install
>>> mOvirt and yes can connect to the console of my machine, but it is only
>>> accessible for admin console.
>>>
>>
>> moVirt originally worked both with admin and user permissions. We had to
>> remove the support for user permissions since the oVirt API did not provide
>> all features moVirt needed for user permissions (search for example). But
>> the API moved significantly since then (the search works also for users now
>> for one) so we can move it back. I have opened an issue about it:
>> https://github.com/oVirt/moVirt/issues/282 - we can try to do it in next
>> version.
>>
>>
>>> Tried to use web console, it downloaded console.vv but can't open it. By
>>> any chance could make this thing possible?
>>>
>>
>> If you want to use a web console for users, I would suggest to try the
>> new ovirt-web-ui [1] - you have a link to it from oVirt landing page and
>> since 4.1 it is installed by default with oVirt.
>>
>> The .vv file can not be opened using aSPICE AFAIK - adding Iordan as the
>> author of aSPICE to comment on this.
>>
>> [1]: https://github.com/oVirt/ovirt-web-ui
>>
>>
>>>
>>> Thank you,
>>> Jerome
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>


Re: [ovirt-users] Empty cgroup files on centos 7.3 host

2017-06-27 Thread Yaniv Kaul
On Mon, Jun 26, 2017 at 11:03 PM, Florian Schmid  wrote:

> Hi,
>
> I wanted to monitor disk IO and R/W on all of our oVirt CentOS 7.3
> hypervisor hosts, but it looks like all those files are empty.
>

We have a very nice integration with Elastic-based monitoring and logging -
why not use it?
On the host, we use collectd for monitoring.
See
http://www.ovirt.org/develop/release-management/features/engine/metrics-store/

Y.
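Two quick things worth checking on the host as well (a sketch; the scope path
is taken from the listing below, the device name is only an example):

  # cgroup pseudo-files always show size 0 in ls; they have to be read
  cat '/sys/fs/cgroup/blkio/machine.slice/machine-qemu\x2d14\x2dHostedEngine.scope/blkio.throttle.io_service_bytes'
  # the non-throttle blkio.* counters are, as far as I know, only populated when
  # the CFQ scheduler is in use, which may be why they appear empty
  virsh -r domblkstat HostedEngine vda   # per-disk stats via libvirt (read-only)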


> For example:
> ls -al /sys/fs/cgroup/blkio/machine.slice/machine-qemu\\x2d14\\
> x2dHostedEngine.scope/
> insgesamt 0
> drwxr-xr-x.  2 root root 0 30. Mai 10:09 .
> drwxr-xr-x. 16 root root 0 26. Jun 09:25 ..
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_merged
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_merged_recursive
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_queued
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_queued_recursive
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_service_bytes
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_service_bytes_recursive
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_serviced
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_serviced_recursive
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_service_time
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_service_time_recursive
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_wait_time
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.io_wait_time_recursive
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.leaf_weight
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.leaf_weight_device
> --w---.  1 root root 0 30. Mai 10:09 blkio.reset_stats
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.sectors
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.sectors_recursive
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.throttle.io_service_bytes
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.throttle.io_serviced
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.throttle.read_bps_device
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.throttle.read_iops_device
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.throttle.write_bps_device
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.throttle.write_iops_device
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.time
> -r--r--r--.  1 root root 0 30. Mai 10:09 blkio.time_recursive
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.weight
> -rw-r--r--.  1 root root 0 30. Mai 10:09 blkio.weight_device
> -rw-r--r--.  1 root root 0 30. Mai 10:09 cgroup.clone_children
> --w--w--w-.  1 root root 0 30. Mai 10:09 cgroup.event_control
> -rw-r--r--.  1 root root 0 30. Mai 10:09 cgroup.procs
> -rw-r--r--.  1 root root 0 30. Mai 10:09 notify_on_release
> -rw-r--r--.  1 root root 0 30. Mai 10:09 tasks
>
>
> I thought I could get the values I need from there, but all the files are empty.
>
> Looking at this post: http://lists.ovirt.org/pipermail/users/2017-January/
> 079011.html
> this should work.
>
> Is this normal on CentOS 7.3 with oVirt installed? How can I get those
> values, without monitoring all VMs directly?
>
> oVirt Version we use:
> 4.1.1.8-1.el7.centos
>
> BR Florian
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


Re: [ovirt-users] "Could not detect Guest Agent on VM"

2017-06-27 Thread Sven Achtelik
Hi Tomas,



this is how it looks in the Device Manager:

[inline screenshot: Device Manager]



The log file holds some errors, and no new errors are generated when I initiate
the snapshot. I would like to do snapshots without memory, and I believe that
this requires a working file system pause. With that message in the GUI it's
hard to trust this snapshot procedure. Is there a way to see if the FS is
paused when I do a snapshot without memory?
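One way to check this from the host side, under the assumption that the engine
asks vdsm to freeze the guest filesystems (via the QEMU guest agent) for live
snapshots taken without memory - the exact log wording varies between versions:

  # on the host currently running the VM, around the time of the snapshot
  grep -iE 'freeze|thaw' /var/log/vdsm/vdsm.log | tail -n 50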



When doing a snapshot without memory I can find these in the Event-Viewer under 
Application:



[inline screenshot: Windows Application event log entries]



Is there anything that needs to be changed?



Thank you,



Sven

-Original Message-
From: Tomáš Golembiovský [mailto:tgole...@redhat.com]
Sent: Monday, June 26, 2017 2:06 PM
To: Sven Achtelik 
Cc: ovirt users 
Subject: Re: [ovirt-users] "Could not detect Guest Agent on VM"



Hi,



sorry for late answer.



On Mon, 12 Jun 2017 11:26:51 -0500

Sven Achtelik > wrote:



> Hi All,

>

> I have several Windows VMs (Server 2012, Server 2012 R2) running and the
> guest tools are installed. If I go and check the Services in those VMs I can
> also see the "Ovirt Guest Service" running and the "QEMU Guest Agent"
> running. Also there is information about the VM shown in the GUI, like
> installed applications etc. I've tried several versions of the tools and
> even with the latest 4.1.5 I can't get that message to disappear.



First check if the virtio-serial driver is properly installed. In Device 
Manager you should see VirtIO Serial Driver in System devices category.
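A quick way to confirm the channels from the host side (read-only virsh works on
oVirt hosts without extra credentials; the VM name is a placeholder):

  # the live XML should list the guest agent channels (typically
  # com.redhat.rhevm.vdsm for the oVirt agent and org.qemu.guest_agent.0 for
  # QEMU GA) with state='connected' when the in-guest service has them open
  virsh -r dumpxml <vm-name> | grep -A3 '<channel'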



You should also check the oVirt GA log. Search for ovirt-guest-agent.log, it 
should be in C:\Windows\SysWOW64.



> Am I doing something wrong ?

>

> How does this affect the Backup-API and possible Snapshots are taken for 
> Backups ?



I'm not sure about this one. I don't think we use QEMU GA (or any other

GA) to freeze filesystems. So this feature should not be affected.



Tomas



>

> If you have any hints please let me know.

>

> Thank you,

>

> Sven

>





--

Tomáš Golembiovský >