[ovirt-users] Guest/User permissions

2016-02-25 Thread Colin Coe
Hi all

I've been asked to produce a matrix of guests (VMs) and the users who have
rights on them.

Just trying to save myself some effort and see if anyone's already done
this and is willing to share?
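
Failing that, one way to generate it would be to walk the REST API along
these lines. This is a rough, untested sketch: the engine URL and
credentials are placeholders, and the API path and XML element names are
from memory, so they may need adjusting for your engine version.

# Sketch: print one "VM <tab> user/group <tab> role" row per direct permission.
import requests
import xml.etree.ElementTree as ET

HOST = 'https://engine.example.com'        # placeholder
API = '/ovirt-engine/api'                  # '/api' on older setups
AUTH = ('admin@internal', 'secret')        # placeholder
VERIFY = False                             # or the path to the engine CA cert

def get(path):
    r = requests.get(HOST + API + path, auth=AUTH, verify=VERIFY,
                     headers={'Accept': 'application/xml'})
    r.raise_for_status()
    return ET.fromstring(r.content)

def name_of(collection, elem):
    # Resolve /users/{id}, /groups/{id} or /roles/{id} to a readable name.
    obj = get('/%s/%s' % (collection, elem.get('id')))
    return obj.findtext('user_name') or obj.findtext('name') or elem.get('id')

for vm in get('/vms').findall('vm'):
    for perm in get('/vms/%s/permissions' % vm.get('id')).findall('permission'):
        user, group, role = perm.find('user'), perm.find('group'), perm.find('role')
        if user is not None:
            who = name_of('users', user)
        elif group is not None:
            who = name_of('groups', group)
        else:
            continue
        print('%s\t%s\t%s' % (vm.findtext('name'), who,
                              name_of('roles', role) if role is not None else '?'))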

Thanks

CC
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [hosted-engine] admin@internal password for additional nodes?

2016-02-25 Thread Wee Sritippho

Hi,

I'm trying to deploy a 2nd host to my hosted-engine environment, but at 
some point the setup asks me to type a password for admin@internal 
again. Do I need to type the same password that I chose when deploying 
the 1st host? If not, would it replace the old password?


Thank you,
Wee


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Impact of changing VLAN of ovirtmgmt in the Data Center?

2016-02-25 Thread Garry Tiedemann

Hi everyone,

In the Data Centers > (Name) > Networks section of the oVirt GUI, network 
definitions include the VLAN IDs.
In my case, the VLAN ID of ovirtmgmt has been empty (meaning VLAN 1) 
since I built it; it's always been wrong.


My hypervisor hosts' ovirtmgmt bridges are actually in VLAN 20.

An error message alerted me to this mismatch a few days ago.

None of my production VMs are on VLAN 1 or VLAN 20, but I'd like to 
confirm that it's safe to change this.


Can changing the VLAN ID of ovirtmgmt within Data Center > Networks 
impact VMs from other VLANs?


I see no reason why it should be a problem. We just need some certainty 
on that point.
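
For what it's worth, it may be worth dumping what every logical network
is tagged with first, to confirm that nothing else shares the VLAN being
changed. A rough REST API sketch (the engine URL and credentials are
placeholders, and the path and element names are from memory):

# Sketch: list every logical network and its VLAN tag.
import requests
import xml.etree.ElementTree as ET

HOST = 'https://engine.example.com'    # placeholder
API = '/ovirt-engine/api'              # '/api' on older setups
AUTH = ('admin@internal', 'secret')    # placeholder

def get(path):
    r = requests.get(HOST + API + path, auth=AUTH, verify=False,
                     headers={'Accept': 'application/xml'})
    r.raise_for_status()
    return ET.fromstring(r.content)

for net in get('/networks').findall('network'):
    vlan = net.find('vlan')   # no <vlan> element means the network is untagged
    print('%-20s vlan=%s' % (net.findtext('name'),
                             vlan.get('id') if vlan is not None else 'untagged'))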


Thank you in advance for any answers.

Regards,

Garry

PS This is a briefer and clearer re-statement of the question I asked a 
couple of days ago.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Master Domain - oVirt 3.6 - hosted engine

2016-02-25 Thread Dariusz Kryszak




On Tue, 2016-02-23 at 17:13 +0100, Simone Tiraboschi wrote:
> 
> 
> On Tue, Feb 23, 2016 at 4:19 PM, Dariusz Kryszak <
> dariusz.krys...@gmail.com> wrote:
> > Hi folks,
> > I have a question about master domain when I'm using hosted engine
> > deployment.
> > At the beginning I deployed on a NUC (a small home installation)
> > with the hosted engine on an NFS share from the NUC host. I
> > configured a Gluster filesystem on the same machine and used it for
> > the master domain and the ISO domain. Let's call it all-in-one.
> > After a reboot something strange happened. The log says that the
> > master domain is not available and that the hosted_storage has to
> > become the master. In my opinion this is not OK. I understand the
> > behaviour: because the master domain is not available, the master
> > role has been migrated to another shareable domain (in this case the
> > hosted_storage, which is NFS).
> > Do you think this should be blocked in this particular case, i.e.
> > when only the hosted_storage is available? Right now it is not
> > possible to change this situation because the hosted engine resides
> > on the hosted_storage. I can't migrate it.
> > 
> > 
> It could happen only after the hosted-engine storage domain got
> imported by the engine, but to do that you need an additional storage
> domain, which will become the master storage domain.
> In the past we had a bug that let you remove the last regular storage
> domain, and in that case the hosted-engine domain would become the
> master storage domain, which, as you pointed out, was an issue.
> https://bugzilla.redhat.com/show_bug.cgi?id=1298697
> 
> Now it should be fixed. If it just happened again simply because your
> gluster regular storage domain wasn't available, then it is not really
> fixed. Adding Roy here.
> Dariusz, which release are you using?

Regarding the oVirt version:
1. oVirt manager
ovirt-engine-setup - oVirt Engine Version: 3.6.2.6-1.el7.centos
# uname -a
Linux ovirtm.stylenet 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core) 



2. hypervisor

uname -a
Linux ovirth1.stylenet 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core)

rpm -qa|grep 'hosted\|vdsm'
vdsm-cli-4.17.18-1.el7.noarch
ovirt-hosted-engine-ha-1.3.3.7-1.el7.centos.noarch
vdsm-xmlrpc-4.17.18-1.el7.noarch
vdsm-jsonrpc-4.17.18-1.el7.noarch
vdsm-4.17.18-1.el7.noarch
vdsm-python-4.17.18-1.el7.noarch
vdsm-yajsonrpc-4.17.18-1.el7.noarch
vdsm-hook-vmfex-dev-4.17.18-1.el7.noarch
ovirt-hosted-engine-setup-1.3.2.3-1.el7.centos.noarch
vdsm-infra-4.17.18-1.el7.noarch
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] macvlan + IPv6

2016-02-25 Thread Jay Turner
Attached.

macvlan-1.xml is from oVirt (and includes the VF MAC address)
macvlan-test-2.xml is from virt-manager (and does not include the VF MAC
address)

- jkt

[The two attached libvirt domain XML files were mangled by the list archive
(the XML markup was stripped), so only fragments survive. Recoverable details:

macvlan-1 (from oVirt):
  name macvlan-1, UUID 9c275eca-ecb1-4be5-ad80-18365c59dba8
  oVirt tune metadata namespace http://ovirt.org/vm/tune/1.0
  sysinfo: oVirt / oVirt Node / 7-2.1511.el7.centos.2.10, serial ending 0CC47A55F2C8
  memory/vCPU values: 4294967296, 1048576, 1048576, 16
  hvm guest, Nehalem CPU model, emulator /usr/libexec/qemu-kvm
  device UUID 10d0a0b5-3ec2-41a5-8c3b-5cf9f21d6fcc
  seclabel system_u:system_r:svirt_t:s0:c626,c866 / system_u:object_r:svirt_image_t:s0:c626,c866

macvlan-test-2 (from virt-manager):
  name macvlan-test-2, UUID 0c2d37bb-9ac6-433c-9a09-64acd499d0c7
  memory value 2097152, 2 vCPUs
  hvm guest, Nehalem CPU model, emulator /usr/libexec/qemu-kvm
  seclabel system_u:system_r:svirt_t:s0:c910,c945 / system_u:object_r:svirt_image_t:s0:c910,c945

The interface/hostdev definitions, including the VF MAC element being
discussed, did not survive the archiving.]

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot Locked for an extended period of time

2016-02-25 Thread Simon Hallam
On Thu, 2016-02-25 at 17:25 +0200, Raz Tamir wrote:
> Hi Simon,
> 1) What is the ovirt version you are using?

I'm currently using oVirt Engine Version: 3.5.1-1.el6
The problem is that I would like to back up all the VMs before I try
upgrading the hosted engine.

> 2) What is the storage type (NFS, iSCSI, ...) you are creating the
> snapshot on?

All of the storage domains are NFS.

> 
> * It seems that the logs are not from the time this issue occurred
Hmm, it should be starting about line 75618 of that log.
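
For anyone curious, the flow the script follows boils down to roughly the
steps below. This is only a bare-bones REST sketch, not the actual script:
the engine URL, credentials, cluster and export domain names are
placeholders, the element names are from memory, and in practice every
step needs its own status polling (each call is asynchronous), not just
the snapshot.

# Sketch: snapshot with memory -> clone from snapshot -> export -> clean up.
import time
import requests
import xml.etree.ElementTree as ET

HOST = 'https://engine.example.com'      # placeholder
API = '/ovirt-engine/api'                # '/api' on 3.5
AUTH = ('admin@internal', 'secret')      # placeholder
HDRS = {'Accept': 'application/xml', 'Content-Type': 'application/xml'}

def call(method, path, body=None):
    r = requests.request(method, HOST + API + path, auth=AUTH,
                         headers=HDRS, data=body, verify=False)
    r.raise_for_status()
    return ET.fromstring(r.content) if r.content else None

def wait_for_snapshots(vm_id):
    # Crude poll: wait until no snapshot on the VM is still 'locked'.
    while any(s.findtext('snapshot_status') == 'locked'
              for s in call('GET', '/vms/%s/snapshots' % vm_id).findall('snapshot')):
        time.sleep(15)

vm_id = 'VM-UUID-HERE'   # placeholder

# 1. Take a snapshot with memory.
snap = call('POST', '/vms/%s/snapshots' % vm_id,
            '<snapshot><description>backup</description>'
            '<persist_memorystate>true</persist_memorystate></snapshot>')
wait_for_snapshots(vm_id)

# 2. Create a temporary VM from that snapshot (wait for it to reach
#    state 'down' before the next step in real code).
clone = call('POST', '/vms',
             '<vm><name>backup-tmp</name>'
             '<cluster><name>Default</name></cluster>'
             '<snapshots><snapshot id="%s"/></snapshots></vm>' % snap.get('id'))

# 3. Export the clone to the export domain (also asynchronous).
call('POST', '/vms/%s/export' % clone.get('id'),
     '<action><storage_domain><name>export</name></storage_domain></action>')

# 4. Delete the snapshot, then 5. delete the temporary clone.
call('DELETE', '/vms/%s/snapshots/%s' % (vm_id, snap.get('id')))
wait_for_snapshots(vm_id)
call('DELETE', '/vms/%s' % clone.get('id'))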

Cheers,

Simon


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot Locked for an extended period of time

2016-02-25 Thread Raz Tamir
Hi Simon,
1) What is the ovirt version you are using?
2) What is the storage type (NFS, iSCSI, ...) you are creating the snapshot on?

* It seems that the logs are not from the time this issue occurred



Thanks,
Raz Tamir
Red Hat Israel

On Thu, Feb 25, 2016 at 1:43 PM, Simon Hallam  wrote:

> Hi All,
>
> I’ve recently written a script which:
>1. Takes a snapshot with memory.
>2. Creates a VM from that snapshot.
>3. Exports the snapshot to the export domain.
>4. Deletes the snapshot.
>5. Deletes the cloned VM.
>
> Whilst running the script for the first time last night, the script
> timed out whilst deleting the snapshot on the VM at step 4.
>
> Now the logs are filled with:
>
> 2016-02-25 10:32:53,272
> INFO  [org.ovirt.engine.core.bll.RemoveSnapshotCommandCallback]
> (DefaultQuartzScheduler_Worker-25) [4559dd29] Waiting on Live Merge child
> commands to complete
> 2016-02-25 10:32:53,332
> INFO  [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskLiveCommand]
> (DefaultQuartzScheduler_Worker-25) [225b841f] Waiting on Live Merge command
> step MERGE to complete
> 2016-02-25 10:32:53,335
> INFO  [org.ovirt.engine.core.bll.MergeCommandCallback]
> (DefaultQuartzScheduler_Worker-25) [4559dd29] Waiting on merge command to
> complete
> 2016-02-25 10:32:55,331
> INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (DefaultQuartzScheduler_Worker-19) [4559dd29] VM job
> c1064551-022f-4549-a9d4-bb915e177e80: In progress (no change)
> 2016-02-25 10:32:58,405
> INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (DefaultQuartzScheduler_Worker-37) [4559dd29] VM job
> c1064551-022f-4549-a9d4-bb915e177e80: In progress, updating
> 2016-02-25 10:33:01,521
> INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (DefaultQuartzScheduler_Worker-56) [4559dd29] VM job
> c1064551-022f-4549-a9d4-bb915e177e80: In progress (no change)
> 2016-02-25 10:33:03,337
> INFO  [org.ovirt.engine.core.bll.RemoveSnapshotCommandCallback]
> (DefaultQuartzScheduler_Worker-12) [4559dd29] Waiting on Live Merge child
> commands to complete
> 2016-02-25 10:33:03,350
> INFO  [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskLiveCommand]
> (DefaultQuartzScheduler_Worker-12) [225b841f] Waiting on Live Merge command
> step MERGE to complete
> 2016-02-25 10:33:03,352
> INFO  [org.ovirt.engine.core.bll.MergeCommandCallback]
> (DefaultQuartzScheduler_Worker-12) [4559dd29] Waiting on merge command to
> complete
> 2016-02-25 10:33:04,678
> INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (DefaultQuartzScheduler_Worker-22) [4559dd29] VM job
> c1064551-022f-4549-a9d4-bb915e177e80: In progress (no change)
> 2016-02-25 10:33:07,852
> INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (DefaultQuartzScheduler_Worker-11) [4559dd29] VM job
> c1064551-022f-4549-a9d4-bb915e177e80: In progress (no change)
> 2016-02-25 10:33:10,993
> INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (DefaultQuartzScheduler_Worker-44) [4559dd29] VM job
> c1064551-022f-4549-a9d4-bb915e177e80: In progress, updating
> 2016-02-25 10:33:13,354
> INFO  [org.ovirt.engine.core.bll.RemoveSnapshotCommandCallback]
> (DefaultQuartzScheduler_Worker-14) [4559dd29] Waiting on Live Merge child
> commands to complete
>
> I have attached the log of when the snapshot started deleting (2016-02-
> 24 23:51:17,094).
>
> Any idea what's gone wrong? What are my next steps to clear this?
>
> Cheers,
>
> Simon
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Re: One host failed to attach one of Storage Domains after reboot of all hosts

2016-02-25 Thread Giuseppe Berellini
Hi,

About one hour ago my AMD host came back up, after more than 10 days of being down.
Apart from checking the logs (which I suppose didn't help in solving my 
problem), I enabled the NFS share on my ISO domain.

I'm still not able to understand how that could help.
I would be really happy to better understand what happened! :-)
If anyone has ideas/explanations to share, you are welcome! :-)

Best regards,
Giuseppe

--
Giuseppe Berellini
PTV SISTeMA
www.sistemaits.com
facebook.com/sistemaits
linkedin.com/SISTeMA

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of 
Giuseppe Berellini
Sent: Thursday, 25 February 2016 12:10
To: users@ovirt.org
Subject: [ovirt-users] One host failed to attach one of Storage Domains after 
reboot of all hosts

Hi,

At the beginning of February I successfully installed oVirt 3.6.2 (with hosted 
engine) on 3 hosts, which are using 1 storage server with GlusterFS.
2 hosts (with Intel CPU) are using HA and are hosting the engine; the 3rd host 
(AMD CPU) was added as host from oVirt web administration panel, without hosted 
engine deployment (I don't want the engine running on this host).

About 10 days ago I tried to reboot my oVirt environment (i.e. going to global 
maintenance, shutting down the engine, turning off all the hosts, starting them 
again, then setting maintenance mode to "none").
After the reboot, everything was fine with the Intel hosts and the hosted 
engine, but the AMD host (the one without HA) was not operational.
I tried to activate it, but it failed with the following error:
"Host failed to attach one of Storage Domains attached to it."

If I log into my AMD host and I check the logs, I see that the storage domain 
which is not mounted is the one of the hosted engine (but this could be 
correct, since this host won't run the hosted engine).

From /var/log/vdsm/vdsm.log:

Thread-29::DEBUG::2016-02-25 
11:44:01,157::monitor::322::Storage.Monitor::(_produceDomain) Producing domain 
6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd
Thread-29::ERROR::2016-02-25 
11:44:01,158::sdc::139::Storage.StorageDomainCache::(_findDomain) looking for 
unfetched domain 6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd
Thread-29::ERROR::2016-02-25 
11:44:01,158::sdc::156::Storage.StorageDomainCache::(_findUnfetchedDomain) 
looking for domain 6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd
Thread-29::DEBUG::2016-02-25 
11:44:01,159::lvm::370::Storage.OperationMutex::(_reloadvgs) Operation 'lvm 
reload operation' got the operation mutex
Thread-29::DEBUG::2016-02-25 11:44:01,159::lvm::290::Storage.Misc.excCmd::(cmd) 
/usr/bin/taskset --cpu-list 0-63 /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' 
devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 
write_cache_state=0 disable_after_error_count=3 filter = [ '\''r|.*|'\'' ] }  
global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  
use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings 
--units b --nosuffix --separator '|' --ignoreskippedcluster -o 
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
 6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd (cwd None)
Thread-29::DEBUG::2016-02-25 11:44:01,223::lvm::290::Storage.Misc.excCmd::(cmd) 
FAILED:  = '  WARNING: lvmetad is running but disabled. Restart lvmetad 
before enabling it!\n  Volume group "6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd" not 
found\n  Cannot process volume group 6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd\n'; 
 = 5
Thread-29::WARNING::2016-02-25 
11:44:01,225::lvm::375::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['  
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!', 
'  Volume group "6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd" not found', '  Cannot 
process volume group 6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd']
Thread-29::DEBUG::2016-02-25 
11:44:01,225::lvm::415::Storage.OperationMutex::(_reloadvgs) Operation 'lvm 
reload operation' released the operation mutex
Thread-29::ERROR::2016-02-25 
11:44:01,245::sdc::145::Storage.StorageDomainCache::(_findDomain) domain 
6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 173, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd',)
Thread-29::ERROR::2016-02-25 
11:44:01,246::monitor::276::Storage.Monitor::(_monitorDomain) Error monitoring 
domain 6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 264, in _monitorDomain
self._produceDomain()
  File "/usr/lib/python2.7/site-

[ovirt-users] One host failed to attach one of Storage Domains after reboot of all hosts

2016-02-25 Thread Giuseppe Berellini
Hi,

At the beginning of February I successfully installed oVirt 3.6.2 (with hosted 
engine) on 3 hosts, which are using 1 storage server with GlusterFS.
2 hosts (with Intel CPU) are using HA and are hosting the engine; the 3rd host 
(AMD CPU) was added as host from oVirt web administration panel, without hosted 
engine deployment (I don't want the engine running on this host).

About 10 days ago I tried to reboot my oVirt environment (i.e. going to global 
maintenance, shutting down the engine, turning off all the hosts, starting them 
again, then setting maintenance mode to "none").
After the reboot, everything was fine with the Intel hosts and the hosted 
engine, but the AMD host (the one without HA) was not operational.
I tried to activate it, but it failed with the following error:
"Host failed to attach one of Storage Domains attached to it."

If I log into my AMD host and I check the logs, I see that the storage domain 
which is not mounted is the one of the hosted engine (but this could be 
correct, since this host won't run the hosted engine).

From /var/log/vdsm/vdsm.log:

Thread-29::DEBUG::2016-02-25 
11:44:01,157::monitor::322::Storage.Monitor::(_produceDomain) Producing domain 
6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd
Thread-29::ERROR::2016-02-25 
11:44:01,158::sdc::139::Storage.StorageDomainCache::(_findDomain) looking for 
unfetched domain 6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd
Thread-29::ERROR::2016-02-25 
11:44:01,158::sdc::156::Storage.StorageDomainCache::(_findUnfetchedDomain) 
looking for domain 6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd
Thread-29::DEBUG::2016-02-25 
11:44:01,159::lvm::370::Storage.OperationMutex::(_reloadvgs) Operation 'lvm 
reload operation' got the operation mutex
Thread-29::DEBUG::2016-02-25 11:44:01,159::lvm::290::Storage.Misc.excCmd::(cmd) 
/usr/bin/taskset --cpu-list 0-63 /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' 
devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 
write_cache_state=0 disable_after_error_count=3 filter = [ '\''r|.*|'\'' ] }  
global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  
use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings 
--units b --nosuffix --separator '|' --ignoreskippedcluster -o 
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
 6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd (cwd None)
Thread-29::DEBUG::2016-02-25 11:44:01,223::lvm::290::Storage.Misc.excCmd::(cmd) 
FAILED:  = '  WARNING: lvmetad is running but disabled. Restart lvmetad 
before enabling it!\n  Volume group "6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd" not 
found\n  Cannot process volume group 6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd\n'; 
 = 5
Thread-29::WARNING::2016-02-25 
11:44:01,225::lvm::375::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['  
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!', 
'  Volume group "6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd" not found', '  Cannot 
process volume group 6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd']
Thread-29::DEBUG::2016-02-25 
11:44:01,225::lvm::415::Storage.OperationMutex::(_reloadvgs) Operation 'lvm 
reload operation' released the operation mutex
Thread-29::ERROR::2016-02-25 
11:44:01,245::sdc::145::Storage.StorageDomainCache::(_findDomain) domain 
6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 173, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd',)
Thread-29::ERROR::2016-02-25 
11:44:01,246::monitor::276::Storage.Monitor::(_monitorDomain) Error monitoring 
domain 6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 264, in _monitorDomain
self._produceDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 767, in wrapper
value = meth(self, *a, **kw)
  File "/usr/share/vdsm/storage/monitor.py", line 323, in _produceDomain
self.domain = sdCache.produce(self.sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 100, in produce
domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 124, in _realProduce
domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 173, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd',)
jsonrpc.Executor/0::DEBUG::2016-02-25 
11:44:03,292::task::595::Storage.TaskManager.Task::(_updateState) 
Task=`2862ba96-8080-4e
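
In case it helps narrow this down, here is a quick check that can be run
on the failing host to see whether that domain's directory is actually
present under the vdsm mount root (just a sketch, assuming the default
/rhev/data-center/mnt location):

# Sketch: look for the storage domain UUID under the vdsm mount root.
import os

SD_UUID = '6fb10a49-5f1c-4bd4-9ff7-b7e33c1125cd'   # the domain from the log above
MNT_ROOT = '/rhev/data-center/mnt'                 # default vdsm mount root (assumption)

matches = []
for root, dirs, _files in os.walk(MNT_ROOT):
    if SD_UUID in dirs:
        matches.append(os.path.join(root, SD_UUID))
    # No need to descend more than a couple of levels below the mount root.
    if root.count(os.sep) - MNT_ROOT.count(os.sep) >= 2:
        dirs[:] = []

if matches:
    print('domain directory found at:')
    for path in matches:
        print('  ' + path)
else:
    print('%s not found under %s: the storage is not mounted on this host'
          % (SD_UUID, MNT_ROOT))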

Re: [ovirt-users] macvlan + IPv6

2016-02-25 Thread Dan Kenigsberg
On Tue, Feb 23, 2016 at 03:13:21PM +, Jay Turner wrote:
> As a follow-up to this, I made some headway in sorting out the source of
> the issue, but hoping someone can give me a pointer to where this is
> happening in the code, as well as some understanding for why.
> 
> In oVirt, when I allocate a virtual function to a guest, a new MAC address
> is generated for the VF (as it should be) from the MAC address pool in
> oVirt, and then that MAC address is written to the VF on the hypervisor.
> Thus I end up with something like:
> 
> : ens11:  mtu 1500 qdisc mq master i40e
> state UP mode DEFAULT qlen 1000
> link/ether 3c:fd:fe:9d:a1:38 brd ff:ff:ff:ff:ff:ff
> vf 0 MAC 00:1a:4a:16:01:52, spoof checking on, link-state auto
> 
> This *is not* how it happens under libvirt/virt-manager, however.  When
> allocating a VF to a guest under libvirt, a random MAC address is generated
> and associated with the VF under the guest, but it is not written back to
> the hypervisor, and is instead left as 00:00:00:00:00:00.
> 
> I am pretty sure this writing of the MAC address at the hypervisor is
> causing at least some of the issues I'm seeing, as with the Intel cards,
> that prevents the guest from changing/adding a new MAC address, which is
> what happens with the instantiation of a macvlan interface.
> 
> So can anyone point me to where in the oVirt code this MAC address
> assignment is occurring?

https://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=vdsm/virt/vmdevices/network.py;h=b2fa629c55ff728a964fda9d0ea598ef57676b53;hb=HEAD#l122

> Also curious why oVirt does this assignment, but
> libvirt does not.

When we start a VM, we want it to see its allocated MAC address,
regardless of the specific VF that was assigned to it. It would surprise
me if virt-manager does not do that, hence I'd love to see the
element that it creates for the VF.
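
For reference, at the libvirt level the oVirt side boils down to an
<interface type='hostdev'> carrying an explicit <mac>; libvirt then
programs that address onto the VF through the PF before handing the
device to the guest. A minimal sketch, where the PCI address is a made-up
placeholder and the domain name and MAC are taken from your dumps:

# Sketch: attach a VF as <interface type='hostdev'> with an explicit MAC.
import libvirt

VF_XML = """
<interface type='hostdev' managed='yes'>
  <mac address='00:1a:4a:16:01:52'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x0'/>
  </source>
</interface>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('macvlan-1')

# With <mac> present, libvirt writes that address onto the VF (it shows up
# as "vf N MAC ..." on the PF). A plain <hostdev> PCI passthrough, which is
# what virt-manager seems to generate here, leaves the VF MAC alone.
dom.attachDeviceFlags(VF_XML,
                      libvirt.VIR_DOMAIN_AFFECT_LIVE |
                      libvirt.VIR_DOMAIN_AFFECT_CONFIG)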

Regards,
Dan.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users