[ovirt-users] New VDSM and QEmu-KVM versions available for testing

2015-07-02 Thread Sandro Bonazzola
Hi,
the following packages from oVirt 3.5.4 RC1[1] have been pushed to CentOS Virt 
SIG testing repositories:
- qemu-kvm-ev-2.1.2-23.el7_1.4.1
- vdsm-4.16.21-1.el7

You're welcome to test them[2] and provide feedback.

[1] http://www.ovirt.org/OVirt_3.5.4_Release_Notes
[2] http://wiki.centos.org/HowTos/oVirt

Thanks,
-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how can i setup hosted engine on the ovirt-node-3.5-201504280933

2015-07-02 Thread Sandro Bonazzola
On 02/07/2015 05:45, Zhong Qiang wrote:
> hey,
>   I tried two cases.
> ##
>   case 1:
> installed 
> ovirt-node-iso-3.5-0.999.201506302311.el7.centos.iso or 
> ovirt-node-iso-3.5-0.999.201507012312.el7.centos.iso ,  and setup
> hosted-engine .
> got error message below:
> @@@
> [root@ovirthost01 Stor01]# hosted-engine --deploy
> [ INFO  ] Stage: Initializing
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>   Continuing will configure this host for serving as hypervisor and 
> create a VM where you have to install oVirt Engine afterwards.
>   Are you sure you want to continue? (Yes, No)[Yes]:
>   Configuration files: []
>   Log file: 
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150702021002-pa9xwr.log
>   Version: otopi-1.3.3_master 
> (otopi-1.3.3-0.0.master.20150617163218.gitfc9ccb8.el7)
> [ INFO  ] Hardware supports virtualization
> [ INFO  ] Stage: Environment packages setup
> [ INFO  ] Stage: Programs detection
> [ INFO  ] Stage: Environment setup
> [ INFO  ] Stage: Environment customization
> 
>   --== STORAGE CONFIGURATION ==--
> 
>   During customization use CTRL-D to abort.
>   Please specify the storage you would like to use (iscsi, nfs3, 
> nfs4)[nfs3]:
>   Please specify the full shared storage connection path to use 
> (example: host:/path): 10.10.19.99:/volume1/nfsstor
>   The specified storage location already contains a data domain. Is 
> this an additional host setup (Yes, No)[Yes]?
> [ INFO  ] Installing on additional host
>   Please specify the Host ID [Must be integer, default: 2]:
>   Please provide storage domain name. [hosted_storage]: hostedStor
> [ ERROR ] Failed to execute stage 'Environment customization': Storage domain 
> name cannot be empty. It can only consist of alphanumeric characters
> (that is, letters, numbers, and signs "-" and "_"). All other characters are 
> not valid in the name.

Can you please provide 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150702021002-pa9xwr.log?



> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file 
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150702021034.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> 
> 
> #
> case 2:
>  installed ovirt-node-iso-3.5-0.999.201504280931.el7.centos.iso ,  
> and setup hosted-engine .
> got error message below:
> 
> [root@ovirthost01 ~]# hosted-engine --deploy
> [ INFO  ] Stage: Initializing
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>   Continuing will configure this host for serving as hypervisor and 
> create a VM where you have to install oVirt Engine afterwards.
>   Are you sure you want to continue? (Yes, No)[Yes]:
>   Configuration files: []
>   Log file: 
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150702022914-83zr2r.log
>   Version: otopi-1.3.1 (otopi-1.3.1-1.el7)
> [ INFO  ] Hardware supports virtualization
> [ INFO  ] Stage: Environment packages setup
> [ INFO  ] Stage: Programs detection
> [ INFO  ] Stage: Environment setup
> [ INFO  ] Stage: Environment customization
> 
>   --== STORAGE CONFIGURATION ==--
> 
>   During customization use CTRL-D to abort.
>   Please specify the storage you would like to use (iscsi, nfs3, 
> nfs4)[nfs3]:
>   Please specify the full shared storage connection path to use 
> (example: host:/path): 10.10.19.99:/volume1/nfsstor
>   The specified storage location already contains a data domain. Is 
> this an additional host setup (Yes, No)[Yes]? No
> [ INFO  ] Installing on first host
>   Local storage datacenter name is an internal name and currently 
> will not be shown in engine's admin UI.
>   Please enter local datacenter name [hosted_datacenter]:
> 
>   --== SYSTEM CONFIGURATION ==--
> 
> 
>   --== NETWORK CONFIGURATION ==--
> 
>   Please indicate a nic to set ovirtmgmt bridge on: (p4p1) [p4p1]:
>   iptables was detected on your computer, do you wish setup to 
> configure it? (Yes, No)[Yes]:
>   Please indicate a pingable gateway IP address [10.10.19.254]:
> 
>   --== VM CONFIGURATION ==--
> 
>   Please specify the device to boot the VM from (cdrom, disk, pxe) 
> [cdrom]: disk
>   Please specify path to OVF archive you would like to use [None]: 
> /data/Stor01/oVirt-Engine-Appliance-CentOS-x86_64-7-20150628.74.ova
> [ INFO  ] Checking OVF archive content (could take a few minutes depending on 
> archive size)
> [ INFO  ] Checking OVF XML content (could take a 

Re: [ovirt-users] Confused / Hosted-Engine won't start

2015-07-02 Thread Sandro Bonazzola
On 02/07/2015 02:36, Christopher Young wrote:
> I'm sure I have worked through this before, but I've been banging my head 
> against this one for a while now, and I think I'm too close to the issue.
> 
> Basically, my hosted-engine won't start anymore.  I do recall attempting to 
> migrate it to a new gluster-based NFS share recently, but I could have
> sworn that was successful and working.  I _think_ I have some sort of id 
> issue with storage/volume/whatever id's, but I need some help digging through
> it if someone would be so kind.
> 
> I have the following error in /var/log/libvirt/qemu/HostedEngine.log:
> 
> --
> 2015-07-02T00:01:13.080952Z qemu-kvm: -drive
> file=/var/run/vdsm/storage/4e3017eb-d062-4ad1-9df8-7057fcee412c/5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821-a432-4df6-8856-fdb14df260e3,if=none,id=drive-virtio-disk0,format=raw,serial=5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836,cache=none,werror=stop,rerror=stop,aio=threads:
> could not open disk image
> /var/run/vdsm/storage/4e3017eb-d062-4ad1-9df8-7057fcee412c/5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821-a432-4df6-8856-fdb14df260e3:
>  Could not
> refresh total sector count: Input/output error
> --


Please check SELinux: ausearch -m avc
It might be SELinux preventing access to the disk image.



> 
> I also am including a few command outputs that I'm sure might help:
> 
> --
> [root@orldc-dev-vnode02 4e3017eb-d062-4ad1-9df8-7057fcee412c]# ls -la 
> /var/run/vdsm/storage/4e3017eb-d062-4ad1-9df8-7057fcee412c/
> total 8
> drwxr-xr-x. 2 vdsm kvm  80 Jul  1 20:04 .
> drwxr-xr-x. 3 vdsm kvm  60 Jul  1 20:04 ..
> lrwxrwxrwx. 1 vdsm kvm 128 Jul  1 20:04 1d80a60c-8f26-4448-9460-2c7b00ff75bf 
> ->
> /rhev/data-center/mnt/ovirt-gluster-nfs:_engine/4e3017eb-d062-4ad1-9df8-7057fcee412c/images/1d80a60c-8f26-4448-9460-2c7b00ff75bf
> lrwxrwxrwx. 1 vdsm kvm 128 Jul  1 20:04 23ac8897-b0c7-41d6-a7de-19f46ed78400 
> ->
> /rhev/data-center/mnt/ovirt-gluster-nfs:_engine/4e3017eb-d062-4ad1-9df8-7057fcee412c/images/23ac8897-b0c7-41d6-a7de-19f46ed78400
> 
> --
> [root@orldc-dev-vnode02 4e3017eb-d062-4ad1-9df8-7057fcee412c]# cat 
> /etc/ovirt-hosted-engine/vm.conf
> vmId=6b7329f9-518a-4488-b1c4-2cd809f2f580
> memSize=5120
> display=vnc
> devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1,
> type:drive},specParams:{},readonly:true,deviceId:77924fc2-c5c9-408b-97d3-cd0b0d11a62c,path:/home/tmp/CentOS-6.6-x86_64-minimal.iso,device:cdrom,shared:false,type:disk}
> devices={index:0,iface:virtio,format:raw,poolID:----,volumeID:eeb2d821-a432-4df6-8856-fdb14df260e3,imageID:5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836,specParams:{},readonly:false,domainID:4e3017eb-d062-4ad1-9df8-7057fcee412c,optional:false,deviceId:5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836,address:{bus:0x00,
> slot:0x06, domain:0x, type:pci, 
> function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk,bootOrder:1}
> devices={device:scsi,model:virtio-scsi,type:controller}
> devices={nicModel:pv,macAddr:00:16:3e:0e:d0:68,linkActive:true,network:ovirtmgmt,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:f70ba622-6ac8-4c06-a005-0ebd940a15b2,address:{bus:0x00,
> slot:0x03, domain:0x, type:pci, 
> function:0x0},device:bridge,type:interface}
> devices={device:console,specParams:{},type:console,deviceId:98386e6c-ae56-4b6d-9bfb-c72bbd299ad1,alias:console0}
> vmName=HostedEngine
> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
> smp=2
> cpuType=Westmere
> emulatedMachine=pc
> --
> 
> [root@orldc-dev-vnode02 4e3017eb-d062-4ad1-9df8-7057fcee412c]# cat 
> /etc/ovirt-hosted-engine/hosted-engine.conf
> fqdn=orldc-dev-vengine01.***
> vm_disk_id=5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836
> vmid=6b7329f9-518a-4488-b1c4-2cd809f2f580
> storage=ovirt-gluster-nfs:/engine
> conf=/etc/ovirt-hosted-engine/vm.conf
> service_start_time=0
> host_id=1
> console=vnc
> domainType=nfs3
> spUUID=379cf161-d5b1-4c20-bb71-e3ca5d2ccd6b
> sdUUID=4e3017eb-d062-4ad1-9df8-7057fcee412c
> connectionUUID=0d1b50ac-cf3f-4cd7-90df-3c3a6d11a984
> ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
> ca_subject="C=EN, L=Test, O=Test, CN=Test"
> vdsm_use_ssl=true
> gateway=10.16.3.1
> bridge=ovirtmgmt
> metadata_volume_UUID=dd9f373c-d161-4fa0-aab1-3cb52305dba7
> metadata_image_UUID=23ac8897-b0c7-41d6-a7de-19f46ed78400
> lockspace_volume_UUID=d9bacbf6-c2f4-4f74-a91f-3a3a52f255bf
> lockspace_image_UUID=1d80a60c-8f26-4448-9460-2c7b00ff75bf
> 
> # The following are used only for iSCSI storage
> iqn=
> portal=
> user=
> password=
> port=
> --
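The suspected id problem can be checked mechanically: the UUIDs that hosted-engine.conf points at (vm_disk_id, vmid) should appear verbatim in the vm.conf it references. A sketch, assuming simple key=value files as quoted above (parse_conf and cross_check are hypothetical helpers, not oVirt code):

```python
def parse_conf(text):
    """Parse simple key=value lines like hosted-engine.conf (sketch)."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf

def cross_check(he_text, vm_text, keys=("vm_disk_id", "vmid")):
    """For each key, report whether its UUID from hosted-engine.conf also
    occurs in vm.conf. vm.conf repeats the 'devices=' key, so we search the
    raw text instead of parsing it into a dict."""
    he = parse_conf(he_text)
    return {k: bool(he.get(k)) and he[k] in vm_text for k in keys}
```

A False entry would point at the id mismatch. For the files quoted above both ids do match (5ead7b5d-... and 6b7329f9-... appear in both), which suggests looking at the storage path or links rather than the ids themselves.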
> 
> (mount output for the NFS share this should be running on, gluster-based):
> 
> ovirt-gluster-nfs:/engine on /rhev/data-center/mnt/ovirt-gluster-nfs:_engine 
> type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.16.3.30,mountvers=3,mountport=38465,mountproto=tcp,local_lock=all,ad

[ovirt-users] Dashboard - Page Not Found

2015-07-02 Thread Simon Barrett
I am running oVirt Engine Version: 3.5.3.1-1.el6 and get a "Page Not Found" 
error when I click on the Dashboards tab at the top right of the admin portal.

The Reports server is set up and working fine, and I can see "Cluster Dashboard", 
"Datacenter Dashboard", "System Dashboard" reports in "Webadmin Dashboards" 
when viewing them directly through the oVirt Engine reports web interface.

I couldn't find any logs that would assist in diagnosing this and couldn't find 
any other solutions.

Any suggestions as to how I fix this?

Many thanks,

Simon


Re: [ovirt-users] how can i setup hosted engine on the ovirt-node-3.5-201504280933

2015-07-02 Thread Sandro Bonazzola
On 02/07/2015 10:59, Zhong Qiang wrote:
> hey Sandro,
>this is my log:
> 
> 2015-07-02 15:51 GMT+08:00 Sandro Bonazzola:
> 
> On 02/07/2015 05:45, Zhong Qiang wrote:
> > hey,
> >   I tried two case.
> > ##
> >   case 1:
> > installed 
> ovirt-node-iso-3.5-0.999.201506302311.el7.centos.iso or 
> ovirt-node-iso-3.5-0.999.201507012312.el7.centos.iso ,  and setup
> > hosted-engine .
> > got error message below:
> > @@@
> > [root@ovirthost01 Stor01]# hosted-engine --deploy
> > [ INFO  ] Stage: Initializing
> > [ INFO  ] Generating a temporary VNC password.
> > [ INFO  ] Stage: Environment setup
> >   Continuing will configure this host for serving as hypervisor 
> and create a VM where you have to install oVirt Engine afterwards.
> >   Are you sure you want to continue? (Yes, No)[Yes]:
> >   Configuration files: []
> >   Log file: 
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150702021002-pa9xwr.log
> >   Version: otopi-1.3.3_master 
> (otopi-1.3.3-0.0.master.20150617163218.gitfc9ccb8.el7)
> > [ INFO  ] Hardware supports virtualization
> > [ INFO  ] Stage: Environment packages setup
> > [ INFO  ] Stage: Programs detection
> > [ INFO  ] Stage: Environment setup
> > [ INFO  ] Stage: Environment customization
> >
> >   --== STORAGE CONFIGURATION ==--
> >
> >   During customization use CTRL-D to abort.
> >   Please specify the storage you would like to use (iscsi, 
> nfs3, nfs4)[nfs3]:
> >   Please specify the full shared storage connection path to use 
> (example: host:/path): 10.10.19.99:/volume1/nfsstor
> >   The specified storage location already contains a data 
> domain. Is this an additional host setup (Yes, No)[Yes]?
> > [ INFO  ] Installing on additional host
> >   Please specify the Host ID [Must be integer, default: 2]:
> >   Please provide storage domain name. [hosted_storage]: 
> hostedStor
> > [ ERROR ] Failed to execute stage 'Environment customization': Storage 
> domain name cannot be empty. It can only consist of alphanumeric characters
> > (that is, letters, numbers, and signs "-" and "_"). All other 
> characters are not valid in the name.
> 
> Can you please provide 
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150702021002-pa9xwr.log?

This is weird. The storage domain recorded is:
OVEHOSTED_STORAGE/storageDomainName=str:'hostedStor'

The validation should fail only if the name matches _RE_NOT_ALPHANUMERIC =
re.compile(r"[^-\w]"), so 'hostedStor' should have been considered valid.

Can you please provide the output of 'locale'? Maybe it's a locale issue.
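The quoted validation can be exercised locally. A minimal sketch reproducing the pattern above (is_valid_domain_name is a hypothetical wrapper, not the actual otopi function):

```python
import re

# Pattern quoted above: a name is rejected if any character falls outside
# letters, digits, "-" and "_".
_RE_NOT_ALPHANUMERIC = re.compile(r"[^-\w]")

def is_valid_domain_name(name):
    """Non-empty and containing no character matched by the pattern."""
    return bool(name) and _RE_NOT_ALPHANUMERIC.search(name) is None

print(is_valid_domain_name("hostedStor"))  # True
print(is_valid_domain_name("bad name!"))   # False: space and '!' are rejected
```

Note that the effective character class of \w depends on how the pattern is compiled: in Python 3 str patterns it is Unicode-aware by default, while in Python 2 it is ASCII-only unless re.LOCALE or re.UNICODE is passed, which is one way environment differences could change the outcome of such a check.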


> 
> 
> 
> > [ INFO  ] Stage: Clean up
> > [ INFO  ] Generating answer file 
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150702021034.conf'
> > [ INFO  ] Stage: Pre-termination
> > [ INFO  ] Stage: Termination
> > 
> >
> > #
> > case 2:
> >  installed ovirt-node-iso-3.5-0.999.201504280931.el7.centos.iso 
> ,  and setup hosted-engine .
> > got error message below:
> > 
> > [root@ovirthost01 ~]# hosted-engine --deploy
> > [ INFO  ] Stage: Initializing
> > [ INFO  ] Generating a temporary VNC password.
> > [ INFO  ] Stage: Environment setup
> >   Continuing will configure this host for serving as hypervisor 
> and create a VM where you have to install oVirt Engine afterwards.
> >   Are you sure you want to continue? (Yes, No)[Yes]:
> >   Configuration files: []
> >   Log file: 
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150702022914-83zr2r.log
> >   Version: otopi-1.3.1 (otopi-1.3.1-1.el7)
> > [ INFO  ] Hardware supports virtualization
> > [ INFO  ] Stage: Environment packages setup
> > [ INFO  ] Stage: Programs detection
> > [ INFO  ] Stage: Environment setup
> > [ INFO  ] Stage: Environment customization
> >
> >   --== STORAGE CONFIGURATION ==--
> >
> >   During customization use CTRL-D to abort.
> >   Please specify the storage you would like to use (iscsi, 
> nfs3, nfs4)[nfs3]:
> >   Please specify the full shared storage connection path to use 
> (example: host:/path): 10.10.19.99:/volume1/nfsstor
> >   The specified storage location already contains a data 
> domain. Is this an additional host setup (Yes, No)[Yes]? No
> > [ INFO  ] Installing on first host
> >   Local storage datacenter name is an internal name and 
> currently will not be shown in engine's admin UI.
> >   Pl

Re: [ovirt-users] Dashboard - Page Not Found

2015-07-02 Thread Yaniv Dary

Please send the engine.log and jasperserver.log.
Also make sure that your host name is fully resolvable.


Thanks!

On 07/02/2015 12:01 PM, Simon Barrett wrote:


I am running oVirt Engine Version: 3.5.3.1-1.el6 and get a “Page Not 
Found” error when I click on the Dashboards tab at the top right of 
the admin portal.


The Reports server is setup and working fine and I can see “Cluster 
Dashboard”, “Datacenter Dashboard”, “System Dashboard” reports in 
“Webadmin Dashboards” when viewing them directly through the oVirt 
Engine reports web interface.


I couldn’t find any logs that would assist in diagnosing this and 
couldn’t find any other solutions.


Any suggestions as to how I fix this?

Many thanks,

Simon





--
Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
  8272306
Email: yd...@redhat.com
IRC : ydary



Re: [ovirt-users] Dashboard - Page Not Found

2015-07-02 Thread Yaniv Dary

Please also attach server logs for both reports and engine service.

On 07/02/2015 12:38 PM, Simon Barrett wrote:


Both are attached and the hostname is fully resolvable.

Thanks

*From:*Yaniv Dary [mailto:yd...@redhat.com]
*Sent:* 02 July 2015 10:21
*To:* Simon Barrett; users@ovirt.org
*Subject:* Re: [ovirt-users] Dashboard - Page Not Found

please send the engine.log and jasperserver.log.
Also make sure that your host name is fully resolvable.


Thanks!

On 07/02/2015 12:01 PM, Simon Barrett wrote:

I am running oVirt Engine Version: 3.5.3.1-1.el6 and get a “Page
Not Found” error when I click on the Dashboards tab at the top
right of the admin portal.

The Reports server is setup and working fine and I can see
“Cluster Dashboard”, “Datacenter Dashboard”, “System Dashboard”
reports in “Webadmin Dashboards” when viewing them directly
through the oVirt Engine reports web interface.

I couldn’t find any logs that would assist in diagnosing this and
couldn’t find any other solutions.

Any suggestions as to how I fix this?

Many thanks,

Simon










Re: [ovirt-users] how can i setup hosted engine on the ovirt-node-3.5-201504280933

2015-07-02 Thread Zhong Qiang
Hey Sandro,
   Thanks for your reply.

[root@ovirthost01 ~]# locale
LANG=en_US.utf8
LC_CTYPE="en_US.utf8"
LC_NUMERIC="en_US.utf8"
LC_TIME="en_US.utf8"
LC_COLLATE="en_US.utf8"
LC_MONETARY="en_US.utf8"
LC_MESSAGES="en_US.utf8"
LC_PAPER="en_US.utf8"
LC_NAME="en_US.utf8"
LC_ADDRESS="en_US.utf8"
LC_TELEPHONE="en_US.utf8"
LC_MEASUREMENT="en_US.utf8"
LC_IDENTIFICATION="en_US.utf8"
LC_ALL=


2015-07-02 17:17 GMT+08:00 Sandro Bonazzola:

> On 02/07/2015 10:59, Zhong Qiang wrote:
> > hey Sandro,
> >this is my log:
> >
> > 2015-07-02 15:51 GMT+08:00 Sandro Bonazzola:
> >
> > On 02/07/2015 05:45, Zhong Qiang wrote:
> > > hey,
> > >   I tried two case.
> > > ##
> > >   case 1:
> > > installed
> ovirt-node-iso-3.5-0.999.201506302311.el7.centos.iso or
> ovirt-node-iso-3.5-0.999.201507012312.el7.centos.iso ,  and setup
> > > hosted-engine .
> > > got error message below:
> > > @@@
> > > [root@ovirthost01 Stor01]# hosted-engine --deploy
> > > [ INFO  ] Stage: Initializing
> > > [ INFO  ] Generating a temporary VNC password.
> > > [ INFO  ] Stage: Environment setup
> > >   Continuing will configure this host for serving as
> hypervisor and create a VM where you have to install oVirt Engine
> afterwards.
> > >   Are you sure you want to continue? (Yes, No)[Yes]:
> > >   Configuration files: []
> > >   Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150702021002-pa9xwr.log
> > >   Version: otopi-1.3.3_master
> (otopi-1.3.3-0.0.master.20150617163218.gitfc9ccb8.el7)
> > > [ INFO  ] Hardware supports virtualization
> > > [ INFO  ] Stage: Environment packages setup
> > > [ INFO  ] Stage: Programs detection
> > > [ INFO  ] Stage: Environment setup
> > > [ INFO  ] Stage: Environment customization
> > >
> > >   --== STORAGE CONFIGURATION ==--
> > >
> > >   During customization use CTRL-D to abort.
> > >   Please specify the storage you would like to use (iscsi,
> nfs3, nfs4)[nfs3]:
> > >   Please specify the full shared storage connection path
> to use (example: host:/path): 10.10.19.99:/volume1/nfsstor
> > >   The specified storage location already contains a data
> domain. Is this an additional host setup (Yes, No)[Yes]?
> > > [ INFO  ] Installing on additional host
> > >   Please specify the Host ID [Must be integer, default: 2]:
> > >   Please provide storage domain name. [hosted_storage]:
> hostedStor
> > > [ ERROR ] Failed to execute stage 'Environment customization':
> Storage domain name cannot be empty. It can only consist of alphanumeric
> characters
> > > (that is, letters, numbers, and signs "-" and "_"). All other
> characters are not valid in the name.
> >
> > Can you please provide
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150702021002-pa9xwr.log?
>
> This is weird.
> the storage domain recorded is:
> OVEHOSTED_STORAGE/storageDomainName=str:'hostedStor'
>
> and the validation should fail only if it matches _RE_NOT_ALPHANUMERIC =
> re.compile(r"[^-\w]")
>
> so 'hostedStor' should have been considered valid.
> can you please provide 'locale' output?
> maybe it's a locale issue.
>
>
> >
> >
> >
> > > [ INFO  ] Stage: Clean up
> > > [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150702021034.conf'
> > > [ INFO  ] Stage: Pre-termination
> > > [ INFO  ] Stage: Termination
> > > 
> > >
> > > #
> > > case 2:
> > >  installed
> ovirt-node-iso-3.5-0.999.201504280931.el7.centos.iso ,  and setup
> hosted-engine .
> > > got error message below:
> > > 
> > > [root@ovirthost01 ~]# hosted-engine --deploy
> > > [ INFO  ] Stage: Initializing
> > > [ INFO  ] Generating a temporary VNC password.
> > > [ INFO  ] Stage: Environment setup
> > >   Continuing will configure this host for serving as
> hypervisor and create a VM where you have to install oVirt Engine
> afterwards.
> > >   Are you sure you want to continue? (Yes, No)[Yes]:
> > >   Configuration files: []
> > >   Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150702022914-83zr2r.log
> > >   Version: otopi-1.3.1 (otopi-1.3.1-1.el7)
> > > [ INFO  ] Hardware supports virtualization
> > > [ INFO  ] Stage: Environment packages setup
> > > [ INFO  ] Stage: Programs detection
> > > [ INFO  ] Stage: Environment setup
> > > [ INFO  ] Stage: Environment customization
> > >
> > >   --== STORAGE CONFIGURATION ==--
> > >
> > >   Duri

Re: [ovirt-users] Dashboard - Page Not Found

2015-07-02 Thread Simon Barrett
Please see attached.

Thanks

From: Yaniv Dary [mailto:yd...@redhat.com]
Sent: 02 July 2015 10:58
To: Simon Barrett; users@ovirt.org
Subject: Re: [ovirt-users] Dashboard - Page Not Found

Please also attach server logs for both reports and engine service.
On 07/02/2015 12:38 PM, Simon Barrett wrote:
Both are attached and the hostname is fully resolvable.

Thanks

From: Yaniv Dary [mailto:yd...@redhat.com]
Sent: 02 July 2015 10:21
To: Simon Barrett; users@ovirt.org
Subject: Re: [ovirt-users] Dashboard - Page Not Found

please send the engine.log and jasperserver.log.
Also make sure that your host name is fully resolvable.


Thanks!
On 07/02/2015 12:01 PM, Simon Barrett wrote:
I am running oVirt Engine Version: 3.5.3.1-1.el6 and get a "Page Not Found" 
error when I click on the Dashboards tab at the top right of the admin portal.

The Reports server is setup and working fine and I can see "Cluster Dashboard", 
"Datacenter Dashboard", "System Dashboard" reports in "Webadmin Dashboards" 
when viewing them directly through the oVirt Engine reports web interface.

I couldn't find any logs that would assist in diagnosing this and couldn't find 
any other solutions.

Any suggestions as to how I fix this?

Many thanks,

Simon











ovirt-engine-reports-server.log.gz
Description: ovirt-engine-reports-server.log.gz


ovirt-engine-server.log.gz
Description: ovirt-engine-server.log.gz


Re: [ovirt-users] how can i setup hosted engine on the ovirt-node-3.5-201504280933

2015-07-02 Thread Sandro Bonazzola
On 02/07/2015 12:44, Zhong Qiang wrote:
> Hey Sandro,
>Thanks for your reply.
> 
> [root@ovirthost01 ~]# locale
> LANG=en_US.utf8
> LC_CTYPE="en_US.utf8"
> LC_NUMERIC="en_US.utf8"
> LC_TIME="en_US.utf8"
> LC_COLLATE="en_US.utf8"
> LC_MONETARY="en_US.utf8"
> LC_MESSAGES="en_US.utf8"
> LC_PAPER="en_US.utf8"
> LC_NAME="en_US.utf8"
> LC_ADDRESS="en_US.utf8"
> LC_TELEPHONE="en_US.utf8"
> LC_MEASUREMENT="en_US.utf8"
> LC_IDENTIFICATION="en_US.utf8"
> LC_ALL=
> 

I tried to reproduce your issue on a clean non-node system, and it works fine.

Fabian, are you able to reproduce on node?

> 
> 2015-07-02 17:17 GMT+08:00 Sandro Bonazzola:
> 
> On 02/07/2015 10:59, Zhong Qiang wrote:
> > hey Sandro,
> >this is my log:
> >
> > 2015-07-02 15:51 GMT+08:00 Sandro Bonazzola:
> >
> > On 02/07/2015 05:45, Zhong Qiang wrote:
> > > hey,
> > >   I tried two case.
> > > ##
> > >   case 1:
> > > installed 
> ovirt-node-iso-3.5-0.999.201506302311.el7.centos.iso or 
> ovirt-node-iso-3.5-0.999.201507012312.el7.centos.iso , 
> and setup
> > > hosted-engine .
> > > got error message below:
> > > @@@
> > > [root@ovirthost01 Stor01]# hosted-engine --deploy
> > > [ INFO  ] Stage: Initializing
> > > [ INFO  ] Generating a temporary VNC password.
> > > [ INFO  ] Stage: Environment setup
> > >   Continuing will configure this host for serving as 
> hypervisor and create a VM where you have to install oVirt Engine afterwards.
> > >   Are you sure you want to continue? (Yes, No)[Yes]:
> > >   Configuration files: []
> > >   Log file: 
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150702021002-pa9xwr.log
> > >   Version: otopi-1.3.3_master 
> (otopi-1.3.3-0.0.master.20150617163218.gitfc9ccb8.el7)
> > > [ INFO  ] Hardware supports virtualization
> > > [ INFO  ] Stage: Environment packages setup
> > > [ INFO  ] Stage: Programs detection
> > > [ INFO  ] Stage: Environment setup
> > > [ INFO  ] Stage: Environment customization
> > >
> > >   --== STORAGE CONFIGURATION ==--
> > >
> > >   During customization use CTRL-D to abort.
> > >   Please specify the storage you would like to use 
> (iscsi, nfs3, nfs4)[nfs3]:
> > >   Please specify the full shared storage connection path 
> to use (example: host:/path): 10.10.19.99:/volume1/nfsstor
> > >   The specified storage location already contains a data 
> domain. Is this an additional host setup (Yes, No)[Yes]?
> > > [ INFO  ] Installing on additional host
> > >   Please specify the Host ID [Must be integer, default: 
> 2]:
> > >   Please provide storage domain name. [hosted_storage]: 
> hostedStor
> > > [ ERROR ] Failed to execute stage 'Environment customization': 
> Storage domain name cannot be empty. It can only consist of alphanumeric
> characters
> > > (that is, letters, numbers, and signs "-" and "_"). All other 
> characters are not valid in the name.
> >
> > Can you please provide 
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150702021002-pa9xwr.log?
> 
> This is weird.
> the storage domain recorded is:
> OVEHOSTED_STORAGE/storageDomainName=str:'hostedStor'
> 
> and the validation should fail only if it matches _RE_NOT_ALPHANUMERIC = 
> re.compile(r"[^-\w]")
> 
> so 'hostedStor' should have been considered valid.
> can you please provide 'locale' output?
> maybe it's a locale issue.
> 
> 
> >
> >
> >
> > > [ INFO  ] Stage: Clean up
> > > [ INFO  ] Generating answer file 
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150702021034.conf'
> > > [ INFO  ] Stage: Pre-termination
> > > [ INFO  ] Stage: Termination
> > > 
> > >
> > > #
> > > case 2:
> > >  installed 
> ovirt-node-iso-3.5-0.999.201504280931.el7.centos.iso ,  and setup 
> hosted-engine .
> > > got error message below:
> > > 
> > > [root@ovirthost01 ~]# hosted-engine --deploy
> > > [ INFO  ] Stage: Initializing
> > > [ INFO  ] Generating a temporary VNC password.
> > > [ INFO  ] Stage: Environment setup
> > >   Continuing will configure this host for serving as 
> hypervisor and create a VM where you have to install oVirt Engine afterwards.
> > >   Are yo

Re: [ovirt-users] how can i setup hosted engine on the ovirt-node-3.5-201504280933

2015-07-02 Thread Sandro Bonazzola
On 02/07/2015 12:56, Sandro Bonazzola wrote:
> On 02/07/2015 12:44, Zhong Qiang wrote:
>> Hey Sandro,
>>Thanks for your reply.
>>
>> [root@ovirthost01 ~]# locale
>> LANG=en_US.utf8
>> LC_CTYPE="en_US.utf8"
>> LC_NUMERIC="en_US.utf8"
>> LC_TIME="en_US.utf8"
>> LC_COLLATE="en_US.utf8"
>> LC_MONETARY="en_US.utf8"
>> LC_MESSAGES="en_US.utf8"
>> LC_PAPER="en_US.utf8"
>> LC_NAME="en_US.utf8"
>> LC_ADDRESS="en_US.utf8"
>> LC_TELEPHONE="en_US.utf8"
>> LC_MEASUREMENT="en_US.utf8"
>> LC_IDENTIFICATION="en_US.utf8"
>> LC_ALL=
>>
> 
> tried to reproduce your issue on a clean non-node system and it works fine.

The issue has been found; a patch has been pushed: https://gerrit.ovirt.org/43147



> 
> Fabian, are you able to reproduce on node?
> 
>>
>> 2015-07-02 17:17 GMT+08:00 Sandro Bonazzola:
>>
>> On 02/07/2015 10:59, Zhong Qiang wrote:
>> > hey Sandro,
>> >this is my log:
>> >
>> > 2015-07-02 15:51 GMT+08:00 Sandro Bonazzola:
>> >
>> > On 02/07/2015 05:45, Zhong Qiang wrote:
>> > > hey,
>> > >   I tried two case.
>> > > ##
>> > >   case 1:
>> > > installed 
>> ovirt-node-iso-3.5-0.999.201506302311.el7.centos.iso or 
>> ovirt-node-iso-3.5-0.999.201507012312.el7.centos.iso , 
>> and setup
>> > > hosted-engine .
>> > > got error message below:
>> > > @@@
>> > > [root@ovirthost01 Stor01]# hosted-engine --deploy
>> > > [ INFO  ] Stage: Initializing
>> > > [ INFO  ] Generating a temporary VNC password.
>> > > [ INFO  ] Stage: Environment setup
>> > >   Continuing will configure this host for serving as 
>> hypervisor and create a VM where you have to install oVirt Engine afterwards.
>> > >   Are you sure you want to continue? (Yes, No)[Yes]:
>> > >   Configuration files: []
>> > >   Log file: 
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150702021002-pa9xwr.log
>> > >   Version: otopi-1.3.3_master 
>> (otopi-1.3.3-0.0.master.20150617163218.gitfc9ccb8.el7)
>> > > [ INFO  ] Hardware supports virtualization
>> > > [ INFO  ] Stage: Environment packages setup
>> > > [ INFO  ] Stage: Programs detection
>> > > [ INFO  ] Stage: Environment setup
>> > > [ INFO  ] Stage: Environment customization
>> > >
>> > >   --== STORAGE CONFIGURATION ==--
>> > >
>> > >   During customization use CTRL-D to abort.
>> > >   Please specify the storage you would like to use 
>> (iscsi, nfs3, nfs4)[nfs3]:
>> > >   Please specify the full shared storage connection path 
>> to use (example: host:/path): 10.10.19.99:/volume1/nfsstor
>> > >   The specified storage location already contains a data 
>> domain. Is this an additional host setup (Yes, No)[Yes]?
>> > > [ INFO  ] Installing on additional host
>> > >   Please specify the Host ID [Must be integer, default: 
>> 2]:
>> > >   Please provide storage domain name. [hosted_storage]: 
>> hostedStor
>> > > [ ERROR ] Failed to execute stage 'Environment customization': 
>> Storage domain name cannot be empty. It can only consist of alphanumeric
>> characters
>> > > (that is, letters, numbers, and signs "-" and "_"). All other 
>> characters are not valid in the name.
>> >
>> > Can you please provide 
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150702021002-pa9xwr.log?
>>
>> This is weird.
>> the storage domain recorded is:
>> OVEHOSTED_STORAGE/storageDomainName=str:'hostedStor'
>>
>> and the validation should fail only if it matches _RE_NOT_ALPHANUMERIC = 
>> re.compile(r"[^-\w]")
>>
>> So 'hostedStor' should have been considered valid.
>> Can you please provide the 'locale' output?
>> Maybe it's a locale issue.
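
For reference, the validation discussed above can be reproduced standalone. Only the regex is quoted from the setup code; the helper function name below is ours:

```python
import re

# Pattern quoted above from the setup code: matches any character that is
# NOT alphanumeric, "_" or "-".
_RE_NOT_ALPHANUMERIC = re.compile(r"[^-\w]")

def storage_domain_name_valid(name):
    # A name is valid when it is non-empty and contains no forbidden character.
    return bool(name) and _RE_NOT_ALPHANUMERIC.search(name) is None

print(storage_domain_name_valid("hostedStor"))   # → True
print(storage_domain_name_valid("hosted Stor"))  # → False (contains a space)
print(storage_domain_name_valid(""))             # → False (empty)
```

Since the regex only rejects characters outside `[-\w]`, 'hostedStor' passes, which supports the suspicion that the failure comes from somewhere else, e.g. the locale.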
>>
>>
>> >
>> >
>> >
>> > > [ INFO  ] Stage: Clean up
>> > > [ INFO  ] Generating answer file 
>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150702021034.conf'
>> > > [ INFO  ] Stage: Pre-termination
>> > > [ INFO  ] Stage: Termination
>> > > 
>> > >
>> > > #
>> > > case 2:
>> > >  installed 
>> ovirt-node-iso-3.5-0.999.201504280931.el7.centos.iso ,  and setup 
>> hosted-engine .
>> > > got error message below:
>> > > 
>> > > [root@ovirthost01 ~]# hosted-engine --deploy
>> > > [ INFO  ] Stage: Initializing
>> > > [ INFO  ] Generating a temporary VNC passw

Re: [ovirt-users] Confused / Hosted-Engine won't start

2015-07-02 Thread Christopher Young
[root@orldc-dev-vnode02 ~]# getenforce 
Permissive

It looks like it isn't SELinux.

On Thu, 2015-07-02 at 09:53 +0200, Sandro Bonazzola wrote:
> On 02/07/2015 02:36, Christopher Young wrote:
> > I'm sure I have worked through this before, but I've been banging 
> > my head against this one for a while now, and I think I'm too close 
> > to the issue.
> > 
> > Basically, my hosted-engine won't start anymore.  I do recall 
> > attempting to migrate it to a new gluster-based NFS share recently, 
> > but I could have
> > sworn that was successful and working.  I _think_ I have some sort 
> > of id issue with storage/volume/whatever id's, but I need some help 
> > digging through
> > it if someone would be so kind.
> > 
> > I have the following error in 
> > /var/log/libvirt/qemu/HostedEngine.log:
> > 
> > --
> > 2015-07-02T00:01:13.080952Z qemu-kvm: -drive
> > file=/var/run/vdsm/storage/4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821-a432
> > -4df6-8856-fdb14df260e3,if=none,id=drive-virtio
> > -disk0,format=raw,serial=5ead7b5d-50e8-4d6c-a0e5
> > -bbe6d93dd836,cache=none,werror=stop,rerror=stop,aio=threads:
> > could not open disk image
> > /var/run/vdsm/storage/4e3017eb-d062-4ad1-9df8-7057fcee412c/5ead7b5d
> > -50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821-a432-4df6-8856-fdb14df260e3: 
> > Could not
> > refresh total sector count: Input/output error
> > --
> 
> 
> Please check selinux: ausearch -m avc
> it might be selinux preventing access to the disk image.
> 
> 
> 
> > 
> > I also am including a few command outputs that I'm sure might help:
> > 
> > --
> > [root@orldc-dev-vnode02 4e3017eb-d062-4ad1-9df8-7057fcee412c]# ls 
> > -la /var/run/vdsm/storage/4e3017eb-d062-4ad1-9df8-7057fcee412c/
> > total 8
> > drwxr-xr-x. 2 vdsm kvm  80 Jul  1 20:04 .
> > drwxr-xr-x. 3 vdsm kvm  60 Jul  1 20:04 ..
> > lrwxrwxrwx. 1 vdsm kvm 128 Jul  1 20:04 1d80a60c-8f26-4448-9460
> > -2c7b00ff75bf ->
> > /rhev/data-center/mnt/ovirt-gluster-nfs:_engine/4e3017eb-d062-4ad1
> > -9df8-7057fcee412c/images/1d80a60c-8f26-4448-9460-2c7b00ff75bf
> > lrwxrwxrwx. 1 vdsm kvm 128 Jul  1 20:04 23ac8897-b0c7-41d6-a7de
> > -19f46ed78400 ->
> > /rhev/data-center/mnt/ovirt-gluster-nfs:_engine/4e3017eb-d062-4ad1
> > -9df8-7057fcee412c/images/23ac8897-b0c7-41d6-a7de-19f46ed78400
> > 
> > --
> > [root@orldc-dev-vnode02 4e3017eb-d062-4ad1-9df8-7057fcee412c]# cat 
> > /etc/ovirt-hosted-engine/vm.conf
> > vmId=6b7329f9-518a-4488-b1c4-2cd809f2f580
> > memSize=5120
> > display=vnc
> > devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, 
> > bus:1,
> > type:drive},specParams:{},readonly:true,deviceId:77924fc2-c5c9-408b
> > -97d3-cd0b0d11a62c,path:/home/tmp/CentOS-6.6-x86_64
> > -minimal.iso,device:cdrom,shared:false,type:disk}
> > devices={index:0,iface:virtio,format:raw,poolID:--
> > --,volumeID:eeb2d821-a432-4df6-8856
> > -fdb14df260e3,imageID:5ead7b5d-50e8-4d6c-a0e5
> > -bbe6d93dd836,specParams:{},readonly:false,domainID:4e3017eb-d062
> > -4ad1-9df8-7057fcee412c,optional:false,deviceId:5ead7b5d-50e8-4d6c
> > -a0e5-bbe6d93dd836,address:{bus:0x00,
> > slot:0x06, domain:0x, type:pci, 
> > function:0x0},device:disk,shared:exclusive,propagateErrors:off,type
> > :disk,bootOrder:1}
> > devices={device:scsi,model:virtio-scsi,type:controller}
> > devices={nicModel:pv,macAddr:00:16:3e:0e:d0:68,linkActive:true,netw
> > ork:ovirtmgmt,filter:vdsm-no-mac
> > -spoofing,specParams:{},deviceId:f70ba622-6ac8-4c06-a005
> > -0ebd940a15b2,address:{bus:0x00,
> > slot:0x03, domain:0x, type:pci, 
> > function:0x0},device:bridge,type:interface}
> > devices={device:console,specParams:{},type:console,deviceId:98386e6
> > c-ae56-4b6d-9bfb-c72bbd299ad1,alias:console0}
> > vmName=HostedEngine
> > spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecor
> > d,ssmartcard,susbredir
> > smp=2
> > cpuType=Westmere
> > emulatedMachine=pc
> > --
> > 
> > [root@orldc-dev-vnode02 4e3017eb-d062-4ad1-9df8-7057fcee412c]# cat 
> > /etc/ovirt-hosted-engine/hosted-engine.conf
> > fqdn=orldc-dev-vengine01.***
> > vm_disk_id=5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836
> > vmid=6b7329f9-518a-4488-b1c4-2cd809f2f580
> > storage=ovirt-gluster-nfs:/engine
> > conf=/etc/ovirt-hosted-engine/vm.conf
> > service_start_time=0
> > host_id=1
> > console=vnc
> > domainType=nfs3
> > spUUID=379cf161-d5b1-4c20-bb71-e3ca5d2ccd6b
> > sdUUID=4e3017eb-d062-4ad1-9df8-7057fcee412c
> > connectionUUID=0d1b50ac-cf3f-4cd7-90df-3c3a6d11a984
> > ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
> > ca_subject="C=EN, L=Test, O=Test, CN=Test"
> > vdsm_use_ssl=true
> > gateway=10.16.3.1
> > bridge=ovirtmgmt
> > metadata_volume_UUID=dd9f373c-d161-4fa0-aab1-3cb52305dba7
> > metadata_image_UUID=23ac8897-b0c7-41d6-a7de-19f46ed78400
> > lockspace_volume_UUID=d9bacbf6-c2f4-4f74-a91f-3a3a52f255bf
> > lockspace_image_UUID=1d80a60c-8f26-4448-9460-2c7b00ff75bf
> > 
> > # The following are used only for 

Re: [ovirt-users] Confused / Hosted-Engine won't start

2015-07-02 Thread Sandro Bonazzola
On 02/07/2015 16:04, Christopher Young wrote:
> [root@orldc-dev-vnode02 ~]# getenforce 
> Permissive
> 
> It looks like it isn't SELinux.

Checked also on the storage server ovirt-gluster-nfs?


> 
> On Thu, 2015-07-02 at 09:53 +0200, Sandro Bonazzola wrote:
>> On 02/07/2015 02:36, Christopher Young wrote:
>>> I'm sure I have worked through this before, but I've been banging 
>>> my head against this one for a while now, and I think I'm too close 
>>> to the issue.
>>>
>>> Basically, my hosted-engine won't start anymore.  I do recall 
>>> attempting to migrate it to a new gluster-based NFS share recently, 
>>> but I could have
>>> sworn that was successful and working.  I _think_ I have some sort 
>>> of id issue with storage/volume/whatever id's, but I need some help 
>>> digging through
>>> it if someone would be so kind.
>>>
>>> I have the following error in 
>>> /var/log/libvirt/qemu/HostedEngine.log:
>>>
>>> --
>>> 2015-07-02T00:01:13.080952Z qemu-kvm: -drive
>>> file=/var/run/vdsm/storage/4e3017eb-d062-4ad1-9df8
>>> -7057fcee412c/5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821-a432
>>> -4df6-8856-fdb14df260e3,if=none,id=drive-virtio
>>> -disk0,format=raw,serial=5ead7b5d-50e8-4d6c-a0e5
>>> -bbe6d93dd836,cache=none,werror=stop,rerror=stop,aio=threads:
>>> could not open disk image
>>> /var/run/vdsm/storage/4e3017eb-d062-4ad1-9df8-7057fcee412c/5ead7b5d
>>> -50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821-a432-4df6-8856-fdb14df260e3: 
>>> Could not
>>> refresh total sector count: Input/output error
>>> --
>>
>>
>> Please check selinux: ausearch -m avc
>> it might be selinux preventing access to the disk image.
>>
>>
>>
>>>
>>> I also am including a few command outputs that I'm sure might help:
>>>
>>> --
>>> [root@orldc-dev-vnode02 4e3017eb-d062-4ad1-9df8-7057fcee412c]# ls 
>>> -la /var/run/vdsm/storage/4e3017eb-d062-4ad1-9df8-7057fcee412c/
>>> total 8
>>> drwxr-xr-x. 2 vdsm kvm  80 Jul  1 20:04 .
>>> drwxr-xr-x. 3 vdsm kvm  60 Jul  1 20:04 ..
>>> lrwxrwxrwx. 1 vdsm kvm 128 Jul  1 20:04 1d80a60c-8f26-4448-9460
>>> -2c7b00ff75bf ->
>>> /rhev/data-center/mnt/ovirt-gluster-nfs:_engine/4e3017eb-d062-4ad1
>>> -9df8-7057fcee412c/images/1d80a60c-8f26-4448-9460-2c7b00ff75bf
>>> lrwxrwxrwx. 1 vdsm kvm 128 Jul  1 20:04 23ac8897-b0c7-41d6-a7de
>>> -19f46ed78400 ->
>>> /rhev/data-center/mnt/ovirt-gluster-nfs:_engine/4e3017eb-d062-4ad1
>>> -9df8-7057fcee412c/images/23ac8897-b0c7-41d6-a7de-19f46ed78400
>>>
>>> --
>>> [root@orldc-dev-vnode02 4e3017eb-d062-4ad1-9df8-7057fcee412c]# cat 
>>> /etc/ovirt-hosted-engine/vm.conf
>>> vmId=6b7329f9-518a-4488-b1c4-2cd809f2f580
>>> memSize=5120
>>> display=vnc
>>> devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, 
>>> bus:1,
>>> type:drive},specParams:{},readonly:true,deviceId:77924fc2-c5c9-408b
>>> -97d3-cd0b0d11a62c,path:/home/tmp/CentOS-6.6-x86_64
>>> -minimal.iso,device:cdrom,shared:false,type:disk}
>>> devices={index:0,iface:virtio,format:raw,poolID:--
>>> --,volumeID:eeb2d821-a432-4df6-8856
>>> -fdb14df260e3,imageID:5ead7b5d-50e8-4d6c-a0e5
>>> -bbe6d93dd836,specParams:{},readonly:false,domainID:4e3017eb-d062
>>> -4ad1-9df8-7057fcee412c,optional:false,deviceId:5ead7b5d-50e8-4d6c
>>> -a0e5-bbe6d93dd836,address:{bus:0x00,
>>> slot:0x06, domain:0x, type:pci, 
>>> function:0x0},device:disk,shared:exclusive,propagateErrors:off,type
>>> :disk,bootOrder:1}
>>> devices={device:scsi,model:virtio-scsi,type:controller}
>>> devices={nicModel:pv,macAddr:00:16:3e:0e:d0:68,linkActive:true,netw
>>> ork:ovirtmgmt,filter:vdsm-no-mac
>>> -spoofing,specParams:{},deviceId:f70ba622-6ac8-4c06-a005
>>> -0ebd940a15b2,address:{bus:0x00,
>>> slot:0x03, domain:0x, type:pci, 
>>> function:0x0},device:bridge,type:interface}
>>> devices={device:console,specParams:{},type:console,deviceId:98386e6
>>> c-ae56-4b6d-9bfb-c72bbd299ad1,alias:console0}
>>> vmName=HostedEngine
>>> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecor
>>> d,ssmartcard,susbredir
>>> smp=2
>>> cpuType=Westmere
>>> emulatedMachine=pc
>>> --
>>>
>>> [root@orldc-dev-vnode02 4e3017eb-d062-4ad1-9df8-7057fcee412c]# cat 
>>> /etc/ovirt-hosted-engine/hosted-engine.conf
>>> fqdn=orldc-dev-vengine01.***
>>> vm_disk_id=5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836
>>> vmid=6b7329f9-518a-4488-b1c4-2cd809f2f580
>>> storage=ovirt-gluster-nfs:/engine
>>> conf=/etc/ovirt-hosted-engine/vm.conf
>>> service_start_time=0
>>> host_id=1
>>> console=vnc
>>> domainType=nfs3
>>> spUUID=379cf161-d5b1-4c20-bb71-e3ca5d2ccd6b
>>> sdUUID=4e3017eb-d062-4ad1-9df8-7057fcee412c
>>> connectionUUID=0d1b50ac-cf3f-4cd7-90df-3c3a6d11a984
>>> ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
>>> ca_subject="C=EN, L=Test, O=Test, CN=Test"
>>> vdsm_use_ssl=true
>>> gateway=10.16.3.1
>>> bridge=ovirtmgmt
>>> metadata_volume_UUID=dd9f373c-d161-4fa0-aab1-3cb52305dba7
>>> metadata_image_UUID=23ac8897-b0c7-41d6-a7de-19f46ed78400
>>> lockspace_volume_UUID=d9bacbf6-c2f4-4f74-a91f-3a

Re: [ovirt-users] Confused / Hosted-Engine won't start

2015-07-02 Thread Christopher Young
That's actually a local NFS implementation (running on Gluster), so I
wouldn't expect that it would factor in, but I just verified that all
gluster hosts are in Permissive mode.  As I mentioned, I think I may
have some IDs wrong or bad links or something, since this had previously
been working fine, but I must be missing something.

On Thu, 2015-07-02 at 16:08 +0200, Sandro Bonazzola wrote:
> On 02/07/2015 16:04, Christopher Young wrote:
> > [root@orldc-dev-vnode02 ~]# getenforce 
> > Permissive
> > 
> > It looks like it isn't SELinux.
> 
> Checked also on the storage server ovirt-gluster-nfs?
> 
> 
> > 
> > On Thu, 2015-07-02 at 09:53 +0200, Sandro Bonazzola wrote:
> > > On 02/07/2015 02:36, Christopher Young wrote:
> > > > I'm sure I have worked through this before, but I've been 
> > > > banging 
> > > > my head against this one for a while now, and I think I'm too 
> > > > close 
> > > > to the issue.
> > > > 
> > > > Basically, my hosted-engine won't start anymore.  I do recall 
> > > > attempting to migrate it to a new gluster-based NFS share 
> > > > recently, 
> > > > but I could have
> > > > sworn that was successful and working.  I _think_ I have some 
> > > > sort 
> > > > of id issue with storage/volume/whatever id's, but I need some 
> > > > help 
> > > > digging through
> > > > it if someone would be so kind.
> > > > 
> > > > I have the following error in 
> > > > /var/log/libvirt/qemu/HostedEngine.log:
> > > > 
> > > > --
> > > > 2015-07-02T00:01:13.080952Z qemu-kvm: -drive
> > > > file=/var/run/vdsm/storage/4e3017eb-d062-4ad1-9df8
> > > > -7057fcee412c/5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821
> > > > -a432
> > > > -4df6-8856-fdb14df260e3,if=none,id=drive-virtio
> > > > -disk0,format=raw,serial=5ead7b5d-50e8-4d6c-a0e5
> > > > -bbe6d93dd836,cache=none,werror=stop,rerror=stop,aio=threads:
> > > > could not open disk image
> > > > /var/run/vdsm/storage/4e3017eb-d062-4ad1-9df8
> > > > -7057fcee412c/5ead7b5d
> > > > -50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821-a432-4df6-8856
> > > > -fdb14df260e3: 
> > > > Could not
> > > > refresh total sector count: Input/output error
> > > > --
> > > 
> > > 
> > > Please check selinux: ausearch -m avc
> > > it might be selinux preventing access to the disk image.
> > > 
> > > 
> > > 
> > > > 
> > > > I also am including a few command outputs that I'm sure might 
> > > > help:
> > > > 
> > > > --
> > > > [root@orldc-dev-vnode02 4e3017eb-d062-4ad1-9df8-7057fcee412c]# 
> > > > ls 
> > > > -la /var/run/vdsm/storage/4e3017eb-d062-4ad1-9df8-7057fcee412c/
> > > > total 8
> > > > drwxr-xr-x. 2 vdsm kvm  80 Jul  1 20:04 .
> > > > drwxr-xr-x. 3 vdsm kvm  60 Jul  1 20:04 ..
> > > > lrwxrwxrwx. 1 vdsm kvm 128 Jul  1 20:04 1d80a60c-8f26-4448-9460
> > > > -2c7b00ff75bf ->
> > > > /rhev/data-center/mnt/ovirt-gluster-nfs:_engine/4e3017eb-d062
> > > > -4ad1
> > > > -9df8-7057fcee412c/images/1d80a60c-8f26-4448-9460-2c7b00ff75bf
> > > > lrwxrwxrwx. 1 vdsm kvm 128 Jul  1 20:04 23ac8897-b0c7-41d6-a7de
> > > > -19f46ed78400 ->
> > > > /rhev/data-center/mnt/ovirt-gluster-nfs:_engine/4e3017eb-d062
> > > > -4ad1
> > > > -9df8-7057fcee412c/images/23ac8897-b0c7-41d6-a7de-19f46ed78400
> > > > 
> > > > --
> > > > [root@orldc-dev-vnode02 4e3017eb-d062-4ad1-9df8-7057fcee412c]# 
> > > > cat 
> > > > /etc/ovirt-hosted-engine/vm.conf
> > > > vmId=6b7329f9-518a-4488-b1c4-2cd809f2f580
> > > > memSize=5120
> > > > display=vnc
> > > > devices={index:2,iface:ide,address:{ controller:0, 
> > > > target:0,unit:0, 
> > > > bus:1,
> > > > type:drive},specParams:{},readonly:true,deviceId:77924fc2-c5c9
> > > > -408b
> > > > -97d3-cd0b0d11a62c,path:/home/tmp/CentOS-6.6-x86_64
> > > > -minimal.iso,device:cdrom,shared:false,type:disk}
> > > > devices={index:0,iface:virtio,format:raw,poolID:-
> > > > -
> > > > --,volumeID:eeb2d821-a432-4df6-8856
> > > > -fdb14df260e3,imageID:5ead7b5d-50e8-4d6c-a0e5
> > > > -bbe6d93dd836,specParams:{},readonly:false,domainID:4e3017eb
> > > > -d062
> > > > -4ad1-9df8-7057fcee412c,optional:false,deviceId:5ead7b5d-50e8
> > > > -4d6c
> > > > -a0e5-bbe6d93dd836,address:{bus:0x00,
> > > > slot:0x06, domain:0x, type:pci, 
> > > > function:0x0},device:disk,shared:exclusive,propagateErrors:off,
> > > > type
> > > > :disk,bootOrder:1}
> > > > devices={device:scsi,model:virtio-scsi,type:controller}
> > > > devices={nicModel:pv,macAddr:00:16:3e:0e:d0:68,linkActive:true,
> > > > netw
> > > > ork:ovirtmgmt,filter:vdsm-no-mac
> > > > -spoofing,specParams:{},deviceId:f70ba622-6ac8-4c06-a005
> > > > -0ebd940a15b2,address:{bus:0x00,
> > > > slot:0x03, domain:0x, type:pci, 
> > > > function:0x0},device:bridge,type:interface}
> > > > devices={device:console,specParams:{},type:console,deviceId:983
> > > > 86e6
> > > > c-ae56-4b6d-9bfb-c72bbd299ad1,alias:console0}
> > > > vmName=HostedEngine
> > > > spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,sr
> > > > ecor
> > > > d,ssmartcard,susbredir
> > > > smp=2
> 

Re: [ovirt-users] Can´t use a VM Network VLAN, because the Virtual Machines inside can't reach the Gateway

2015-07-02 Thread Julián Tete
I tried a different approach:

One VLAN for each NIC

I tried it, but it doesn't work. I left the virtual machine pinging the gateway;
after 10 minutes I left my workstation alone and went back home.
Today I see the ping working (?). Maybe I rushed too much... or maybe not.

I rebooted the virtual machine, same problem: after 15 minutes the machine
still can't reach the gateway. We need HA, and a 10+ minute outage is not acceptable.

After that I messed up my oVirt lab, trying everything.

This is my last working setup, with slow reaching of the gateway:

DATA CENTERS

Logical Networks:
---
Name: ovirtmgmt
Description: Management Network
Network Label: 1
MTU: Default
---
Name: dmz
Description: VLAN 50
Network Label: 50
Enable VLAN tagging: 50
VM network
MTU: Default
---
Name: Hosting
Description: VLAN 100
Network Label: 100
Enable VLAN tagging: 100
VM network
MTU: Default


CLUSTERS

Logical Networks

ovirtmgmt Assign Required Display Network Migration Network
dmz Assign
HostingAssign Required

HOSTS

Name: srvovirt02.cnsc.net
Hostname/IP: 192.168.0.63

Network Interfaces

HOSTS BOND  VLANNETWORK NAMEADDRESS
MAC   SPEED(Mbps)RX(Mbps)   TX(Mbps)

eno1 * ovirtmgmt
192.168.0.63   00:17:a4:77:00:18 1 <
1   < 1

eno2eno2.50 (50)  dmz
192.168.50.8  00:17:a4:77:00:1a 1  <
1   < 1

ens1f0 ens1f0.100 (100)Hosting
192.168.100.700:17:a4:77:00:1c  1  <
1   < 1

ens1f1
00:17:a4:77:00:1e 0 < 1   < 1

Setup Hosts Networks

Network: ovirtmgmt
Static
IP: 192.168.0.63
Subnet Mask: 255.255.255.0
Gateway: 192.168.0.1
Custom Properties: No available keys
Sync network: Yes

Network: dmz
Static
IP: 192.168.50.8
Subnet Mask: 255.255.255.224
Gateway: 192.168.50.1
Custom Properties: Please select a key...

Network: Hosting
Static
IP: 192.168.100.7
Subnet Mask: 255.255.255.240
Gateway: 192.168.100.1
Custom Properties: Please select a key...

Virtual Machines

Name: PruebaVLAN
Host: srvovirt02.cnsc.net

Edit Network Interface

Name: nic1
Profile: dmz/dmz
Type: VirtIO
Custom MAC address: 00:00:1a:4a:3e:00
Link State: Up
Card Status: Plugged

Network Interfaces in the Host srvovirt02.cnsc.net:

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eno1:  mtu 1500 qdisc mq master
ovirtmgmt state UP qlen 1000
link/ether 00:17:a4:77:00:18 brd ff:ff:ff:ff:ff:ff
inet6 fe80::217:a4ff:fe77:18/64 scope link
   valid_lft forever preferred_lft forever
3: eno2:  mtu 1500 qdisc mq state UP qlen
1000
link/ether 00:17:a4:77:00:1a brd ff:ff:ff:ff:ff:ff
inet6 fe80::217:a4ff:fe77:1a/64 scope link
   valid_lft forever preferred_lft forever
4: ens1f0:  mtu 1500 qdisc mq state UP
qlen 1000
link/ether 00:17:a4:77:00:1c brd ff:ff:ff:ff:ff:ff
inet6 fe80::217:a4ff:fe77:1c/64 scope link
   valid_lft forever preferred_lft forever
5: ens1f1:  mtu 1500 qdisc noop state DOWN qlen 1000
link/ether 00:17:a4:77:00:1e brd ff:ff:ff:ff:ff:ff
6: bond0:  mtu 1500 qdisc noop state DOWN
link/ether 72:f8:9c:75:e3:86 brd ff:ff:ff:ff:ff:ff
7: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
10: vnet0:  mtu 1500 qdisc pfifo_fast
master dmz state UNKNOWN qlen 500
link/ether fe:00:1a:4a:3e:00 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc00:1aff:fe4a:3e00/64 scope link
   valid_lft forever preferred_lft forever
17: eno2.50@eno2:  mtu 1500 qdisc noqueue
master dmz state UP
link/ether 00:17:a4:77:00:1a brd ff:ff:ff:ff:ff:ff
inet6 fe80::217:a4ff:fe77:1a/64 scope link
   valid_lft forever preferred_lft forever
18: dmz:  mtu 1500 qdisc noqueue state UP
link/ether 00:17:a4:77:00:1a brd ff:ff:ff:ff:ff:ff
inet 192.168.50.8/27 brd 192.168.50.31 scope global dmz
   valid_lft forever preferred_lft forever
inet6 fe80::217:a4ff:fe77:1a/64 scope link
   valid_lft forever preferred_lft forever
19: ens1f0.100@ens1f0:  mtu 1500 qdisc
noqueue master Hosting state UP
link/ether 00:17:a4:77:00:1c brd ff:ff:ff:ff:ff:ff
inet6 fe80::217:a4ff:fe77:1c/64 scope link
   valid_lft forever preferred_lft forever
20: Hosting:  mtu 1500 qdisc noqueue state
UP
link/ether 00:17:a4:77:00:1c brd ff:ff:ff:ff:ff:ff
inet 192.168.100.7/28 brd 192.168.100.15 scope global Hosting
   valid_lft forever preferred_lft forever
inet6 fe80::217:a4ff:fe77:1c/64 scope link
   valid_lft forever preferred_lft forever
21: ovirtmgmt:  mtu 1500 qdisc noqueue
state UP
link/ether 00:17:a4:77:00:18 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.63/24 br
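
A diagnostic step that might narrow down the slow-gateway symptom described above (a hedged sketch; the interface names eno2 and dmz and VLAN 50 are taken from the setup listed in this thread — adjust to your host). Run these on the hypervisor while the VM pings its gateway, and check whether tagged ARP requests leave the NIC and replies come back:

```shell
# Watch tagged VLAN-50 frames on the physical NIC carrying the dmz network
tcpdump -nn -e -i eno2 vlan 50

# In a second shell, watch ARP traffic on the dmz bridge itself
tcpdump -nn -i dmz arp
```

If the VM's ARP requests appear on eno2 but replies only arrive minutes later, the delay may be on the switch side (e.g. MAC learning or spanning tree on the trunk port) rather than in oVirt.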

Re: [ovirt-users] Confused / Hosted-Engine won't start

2015-07-02 Thread Christopher Young
If anyone has experience with the various IDs in hosted-engine.conf
and vm.conf for the Hosted Engine, I believe I need to just verify
everything.  I tried a couple of changes, but I feel like I'm just
making this worse, so I've reverted them.

One thing I do not understand well is how a gluster-based (NFS) storage
domain for the hosted-engine has so many entries:

-

[root@orldc-dev-vnode02 ovirt-gluster-nfs:_engine]# find . -type f |
xargs ls -lah
ls: cannot access ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821-a432
-4df6-8856-fdb14df260e3: Input/output error
-rw-rw. 1 vdsm kvm  1.0M Jul  2 11:20 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/dom_md/ids
-rw-rw. 1 vdsm kvm   16M Jul  1 19:54 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/dom_md/inbox
-rw-rw. 1 vdsm kvm  2.0M Jul  1 19:50 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/dom_md/leases
-rw-r--r--. 1 vdsm kvm   482 Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/dom_md/metadata
-rw-rw. 1 vdsm kvm   16M Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/dom_md/outbox
-rw-rw. 1 vdsm kvm  1.0M Jul  2 11:32 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/1d80a60c-8f26-4448-9460-2c7b00ff75bf/d9bacbf6-c2f4
-4f74-a91f-3a3a52f255bf
-rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/1d80a60c-8f26-4448-9460-2c7b00ff75bf/d9bacbf6-c2f4
-4f74-a91f-3a3a52f255bf.lease
-rw-r--r--. 1 vdsm kvm   284 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/1d80a60c-8f26-4448-9460-2c7b00ff75bf/d9bacbf6-c2f4
-4f74-a91f-3a3a52f255bf.meta
-rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/23ac8897-b0c7-41d6-a7de-19f46ed78400/dd9f373c-d161
-4fa0-aab1-3cb52305dba7.lease
-rw-r--r--. 1 vdsm kvm   283 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/23ac8897-b0c7-41d6-a7de-19f46ed78400/dd9f373c-d161
-4fa0-aab1-3cb52305dba7.meta
-rw-rw. 1 vdsm kvm   25G Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/3278c444-d92a-4cb9-87d6-9669c6e4993e/1a4b6a5d-9c8f
-4d54-91a7-3dd87377c362
-rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/3278c444-d92a-4cb9-87d6-9669c6e4993e/1a4b6a5d-9c8f
-4d54-91a7-3dd87377c362.lease
-rw-r--r--. 1 vdsm kvm   278 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/3278c444-d92a-4cb9-87d6-9669c6e4993e/1a4b6a5d-9c8f
-4d54-91a7-3dd87377c362.meta
-rw-rw. 1 vdsm kvm  1.0M Jul  1 17:50 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821-a432
-4df6-8856-fdb14df260e3.lease
-rw-r--r--. 1 vdsm kvm   278 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821-a432
-4df6-8856-fdb14df260e3.meta
-rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/6064179f-2720-4db9-a7c4-a97e044c2238/05afaa26-95af
-4226-9a00-5383d8937a83
-rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/6064179f-2720-4db9-a7c4-a97e044c2238/05afaa26-95af
-4226-9a00-5383d8937a83.lease
-rw-r--r--. 1 vdsm kvm   284 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/6064179f-2720-4db9-a7c4-a97e044c2238/05afaa26-95af
-4226-9a00-5383d8937a83.meta
-rw-rw. 1 vdsm kvm 1004K Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/bb9d9a37-4f91-4973-ba9e-72ee81aed0b6/5acb27b3-62c5
-46ac-8978-576a8a4a0399
-rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/bb9d9a37-4f91-4973-ba9e-72ee81aed0b6/5acb27b3-62c5
-46ac-8978-576a8a4a0399.lease
-rw-r--r--. 1 vdsm kvm   283 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/images/bb9d9a37-4f91-4973-ba9e-72ee81aed0b6/5acb27b3-62c5
-46ac-8978-576a8a4a0399.meta
-rw-r--r--. 1 vdsm kvm   384 Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/master/tasks/fef13299-0e7f-4c7a-a399
-092a1235faab/fef13299-0e7f-4c7a-a399-092a1235faab.job.0
-rw-r--r--. 1 vdsm kvm   277 Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/master/tasks/fef13299-0e7f-4c7a-a399
-092a1235faab/fef13299-0e7f-4c7a-a399-092a1235faab.recover.0
-rw-r--r--. 1 vdsm kvm   417 Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/master/tasks/fef13299-0e7f-4c7a-a399
-092a1235faab/fef13299-0e7f-4c7a-a399-092a1235faab.recover.1
-rw-r--r--. 1 vdsm kvm   107 Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/master/tasks/fef13299-0e7f-4c7a-a399
-092a1235faab/fef13299-0e7f-4c7a-a399-092a1235faab.result
-rw-r--r--. 1 vdsm kvm   299 Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
-7057fcee412c/master/tasks/fef13299-0e7f-4c7a-a399
-092a1235faab/fef13299-0e7f-4c7a-a399-092a1235faab.task
-rwxr-xr-x. 1 vdsm kvm 0 Jul  2 11:22 ./__DIRECT_IO_TEST__
-
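
Since the question above is which IDs must agree between hosted-engine.conf and vm.conf, here is a minimal cross-check sketch (hedged: the helper names and the set of checks are ours, derived from the configs quoted in this thread — vmid vs. vmId, vm_disk_id vs. the boot disk's imageID, sdUUID vs. its domainID):

```python
import re

def parse_kv(text):
    """Parse key=value config lines; repeated 'devices=' lines go into a list."""
    conf, devices = {}, []
    for line in text.splitlines():
        line = line.strip()
        if "=" not in line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        if key == "devices":
            devices.append(value)
        else:
            conf[key] = value
    return conf, devices

def check_ids(hosted_text, vm_text):
    """Return a list of ID mismatches between the two config files."""
    hosted, _ = parse_kv(hosted_text)
    vm, devices = parse_kv(vm_text)
    problems = []
    if hosted.get("vmid") != vm.get("vmId"):
        problems.append("vmid != vmId")
    boot = [d for d in devices if "bootOrder" in d]
    if boot:
        m = re.search(r"imageID:([^,}]+)", boot[0])
        if m and hosted.get("vm_disk_id") != m.group(1):
            problems.append("vm_disk_id != boot disk imageID")
        m = re.search(r"domainID:([^,}]+)", boot[0])
        if m and hosted.get("sdUUID") != m.group(1):
            problems.append("sdUUID != boot disk domainID")
    return problems

# On the host (paths as quoted in this thread):
# hosted_text = open("/etc/ovirt-hosted-engine/hosted-engine.conf").read()
# vm_text = open("/etc/ovirt-hosted-engine/vm.conf").read()
# print(check_ids(hosted_text, vm_text) or "IDs look consistent")
```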



On Thu, 2015-07-02 at 11:05 -0400, Christopher Young wrote:
> That's actually a local NFS implementation (running on Gluster) so I
> wouldn't expect that it would factor in, but I just verified that all
> gluster hosts are in Permissive

Re: [ovirt-users] Can´t use a VM Network VLAN, because the Virtual Machines inside can't reach the Gateway

2015-07-02 Thread Julián Tete
I'm going to erase everything and then try this:

http://www.ovirt.org/QA:TestCase_Hosted_Engine_Tagged_VLAN_Support

2015-07-02 10:41 GMT-05:00 Julián Tete :

> I tried a different approach:
>
> One VLAN for each NIC
>
> I tried it, but it doesn't work. I left the virtual machine pinging the gateway;
> after 10 minutes I left my workstation alone and went back home.
> Today I see the ping working (?). Maybe I rushed too much... or maybe not.
>
> I rebooted the virtual machine, same problem: after 15 minutes the machine
> still can't reach the gateway. We need HA, and a 10+ minute outage is not acceptable.
>
> After that I messed up my oVirt lab, trying everything.
>
> This is my last working setup, with slow reaching of the gateway:
>
> DATA CENTERS
>
> Logical Networks:
> ---
> Name: ovirtmgmt
> Description: Management Network
> Network Label: 1
> MTU: Default
> ---
> Name: dmz
> Description: VLAN 50
> Network Label: 50
> Enable VLAN tagging: 50
> VM network
> MTU: Default
> ---
> Name: Hosting
> Description: VLAN 100
> Network Label: 100
> Enable VLAN tagging: 100
> VM network
> MTU: Default
> 
>
> CLUSTERS
>
> Logical Networks
>
> ovirtmgmt Assign Required Display Network Migration Network
> dmz Assign
> HostingAssign Required
>
> HOSTS
>
> Name: srvovirt02.cnsc.net
> Hostname/IP: 192.168.0.63
>
> Network Interfaces
>
> HOSTS BOND  VLANNETWORK NAMEADDRESS
> MAC   SPEED(Mbps)RX(Mbps)   TX(Mbps)
>
> eno1 * ovirtmgmt
> 192.168.0.63   00:17:a4:77:00:18 1 <
> 1   < 1
>
> eno2eno2.50 (50)  dmz
> 192.168.50.8  00:17:a4:77:00:1a 1  <
> 1   < 1
>
> ens1f0 ens1f0.100 (100)Hosting
> 192.168.100.700:17:a4:77:00:1c  1  <
> 1   < 1
>
> ens1f1
> 00:17:a4:77:00:1e 0 < 1   < 1
>
> Setup Hosts Networks
>
> Network: ovirtmgmt
> Static
> IP: 192.168.0.63
> Subnet Mask: 255.255.255.0
> Gateway: 192.168.0.1
> Custom Properties: No available keys
> Sync network: Yes
>
> Network: dmz
> Static
> IP: 192.168.50.8
> Subnet Mask: 255.255.255.224
> Gateway: 192.168.50.1
> Custom Properties: Please select a key...
>
> Network: Hosting
> Static
> IP: 192.168.100.7
> Subnet Mask: 255.255.255.240
> Gateway: 192.168.100.1
> Custom Properties: Please select a key...
>
> Virtual Machines
>
> Name: PruebaVLAN
> Host: srvovirt02.cnsc.net
>
> Edit Network Interface
>
> Name: nic1
> Profile: dmz/dmz
> Type: VirtIO
> Custom MAC address: 00:00:1a:4a:3e:00
> Link State: Up
> Card Status: Plugged
>
> Network Interfaces in the Host srvovirt02.cnsc.net:
>
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: eno1:  mtu 1500 qdisc mq master
> ovirtmgmt state UP qlen 1000
> link/ether 00:17:a4:77:00:18 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::217:a4ff:fe77:18/64 scope link
>valid_lft forever preferred_lft forever
> 3: eno2:  mtu 1500 qdisc mq state UP qlen
> 1000
> link/ether 00:17:a4:77:00:1a brd ff:ff:ff:ff:ff:ff
> inet6 fe80::217:a4ff:fe77:1a/64 scope link
>valid_lft forever preferred_lft forever
> 4: ens1f0:  mtu 1500 qdisc mq state UP
> qlen 1000
> link/ether 00:17:a4:77:00:1c brd ff:ff:ff:ff:ff:ff
> inet6 fe80::217:a4ff:fe77:1c/64 scope link
>valid_lft forever preferred_lft forever
> 5: ens1f1:  mtu 1500 qdisc noop state DOWN qlen 1000
> link/ether 00:17:a4:77:00:1e brd ff:ff:ff:ff:ff:ff
> 6: bond0:  mtu 1500 qdisc noop state DOWN
> link/ether 72:f8:9c:75:e3:86 brd ff:ff:ff:ff:ff:ff
> 7: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
> link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 10: vnet0:  mtu 1500 qdisc pfifo_fast
> master dmz state UNKNOWN qlen 500
> link/ether fe:00:1a:4a:3e:00 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::fc00:1aff:fe4a:3e00/64 scope link
>valid_lft forever preferred_lft forever
> 17: eno2.50@eno2:  mtu 1500 qdisc
> noqueue master dmz state UP
> link/ether 00:17:a4:77:00:1a brd ff:ff:ff:ff:ff:ff
> inet6 fe80::217:a4ff:fe77:1a/64 scope link
>valid_lft forever preferred_lft forever
> 18: dmz:  mtu 1500 qdisc noqueue state UP
> link/ether 00:17:a4:77:00:1a brd ff:ff:ff:ff:ff:ff
> inet 192.168.50.8/27 brd 192.168.50.31 scope global dmz
>valid_lft forever preferred_lft forever
> inet6 fe80::217:a4ff:fe77:1a/64 scope link
>valid_lft forever preferred_lft forever
> 19: ens1f0.100@ens1f0:  mtu 1500 qdisc
> noqueue master Hosting state UP
> link/ether 00:17:a4:77:00:1c brd ff:ff:ff:ff:ff:ff
> inet6 fe80::217:a4ff:fe77:1c/64 scope link
>valid_lft forever preferred

Re: [ovirt-users] Confused / Hosted-Engine won't start

2015-07-02 Thread Darrell Budic
Looks normal; the hosted engine uses some extra files/leases to track some of its
stuff.

Looks like you might have a gluster problem, though: that I/O error appears to be
on your hosted engine's disk image. Check for split-brains and try to initiate
a heal on the files; see what you get.
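
A minimal check/heal sequence for the suggestion above (a hedged sketch; the volume name `engine` is inferred from the storage path ovirt-gluster-nfs:/engine quoted in this thread). Run on one of the gluster hosts:

```shell
# List any files currently in split-brain on the engine volume
gluster volume heal engine info split-brain

# Show entries still pending heal
gluster volume heal engine info

# Trigger a full self-heal of the volume
gluster volume heal engine full
```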

  -Darrell

> On Jul 2, 2015, at 11:33 AM, Christopher Young  wrote:
> 
> If anyone has experience with the various IDs in hosted-engine.conf
> and vm.conf for the Hosted Engine, I believe I need to just verify
> everything.  I tried a couple of changes, but I feel like I'm just
> making this worse, so I've reverted them.
> 
> One thing I do not understand well is how a gluster-based (NFS) storage
> domain for the hosted-engine has so many entries:
> 
> -
> 
> [root@orldc-dev-vnode02 ovirt-gluster-nfs:_engine]# find . -type f |
> xargs ls -lah
> ls: cannot access ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821-a432
> -4df6-8856-fdb14df260e3: Input/output error
> -rw-rw. 1 vdsm kvm  1.0M Jul  2 11:20 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/dom_md/ids
> -rw-rw. 1 vdsm kvm   16M Jul  1 19:54 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/dom_md/inbox
> -rw-rw. 1 vdsm kvm  2.0M Jul  1 19:50 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/dom_md/leases
> -rw-r--r--. 1 vdsm kvm   482 Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/dom_md/metadata
> -rw-rw. 1 vdsm kvm   16M Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/dom_md/outbox
> -rw-rw. 1 vdsm kvm  1.0M Jul  2 11:32 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/1d80a60c-8f26-4448-9460-2c7b00ff75bf/d9bacbf6-c2f4
> -4f74-a91f-3a3a52f255bf
> -rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/1d80a60c-8f26-4448-9460-2c7b00ff75bf/d9bacbf6-c2f4
> -4f74-a91f-3a3a52f255bf.lease
> -rw-r--r--. 1 vdsm kvm   284 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/1d80a60c-8f26-4448-9460-2c7b00ff75bf/d9bacbf6-c2f4
> -4f74-a91f-3a3a52f255bf.meta
> -rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/23ac8897-b0c7-41d6-a7de-19f46ed78400/dd9f373c-d161
> -4fa0-aab1-3cb52305dba7.lease
> -rw-r--r--. 1 vdsm kvm   283 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/23ac8897-b0c7-41d6-a7de-19f46ed78400/dd9f373c-d161
> -4fa0-aab1-3cb52305dba7.meta
> -rw-rw. 1 vdsm kvm   25G Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/3278c444-d92a-4cb9-87d6-9669c6e4993e/1a4b6a5d-9c8f
> -4d54-91a7-3dd87377c362
> -rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/3278c444-d92a-4cb9-87d6-9669c6e4993e/1a4b6a5d-9c8f
> -4d54-91a7-3dd87377c362.lease
> -rw-r--r--. 1 vdsm kvm   278 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/3278c444-d92a-4cb9-87d6-9669c6e4993e/1a4b6a5d-9c8f
> -4d54-91a7-3dd87377c362.meta
> -rw-rw. 1 vdsm kvm  1.0M Jul  1 17:50 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821-a432
> -4df6-8856-fdb14df260e3.lease
> -rw-r--r--. 1 vdsm kvm   278 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821-a432
> -4df6-8856-fdb14df260e3.meta
> -rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/6064179f-2720-4db9-a7c4-a97e044c2238/05afaa26-95af
> -4226-9a00-5383d8937a83
> -rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/6064179f-2720-4db9-a7c4-a97e044c2238/05afaa26-95af
> -4226-9a00-5383d8937a83.lease
> -rw-r--r--. 1 vdsm kvm   284 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/6064179f-2720-4db9-a7c4-a97e044c2238/05afaa26-95af
> -4226-9a00-5383d8937a83.meta
> -rw-rw. 1 vdsm kvm 1004K Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/bb9d9a37-4f91-4973-ba9e-72ee81aed0b6/5acb27b3-62c5
> -46ac-8978-576a8a4a0399
> -rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/bb9d9a37-4f91-4973-ba9e-72ee81aed0b6/5acb27b3-62c5
> -46ac-8978-576a8a4a0399.lease
> -rw-r--r--. 1 vdsm kvm   283 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/images/bb9d9a37-4f91-4973-ba9e-72ee81aed0b6/5acb27b3-62c5
> -46ac-8978-576a8a4a0399.meta
> -rw-r--r--. 1 vdsm kvm   384 Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/master/tasks/fef13299-0e7f-4c7a-a399
> -092a1235faab/fef13299-0e7f-4c7a-a399-092a1235faab.job.0
> -rw-r--r--. 1 vdsm kvm   277 Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/master/tasks/fef13299-0e7f-4c7a-a399
> -092a1235faab/fef13299-0e7f-4c7a-a399-092a1235faab.recover.0
> -rw-r--r--. 1 vdsm kvm   417 Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/master/tasks/fef13299-0e7f-4c7a-a399
> -092a1235faab/fef13299-0e7f-4c7a-a399-092a1235faab.recover.1
> -rw-r--r--. 1 vdsm kvm   107 Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
> -7057fcee412c/master/tasks/fef13299-0e7f-4c7a-a399
> -092a1235faab/fef1

Re: [ovirt-users] Confused / Hosted-Engine won't start

2015-07-02 Thread Christopher Young
Ok.  It turns out that I had a split-brain on that volume file and
didn't even realize it.  Thank both of you guys for keeping me thinking
and putting me on track.  I had so many thoughts running through my
head and had run 'gluster volume status engine' (etc.) without checking
for heal status.

In the process, I learned more about gluster's healing options, so I
suppose I'm going to put this all in the category of 'learning
experience'.

I believe that I eventually want to build out a new hosted-engine VM,
since this one had its configuration changed a few times, and adding an
additional, redundant hosted-engine host fails (primarily because I'm
missing the original answers file).  I had tried recreating it using my
limited knowledge, but I could never get the 2nd hosted-engine deploy to work.

If anyone has any advice on that, I'd be appreciative and open to it. 
 I'm debating either continuing with this hosted-engine VM or backing
things up and building a new one (though I don't know the optimal
procedure for that).  Any direction is always helpful.
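For the "backing things up" half, one hedged sketch, assuming oVirt's standard `engine-backup` tool is available inside the engine VM (the file and log paths here are illustrative, not from this thread):

```shell
# Run inside the still-working engine VM; file/log paths are examples only.
engine-backup --mode=backup \
  --file=/root/engine-$(date +%Y%m%d).backup \
  --log=/root/engine-backup.log

# Restoring into a freshly installed engine VM would then use:
#   engine-backup --mode=restore --file=<backup file> --log=<log file>
# followed by engine-setup; hosted-engine setups may need extra care
# (storage-domain and host IDs), so treat this as a starting point.
```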

Thanks again!

On Thu, 2015-07-02 at 11:40 -0500, Darrell Budic wrote:
> Looks normal; the hosted engine uses some extra files/leases to track 
> some of its state.
> 
> Looks like you might have a gluster problem, though; that I/O error 
> appears to be on your hosted engine's disk image. Check for split-brains, 
> try to initiate a heal on the files, and see what you get.
> 
>   -Darrell
> 
> > On Jul 2, 2015, at 11:33 AM, Christopher Young <
> > mexigaba...@gmail.com> wrote:
> > 
> > If anyone has any experience with the various IDs in hosted-engine.conf
> > and vm.conf for the Hosted Engine, I believe I need to just verify
> > everything.  I tried a couple of changes, but I feel like I'm just
> > making this worse, so I've reverted them.
> > 
> > One thing I do not understand well is how a gluster-based (NFS) 
> > storage
> > domain for the hosted-engine has so many entries:
> > 
> > -
> > 
> > [root@orldc-dev-vnode02 ovirt-gluster-nfs:_engine]# find . -type f 
> > |
> > xargs ls -lah
> > ls: cannot access ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/images/5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821
> > -a432
> > -4df6-8856-fdb14df260e3: Input/output error
> > -rw-rw. 1 vdsm kvm  1.0M Jul  2 11:20 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/dom_md/ids
> > -rw-rw. 1 vdsm kvm   16M Jul  1 19:54 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/dom_md/inbox
> > -rw-rw. 1 vdsm kvm  2.0M Jul  1 19:50 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/dom_md/leases
> > -rw-r--r--. 1 vdsm kvm   482 Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/dom_md/metadata
> > -rw-rw. 1 vdsm kvm   16M Jul  1 19:49 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/dom_md/outbox
> > -rw-rw. 1 vdsm kvm  1.0M Jul  2 11:32 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/images/1d80a60c-8f26-4448-9460-2c7b00ff75bf/d9bacbf6
> > -c2f4
> > -4f74-a91f-3a3a52f255bf
> > -rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/images/1d80a60c-8f26-4448-9460-2c7b00ff75bf/d9bacbf6
> > -c2f4
> > -4f74-a91f-3a3a52f255bf.lease
> > -rw-r--r--. 1 vdsm kvm   284 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/images/1d80a60c-8f26-4448-9460-2c7b00ff75bf/d9bacbf6
> > -c2f4
> > -4f74-a91f-3a3a52f255bf.meta
> > -rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/images/23ac8897-b0c7-41d6-a7de-19f46ed78400/dd9f373c
> > -d161
> > -4fa0-aab1-3cb52305dba7.lease
> > -rw-r--r--. 1 vdsm kvm   283 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/images/23ac8897-b0c7-41d6-a7de-19f46ed78400/dd9f373c
> > -d161
> > -4fa0-aab1-3cb52305dba7.meta
> > -rw-rw. 1 vdsm kvm   25G Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/images/3278c444-d92a-4cb9-87d6-9669c6e4993e/1a4b6a5d
> > -9c8f
> > -4d54-91a7-3dd87377c362
> > -rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/images/3278c444-d92a-4cb9-87d6-9669c6e4993e/1a4b6a5d
> > -9c8f
> > -4d54-91a7-3dd87377c362.lease
> > -rw-r--r--. 1 vdsm kvm   278 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/images/3278c444-d92a-4cb9-87d6-9669c6e4993e/1a4b6a5d
> > -9c8f
> > -4d54-91a7-3dd87377c362.meta
> > -rw-rw. 1 vdsm kvm  1.0M Jul  1 17:50 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/images/5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821
> > -a432
> > -4df6-8856-fdb14df260e3.lease
> > -rw-r--r--. 1 vdsm kvm   278 Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/images/5ead7b5d-50e8-4d6c-a0e5-bbe6d93dd836/eeb2d821
> > -a432
> > -4df6-8856-fdb14df260e3.meta
> > -rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/images/6064179f-2720-4db9-a7c4-a97e044c2238/05afaa26
> > -95af
> > -4226-9a00-5383d8937a83
> > -rw-rw. 1 vdsm kvm  1.0M Dec 23  2014 ./4e3017eb-d062-4ad1-9df8
> > -7057fcee412c/images/6064179f-2720-4db9-a7c4-a97e044c2238/05afaa26
> > -95af
> > -4226-9a00-5383d8937a83.lease
> > -rw-r

Re: [ovirt-users] Can´t use a VM Network VLAN, because the Virtual Machines inside can't reach the Gateway

2015-07-02 Thread Julián Tete
It doesn't work :'(

I'm going to try this:




https://bugzilla.redhat.com/show_bug.cgi?id=1072027

2015-07-02 11:40 GMT-05:00 Julián Tete :

> I'm going to erase everything and then try this:
>
> http://www.ovirt.org/QA:TestCase_Hosted_Engine_Tagged_VLAN_Support
>
> 2015-07-02 10:41 GMT-05:00 Julián Tete :
>
>> I tried a different approach:
>>
>> One VLAN for each NIC
>>
>> I tried it, but it doesn't work. I left the Virtual Machine pinging the Gateway;
>> after 10 minutes I left my workstation alone and went back home.
>> Today I see the ping working (?). Maybe I rushed too much... or maybe not.
>>
>> I rebooted the Virtual Machine; same problem: after 15 minutes the Machine
>> can't reach the Gateway. We need HA, and 10+ minutes is not acceptable.
>>
>> After that I messed up my oVirt LAB, after trying everything.
>>
>> This is my last working setup, with slow reaching of the Gateway:
>>
>> DATA CENTERS
>>
>> Logical Networks:
>> ---
>> Name: ovirtmgmt
>> Description: Management Network
>> Network Label: 1
>> MTU: Default
>> ---
>> Name: dmz
>> Description: VLAN 50
>> Network Label: 50
>> Enable VLAN tagging: 50
>> VM network
>> MTU: Default
>> ---
>> Name: Hosting
>> Description: VLAN 100
>> Network Label: 100
>> Enable VLAN tagging: 100
>> VM network
>> MTU: Default
>> 
>>
>> CLUSTERS
>>
>> Logical Networks
>>
>> ovirtmgmt Assign Required Display Network Migration Network
>> dmz Assign
>> HostingAssign Required
>>
>> HOSTS
>>
>> Name: srvovirt02.cnsc.net
>> Hostname/IP: 192.168.0.63
>>
>> Network Interfaces
>>
>> HOSTS BOND  VLANNETWORK NAMEADDRESS
>> MAC   SPEED(Mbps)RX(Mbps)   TX(Mbps)
>>
>> eno1 * ovirtmgmt
>> 192.168.0.63   00:17:a4:77:00:18 1 <
>> 1   < 1
>>
>> eno2eno2.50 (50)  dmz
>> 192.168.50.8  00:17:a4:77:00:1a 1  <
>> 1   < 1
>>
>> ens1f0 ens1f0.100 (100)Hosting
>> 192.168.100.700:17:a4:77:00:1c  1  <
>> 1   < 1
>>
>> ens1f1
>> 00:17:a4:77:00:1e 0 < 1   < 1
>>
>> Setup Hosts Networks
>>
>> Network: ovirtmgmt
>> Static
>> IP: 192.168.0.63
>> Subnet Mask: 255.255.255.0
>> Gateway: 192.168.0.1
>> Custom Properties: No available keys
>> Sync network: Yes
>>
>> Network: dmz
>> Static
>> IP: 192.168.50.8
>> Subnet Mask: 255.255.255.224
>> Gateway: 192.168.50.1
>> Custom Properties: Please select a key...
>>
>> Network: Hosting
>> Static
>> IP: 192.168.100.7
>> Subnet Mask: 255.255.255.240
>> Gateway: 192.168.100.1
>> Custom Properties: Please select a key...
>>
>> Virtual Machines
>>
>> Name: PruebaVLAN
>> Host: srvovirt02.cnsc.net
>>
>> Edit Network Interface
>>
>> Name: nic1
>> Profile: dmz/dmz
>> Type: VirtIO
>> Custom MAC address: 00:00:1a:4a:3e:00
>> Link State: Up
>> Card Status: Plugged
>>
>> Network Interfaces in the Host srvovirt02.cnsc.net:
>>
>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>>valid_lft forever preferred_lft forever
>> inet6 ::1/128 scope host
>>valid_lft forever preferred_lft forever
>> 2: eno1:  mtu 1500 qdisc mq master
>> ovirtmgmt state UP qlen 1000
>> link/ether 00:17:a4:77:00:18 brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::217:a4ff:fe77:18/64 scope link
>>valid_lft forever preferred_lft forever
>> 3: eno2:  mtu 1500 qdisc mq state UP
>> qlen 1000
>> link/ether 00:17:a4:77:00:1a brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::217:a4ff:fe77:1a/64 scope link
>>valid_lft forever preferred_lft forever
>> 4: ens1f0:  mtu 1500 qdisc mq state UP
>> qlen 1000
>> link/ether 00:17:a4:77:00:1c brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::217:a4ff:fe77:1c/64 scope link
>>valid_lft forever preferred_lft forever
>> 5: ens1f1:  mtu 1500 qdisc noop state DOWN qlen 1000
>> link/ether 00:17:a4:77:00:1e brd ff:ff:ff:ff:ff:ff
>> 6: bond0:  mtu 1500 qdisc noop state DOWN
>> link/ether 72:f8:9c:75:e3:86 brd ff:ff:ff:ff:ff:ff
>> 7: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
>> link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
>> 10: vnet0:  mtu 1500 qdisc pfifo_fast
>> master dmz state UNKNOWN qlen 500
>> link/ether fe:00:1a:4a:3e:00 brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::fc00:1aff:fe4a:3e00/64 scope link
>>valid_lft forever preferred_lft forever
>> 17: eno2.50@eno2:  mtu 1500 qdisc
>> noqueue master dmz state UP
>> link/ether 00:17:a4:77:00:1a brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::217:a4ff:fe77:1a/64 scope link
>>valid_lft forever preferred_lft forever
>> 18: dmz:  mtu 1500 qdisc noqueue state UP
>> link/ether 00:

Re: [ovirt-users] Dashboard - Page Not Found

2015-07-02 Thread Yaniv Dary

Can you please check the file:

/etc/ovirt-engine/engine.conf.d/20-ovirt-engine-reports.conf
and its content?
Did you have any issues in the upgrade from the previous version?



Thanks!


On 07/02/2015 01:48 PM, Simon Barrett wrote:


Please see attached.

Thanks

*From:*Yaniv Dary [mailto:yd...@redhat.com]
*Sent:* 02 July 2015 10:58
*To:* Simon Barrett; users@ovirt.org
*Subject:* Re: [ovirt-users] Dashboard - Page Not Found

Please also attach server logs for both reports and engine service.

On 07/02/2015 12:38 PM, Simon Barrett wrote:

Both are attached and the hostname is fully resolvable.

Thanks

*From:*Yaniv Dary [mailto:yd...@redhat.com]
*Sent:* 02 July 2015 10:21
*To:* Simon Barrett; users@ovirt.org 
*Subject:* Re: [ovirt-users] Dashboard - Page Not Found

please send the engine.log and jasperserver.log.
Also make sure that your host name is fully resolvable.


Thanks!

On 07/02/2015 12:01 PM, Simon Barrett wrote:

I am running oVirt Engine Version: 3.5.3.1-1.el6 and get a
“Page Not Found” error when I click on the Dashboards tab at
the top right of the admin portal.

The Reports server is setup and working fine and I can see
“Cluster Dashboard”, “Datacenter Dashboard”, “System
Dashboard” reports in “Webadmin Dashboards” when viewing them
directly through the oVirt Engine reports web interface.

I couldn’t find any logs that would assist in diagnosing this
and couldn’t find any other solutions.

Any suggestions as to how I fix this?

Many thanks,

Simon





___

Users mailing list

Users@ovirt.org  

http://lists.ovirt.org/mailman/listinfo/users




-- 


Yaniv Dary

Technical Product Manager

Red Hat Israel Ltd.

34 Jerusalem Road

Building A, 4th floor

Ra'anana, Israel 4350109

  


Tel : +972 (9) 7692306

   8272306

Email:yd...@redhat.com  

IRC : ydary





--
Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
  8272306
Email: yd...@redhat.com
IRC : ydary

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can´t use a VM Network VLAN, because the Virtual Machines inside can't reach the Gateway

2015-07-02 Thread Donny Davis
Something isn't right. Are you using CentOS 7 for the hosts, or the node ISO? I
just set up VLANs and it works just fine on my end.

Try setting the switch ports to access mode with your VLANs, remove the
tagging from oVirt, and see if it works.

I know that isn't how you want to set things up, but we need to start
isolating things. Is it just the gateway? Can you ping other machines
on different subnets? What about pinging Google?
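One way to work through that isolation from the host side; a sketch, assuming the interface names and addresses from the setup quoted below (the second ping target is a made-up neighbor on the same /27, not from this thread):

```shell
# Illustrative: watch ARP/ICMP on the tagged sub-interface while pinging
# from a VM on the dmz network (interface names from the posted setup).
tcpdump -i eno2.50 -nn -e 'arp or icmp' &
ping -c 4 192.168.50.1    # the dmz gateway
ping -c 4 192.168.50.9    # assumed: another host on the same subnet
kill %1                   # stop the background tcpdump
```

If ARP requests leave the host but no replies come back, the problem is on the switch side (trunk/tagging); if replies reach `eno2.50` but never the VM, look at the `dmz` bridge.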
On Jul 2, 2015 5:51 PM, "Julián Tete"  wrote:

> It doesn't work :'(
>
> I'm going to try this:
>
>
>
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1072027
>
> 2015-07-02 11:40 GMT-05:00 Julián Tete :
>
>> I'm going to erase everything and then try this:
>>
>> http://www.ovirt.org/QA:TestCase_Hosted_Engine_Tagged_VLAN_Support
>>
>> 2015-07-02 10:41 GMT-05:00 Julián Tete :
>>
>>> I tried a different approach:
>>>
>>> One VLAN for each NIC
>>>
>>> I tried it, but it doesn't work. I left the Virtual Machine pinging the Gateway;
>>> after 10 minutes I left my workstation alone and went back home.
>>> Today I see the ping working (?). Maybe I rushed too much... or maybe not.
>>>
>>> I rebooted the Virtual Machine; same problem: after 15 minutes the Machine
>>> can't reach the Gateway. We need HA, and 10+ minutes is not acceptable.
>>>
>>> After that I messed up my oVirt LAB, after trying everything.
>>>
>>> This is my last working setup, with slow reaching of the Gateway:
>>>
>>> DATA CENTERS
>>>
>>> Logical Networks:
>>> ---
>>> Name: ovirtmgmt
>>> Description: Management Network
>>> Network Label: 1
>>> MTU: Default
>>> ---
>>> Name: dmz
>>> Description: VLAN 50
>>> Network Label: 50
>>> Enable VLAN tagging: 50
>>> VM network
>>> MTU: Default
>>> ---
>>> Name: Hosting
>>> Description: VLAN 100
>>> Network Label: 100
>>> Enable VLAN tagging: 100
>>> VM network
>>> MTU: Default
>>> 
>>>
>>> CLUSTERS
>>>
>>> Logical Networks
>>>
>>> ovirtmgmt Assign Required Display Network Migration Network
>>> dmz Assign
>>> HostingAssign Required
>>>
>>> HOSTS
>>>
>>> Name: srvovirt02.cnsc.net
>>> Hostname/IP: 192.168.0.63
>>>
>>> Network Interfaces
>>>
>>> HOSTS BOND  VLANNETWORK NAMEADDRESS
>>> MAC   SPEED(Mbps)RX(Mbps)   TX(Mbps)
>>>
>>> eno1 * ovirtmgmt
>>> 192.168.0.63   00:17:a4:77:00:18 1 <
>>> 1   < 1
>>>
>>> eno2eno2.50 (50)  dmz
>>> 192.168.50.8  00:17:a4:77:00:1a 1  <
>>> 1   < 1
>>>
>>> ens1f0 ens1f0.100 (100)Hosting
>>> 192.168.100.700:17:a4:77:00:1c  1  <
>>> 1   < 1
>>>
>>> ens1f1
>>> 00:17:a4:77:00:1e 0 < 1   < 1
>>>
>>> Setup Hosts Networks
>>>
>>> Network: ovirtmgmt
>>> Static
>>> IP: 192.168.0.63
>>> Subnet Mask: 255.255.255.0
>>> Gateway: 192.168.0.1
>>> Custom Properties: No available keys
>>> Sync network: Yes
>>>
>>> Network: dmz
>>> Static
>>> IP: 192.168.50.8
>>> Subnet Mask: 255.255.255.224
>>> Gateway: 192.168.50.1
>>> Custom Properties: Please select a key...
>>>
>>> Network: Hosting
>>> Static
>>> IP: 192.168.100.7
>>> Subnet Mask: 255.255.255.240
>>> Gateway: 192.168.100.1
>>> Custom Properties: Please select a key...
>>>
>>> Virtual Machines
>>>
>>> Name: PruebaVLAN
>>> Host: srvovirt02.cnsc.net
>>>
>>> Edit Network Interface
>>>
>>> Name: nic1
>>> Profile: dmz/dmz
>>> Type: VirtIO
>>> Custom MAC address: 00:00:1a:4a:3e:00
>>> Link State: Up
>>> Card Status: Plugged
>>>
>>> Network Interfaces in the Host srvovirt02.cnsc.net:
>>>
>>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>> inet 127.0.0.1/8 scope host lo
>>>valid_lft forever preferred_lft forever
>>> inet6 ::1/128 scope host
>>>valid_lft forever preferred_lft forever
>>> 2: eno1:  mtu 1500 qdisc mq master
>>> ovirtmgmt state UP qlen 1000
>>> link/ether 00:17:a4:77:00:18 brd ff:ff:ff:ff:ff:ff
>>> inet6 fe80::217:a4ff:fe77:18/64 scope link
>>>valid_lft forever preferred_lft forever
>>> 3: eno2:  mtu 1500 qdisc mq state UP
>>> qlen 1000
>>> link/ether 00:17:a4:77:00:1a brd ff:ff:ff:ff:ff:ff
>>> inet6 fe80::217:a4ff:fe77:1a/64 scope link
>>>valid_lft forever preferred_lft forever
>>> 4: ens1f0:  mtu 1500 qdisc mq state UP
>>> qlen 1000
>>> link/ether 00:17:a4:77:00:1c brd ff:ff:ff:ff:ff:ff
>>> inet6 fe80::217:a4ff:fe77:1c/64 scope link
>>>valid_lft forever preferred_lft forever
>>> 5: ens1f1:  mtu 1500 qdisc noop state DOWN qlen 1000
>>> link/ether 00:17:a4:77:00:1e brd ff:ff:ff:ff:ff:ff
>>> 6: bond0:  mtu 1500 qdisc noop state DOWN
>>> link/ether 72:f8:9c:75:e3:86 brd ff:ff:ff:ff:ff