[ovirt-users] Failed to connect Host to the Storage Domains hosted_storage.

2016-07-22 Thread Robert Story
Hello,

I'm in the process of upgrading from 3.5.x to 3.6.x. My hosted engine and
hosts in the primary cluster are all upgraded and appear to be running fine.

I have a second cluster of 2 machines which are just regular hosts, without
the hosted-engine. Both have been marked non-operational, with the
following messages logged about every 5 minutes:


Failed to connect Host perses to Storage Pool Default

Host perses cannot access the Storage Domain(s) hosted_storage attached to the 
Data Center Default. Setting Host state to Non-Operational.

Host perses reports about one of the Active Storage Domains as Problematic.

Failed to connect Host perses to Storage Servers

Failed to connect Host perses to the Storage Domains hosted_storage.


I could see the normal storage/iso/export domains mounted on the host, and
the VMs running on the host are fine.

I shut down the VMs on one host, put it in maintenance mode, installed 3.6
repo and ran yum update. All went well, but when I activated the host, same
deal.

I've attached the engine log snippet for the activation attempt.

Robert

-- 
Senior Software Engineer @ Parsons


engine.log-0722
Description: Binary data


pgphIjB2LDVYl.pgp
Description: OpenPGP digital signature
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to connect Host to the Storage Domains hosted_storage.

2016-07-22 Thread Simone Tiraboschi
On Fri, Jul 22, 2016 at 3:47 PM, Robert Story  wrote:
> Hello,
>
> I'm in the process of upgrading from 3.5.x to 3.6.x. My hosted engine and
> hosts in the primary cluster are all upgraded and appear to be running fine.
>
> I have a second cluster of 2 machines which are just regular hosts, without
> the hosted-engine. Both have been marked non-operational, with the
> following messages logged about every 5 minutes:
>
>
> Failed to connect Host perses to Storage Pool Default
>
> Host perses cannot access the Storage Domain(s) hosted_storage attached to 
> the Data Center Default. Setting Host state to Non-Operational.
>
> Host perses reports about one of the Active Storage Domains as Problematic.
>
> Failed to connect Host perses to Storage Servers
>
> Failed to connect Host perses to the Storage Domains hosted_storage.
>
>
> I could see the normal storage/iso/export domains mounted on the host, and
> the VMs running on the host are fine.

In 3.5 only the hosts involved in hosted-engine had to access the
hosted-engine storage domain.
With 3.6 we introduced the capability to manage the engine VM from
the engine itself, so the engine has to import the hosted-engine
storage domain.
This means that all the hosts in the datacenter that contains the
cluster with the hosted-engine hosts now have to be able to connect
to the hosted-engine storage domain.

Can you please check the ACL on the storage server (NFS or iSCSI) that
you use to expose the hosted-engine storage domain?
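
If it is NFS, a minimal sketch of what granting access to the additional hosts
could look like on the storage server (host names below are placeholders and the
export options just follow the usual oVirt NFS recommendations - adjust to your
environment):

  /etc/exports:
  /ovirt/hosted-engine  perses.example.com(rw,anonuid=36,anongid=36,all_squash) host2.example.com(rw,anonuid=36,anongid=36,all_squash)

  # exportfs -ra
  # showmount -e <nfs-server>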

> I shut down the VMs on one host, put it in maintenance mode, installed 3.6
> repo and ran yum update. All went well, but when I activated the host, same
> deal.
>
> I've attached the engine log snippet for the activation attempt.
>
> Robert
>
> --
> Senior Software Engineer @ Parsons
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 4 Hosted Engine deploy on fc storage - [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network is unreachable

2016-07-22 Thread aleksey . maksimov
Hello oVirt guru`s !

I have problem with initial deploy of ovirt 4.0 hosted engine.

My environment :
* Two servers HP ProLiant DL 360 G5 with Qlogic FC HBA connected (with multipathd) to storage HP 3PAR 7200
* On each server installed CentOS 7.2 Linux (3.10.0-327.22.2.el7.x86_64)
* On 3PAR storage I created 2 LUNs for oVirt.
  - First LUN for oVirt Hosted Engine VM (60GB)
  - Second LUN for all other VMs (2TB)

# multipath -ll

3par-vv1 (360002ac0001bcec9) dm-0 3PARdata,VV
size=60G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 2:0:1:1 sdd 8:48  active ready running
  |- 3:0:0:1 sdf 8:80  active ready running
  |- 2:0:0:1 sdb 8:16  active ready running
  `- 3:0:1:1 sdh 8:112 active ready running

3par-vv2 (360002ac00016cec9) dm-1 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 2:0:0:0 sda 8:0   active ready running
  |- 3:0:0:0 sde 8:64  active ready running
  |- 2:0:1:0 sdc 8:32  active ready running
  `- 3:0:1:0 sdg 8:96  active ready running

My steps on first server (initial deploy of ovirt 4.0 hosted engine):

# systemctl stop NetworkManager
# systemctl disable NetworkManager
# yum -y install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
# yum -y install epel-release
# wget http://mirror.yandex.ru/centos/7/isos/x86_64/CentOS-7-x86_64-NetInstall-1511.iso -P /tmp/
# yum install ovirt-hosted-engine-setup
# yum install screen
# screen -RD

...in screen session :

# hosted-engine --deploy

...in configuration process I chose "fc" as storage type for oVirt hosted engine vm and select 60GB LUN...

--== CONFIGURATION PREVIEW ==--

...
  Firewall manager                   : iptables
  Gateway address                    : 10.1.0.1
  Host name for web application      : KOM-AD01-OVIRT1
  Storage Domain type                : fc
  Host ID                            : 1
  LUN ID                             : 360002ac0001bcec9
  Image size GB                      : 40
  Console type                       : vnc
  Memory size MB                     : 4096
  MAC address                        : 00:16:3e:77:1d:07
  Boot type                          : cdrom
  Number of CPUs                     : 2
  ISO image (cdrom boot/cloud-init)  : /tmp/CentOS-7-x86_64-NetInstall-1511.iso
  CPU Type                           : model_Penryn
...
and get error after step "Verifying sanlock lockspace initialization"
...

[ INFO  ] Verifying sanlock lockspace initialization
[ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network is unreachable
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160722124133.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy
          Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160722123404-t26vw0.log

Interestingly
If I try to deploy hosted-engine v3.6, everything goes well in the same configuration !! :

[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Configuring the management bridge
[ INFO  ] Creating Volume Group
[ INFO  ] Creating Storage Domain
[ INFO  ] Creating Storage Pool
[ INFO  ] Connecting Storage Pool
[ INFO  ] Verifying sanlock lockspace initialization
[ INFO  ] Creating Image for 'hosted-engine.lockspace' ...
[ INFO  ] Image for 'hosted-engine.lockspace' created successfully
[ INFO  ] Creating Image for 'hosted-engine.metadata' ...
[ INFO  ] Image for 'hosted-engine.metadata' created successfully
[ INFO  ] Creating VM Image
[ INFO  ] Destroying Storage Pool
[ INFO  ] Start monitoring domain
[ INFO  ] Configuring VM
[ INFO  ] Updating hosted-engine configuration
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up
[ INFO  ] Creating VM
          You can now connect to the VM with the following command:
          /bin/remote-viewer vnc://localhost:5900
...

What could be the problem?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4 Hosted Engine deploy on fc storage - [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network is unreachable

2016-07-22 Thread Simone Tiraboschi
Hi Aleksey,
Can you please attach hosted-engine-setup logs?

On Fri, Jul 22, 2016 at 3:46 PM,   wrote:
>
> Hello oVirt guru`s !
>
> I have problem with initial deploy of ovirt 4.0 hosted engine.
>
> My environment :
> 
> * Two servers HP ProLiant DL 360 G5 with Qlogic FC HBA connected (with
> multipathd) to storage HP 3PAR 7200
> * On each server installed CentOS 7.2 Linux (3.10.0-327.22.2.el7.x86_64)
> * On 3PAR storage I created 2 LUNs for oVirt.
> - First LUN for oVirt Hosted Engine VM (60GB)
> - Second LUN for all other VMs (2TB)
>
> # multipath -ll
>
> 3par-vv1 (360002ac0001bcec9) dm-0 3PARdata,VV
> size=60G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
> `-+- policy='round-robin 0' prio=50 status=active
>   |- 2:0:1:1 sdd 8:48  active ready running
>   |- 3:0:0:1 sdf 8:80  active ready running
>   |- 2:0:0:1 sdb 8:16  active ready running
>   `- 3:0:1:1 sdh 8:112 active ready running
>
> 3par-vv2 (360002ac00016cec9) dm-1 3PARdata,VV
> size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
> `-+- policy='round-robin 0' prio=50 status=active
>   |- 2:0:0:0 sda 8:0   active ready running
>   |- 3:0:0:0 sde 8:64  active ready running
>   |- 2:0:1:0 sdc 8:32  active ready running
>   `- 3:0:1:0 sdg 8:96  active ready running
>
>
>
> My steps on first server (initial deploy of ovirt 4.0 hosted engine):
> 
>
> # systemctl stop NetworkManager
> # systemctl disable NetworkManager
> # yum -y install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
> # yum -y install epel-release
> # wget
> http://mirror.yandex.ru/centos/7/isos/x86_64/CentOS-7-x86_64-NetInstall-1511.iso
> -P /tmp/
> # yum install ovirt-hosted-engine-setup
> # yum install screen
> # screen -RD
>
> ...in screen session :
>
> # hosted-engine --deploy
>
> ...
> in configuration process I chose "fc" as storage type for oVirt hosted
> engine vm and select 60GB LUN...
> ...
>
> --== CONFIGURATION PREVIEW ==--
>
> ...
>   Firewall manager   : iptables
>   Gateway address: 10.1.0.1
>   Host name for web application  : KOM-AD01-OVIRT1
>   Storage Domain type: fc
>   Host ID: 1
>   LUN ID :
> 360002ac0001bcec9
>   Image size GB  : 40
>   Console type   : vnc
>   Memory size MB : 4096
>   MAC address: 00:16:3e:77:1d:07
>   Boot type  : cdrom
>   Number of CPUs : 2
>   ISO image (cdrom boot/cloud-init)  :
> /tmp/CentOS-7-x86_64-NetInstall-1511.iso

Can I ask why you prefer/need to manually create a VM installing from
a CD instead of using the ready-to-use ovirt-engine-appliance?
Using the appliance makes the setup process a lot shorter and more comfortable.
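
For reference, the appliance flow is roughly (a sketch, assuming the oVirt 4.0
repository is already enabled on the host):

  # yum -y install ovirt-engine-appliance
  # hosted-engine --deploy
  ...the setup should then offer the pre-built appliance image instead of asking
  for a cdrom/ISO to install from.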

>   CPU Type   : model_Penryn
> ...
> and get error after step "Verifying sanlock lockspace initialization"
> ...
>
> [ INFO  ] Verifying sanlock lockspace initialization
> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network
> is unreachable
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160722124133.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
> please check the issue, fix and redeploy
>   Log file is located at
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160722123404-t26vw0.log
>
>
> Interestingly
> 
> If I try to deploy hosted-engine v3.6, everything goes well in the same
> configuration !! :
>
> 
> [ INFO  ] Stage: Transaction setup
> [ INFO  ] Stage: Misc configuration
> [ INFO  ] Stage: Package installation
> [ INFO  ] Stage: Misc configuration
> [ INFO  ] Configuring libvirt
> [ INFO  ] Configuring VDSM
> [ INFO  ] Starting vdsmd
> [ INFO  ] Waiting for VDSM hardware info
> [ INFO  ] Configuring the management bridge
> [ INFO  ] Creating Volume Group
> [ INFO  ] Creating Storage Domain
> [ INFO  ] Creating Storage Pool
> [ INFO  ] Connecting Storage Pool
> [ INFO  ] Verifying sanlock lockspace initialization
> [ INFO  ] Creating Image for 'hosted-engine.lockspace' ...
> [ INFO  ] Image for 'hosted-engine.lockspace' created successfully
> [ INFO  ] Creating Image for 'hosted-engine.metadata' ...
> [ INFO  ] Image for 'hosted-engine.metadata' created successfully
> [ INFO  ] Creating VM Image
> [ INFO  ] Destroying Storage Pool
> [ INFO  ] Start monitoring domain
> [ INFO  ] Configuring VM
> [ INFO  ] Updating hosted-engine configuration
> [ INFO  ] Stage: Transaction commit
> [ INFO  ] Stage: Closing up
> [ INFO  ] 

Re: [ovirt-users] Network settings for multiple hosts

2016-07-22 Thread Alexis HAUSER
OK, I'm starting to understand where the problem was:





[81387.469731] CPU: 1 PID: 20688 Comm: umount Tainted: G  I
   3.10.0-327.13.1.el7.x86_64 #1
[81387.469733] Hardware name: Dell Inc. PowerEdge R610/086HF8, BIOS 1.2.6 
07/17/2009
[81387.469734]   240ade23 880b2d44bda0 
816356f4
[81387.469737]  880b2d44bdd8 8107b1e0 880c582997b0 
880c58299838
[81387.469740]  819c1900 0083  
880b2d44bde8
[81387.469742] Call Trace:
[81387.469748]  [] dump_stack+0x19/0x1b
[81387.469752]  [] warn_slowpath_common+0x70/0xb0
[81387.469754]  [] warn_slowpath_null+0x1a/0x20
[81387.469756]  [] bdev_inode_switch_bdi+0x7a/0x90
[81387.469758]  [] __blkdev_put+0x74/0x1a0
[81387.469760]  [] blkdev_put+0x4e/0x140
[81387.469764]  [] kill_block_super+0x44/0x70
[81387.469767]  [] deactivate_locked_super+0x49/0x60
[81387.469769]  [] deactivate_super+0x46/0x60
[81387.469772]  [] mntput_no_expire+0xc5/0x120
[81387.469775]  [] SyS_umount+0x9f/0x3c0
[81387.469778]  [] system_call_fastpath+0x16/0x1b
[81387.469780] ---[ end trace 24243ae635253c84 ]---
[81387.649850] blk_update_request: I/O error, dev dm-11, sector 5769216
[81387.649874] blk_update_request: I/O error, dev dm-11, sector 5770240
[81388.150048] blk_update_request: I/O error, dev dm-11, sector 5769216
[81388.150074] blk_update_request: I/O error, dev dm-11, sector 5770240

[83839.025136] bnx2: fw sync timeout, reset code = 502002d
[83839.025146] bnx2 :02:00.0 em3: <--- start MCP states dump --->
[83839.025152] bnx2 :02:00.0 em3: DEBUG: MCP_STATE_P0[0003650e] 
MCP_STATE_P1[0003600e]
[83839.025158] bnx2 :02:00.0 em3: DEBUG: MCP mode[b880] state[8000] 
evt_mask[0500]
[83839.025164] bnx2 :02:00.0 em3: DEBUG: pc[080032d8] pc[08003568] 
instr[a462]
[83839.025166] bnx2 :02:00.0 em3: DEBUG: shmem states:
[83839.025172] bnx2 :02:00.0 em3: DEBUG: drv_mb[0502002d] fw_mb[002b] 
link_status[006f]
[83839.025175]  drv_pulse_mb[3bd8]
[83839.025179] bnx2 :02:00.0 em3: DEBUG: dev_info_signature[44564903] 
reset_type[01005254]
[83839.025182]  condition[0003650e]
[83839.025188] bnx2 :02:00.0 em3: DEBUG: 01c0: 01005254 42530088 
0003650e 
[83839.025195] bnx2 :02:00.0 em3: DEBUG: 03cc:   
 0a28
[83839.025202] bnx2 :02:00.0 em3: DEBUG: 03dc: 0004  
 
[83839.025209] bnx2 :02:00.0 em3: DEBUG: 03ec:   
000 
[83839.025212] bnx2 :02:00.0 em3: DEBUG: 0x3fc[]
[83839.025214] bnx2 :02:00.0 em3: <--- end MCP states dump --->

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Spice in 4.0

2016-07-22 Thread Melissa Mesler
We are using the 10Zig clients. We talked to support at 10Zig about the
future of these thin clients and even though they are Red Hat certified
thin clients, they are dropping spice support and not moving to virt-
viewer. We are more than happy to help someone with the development of
these in any way as we are kinda stuck and have a lot of these in our
inventory.
 
 
On Fri, Jul 22, 2016, at 07:06 AM, Yaniv Kaul wrote:
>
>
> On Thu, Jul 21, 2016 at 9:19 PM, Melissa Mesler
>  wrote:
>> Yes we are trying to get spice working on a thin client where
>> we can't
>>  use virt-viewer. I just don't know the steps in the bugzilla to
>>  accomplish it as it's not completely clear.
>
> I don't know the details of this thin client, but I suggest requesting
> it to be supported from the virt-viewer team. Perhaps it's not such a
> big deal.
> Y.
>
>>
>> On Thu, Jul 21, 2016, at 01:14 PM, Alexander Wels wrote:
>>  > On Thursday, July 21, 2016 01:08:49 PM Melissa Mesler wrote:
>>  > > So I am trying to get spice working in ovirt 4.0. I found the
>>  > > following
>>  > > solution:
>>  > > https://bugzilla.redhat.com/show_bug.cgi?id=1316560
>>  > >
>>  >
>>  > That bugzilla relates to the legacy spice.xpi FF plugin, and
>>  > possibly
>>  > some
>>  > activex plugin for IE. The current way is the following:
>>  >
>>  > 1. Get virt-viewer for your platform.
>>  > 2. Associated virt-viewer with .vv files in your browser.
>>  > 3. Click the button, which will download the .vv file with the
>>  > appropriate
>>  > ticket.
>>  > 4. The browser will launch virt-viewer with the .vv file as a
>>  >parameter
>>  > and it
>>  > should just all work.
>>  >
>>  > > Where do you set
>>  > > vdc_options.EnableDeprecatedClientModeSpicePlugin to
>>  > > 'true'?? I see it says ENGINE_DB but what steps do I follow to
>>  > > do this?
>>  > > Can someone help me?
>>  > > ___
>>  > > Users mailing list
>>  > > Users@ovirt.org
>>  > > http://lists.ovirt.org/mailman/listinfo/users
>>  >
>>  ___
>>  Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] one export domain two DC

2016-07-22 Thread Fernando Fuentes
To All,
 
Thank you for the help!
 
Regards,
 
--
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org
 
 
 
On Fri, Jul 22, 2016, at 07:12 AM, Yaniv Kaul wrote:
>
>
> On Wed, Jul 20, 2016 at 2:04 PM, Fernando Fuentes
>  wrote:
>> Is it possible to export all of my vms on my oVirt 3.5 Domain
>> and than
>>  attach my export domain on my oVirt 4.0 DC and import the vm's?
>
> Yes, you can do this + just import a storage domain (see [1] for
> details - since 3.5)
> Y.
>
> [1] 
> http://www.ovirt.org/develop/release-management/features/storage/importstoragedomain/
>
>>
>> Regards,
>>
>>  --
>>  Fernando Fuentes ffuen...@txweather.org http://www.txweather.org
>>  ___
>>  Users mailing list Users@ovirt.org
>>  http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to connect Host to the Storage Domains hosted_storage.

2016-07-22 Thread Ralf Schenk
Hello,

I also see from the logs that all your Storage-Domains that work are
mounted as nfsVersion='V4' but ovirt-nfs.netsec:/ovirt/hosted-engine is
mounted as nfsVersion='null'.

Bye


Am 22.07.2016 um 16:17 schrieb Simone Tiraboschi:
> On Fri, Jul 22, 2016 at 3:47 PM, Robert Story  wrote:
>> Hello,
>>
>> I'm in the process of upgrading from 3.5.x to 3.6.x. My hosted engine and
>> hosts in the primary cluster are all upgraded and appear to be running fine.
>>
>> I have a second cluster of 2 machines which are just regular hosts, without
>> the hosted-engine. Both have been marked non-operational, with the
>> following messages logged about every 5 minutes:
>>
>>
>> Failed to connect Host perses to Storage Pool Default
>>
>> Host perses cannot access the Storage Domain(s) hosted_storage attached to 
>> the Data Center Default. Setting Host state to Non-Operational.
>>
>> Host perses reports about one of the Active Storage Domains as Problematic.
>>
>> Failed to connect Host perses to Storage Servers
>>
>> Failed to connect Host perses to the Storage Domains hosted_storage.
>>
>>
>> I could see the normal storage/iso/export domains mounted on the host, and
>> the VMs running on the host are fine.
> In 3.5 only the hosts involved in hosted-engine have to access the
> hosted-engine storage domain.
> With 3.6 we introduced the capabilities to manage the engine VM from
> the engine itself so the engine has to import in the hosted-engine
> storage domain.
> This means that all the hosts in the datacenter that contains the
> cluster with the hosted-engine hosts have now to be able to connect
> the hosted-engine storage domain.
>
> Can you please check the ACL on the storage server (NFS or iSCSI) that
> you use to expose the hosted-engine storage domain?
>
>> I shut down the VMs on one host, put it in maintenance mode, installed 3.6
>> repo and ran yum update. All went well, but when I activated the host, same
>> deal.
>>
>> I've attached the engine log snippet for the activation attempt.
>>
>> Robert
>>
>> --
>> Senior Software Engineer @ Parsons
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* 

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to import pre-configured nfs data domain

2016-07-22 Thread Logan Kuhn
Thank you!

That does do some interesting things, but doesn't appear to work.

I've added a new storage domain, merged the differences in the metadata file 
back to the old one and powered it up.  When I start up the nodes they 
endlessly report: VDSM ovirt-reqa1 command failed: (-226, 'Unable to read 
resource owners', 'Sanlock exception')

I tried reinstalling one of them and the error message continues.  At least for 
now I've restored the old config and the error is gone.

Regards,
Logan

- On Jul 22, 2016, at 3:06 AM, Milan Zamazal mzama...@redhat.com wrote:

| Logan Kuhn  writes:
| 
|> Am I correct in the assumption that importing a previously master data domain
|> into a fresh engine without a current master domain is supported?
| 
| It's supported only in case the master domain was previously correctly
| detached from the data center.
| 
| In case of an unexpected complete disaster, when a fresh engine is
| installed and used, it's still possible to recover the master domain in
| theory.  You must find `metadata' file in the master domain and edit it
| for the new engine.  It's completely unsupported and it may or may not
| work.  We don't have guidelines how to do it, but you may try to create
| a new master domain, then detach it and compare the two metadata files.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to connect Host to the Storage Domains hosted_storage.

2016-07-22 Thread Simone Tiraboschi
On Fri, Jul 22, 2016 at 4:48 PM, Ralf Schenk  wrote:

> Hello,
>
> I also see from the logs that all your Storage-Domains that work are
> mounted as nfsVersion='V4' but ovirt-nfs.netsec:/ovirt/hosted-engine is
> mounted as nfsVersion='null'.
>

Hi Robert,
unfortunately Ralf is right: I reproduced the issue.

The auto-import procedure for the hosted-engine storage domain ignores the
nfsVersion parameter and so we don't have a value for that in the engine DB.
On hosted-engine hosts, the agent mounts the hosted-engine storage domain
before the engine and so everything is fine since the agent knows that it's
nfsv4.

The issue comes with the hosts of that datacenter that are not involved in
hosted-engine: in this case the engine simply tries to mount without
the nfsVersion parameter and so, if the NFS server cannot be accessed over
nfsv3, the mount can fail and the host will be declared non-operational.

I opened a ticket to track it:
https://bugzilla.redhat.com/show_bug.cgi?id=1359265

If you need a quick fix you can:
- fix the configuration of your storage server to allow it to be accessed
also over nfsv3
- edit the configuration of the storage connection in the engine DB on the
engine VM to add the missing parameter. Something like:
 # sudo -u postgres psql
 \c engine;
 select * from storage_server_connections;
 UPDATE storage_server_connections SET nfs_version = '4' WHERE connection =
'ovirt-nfs.netsec:/ovirt/hosted-engine';
 commit;
 select * from storage_server_connections;
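
To double check which NFS version a host actually used for that mount, something
like this on the host should show it (look for the vers=/nfsvers= option):

 # grep hosted-engine /proc/mounts
 # nfsstat -m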


> Bye
>
> Am 22.07.2016 um 16:17 schrieb Simone Tiraboschi:
>
> On Fri, Jul 22, 2016 at 3:47 PM, Robert Story  
>  wrote:
>
> Hello,
>
> I'm in the process of upgrading from 3.5.x to 3.6.x. My hosted engine and
> hosts in the primary cluster are all upgraded and appear to be running fine.
>
> I have a second cluster of 2 machines which are just regular hosts, without
> the hosted-engine. Both have been marked non-operational, with the
> following messages logged about every 5 minutes:
>
>
> Failed to connect Host perses to Storage Pool Default
>
> Host perses cannot access the Storage Domain(s) hosted_storage attached to 
> the Data Center Default. Setting Host state to Non-Operational.
>
> Host perses reports about one of the Active Storage Domains as Problematic.
>
> Failed to connect Host perses to Storage Servers
>
> Failed to connect Host perses to the Storage Domains hosted_storage.
>
>
> I could see the normal storage/iso/export domains mounted on the host, and
> the VMs running on the host are fine.
>
> In 3.5 only the hosts involved in hosted-engine have to access the
> hosted-engine storage domain.
> With 3.6 we introduced the capabilities to manage the engine VM from
> the engine itself so the engine has to import in the hosted-engine
> storage domain.
> This means that all the hosts in the datacenter that contains the
> cluster with the hosted-engine hosts have now to be able to connect
> the hosted-engine storage domain.
>
> Can you please check the ACL on the storage server (NFS or iSCSI) that
> you use to expose the hosted-engine storage domain?
>
>
> I shut down the VMs on one host, put it in maintenance mode, installed 3.6
> repo and ran yum update. All went well, but when I activated the host, same
> deal.
>
> I've attached the engine log snippet for the activation attempt.
>
> Robert
>
> --
> Senior Software Engineer @ Parsons
>
> ___
> Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
>
> ___
> Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
>
>
> --
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail *r...@databay.de* 
>
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* 
>
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
> Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
> --
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Solved: Re: Failed to connect Host to the Storage Domains hosted_storage.

2016-07-22 Thread Robert Story
On Fri, 22 Jul 2016 18:21:15 +0200 Simone wrote:
ST> On Fri, Jul 22, 2016 at 4:48 PM, Ralf Schenk  wrote:
ST> 
ST> > Hello,
ST> >
ST> > I also see from the logs that all your Storage-Domains that work are
ST> > mounted as nfsVersion='V4' but ovirt-nfs.netsec:/ovirt/hosted-engine is
ST> > mounted as nfsVersion='null'.
ST> >  
ST> 
ST> Hi Robert,
ST> unfortunately Ralf is right: I reproduced the issue.
ST> 
ST> The auto-import procedure for the hosted-engine storage domain ignores the
ST> nfsVersion parameter and so we don't have a value for that in the engine DB.
ST> On hosted-engine hosts, the agent mounts the hosted-engine storage domain
ST> before the engine and so everything is fine since the agent knows that it's
ST> nfsv4.
ST> 
ST> The issue comes with the hosts of that datacenter not involved in
ST> hosted-engine: in this case the engine simply tries to mount without
ST> the nfsVersion parameter and so, if the NFS server cannot be access over
ST> nfsv3, the mount could fail and the host will be declared as not operation.
ST> 
ST> I opened a ticket to track it:
ST> https://bugzilla.redhat.com/show_bug.cgi?id=1359265
ST> 
ST> If you need a quick fix you can:
ST> - fix the configuration of your storage server to allow it to be accessed
ST> also over nfsv3
ST> - edit the configuration of the storage connection in the engine DB on the
ST> engine VM to add the missing parameter. Something like:
ST>  # sudo -u postgresl psql
ST>  \c engine;
ST>  select * from storage_server_connections;
ST>  UPDATE storage_server_connections SET nfs_version = '4' WHERE connection =
ST> 'ovirt-nfs.netsec:/ovirt/hosted-engine';
ST>  commit;
ST>  select * from storage_server_connections;

Thanks for that workaround. I've added it to the bugzilla. The easy manual
workaround I tried was:

# mkdir /rhev/data-center/mnt/ovirt-nfs.localdomain:_ovirt_hosted-engine

# /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=4 \
  ovirt-nfs.localdomain:/ovirt/hosted-engine \
  /rhev/data-center/mnt/ovirt-nfs.netsec:_ovirt_hosted-engine

which got the hosts operational again.

Thanks for all the help!


Robert

-- 
Senior Software Engineer @ Parsons


pgp0ZZWfGusFe.pgp
Description: OpenPGP digital signature
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Windows 10 + qemu + Blue Iris = Blue screen

2016-07-22 Thread Blaster

> On Jul 22, 2016, at 4:27 AM, Michal Skrivanek  
> wrote:
> 
> 
>> On 21 Jul 2016, at 20:05, Blaster  wrote:
>> 
>> I am running an application called Blue Iris which records video from IP 
>> cameras.
>> 
>> This was working great under Ovirt 3.6.3 + Windows 7.  Now I’ve upgraded to 
>> Windows 10 and as soon as the Blue Iris service starts, the VM blue screens.
>> 
>> I talked to the software vendor, and they said it’s not their problem, they 
>> aren’t doing anything that could cause a blue screen, so it must be  
>> driver/memory/hardware problem.  They say the application works just fine 
>> under Windows 10.
>> 
>> So thinking maybe the upgrade went bad, I created a new VM, used e1000 and 
>> IDE interfaces (i.e., no Virtualized hardware or drivers were used) and 
>> re-installed Blue Iris.
> 
> I would expect better luck with virtio drivers. Either way, if it was working 
> before and not working in Win10 it’s likely related to drivers. Can you make 
> sure you try latest drivers? Can you pinpoint the blue screen…to perhaps USB 
> or other subsystem?
> Might be worth trying on clean Win10 install just to rule out upgrade issues 
> (I didn’t understand whether you cloned the old VM and just reinstalled blue 
> iris or reinstalled everything) , and if it still reproduces it is likely 
> some low level incompatibility in QEMU/KVM. You would likely have to try 
> experiment with qemu cmdline or use latest qemu and check the qemu mailing 
> list
> 
> Thanks,
> michal

Hi Michal, 

I did try a clean install.  Both an upgrade and a fresh install cause a blue
screen. How do I pinpoint the blue screen?  I’m guessing it’s a QEMU issue
with Win 10.  I’m on Fedora 22; how do I get a newer QEMU than what’s in the
distribution, or should I just upgrade to Fedora 24?


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to import pre-configured nfs data domain

2016-07-22 Thread Milan Zamazal
Logan Kuhn  writes:

> Am I correct in the assumption that importing a previously master data domain
> into a fresh engine without a current master domain is supported?

It's supported only in case the master domain was previously correctly
detached from the data center.

In case of an unexpected complete disaster, when a fresh engine is
installed and used, it's still possible to recover the master domain in
theory.  You must find `metadata' file in the master domain and edit it
for the new engine.  It's completely unsupported and it may or may not
work.  We don't have guidelines how to do it, but you may try to create
a new master domain, then detach it and compare the two metadata files.
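
For a file-based (e.g. NFS) domain, a rough sketch of locating and comparing the
two files from a host that has both domains mounted (all paths below are
placeholders):

 # find /rhev/data-center/mnt -maxdepth 4 -name metadata
 # diff <mount-point>/<old-sd-uuid>/dom_md/metadata <mount-point>/<new-sd-uuid>/dom_md/metadata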
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Solved: Re: 3.5 to 3.6 upgrade stuck

2016-07-22 Thread Simone Tiraboschi
On Fri, Jul 22, 2016 at 9:58 AM, Simone Tiraboschi  wrote:
> On Fri, Jul 22, 2016 at 4:11 AM, Robert Story  wrote:
>> On Thu, 21 Jul 2016 16:04:41 -0400 Robert wrote:
>> RS> 
>> Thread-1::config::278::ovirt_hosted_engine_ha.broker.notifications.Notifications.config
>> RS>  ::(refresh_local_conf_file) local conf file was correctly written
>> RS>
>> RS> And then  nothing. It just hangs. Nothing more is logged Thread-1.
>>
>> So I started digging around the the python source, starting from
>> refresh_local_conf_file. I ended up in ./broker/notifications.py, in
>> send_email. I added some logging:
>>
>> def send_email(cfg, email_body):
>> """Send email."""
>>
>> logger = logging.getLogger("%s.Notifications" % __name__)
>>
>> try:
>> logger.debug(" setting up smtp 1")
>> server = smtplib.SMTP(cfg["smtp-server"], port=cfg["smtp-port"])
>> logger.debug(" setting up smtp 2")
>> ...
>>
>> Now the final messages are:
>>
>> Thread-1::DEBUG::2016-07-21 21:35:05,280::config::278::
>>   ovirt_hosted_engine_ha.broker.notifications.Notifications.config::
>>   (refresh_local_conf_file) local conf file was correctly written
>> Thread-1::DEBUG::2016-07-21 21:35:05,282::notifications::27::
>>   ovirt_hosted_engine_ha.broker.notifications.Notifications::
>>   (send_email)  setting up smtp 1
>>
>>
>> So the culprit is:
>>
>> server = smtplib.SMTP(cfg["smtp-server"], port=cfg["smtp-port"])
>>
>> Note that this does actually send the email - 2 minutes later.
>
> Thanks for your time and effort, Robert!
> In general the agent shouldn't get stuck if the broker is not able to
> send a notification email within a certain amount of time.
> I'm opening a bug to track this. Adding Martin here.

https://bugzilla.redhat.com/1359059

>> So I tried:
>>
>>   $ telnet localhost 25
>>   Trying ::1...
>>
>> which hung, and a little bell went off in my brain...
>>
>> After changing /etc/hosts from:
>>
>> 127.0.0.1   localhost localhost.localdomain localhost4 
>> localhost4.localdomain4
>> ::1 localhost localhost.localdomain localhost6 
>> localhost6.localdomain6
>>
>> to
>>
>> 127.0.0.1   localhost localhost.localdomain localhost4 
>> localhost4.localdomain4
>> ::1 localhost6 localhost6.localdomain6
>>
>> localhost resolves to 127.0.0.1, the delay is gone, and everything is fine.
>
> We are seeing similar reports regarding ip4/ip6 issues also migrating on 4.0
> See also http://lists.ovirt.org/pipermail/users/2016-June/040578.html and
> https://bugzilla.redhat.com/show_bug.cgi?id=1358530
>
> Adding Oved here.
>
>> I don't want to update /etc/hosts on each host. Is there somewhere I can
>> edit the broker config for mail?
>
> The shortest option is to edit broker.conf inside the configuration
> volume on the hosted-engine storage domain but it's a bit tricky and
> also potentially dangerous if not well done.
> We have an RFE about letting you reconfigure it from the engine, for
> now, if you are brave enough, please try something like this.
>
> dir=`mktemp -d` && cd $dir
> mnt_point=/rhev/data-center/mnt/192.168.1.115:_Virtual_ext35u36 #
> replace with your local mount point
> systemctl stop ovirt-ha-broker # on all the hosts!
> sdUUID_line=$(grep sdUUID /etc/ovirt-hosted-engine/hosted-engine.conf)
> sdUUID=${sdUUID_line:7:36}
> conf_volume_UUID_line=$(grep conf_volume_UUID
> /etc/ovirt-hosted-engine/hosted-engine.conf)
> conf_volume_UUID=${conf_volume_UUID_line:17:36}
> conf_image_UUID_line=$(grep conf_image_UUID
> /etc/ovirt-hosted-engine/hosted-engine.conf)
> conf_image_UUID=${conf_image_UUID_line:16:36}
> sudo -u vdsm dd
> if=$mnt_point/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID
> 2>/dev/null| tar -xvf -
> # here you have to edit the locally extracted broker.conf
> tar -cO * | sudo -u vdsm dd
> of=$mnt_point/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID
> systemctl restart ovirt-ha-agent # on all the hosts
>
> I strongly advise taking a backup before editing.
>
>> Robert
>>
>> --
>> Senior Software Engineer @ Parsons
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Centos 7 no bootable device

2016-07-22 Thread Milan Zamazal
Johan Kooijman  writes:

> Situation as follows: mixed cluster with 3.5 and 3.6 nodes. Now in the
> process of reinstalling the 3.5 nodes with 3.6 on CentOS 7.2. I can't live
> migrate VM's while they're running on different versions.

Live migration should always work within a single cluster.  What do Vdsm
logs say on both the source and target hosts when the migration fails?

> The most interesting part is happening when I power down a VM, and then run
> it on a 3.6 node. Only on CentOS 7 VM's, I'm getting a "no bootable device"
> error. I have a mixed setup of ubuntu, CentOS 6 and CentOS 7. Ubuntu &
> CentOS 6 are fine.
>
> Tried shooting grub in MBR again, to no effect. I start the VM then on a
> 3.5 node and all is fine.

So the VM is indicated as starting in Engine and BIOS or GRUB can't find
the device to boot from?  Could you provide Vdsm logs from both
successful and unsuccessful boot of the same VM?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] synchronize cache

2016-07-22 Thread Markus Scherer

grml...between chair and keyboard.

Sorry, I am now downloading FC23 and will test again.


thx for help

Am 21.07.2016 um 13:20 schrieb Sandro Bonazzola:



On Thu, Jul 21, 2016 at 12:43 PM, Markus Scherer > wrote:


sorry no success


[root@vm02 ~]# dnf clean all
24 files removed
[root@vm02 ~]# dnf check-update
Fedora 24 - x86_64 - Updates3.1 MB/s |  11
MB 00:03
Fehler: Failed to synchronize cache for repo 'ovirt-4.0'
[root@vm02 ~]#


Wait, you said you were on fedora 23, not fedora 24.
Fedora 24 is not supported by oVirt 4.0 (yet)
For Fedora 24 you need to use ovirt master nightly: 
http://www.ovirt.org/develop/dev-process/install-nightly-snapshot/
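
To confirm which Fedora release a box is actually on (and hence which
$releasever the repo URL expands to):

# cat /etc/fedora-release
# rpm -E %fedora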




Am 21.07.2016 um 11:16 schrieb Sandro Bonazzola:

try with:
[ovirt-4.0]
name=Latest oVirt 4.0 Release
baseurl=http://resources.ovirt.org/pub/ovirt-4.0/rpm/fc$releasever/

#mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-4.0-fc$releasever
enabled=1
skip_if_unavailable=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.0

On Thu, Jul 21, 2016 at 10:20 AM, Markus Scherer
> wrote:

my ovirt.repo file

[ovirt-4.0]
name=Latest oVirt 4.0 Release
#baseurl=http://resources.ovirt.org/pub/ovirt-4.0/rpm/fc$releasever/

mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-4.0-fc$releasever
enabled=1
skip_if_unavailable=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.0


Am 21.07.2016 um 09:08 schrieb Sandro Bonazzola:



On Thu, Jul 21, 2016 at 7:11 AM, Markus Scherer
> wrote:

Sorry the same problem



Maybe you hit a mirror out of sync, can you try using
baseurl instead of mirrorlist in the ovirt .repo files?


Thx


Am 20.07.2016 um 16:45 schrieb Sandro Bonazzola:



On Mon, Jul 18, 2016 at 9:41 AM, Markus Scherer
> wrote:

Hi,

on an fresh installed fedora server 23 i got a
"Failed to synchronize cache for repo 'ovirt-4.0'".
I have done a "dnf clean all" a "dnf check-update"
and a reboot but always the same problem.

thx for help


I regenerated the metadata within the repo, can you
please try again?
Thanks,

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




-- 
Sandro Bonazzola

Better technology. Faster innovation. Powered by
community collaboration.
See how it works at redhat.com 





-- 
Sandro Bonazzola

Better technology. Faster innovation. Powered by community
collaboration.
See how it works at redhat.com 


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





-- 
Sandro Bonazzola

Better technology. Faster innovation. Powered by community
collaboration.
See how it works at redhat.com 





--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Spice in 4.0

2016-07-22 Thread Yaniv Kaul
On Thu, Jul 21, 2016 at 9:19 PM, Melissa Mesler 
wrote:

> Yes we are trying to get spice working on a thin client where we can't
> use virt-viewer. I just don't know the steps in the bugzilla to
> accomplish it as it's not completely clear.
>

I don't know the details of this thin client, but I suggest requesting support
for it from the virt-viewer team. Perhaps it's not such a big deal.
Y.


>
> On Thu, Jul 21, 2016, at 01:14 PM, Alexander Wels wrote:
> > On Thursday, July 21, 2016 01:08:49 PM Melissa Mesler wrote:
> > > So I am trying to get spice working in ovirt 4.0. I found the following
> > > solution:
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1316560
> > >
> >
> > That bugzilla relates to the legacy spice.xpi FF plugin, and possibly
> > some
> > activex plugin for IE. The current way is the following:
> >
> > 1. Get virt-viewer for your platform.
> > 2. Associated virt-viewer with .vv files in your browser.
> > 3. Click the button, which will download the .vv file with the
> > appropriate
> > ticket.
> > 4. The browser will launch virt-viewer with the .vv file as a parameter
> > and it
> > should just all work.
> >
> > > Where do you set vdc_options.EnableDeprecatedClientModeSpicePlugin to
> > > 'true'?? I see it says ENGINE_DB but what steps do I follow to do this?
> > > Can someone help me?
> > > ___
> > > Users mailing list
> > > Users@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> >
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] one export domain two DC

2016-07-22 Thread Yaniv Kaul
On Wed, Jul 20, 2016 at 2:04 PM, Fernando Fuentes 
wrote:

> Is it possible to export all of my vms on my oVirt 3.5 Domain and than
> attach my export domain on my oVirt 4.0 DC and import the vm's?
>

Yes, you can do this, and you can also just import a storage domain directly
(see [1] for details - available since 3.5).
Y.

[1]
http://www.ovirt.org/develop/release-management/features/storage/importstoragedomain/


>
> Regards,
>
> --
> Fernando Fuentes
> ffuen...@txweather.org
> http://www.txweather.org
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] stuck host in hosted engine migration 3.6->4.0

2016-07-22 Thread Martin Perina
Hi,

I suspect some networking issue, because in vdsm.log I see several network
errors during connections to the HA agent and/or to the NFS server. And this
probably causes storage errors like 'Domain is either partially accessible or
entirely inaccessible'. But I have no idea what is causing those low-level
network errors :-(

Martin


On Fri, Jul 22, 2016 at 10:35 AM, Simone Tiraboschi 
wrote:

> On Thu, Jul 21, 2016 at 8:08 PM, Gervais de Montbrun
>  wrote:
> > Hi Martin
> >
> > Logs are attached.
> >
> > Thank you for any help you can offer.
> > :-)
> >
> > Cheers,
> > Gervais
>
> see also this one: https://bugzilla.redhat.com/show_bug.cgi?id=1358530
>
> the results are pretty similar.
>
> > On Jul 21, 2016, at 10:20 AM, Martin Perina  wrote:
> >
> > So could you please share logs?
> >
> > Thanks
> >
> > Martin
> >
> > On Thu, Jul 21, 2016 at 3:17 PM, Gervais de Montbrun
> >  wrote:
> >>
> >> Hi Oved,
> >>
> >> Thanks for the suggestion.
> >>
> >> I tried setting "management_ip = 0.0.0.0" but same result.
> >> BTW, management_ip='0.0.0.0' (as suggested in the post) doesn't work for
> >> me. vdsmd wouldn't start.
> >>
> >> Cheers,
> >> Gervais
> >>
> >>
> >>
> >> On Jul 20, 2016, at 10:50 AM, Oved Ourfali  wrote:
> >>
> >> Also, this thread seems similar.
> >> Also talking about IPV4/IPV6 issue.
> >> Does it help?
> >>
> >> [1] http://lists.ovirt.org/pipermail/users/2016-June/040602.html
> >>
> >> On Wed, Jul 20, 2016 at 4:43 PM, Martin Perina 
> wrote:
> >>>
> >>> Hi,
> >>>
> >>> could you please create a bug and attach engine host logs (all from
> >>> /var/log/ovirt-engine) and VDSM logs (from /var/log/vdsm)?
> >>>
> >>> Thanks
> >>>
> >>> Martin Perina
> >>>
> >>>
> >>> On Wed, Jul 20, 2016 at 1:50 PM, Gervais de Montbrun
> >>>  >>> > wrote:
> >>>
> >>> > Hi Qiong,
> >>> >
> >>> > I am experiencing the exact same issue. All four of my hosts are
> >>> > throwing
> >>> > the same error to the vdsm.log If you find a solution, please let me
> >>> > know
> >>
> >>
> >>
> >
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Centos 7 no bootable device

2016-07-22 Thread Samuli Heinonen
Hi Johan,

I had similar problems with some VMs when moving VMs to oVirt 3.6 from older
versions. IIRC it was unable to find the disk device at all. The weird part is
that when I booted a VM with a recovery CD everything was OK. I was able to boot
the VMs when I enabled the boot menu in the VM options (Boot Options - Enable boot
menu). Can you try whether that helps in your case?

Best regards,
Samuli Heinonen

 
> On 20 Jul 2016, at 15:12, Johan Kooijman  wrote:
> 
> Hi all,
> 
> Situation as follows: mixed cluster with 3.5 and 3.6 nodes. Now in the 
> process of reinstalling the 3.5 nodes with 3.6 on CentOS 7.2. I can't live 
> migrate VM's while they're running on different versions. That's odd, but not 
> the weirdest issue.
> 
> The most interesting part is happening when I power down a VM, and then run 
> it on a 3.6 node. Only on CentOS 7 VM's, I'm getting a "no bootable device" 
> error. I have a mixed setup of ubuntu, CentOS 6 and CentOS 7. Ubuntu & CentOS 
> 6 are fine. 
> 
> Tried shooting grub in MBR again, to no effect. I start the VM then on a 3.5 
> node and all is fine.
> 
> -- 
> Met vriendelijke groeten / With kind regards,
> Johan Kooijman
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Windows 10 + qemu + Blue Iris = Blue screen

2016-07-22 Thread Michal Skrivanek

> On 21 Jul 2016, at 20:05, Blaster  wrote:
> 
> I am running an application called Blue Iris which records video from IP 
> cameras.
> 
> This was working great under Ovirt 3.6.3 + Windows 7.  Now I’ve upgraded to 
> Windows 10 and as soon as the Blue Iris service starts, the VM blue screens.
> 
> I talked to the software vendor, and they said it’s not their problem, they 
> aren’t doing anything that could cause a blue screen, so it must be  
> driver/memory/hardware problem.  They say the application works just fine 
> under Windows 10.
> 
> So thinking maybe the upgrade went bad, I created a new VM, used e1000 and 
> IDE interfaces (i.e., no Virtualized hardware or drivers were used) and 
> re-installed Blue Iris.

I would expect better luck with virtio drivers. Either way, if it was working 
before and not working in Win10 it’s likely related to drivers. Can you make 
sure you try latest drivers? Can you pinpoint the blue screen…to perhaps USB or 
other subsystem?
Might be worth trying on clean Win10 install just to rule out upgrade issues (I 
didn’t understand whether you cloned the old VM and just reinstalled blue iris 
or reinstalled everything) , and if it still reproduces it is likely some low 
level incompatibility in QEMU/KVM. You would likely have to try experiment with 
qemu cmdline or use latest qemu and check the qemu mailing list
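
As a starting point, something like this on the host shows the QEMU build in use
and whether QEMU/libvirt logged anything at crash time (the VM name below is a
placeholder):

# rpm -q qemu-kvm qemu-system-x86
# tail -n 50 /var/log/libvirt/qemu/<vm-name>.log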

Thanks,
michal

> 
> Still blue screens.
> 
> How do I go about figuring out what’s causing the blue screen?  
> 
> Thanks….
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade 3.6 to 4.0 and "ghost" incompatible cluster version

2016-07-22 Thread Milan Zamazal
"Federico Sayd"  writes:

> I'm trying to upgrade ovirt 3.6.3 to 4.0, but engine-setup complaints about 
> upgrading from incompatible version 3.3
>
> I see in the engine-setup log that  vds_groups table is checked to determine
> the compatibility version. The logs shows that engine-setup detects 2 clusters
> versions: 3.3 and 3.6. Indeed, there is 2 clusters registered in the table:
> "cluster-3.6" and "Default"
>
> "Cluster-3.6" (version 3.6)  is the only cluster in DC in my ovirt setup.
> "Default" (version 3.3)  should be a cluster that surely I deleted in a past 
> upgrade.
>
> Why a cluster named "Default" (with compatibility version 3.3) is still 
> present
> in vds_group table? Cluster "Default" isn't displayed anywhere in the web
> interface.

It looks like a bug to me.  The cluster should be either missing in the
database or present in the web interface.

Could you please provide us more details about the problem?  It might
help us to investigate the issue if you could do the following:

- Install ovirt-engine-webadmin-portal-debuginfo package.
- Restart Engine.
- Go to the main Clusters tab.
- Refresh the page in your browser.
- Send us the parts of engine.log and ui.log corresponding to the
  refresh action.

> Any clue to solve this issue?

As a workaround, if you are sure you don't have anything in Default
cluster, you may try to set compatibility_version to "3.6" for "Default"
cluster in vds_groups database table.
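
On the engine machine that could look roughly like this (a sketch only - take a
backup of the engine database first, and only do this if the Default cluster
really contains nothing):

 # sudo -u postgres psql engine
 engine=# UPDATE vds_groups SET compatibility_version = '3.6' WHERE name = 'Default';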
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Network settings for multiple hosts

2016-07-22 Thread Alexis HAUSER
Hi,


Since I use several hosts with oVirt, I get very unstable reactions every time I
change anything about networks...

What are the requirements for networks when using multiple hosts? If I add a
logical network to a NIC on my first host, the second host becomes
non-operational... Do I really need to have the exact same logical network on both
hosts?

If I add the same network on my second host with no IP address, it still
becomes non-operational... Also there are unrelated errors with the iSCSI disk when
I do that, VDSM, etc... But my main interface on that second host is still up and
working with ovirtmgmt on it... And the new interface I try to add is checked as
"non required".

Another weird thing is that ifconfig doesn't show my new logical network on my
first host, even though it is shown as up and working in the web interface
(this one has correct IP addressing). Restarting vdsmd on that host doesn't
change anything.

Any idea what is going on, and how I should proceed?




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-22 Thread Krutika Dhananjay
Hi David,

Could you also share the brick logs from the affected volume? They're
located at
/var/log/glusterfs/bricks/.log.

Also, could you share the volume configuration (output of `gluster volume
info `) for the affected volume(s) AND at the time you actually saw
this issue?
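
Roughly, from one of the gluster nodes (volume name and brick path are
placeholders; brick log file names follow the brick path with '/' replaced
by '-'):

# gluster volume info <VOLNAME>
# tail -n 200 /var/log/glusterfs/bricks/<brick-path-with-dashes>.log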

-Krutika




On Thu, Jul 21, 2016 at 11:23 PM, David Gossage  wrote:

> On Thu, Jul 21, 2016 at 11:47 AM, Scott  wrote:
>
>> Hi David,
>>
>> My backend storage is ZFS.
>>
>> I thought about moving from FUSE to NFS mounts for my Gluster volumes to
>> help test.  But since I use hosted engine this would be a real pain.  Its
>> difficult to modify the storage domain type/path in the
>> hosted-engine.conf.  And I don't want to go through the process of
>> re-deploying hosted engine.
>>
>>
> I found this
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1347553
>
> Not sure if related.
>
> But I also have a ZFS backend. Another user on the Gluster mailing list with
> a ZFS backend had similar issues, although she used Proxmox, and got it
> working by changing the disk to writeback cache, I think it was.
>
> I also use hosted engine, but I actually run my gluster volume for the HE on
> LVM with XFS, separate from ZFS, and if I recall it did not have the issues
> my gluster on ZFS did.  I'm wondering now if the issue was ZFS settings.
>
> Hopefully I should have a test machine up soon that I can play around with more.
>
> Scott
>>
>> On Thu, Jul 21, 2016 at 11:36 AM David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>> What back end storage do you run gluster on?  xfs/zfs/ext4 etc?
>>>
>>> *David Gossage*
>>> *Carousel Checks Inc. | System Administrator*
>>> *Office* 708.613.2284
>>>
>>> On Thu, Jul 21, 2016 at 8:18 AM, Scott  wrote:
>>>
 I get similar problems with oVirt 4.0.1 and hosted engine.  After
 upgrading all my hosts to Gluster 3.7.13 (client and server), I get the
 following:

 $ sudo hosted-engine --set-maintenance --mode=none
 Traceback (most recent call last):
   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
 "__main__", fname, loader, pkg_name)
   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
 exec code in run_globals
   File
 "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py",
 line 73, in 
 if not maintenance.set_mode(sys.argv[1]):
   File
 "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py",
 line 61, in set_mode
 value=m_global,
   File
 "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
 line 259, in set_maintenance_mode
 str(value))
   File
 "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
 line 204, in set_global_md_flag
 all_stats = broker.get_stats_from_storage(service)
   File
 "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
 line 232, in get_stats_from_storage
 result = self._checked_communicate(request)
   File
 "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
 line 260, in _checked_communicate
 .format(message or response))
 ovirt_hosted_engine_ha.lib.exceptions.RequestError: Request failed:
 failed to read metadata: [Errno 1] Operation not permitted

 If I only upgrade one host, then things will continue to work but my
 nodes are constantly healing shards.  My logs are also flooded with:

 [2016-07-21 13:15:14.137734] W [fuse-bridge.c:2227:fuse_readv_cbk]
 0-glusterfs-fuse: 274714: READ => -1 gfid=4
 41f2789-f6b1-4918-a280-1b9905a11429 fd=0x7f19bc0041d0 (Operation not
 permitted)
 The message "W [MSGID: 114031]
 [client-rpc-fops.c:3050:client3_3_readv_cbk] 0-data-client-0: remote
 operation failed [Operation not permitted]" repeated 6 times between
 [2016-07-21 13:13:24.134985] and [2016-07-21 13:15:04.132226]
 The message "W [MSGID: 114031]
 [client-rpc-fops.c:3050:client3_3_readv_cbk] 0-data-client-1: remote
 operation failed [Operation not permitted]" repeated 8 times between
 [2016-07-21 13:13:34.133116] and [2016-07-21 13:15:14.137178]
 The message "W [MSGID: 114031]
 [client-rpc-fops.c:3050:client3_3_readv_cbk] 0-data-client-2: remote
 operation failed [Operation not permitted]" repeated 7 times between
 [2016-07-21 13:13:24.135071] and [2016-07-21 13:15:14.137666]
 [2016-07-21 13:15:24.134647] W [MSGID: 114031]
 [client-rpc-fops.c:3050:client3_3_readv_cbk] 0-data-client-0: remote
 operation failed [Operation not permitted]
 [2016-07-21 13:15:24.134764] W [MSGID: 114031]
 [client-rpc-fops.c:3050:client3_3_readv_cbk] 0-data-client-2: remote
 operation failed [Operation not permitted]
 [2016-07-21 13:15:24.134793] W [fuse-bridge.c:2227:fuse_readv_cbk]
 0-glusterfs-fuse: 274741: 

Re: [ovirt-users] Snapshot creation using REST API

2016-07-22 Thread Juan Hernández
On 07/21/2016 04:06 PM, Vishal Panchal wrote:
> Hello,
> 
> I have created a snapshot using the oVirt REST API. After creation, the
> response contained a link that gives the creation status of the snapshot,
> but when I use that URL I get a 404 error (URL Not Found). I am
> using the oVirt version 3 API.
> 
> Regards,
> *Vishal Panchal*
> Software Developer
> *+918140283911*
> *

This is a bug, I have opened the following BZ to track it:

  The "creation_status" resources don't work with V3
  https://bugzilla.redhat.com/1359139
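
Just for reference, a snapshot creation request with the v3 API looks roughly
like this (host name, credentials and VM id below are placeholders, and the
API path may differ depending on your setup):

  # Placeholders: engine.example.com, the admin password and VM_ID.
  curl -k -u 'admin@internal:password' \
    -H 'Content-Type: application/xml' \
    -d '<snapshot><description>test</description></snapshot>' \
    'https://engine.example.com/api/vms/VM_ID/snapshots'

The response contains a "creation_status" link, and it is that link which
currently returns 404 with V3.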

-- 
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt Node Next mass deploy

2016-07-22 Thread Giorgio Biacchi
On 07/21/2016 01:40 PM, Giorgio Biacchi wrote:
> Hi list,
> starting from here
> (http://lists.ovirt.org/pipermail/devel/2016-January/012073.html) and after
> adjusting the broken links, I'm now able to PXE boot CentOS 7 with these
> kernel arguments:
> 
> LABEL node_4
>  MENU LABEL Ovirt Node 4.0
>  KERNEL centos7/x86_64/vmlinuz
>  APPEND initrd=centos7/x86_64/initrd.img ramdisk_size=10 ksdevice=link
> inst.ks=http://172.20.22.10/ks/ks_ovirt-node-4.0.cfg
> inst.updates=http://jenkins.ovirt.org/job/ovirt-node-ng_master_build-artifacts-fc22-x86_64/lastSuccessfulBuild/artifact/exported-artifacts/product.img
> inst.stage2=http://mi.mirror.garr.it/mirrors/CentOS/7/os/x86_64/
> 
> I think this method is the best for me because with a custom kickstart I can 
> set
> ssh keys and custom hooks and have a fully automated installation, but I'm not
> sure if the lastSuccessfulBuild/artifact/exported-artifacts/product.img is the
> correct image to pass to have a "stable" node.
> 
> Is there any other "stable" product.img I can use?
> 
> Thanks
> 

Hello again,
I just found out that the simplest method to obtain a stable product.img and
ovirt-node-ng-image.squashfs.img is to loop-mount an oVirt Node ISO, copy the
files from there, and make them available via HTTP.
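
Roughly, something like this (the ISO file name and web root below are only
examples, adjust as needed):

  # Loop-mount the node ISO and copy the two images to the web server root.
  mkdir -p /mnt/iso /var/www/html/node-4.0.2
  mount -o loop ovirt-node-ng-installer.iso /mnt/iso
  find /mnt/iso \( -name product.img -o -name ovirt-node-ng-image.squashfs.img \) \
    -exec cp {} /var/www/html/node-4.0.2/ \;
  umount /mnt/iso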

Now with a modified PXE and kickstart file I'm able to automate the installation
process.

Here's my PXE conf and kickstart file, maybe they will be useful for someone...

---  ---

LABEL node_4
MENU LABEL Ovirt Node 4.0.2 (testing)
KERNEL centos7/x86_64/vmlinuz
APPEND initrd=centos7/x86_64/initrd.img ramdisk_size=10 ksdevice=link inst.ks=http://172.20.22.10/ks/ks_ovirt-node-4.0.cfg inst.updates=http://172.20.22.10/node-4.0.2/product.img inst.stage2=http://mi.mirror.garr.it/mirrors/CentOS/7/os/x86_64/

---  ---

---  ---
#
# CentOS 7.2 compatible kickstart for CI auto-installation
#

lang en_US.UTF-8
keyboard us
timezone --utc Etc/UTC --ntpservers=tempo.ien.it
auth --enableshadow --passalgo=sha512
selinux --permissive
network --bootproto=dhcp --onboot=on
firstboot --reconfig

#Set root password
rootpw --iscrypted 

# or use plain text
#rootpw --plaintext ovirt

reboot

clearpart --all --initlabel --disklabel=gpt
bootloader --timeout=1

# FIXME This should be fixed more elegantly with
# https://bugzilla.redhat.com/663099#c14
# At best we could use: autopart --type=thinp
# autopart can not be used in CI currently, because updates.img is not passed to
# the installation

# Manual layout:
reqpart --add-boot
part pv.01 --size=42000 --grow
volgroup HostVG pv.01
logvol swap --vgname=HostVG --name=swap --fstype=swap --recommended
logvol none --vgname=HostVG --name=HostPool --thinpool --size=4 --grow
logvol / --vgname=HostVG --name=root --thin --poolname=HostPool --fsoptions="defaults,discard" --size=6000
logvol /var --vgname=HostVG --name=var --thin --poolname=HostPool --fsoptions="defaults,discard" --size=15000

#
# The trick is to loop in the squashfs image as a device
# from the host
#
liveimg --url="http://172.20.22.10/node-4.0.2/ovirt-node-ng-image.squashfs.img"

%pre
# Assumption: A virtio device with the serial livesrc is passed, pointing
# to the squashfs on the host.
mkdir -p /mnt/livesrc
mount /dev/disk/by-id/virtio-livesrc /mnt/livesrc
%end

%post
PATH=/bin:/sbin:/usr/bin:/usr/sbin
export PATH

#Setup public ssh keys, at least ovirt-engine one..

cd /root
mkdir --mode=700 .ssh

cat >> .ssh/authorized_keys << "PUBLIC_KEY"
ssh-rsa
B3NzaC1yc2EDAQABAAABAQDAhTqyQ6dloDVxjcmDw0CQHDXc6EVtvOqKzCUrNbZ1zt3sZveaWsOVE5NnzFQ6xvgGNXjou4eRuWcdgCows02GqVOPVYqlt8OBThU5lDqPwL7Znz33VO9vKegz8LgotRLSu7ivPPU7zlkNoEBGIDlf3VaQ1K7c+WzflNYkq4qn2dZdtqqQvqgXdAprfC99A37txNzHtu4X/KEWLc67QWPno3a8wpHl0bMYqaYWHLoROcyTvyXvJWrGYRhV0VUqNKcqqFL6fIWwv0ezqCkny1hqKiPch2Re8mEa84Fbd5tFscXhJ2n/R3C+5UkyVbAQPEiL7OhvDPe//USF+MWLMBQ9
ovirt-engine
PUBLIC_KEY

chmod 600 .ssh/authorized_keys
chmod 700 .ssh
chcon -t ssh_home_t .ssh/
chcon -t ssh_home_t .ssh/authorized_keys

#My custom VDSM hooks

mkdir -p /usr/libexec/vdsm/hooks/before_vdsm_start
cd /usr/libexec/vdsm/hooks/before_vdsm_start

cat >> 10_set_ib0_connected_mode << "EOF"
#!/bin/sh

echo Setting IB connected mode
echo connected > /sys/class/net/ib0/mode
sleep 3
MODE=$(cat /sys/class/net/ib0/mode)
RATE=$(cat /sys/class/infiniband/mlx4_0/ports/1/rate)
echo ib0 is now in $MODE mode with rate $RATE
EOF

chmod +x /usr/libexec/vdsm/hooks/before_vdsm_start/10_set_ib0_connected_mode

mkdir -p /usr/libexec/vdsm/hooks/after_network_setup
cd /usr/libexec/vdsm/hooks/after_network_setup

cat >> 10_set_ib0_connected_mode << "EOF"
#!/bin/sh

echo connected > /sys/class/net/ib0/mode
sleep 3
ip link set mtu 65520 dev ib0
EOF

chmod +x /usr/libexec/vdsm/hooks/after_network_setup/10_set_ib0_connected_mode

# FIXME maybe the following lines can be collapsed
# in future into i.e. "nodectl init"
set -x
imgbase --debug layout --init

%end
---  ---

One last thing: installing a node from the ISO gives me a corrupted image base
because the discard option is missing from fstab.
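
For reference, the affected entries in /etc/fstab should end up looking roughly
like this (device names match the HostVG layout above; ext4 and the exact
fields are assumptions):

  # "discard" added to the mount options; adjust devices and filesystems to your layout.
  /dev/mapper/HostVG-root  /     ext4  defaults,discard  1 1
  /dev/mapper/HostVG-var   /var  ext4  defaults,discard  1 2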

Bye

Re: [ovirt-users] stuck host in hosted engine migration 3.6->4.0

2016-07-22 Thread Gervais de Montbrun
Hi Simone,

I did have the issue you link to below when doing a `hosted-engine --deploy` on 
this server when I was setting it up to run 3.6. I've commented on the bug with 
my experiences. I did get the host working in 3.6 and there were no errors, but 
this one has cropped up since upgrading to 4.0.1.

I did not have the same issue on all of my hosts, but the error I am 
experiencing now:
JsonRpc (StompReactor)::ERROR::2016-07-22 
09:59:56,062::betterAsyncore::113::vds.dispatcher::(recv) SSL error during 
reading data: unexpected eof
JsonRpc (StompReactor)::ERROR::2016-07-22 
10:00:11,240::betterAsyncore::113::vds.dispatcher::(recv) SSL error during 
reading data: unexpected eof
JsonRpc (StompReactor)::ERROR::2016-07-22 
10:00:21,158::betterAsyncore::113::vds.dispatcher::(recv) SSL error during 
reading data: unexpected eof
JsonRpc (StompReactor)::ERROR::2016-07-22 
10:00:21,441::betterAsyncore::113::vds.dispatcher::(recv) SSL error during 
reading data: unexpected eof
JsonRpc (StompReactor)::ERROR::2016-07-22 
10:00:26,717::betterAsyncore::113::vds.dispatcher::(recv) SSL error during 
reading data: unexpected eof
JsonRpc (StompReactor)::ERROR::2016-07-22 
10:00:31,856::betterAsyncore::113::vds.dispatcher::(recv) SSL error during 
reading data: unexpected eof
JsonRpc (StompReactor)::ERROR::2016-07-22 
10:00:36,982::betterAsyncore::113::vds.dispatcher::(recv) SSL error during 
reading data: unexpected eof
JsonRpc (StompReactor)::ERROR::2016-07-22 
10:00:52,180::betterAsyncore::113::vds.dispatcher::(recv) SSL error during 
reading data: unexpected eof

is happening on all of them.
:-(

Cheers,
Gervais



> On Jul 22, 2016, at 5:35 AM, Simone Tiraboschi  wrote:
> 
> On Thu, Jul 21, 2016 at 8:08 PM, Gervais de Montbrun
>  wrote:
>> Hi Martin
>> 
>> Logs are attached.
>> 
>> Thank you for any help you can offer.
>> :-)
>> 
>> Cheers,
>> Gervais
> 
> see also this one: https://bugzilla.redhat.com/show_bug.cgi?id=1358530
> 
> the results are pretty similar.
> 
>> On Jul 21, 2016, at 10:20 AM, Martin Perina  wrote:
>> 
>> So could you please share logs?
>> 
>> Thanks
>> 
>> Martin
>> 
>> On Thu, Jul 21, 2016 at 3:17 PM, Gervais de Montbrun
>>  wrote:
>>> 
>>> Hi Oved,
>>> 
>>> Thanks for the suggestion.
>>> 
>>> I tried setting "management_ip = 0.0.0.0" but same result.
>>> BTW, management_ip='0.0.0.0' (as suggested in the post) doesn't work for
>>> me. vdsmd wouldn't start.
>>> 
>>> Cheers,
>>> Gervais
>>> 
>>> 
>>> 
>>> On Jul 20, 2016, at 10:50 AM, Oved Ourfali  wrote:
>>> 
>>> Also, this thread seems similar.
>>> Also talking about IPV4/IPV6 issue.
>>> Does it help?
>>> 
>>> [1] http://lists.ovirt.org/pipermail/users/2016-June/040602.html
>>> 
>>> On Wed, Jul 20, 2016 at 4:43 PM, Martin Perina  wrote:
 
 Hi,
 
 could you please create a bug and attach engine host logs (all from
 /var/log/ovirt-engine) and VDSM logs (from /var/log/vdsm)?
 
 Thanks
 
 Martin Perina
 
 
 On Wed, Jul 20, 2016 at 1:50 PM, Gervais de Montbrun
  wrote:
 
> Hi Qiong,
> 
> I am experiencing the exact same issue. All four of my hosts are
> throwing
> the same error to the vdsm.log If you find a solution, please let me
> know
>>> 
>>> 
>>> 
>> 
>> 
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade 3.6 to 4.0 and "ghost" incompatible cluster version

2016-07-22 Thread Federico Alberto Sayd


On 22/07/16 at 06:13, Milan Zamazal wrote:

"Federico Sayd"  writes:


I'm trying to upgrade ovirt 3.6.3 to 4.0, but engine-setup complaints about 
upgrading from incompatible version 3.3

I see in the engine-setup log that  vds_groups table is checked to determine
the compatibility version. The logs shows that engine-setup detects 2 clusters
versions: 3.3 and 3.6. Indeed, there is 2 clusters registered in the table:
"cluster-3.6" and "Default"

"Cluster-3.6" (version 3.6)  is the only cluster in DC in my ovirt setup.
"Default" (version 3.3)  should be a cluster that surely I deleted in a past 
upgrade.

Why a cluster named "Default" (with compatibility version 3.3) is still present
in vds_group table? Cluster "Default" isn't displayed anywhere in the web
interface.

It looks like a bug to me.  The cluster should be either missing in the
database or present in the web interface.

Could you please provide us more details about the problem?  It might
help us to investigate the issue if you could do the following:

- Install ovirt-engine-webadmin-portal-debuginfo package.
- Restart Engine.
- Go to the main Clusters tab.
- Refresh the page in your browser.
- Send us the parts of engine.log and ui.log corresponding to the
   refresh action.


Any clue to solve this issue?

As a workaround, if you are sure you don't have anything in Default
cluster, you may try to set compatibility_version to "3.6" for "Default"
cluster in vds_groups database table.

Hi Milan:


I solved the issue. I connected through ovirt-shell. The shell listed
the two clusters and let me delete the 3.3 "ghost" cluster.
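
Roughly, what I did in ovirt-shell was something like this (from memory, so
treat it as a sketch; the cluster name is the one from my case):

  # List the clusters, then remove the stale one by name.
  [oVirt shell (connected)]# list clusters
  [oVirt shell (connected)]# remove cluster Default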


I don't know why this cluster was visible in ovirt-shell but not in the WebUI.

I migrated the engine from CentOS 6 to CentOS 7 in a new VM. I will try
to install ovirt-engine-webadmin-portal-debuginfo in the old engine VM
(I took a VM snapshot before deleting the 3.3 cluster) and send
you the debug info.


Thanks for your help.

Federico
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users