Re: [ovirt-users] Engine migration and host import

2017-09-22 Thread Ben Bradley

On 20/09/17 15:41, Simone Tiraboschi wrote:


On Wed, Sep 20, 2017 at 12:30 AM, Ben Bradley wrote:


Hi All

I've been running a single-host ovirt setup for several months,
having previously used a basic QEMU/KVM for a few years in lab
environments.

I currently have the ovirt engine running at the bare-metal level,
with the box also acting as the single host. I am also running this
with local storage.

I now have an extra host I can use and would like to migrate to a
hosted engine. The following documentation appears to be perfect and
pretty clear about the steps involved:

https://www.ovirt.org/develop/developer-guide/engine/migrate-to-hosted-engine/


and

https://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment



However I'd like to try and get a bit more of an understanding of
the process that happens behind the scenes during the cut-over from
one engine to a new/hosted engine.

As an experiment I attempted the following:
- created a new VM within my current environment (bare-metal engine)
- created an engine-backup
- stopped the bare-metal engine
- restored the backup into the new VM
- ran engine-setup within the new VM
The new engine started up ok and I was able to connect and login to
the web UI. However my host was "unresponsive" and I was unable to
manage it in any way from the VM. I shut the VM down and started the
bare-metal ovirt-engine again on the host and everything worked as
before. I didn't try very hard to make it work however.

The magic missing from the basic process I tried is the
synchronising and importing of the existing host, which is what the
hosted-engine utility does.


No magic up to now: the hosts are simply in the DB you restored.
If the VM has network connectivity and the same host-name as the old 
machine, you shouldn't see any issue.
If you changed the host-name when moving to the VM, you should simply run 
engine-rename after the restore.
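The restore flow described above can be sketched as a short run-book (a sketch only; the flags and the `ovirt-engine-rename` path assume a reasonably recent engine-backup and may differ between versions):

```shell
# On the bare-metal engine: take a full backup (config + DB)
engine-backup --mode=backup --scope=all --file=engine.backup --log=backup.log

# Stop the old engine so two engines never manage the same hosts
systemctl stop ovirt-engine

# On the new engine VM: restore config and DB from the backup
engine-backup --mode=restore --file=engine.backup --log=restore.log \
    --provision-db --restore-permissions

# Only if the VM's FQDN differs from the old engine's FQDN
# (re-pointing a CNAME, as done later in this thread, avoids this step)
/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename

# Finish configuration and start the engine
engine-setup
```

The ordering matters: the old engine must be down before the restored one starts, otherwise both will try to own the hosts in the restored DB.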


Thank you for the reply.
I tried this again this evening - again it failed.

The host is present within the new engine but I am unable to manage it.
Host is marked as down but Activate is greyed out. I can get into 
the "Edit" screen for the host and on right-click I get the following 
options:

- Maintenance
- Confirm Host has been Rebooted
- SSH Management: Restart and Stop both available
The VMs are still running and accessible but are not listed as running 
under the web interface. This time however I did lose access to the 
ovirtmgmt bridge and the web interface, running VMs and host SSH session 
were unavailable until I rebooted.
Luckily I left ovirt-engine service enabled to restart on boot so 
everything came back up.


The engine URL is a CNAME so I just re-pointed to the hostname of the VM 
just before running engine-setup after the restore.


This time though I have kept the new engine VM so I can power it up 
again and try and debug.


I am going to try a few times over the weekend and I have setup serial 
console access so I can do a bit more debugging.


What ovirt logs could I check on the host to see if the new engine VM is 
able to connect and sync to the host properly?
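On the host side, engine-to-host communication problems usually surface in the VDSM logs (a sketch; these are the usual default locations and may vary by version):

```shell
# VDSM is the agent the engine talks to on each host
tail -f /var/log/vdsm/vdsm.log

# Privileged helper operations (networking, mounts) are logged here
tail -f /var/log/vdsm/supervdsm.log

# Confirm vdsmd is running and listening on its management port (54321)
systemctl status vdsmd
ss -tlnp | grep 54321
```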


Thanks, Ben


The only detail is that hosted-engine-setup will try to add the host 
where you are running it to the engine and so you have to manually 
remove it just after the restore in order to avoid a failure there.



Can anyone describe that process in a bit more detail?
Is it possible to perform any part of that process manually?

I'm planning to expand my lab and dev environments so for me it's
important to discover the following...
- That I'm able to reverse the process back to bare-metal engine if
I ever need/want to
- That I can setup a new VM or host with nothing more than an
engine-backup but still be able to regain control of existing hosts
and VMs within the cluster

My main concern after my basic attempt at a "restore/migration"
above is that I might not be able to re-import/sync an existing host
after I have restored engine from a backup.

I have been able to export VMs to storage, remove them from ovirt,
re-install engine and restore, then import VMs from the export
domain. That all worked fine. But it involved shutting down all VMs
and removing their definitions from the environment.

Are there any pre-requisites to being able to re-import an existing
running host (and VMs), such as placing ALL hosts into maintenance
mode and shutting down any VMs first?

Any insight into host recovery/import/sync processes and steps will be appreciated.

Re: [ovirt-users] VM won't start if a Cinder disk is attached

2017-09-22 Thread Maxence SARTIAUX
Hi, 


Here's the vdsm.log from the time when i started the VM. 


https://pastebin.com/MWdTR0Gr (I've omitted the glusterfs volume & server list lines 
to have something a bit more readable) 


Ovirt version is 4.1.6.2-1.el7 (updated since the first mail) 
Ceph 12.2.0 
Cinder 10.0.5 


The Cinder disk is a second disk; it's not the system disk. 

----- Original Message -----

From: "Luca 'remix_tj' Lorenzetto" 
To: "Maxence SARTIAUX" 
Cc: "users" 
Sent: Thursday, 21 September 2017, 22:49:07 
Subject: Re: [ovirt-users] VM won't start if a Cinder disk is attached 


Hi, 


can you attach vdsm.log? 


Which version are you running? IIRC in the past booting from Ceph was not 
possible, but should be possible since 4.1. 


Luca 


On Thu, Sep 21, 2017 at 3:42 PM, Maxence SARTIAUX < msarti...@it-optics.com > 
wrote: 






Hello 



I have an oVirt 4.1.5.2-1 cluster with Ceph Luminous & OpenStack Ocata 
Cinder. 


I can create / remove / attach Cinder disks with oVirt, but when I attach a disk 
to a VM, the VM stays in "starting" mode (grey double up-arrow) and never comes 
up. oVirt tries every available hypervisor, eventually detaches the disk, and the VM 
stays in the "starting up" state. 


All I see in the libvirt logs is "connection timeout", nothing more; the 
hypervisors can contact the Ceph cluster. 
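When libvirt times out like this, it can help to confirm raw RBD access from a hypervisor outside of libvirt (a sketch; the pool name `volumes`, the cephx user `cinder`, the keyring path, and the monitor hostname are all assumptions about a typical Cinder setup):

```shell
# List images in the Cinder volumes pool using the same cephx identity
# the hypervisor would use (names below are placeholders)
rbd --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring ls volumes

# Basic cluster reachability and health from this hypervisor
ceph --id cinder -s

# Verify a monitor port is reachable (default monitor port is 6789)
nc -zv ceph-mon.example.com 6789
```

If these succeed but libvirt still times out, the problem is more likely in the secret/auth wiring passed to QEMU than in network reachability.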


Nothing related in the oVirt or Cinder logs. 


Any ideas? 


Thank you! 







Maxence Sartiaux | System & Network Engineer 
Boulevard Initialis, 28 - 7000 Mons 
Tel :   +32 (0)65 84 23 85 (ext: 6016) 
Fax :   +32 (0)65 84 66 76 www.it-optics.com 

___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 







-- 

"E' assurdo impiegare gli uomini di intelligenza eccellente per fare 
calcoli che potrebbero essere affidati a chiunque se si usassero delle 
macchine" 
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) 

"Internet è la più grande biblioteca del mondo. 
Ma il problema è che i libri sono tutti sparsi sul pavimento" 
John Allen Paulos, Matematico (1945-vivente) 

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , < 
lorenzetto.l...@gmail.com > 


[ovirt-users] Failed gdeploy

2017-09-22 Thread Sean McMurray
My latest attempt to deploy went like this 
(/tmp/tmpaQJuTG/run-script.yml and /tmp/gdeployConfig.conf are pasted 
below the gdeploy transcript):


# gdeploy -k -vv -c /tmp/gdeployConfig.conf --trace
Using /etc/ansible/ansible.cfg as config file

PLAYBOOK: run-script.yml 
*

1 plays in /tmp/tmpaQJuTG/run-script.yml

PLAY [gluster_servers] 
***

META: ran handlers

TASK [Run a shell script] 


task path: /tmp/tmpaQJuTG/run-script.yml:7
changed: [192.168.1.3] => 
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 
192.168.1.1,192.168.1.2,192.168.1.3) => {"changed": true, "failed": 
false, "failed_when_result": false, "item": 
"/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 
192.168.1.1,192.168.1.2,192.168.1.3", "rc": 0, "stderr": "Shared 
connection to 192.168.1.3 closed.\r\n", "stdout": "", "stdout_lines": []}
changed: [192.168.1.2] => 
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 
192.168.1.1,192.168.1.2,192.168.1.3) => {"changed": true, "failed": 
false, "failed_when_result": false, "item": 
"/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 
192.168.1.1,192.168.1.2,192.168.1.3", "rc": 0, "stderr": "Shared 
connection to 192.168.1.2 closed.\r\n", "stdout": "", "stdout_lines": []}
changed: [192.168.1.1] => 
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 
192.168.1.1,192.168.1.2,192.168.1.3) => {"changed": true, "failed": 
false, "failed_when_result": false, "item": 
"/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 
192.168.1.1,192.168.1.2,192.168.1.3", "rc": 0, "stderr": "Shared 
connection to 192.168.1.1 closed.\r\n", "stdout": "scp: 
/tmp/*_host_ip_2017_09_22.txt: No such file or directory\r\nscp: 
/tmp/*_host_ip_2017_09_22.txt: No such file or directory\r\n", 
"stdout_lines": ["scp: /tmp/*_host_ip_2017_09_22.txt: No such file or 
directory", "scp: /tmp/*_host_ip_2017_09_22.txt: No such file or 
directory"]}

META: ran handlers
META: ran handlers

PLAY RECAP 
***

192.168.1.1: ok=1  changed=1  unreachable=0  failed=0
192.168.1.2: ok=1  changed=1  unreachable=0  failed=0
192.168.1.3: ok=1  changed=1  unreachable=0  failed=0

Using /etc/ansible/ansible.cfg as config file

PLAYBOOK: chkconfig_service.yml 
**

1 plays in /tmp/tmpaQJuTG/chkconfig_service.yml

PLAY [gluster_servers] 
***

META: ran handlers

TASK [Enable or disable services] 


task path: /tmp/tmpaQJuTG/chkconfig_service.yml:7
ok: [192.168.1.3] => (item=chronyd) => {"changed": false, "enabled": 
true, "item": "chronyd", "name": "chronyd", "status": 
{"ActiveEnterTimestamp": "Fri 2017-09-22 08:05:49 PDT", 
"ActiveEnterTimestampMonotonic": "57218106256", "ActiveExitTimestamp": 
"Fri 2017-09-22 08:05:49 PDT", "ActiveExitTimestampMonotonic": 
"57218037256", "ActiveState": "active", "After": 
"systemd-journald.socket var.mount tmp.mount ntpd.service -.mount 
system.slice sntp.service basic.target ntpdate.service", "AllowIsolate": 
"no", "AmbientCapabilities": "0", "AssertResult": "yes", 
"AssertTimestamp": "Fri 2017-09-22 08:05:49 PDT", 
"AssertTimestampMonotonic": "57218053902", "Before": "multi-user.target 
imgbase-config-vdsm.service shutdown.target", "BlockIOAccounting": "no", 
"BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", 
"CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", 
"CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", 
"CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": 
"no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": 
"18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": 
"Fri 2017-09-22 08:05:49 PDT", "ConditionTimestampMonotonic": 
"57218053870", "Conflicts": "shutdown.target systemd-timesyncd.service 
ntpd.service", "ControlGroup": "/system.slice/chronyd.service", 
"ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", 
"Description": "NTP client/server", "DevicePolicy": "auto", 
"Documentation": "man:chronyd(8) man:chrony.conf(5)", "EnvironmentFile": 
"/etc/sysconfig/chronyd (ignore_errors=yes)", "ExecMainCode": "0", 
"ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "7300", 

Re: [ovirt-users] Failed gdeploy

2017-09-22 Thread Sean McMurray
After reading https://bugzilla.redhat.com/show_bug.cgi?id=1469469 and 
https://bugzilla.redhat.com/show_bug.cgi?id=1491548#c2 I changed 
gdeployConfig.conf so that [script3] has 
file=/usr/share/gdeploy/scripts/disable-multipath.sh


That gives me a different failure:

TASK [Run a shell script] 


task path: /tmp/tmp4kfKbY/run-script.yml:7
failed: [192.168.1.3] 
(item=/usr/share/gdeploy/scripts/disable-multipath.sh) => {"changed": 
true, "failed": true, "failed_when_result": true, "item": 
"/usr/share/gdeploy/scripts/disable-multipath.sh", "rc": 1, "stderr": 
"Shared connection to 192.168.1.3 closed.\r\n", "stdout": "iscsiadm: No 
active sessions.\r\nThis script will prevent listing iscsi devices when 
multipath CLI is called\r\nwithout parameters, and so no LUNs will be 
discovered by applications like VDSM\r\n(oVirt, RHV) which shell-out to 
call `/usr/sbin/multipath` after target login\r\nSep 22 08:58:47 | DM 
multipath kernel driver not loaded\r\nSep 22 08:58:47 | DM multipath 
kernel driver not loaded\r\n", "stdout_lines": ["iscsiadm: No active 
sessions.", "This script will prevent listing iscsi devices when 
multipath CLI is called", "without parameters, and so no LUNs will be 
discovered by applications like VDSM", "(oVirt, RHV) which shell-out to 
call `/usr/sbin/multipath` after target login", "Sep 22 08:58:47 | DM 
multipath kernel driver not loaded", "Sep 22 08:58:47 | DM multipath 
kernel driver not loaded"]}
failed: [192.168.1.2] 
(item=/usr/share/gdeploy/scripts/disable-multipath.sh) => {"changed": 
true, "failed": true, "failed_when_result": true, "item": 
"/usr/share/gdeploy/scripts/disable-multipath.sh", "rc": 1, "stderr": 
"Shared connection to 192.168.1.2 closed.\r\n", "stdout": "iscsiadm: No 
active sessions.\r\nThis script will prevent listing iscsi devices when 
multipath CLI is called\r\nwithout parameters, and so no LUNs will be 
discovered by applications like VDSM\r\n(oVirt, RHV) which shell-out to 
call `/usr/sbin/multipath` after target login\r\nSep 22 15:57:47 | DM 
multipath kernel driver not loaded\r\nSep 22 15:57:47 | DM multipath 
kernel driver not loaded\r\n", "stdout_lines": ["iscsiadm: No active 
sessions.", "This script will prevent listing iscsi devices when 
multipath CLI is called", "without parameters, and so no LUNs will be 
discovered by applications like VDSM", "(oVirt, RHV) which shell-out to 
call `/usr/sbin/multipath` after target login", "Sep 22 15:57:47 | DM 
multipath kernel driver not loaded", "Sep 22 15:57:47 | DM multipath 
kernel driver not loaded"]}
failed: [192.168.1.1] 
(item=/usr/share/gdeploy/scripts/disable-multipath.sh) => {"changed": 
true, "failed": true, "failed_when_result": true, "item": 
"/usr/share/gdeploy/scripts/disable-multipath.sh", "rc": 1, "stderr": 
"Shared connection to 192.168.1.1 closed.\r\n", "stdout": "iscsiadm: No 
active sessions.\r\nThis script will prevent listing iscsi devices when 
multipath CLI is called\r\nwithout parameters, and so no LUNs will be 
discovered by applications like VDSM\r\n(oVirt, RHV) which shell-out to 
call `/usr/sbin/multipath` after target login\r\nSep 22 08:58:50 | DM 
multipath kernel driver not loaded\r\nSep 22 08:58:50 | DM multipath 
kernel driver not loaded\r\n", "stdout_lines": ["iscsiadm: No active 
sessions.", "This script will prevent listing iscsi devices when 
multipath CLI is called", "without parameters, and so no LUNs will be 
discovered by applications like VDSM", "(oVirt, RHV) which shell-out to 
call `/usr/sbin/multipath` after target login", "Sep 22 08:58:50 | DM 
multipath kernel driver not loaded", "Sep 22 08:58:50 | DM multipath 
kernel driver not loaded"]}

to retry, use: --limit @/tmp/tmp4kfKbY/run-script.retry

PLAY RECAP 
***

192.168.1.1: ok=0  changed=0  unreachable=0  failed=1
192.168.1.2: ok=0  changed=0  unreachable=0  failed=1
192.168.1.3: ok=0  changed=0  unreachable=0  failed=1
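The repeated "DM multipath kernel driver not loaded" in the failed output suggests the dm_multipath module isn't loaded on the hosts. A possible pre-step before re-running gdeploy (a sketch, not a confirmed fix for this particular report):

```shell
# Load the device-mapper multipath kernel module now...
modprobe dm_multipath

# ...and make it persist across reboots
echo dm_multipath > /etc/modules-load.d/dm_multipath.conf

# Confirm the module is present before re-running the gdeploy script
lsmod | grep dm_multipath
```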


On 09/22/2017 09:05 AM, Sean McMurray wrote:
My latest attempt to deploy went like this 
(/tmp/tmpaQJuTG/run-script.yml and /tmp/gdeployConfig.conf are pasted 
below the gdeploy transcript):


# gdeploy -k -vv -c /tmp/gdeployConfig.conf --trace
Using /etc/ansible/ansible.cfg as config file

PLAYBOOK: run-script.yml 
*

1 plays in /tmp/tmpaQJuTG/run-script.yml

PLAY [gluster_servers] 
***

META: ran handlers

TASK [Run a shell script] 


task path: 

Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-22 Thread Neil
Thank you everyone.

I've updated to ovirt-engine-3.5.6.2-1 and this has resolved the problem as
it renewed my certs on engine-setup.
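For anyone hitting the same expiry, the engine certificate dates can be checked with openssl before deciding whether an upgrade plus engine-setup renewal is needed (a sketch; the paths are the default engine PKI locations, and the engine FQDN is taken from this thread):

```shell
# Expiry of the engine's internal CA certificate
openssl x509 -in /etc/pki/ovirt-engine/ca.pem -noout -enddate

# Expiry of the Apache-facing certificate
openssl x509 -in /etc/pki/ovirt-engine/certs/apache.cer -noout -enddate

# Quick remote check against the running engine's HTTPS endpoint
echo | openssl s_client -connect engine01.mydomain.za:443 2>/dev/null \
  | openssl x509 -noout -enddate
```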

Much appreciated!

Regards.

Neil Wilson.

On Fri, Sep 22, 2017 at 3:18 PM, Neil  wrote:

> Thanks Sandro.
>
> I'll get cracking and report back if it fixed it.
>
> Thanks for all the help everyone.
>
>
> On Fri, Sep 22, 2017 at 3:14 PM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> 2017-09-22 15:07 GMT+02:00 Neil :
>>
>>>
>>> Thanks for the guidance everyone.
>>>
>>> I've upgraded my engine now to ovirt-engine-3.4.4-1 but I've still got
>>> the same error unfortunately. Below is the output of the upgrade. Should
>>> this have fixed the issue or do I need to upgrade to 3.5 etc?
>>>
>>
>> I think you'll need 3.5.4 at least:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1214860
>>
>>
>>
>>
>>>
>>>
>>> [ INFO  ] Stage: Initializing
>>> [ INFO  ] Stage: Environment setup
>>>   Configuration files: 
>>> ['/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
>>> '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
>>>   Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20170922125526-vw5khx.log
>>>   Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
>>> [ INFO  ] Stage: Environment packages setup
>>> [ INFO  ] Yum Downloading: repomdPLa0LXtmp.xml (0%)
>>> [ INFO  ] Stage: Programs detection
>>> [ INFO  ] Stage: Environment setup
>>> [ INFO  ] Stage: Environment customization
>>>
>>>   --== PRODUCT OPTIONS ==--
>>>
>>>
>>>   --== PACKAGES ==--
>>>
>>> [ INFO  ] Checking for product updates...
>>>   Setup has found updates for some packages, do you wish to
>>> update them now? (Yes, No) [Yes]:
>>> [ INFO  ] Checking for an update for Setup...
>>>
>>>   --== NETWORK CONFIGURATION ==--
>>>
>>> [WARNING] Failed to resolve engine01.mydomain.za using DNS, it can be
>>> resolved only locally
>>>   Setup can automatically configure the firewall on this system.
>>>   Note: automatic configuration of the firewall may overwrite
>>> current settings.
>>>   Do you want Setup to configure the firewall? (Yes, No) [Yes]:
>>> no
>>>
>>>   --== DATABASE CONFIGURATION ==--
>>>
>>>
>>>   --== OVIRT ENGINE CONFIGURATION ==--
>>>
>>>   Skipping storing options as database already prepared
>>>
>>>   --== PKI CONFIGURATION ==--
>>>
>>>   PKI is already configured
>>>
>>>   --== APACHE CONFIGURATION ==--
>>>
>>>
>>>   --== SYSTEM CONFIGURATION ==--
>>>
>>>
>>>   --== MISC CONFIGURATION ==--
>>>
>>>
>>>   --== END OF CONFIGURATION ==--
>>>
>>> [ INFO  ] Stage: Setup validation
>>>   During execution engine service will be stopped (OK, Cancel)
>>> [OK]:
>>> [WARNING] Less than 16384MB of memory is available
>>> [ INFO  ] Cleaning stale zombie tasks
>>>
>>>   --== CONFIGURATION PREVIEW ==--
>>>
>>>   Engine database name: engine
>>>   Engine database secured connection  : False
>>>   Engine database host: localhost
>>>   Engine database user name   : engine
>>>   Engine database host name validation: False
>>>   Engine database port: 5432
>>>   Datacenter storage type : False
>>>   Update Firewall : False
>>>   Configure WebSocket Proxy   : True
>>>   Host FQDN   : engine01.mydomain.za
>>>   Upgrade packages: True
>>>
>>>   Please confirm installation settings (OK, Cancel) [OK]:
>>> [ INFO  ] Cleaning async tasks and compensations
>>> [ INFO  ] Checking the Engine database consistency
>>> [ INFO  ] Stage: Transaction setup
>>> [ INFO  ] Stopping engine service
>>> [ INFO  ] Stopping websocket-proxy service
>>> [ INFO  ] Stage: Misc configuration
>>> [ INFO  ] Stage: Package installation
>>> [ INFO  ] Yum Status: Downloading Packages
>>> [ INFO  ] Yum Download/Verify: ovirt-engine-3.4.4-1.el6.noarch
>>> [ INFO  ] Yum Downloading: (2/13): 
>>> ovirt-engine-backend-3.4.4-1.el6.noarch.rpm
>>> 2.0 M(19%)
>>> [ INFO  ] Yum Downloading: (2/13): 
>>> ovirt-engine-backend-3.4.4-1.el6.noarch.rpm
>>> 4.3 M(41%)
>>> [ INFO  ] Yum Downloading: (2/13): 
>>> ovirt-engine-backend-3.4.4-1.el6.noarch.rpm
>>> 6.3 M(60%)
>>> [ INFO  ] Yum Downloading: (2/13): 
>>> ovirt-engine-backend-3.4.4-1.el6.noarch.rpm
>>> 8.9 M(85%)
>>> [ INFO  ] Yum Download/Verify: ovirt-engine-backend-3.4.4-1.el6.noarch
>>> [ INFO  ] Yum Download/Verify: ovirt-engine-dbscripts-3.4.4-1.el6.noarch
>>> (I've taken out all the downloading progress)
>>>
>>> [ INFO  ] Yum Verify: 26/26: ovirt-engine-backend.noarch 0:3.4.0-1.el6 -
>>> ud
>>> [ INFO  ] Stage: Misc configuration
>>> [ INFO  ] Backing up database localhost:engine to
>>> 

Re: [ovirt-users] oVirt Node update question

2017-09-22 Thread Matthias Leopold

Hi Yuval,

I updated my nodes from 4.1.3 to 4.1.6 today and noticed that the

> /etc/yum.repos.d/ovirt-4.1-pre-dependencies.repo
> /etc/yum.repos.d/ovirt-4.1-pre.repo

files I moved away previously reappeared after rebooting, so I'm getting 
updates to 4.1.7-0.1.rc1.20170919143904.git0c14f08 proposed again. 
Obviously I haven't fully understood the "layer" concept of imgbased. 
The practical question for me is: how do I get _permanently_ rid of 
these files in "/etc/yum.repos.d/"?


thanks
matthias
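One way to keep repo files from re-enabling themselves on a layered node image is to disable them in place rather than moving them away (a sketch; the repo ids are guesses based on the filenames above, and whether imgbased carries /etc edits into the next image layer is an assumption worth verifying):

```shell
# Disabling edits the file in place (sets enabled=0), so a freshly
# shipped copy of the .repo file no longer matters if the edit survives
yum-config-manager --disable ovirt-4.1-pre ovirt-4.1-pre-dependencies

# Verify the repos are now listed as disabled
yum repolist disabled | grep ovirt-4.1-pre
```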

Am 2017-08-31 um 16:24 schrieb Yuval Turgeman:

Yes that would do it, thanks for the update :)

On Thu, Aug 31, 2017 at 5:21 PM, Matthias Leopold 
> wrote:


Hi,

all of the nodes that already made updates in the past have

/etc/yum.repos.d/ovirt-4.1-pre-dependencies.repo
/etc/yum.repos.d/ovirt-4.1-pre.repo

I went through the logs in /var/log/ovirt-engine/host-deploy/ and my
own notes and discovered/remembered that this being presented with
RC versions started on 20170707, when I updated my nodes from 4.1.2
to 4.1.3-0.3.rc3.20170622082156.git47b4302 (!). Probably there was a
short timespan when you erroneously published an RC version in the
wrong repo; my nodes "caught" it and dragged this along until today,
when I finally cared ;-) I moved the
/etc/yum.repos.d/ovirt-4.1-pre*.repo files away and now everything
seems fine.

Regards
Matthias

Am 2017-08-31 um 15:25 schrieb Yuval Turgeman:

Hi,

Don't quite understand how you got to that 4.1.6 rc, it's only
available in the pre release repo, can you paste the yum repos
that are enabled on your system ?

Thanks,
Yuval.

On Thu, Aug 31, 2017 at 4:19 PM, Matthias Leopold wrote:

     Hi,

     Thanks a lot.

     So I understand everything is fine with my nodes and I'll
wait until
     the update GUI shows the right version to update (4.1.5 at
the moment).

     Regards
     Matthias


     Am 2017-08-31 um 14:56 schrieb Yuval Turgeman:

         Hi,

         oVirt node ng is shipped with a placeholder rpm
preinstalled.
         The image-update rpms obsolete the placeholder rpm, so
once a
         new image-update rpm is published, yum update will pull
those
         packages.  So you have 1 system that was a fresh
install and the
         others were upgrades.
         Next, the post install script for those image-update
rpms will
         install --justdb the image-update rpms to the new image (so
         running yum update in the new image won't try to pull
again the
         same version).

         Regarding the 4.1.6 it's very strange, we'll need to
check the
         repos to see why it was published.

         As for nodectl, if there are no changes, it won't be
updated and
         you'll see an "old" version or a version that doesn't
seem to be
         matching the current image, but it is ok, we are
thinking of
         changing its name to make it less confusing.

         Hope this helps,
         Yuval.


          On Thu, Aug 31, 2017 at 11:17 AM, Matthias Leopold wrote:

Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-22 Thread Neil
Thanks Sandro.

I'll get cracking and report back if it fixed it.

Thanks for all the help everyone.


On Fri, Sep 22, 2017 at 3:14 PM, Sandro Bonazzola 
wrote:

>
>
> 2017-09-22 15:07 GMT+02:00 Neil :
>
>>
>> Thanks for the guidance everyone.
>>
>> I've upgraded my engine now to ovirt-engine-3.4.4-1 but I've still got
>> the same error unfortunately. Below is the output of the upgrade. Should
>> this have fixed the issue or do I need to upgrade to 3.5 etc?
>>
>
> I think you'll need 3.5.4 at least:
> https://bugzilla.redhat.com/show_bug.cgi?id=1214860
>
>
>
>
>>
>>
>> [ INFO  ] Stage: Initializing
>> [ INFO  ] Stage: Environment setup
>>   Configuration files: 
>> ['/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
>> '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
>>   Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20170922125526-vw5khx.log
>>   Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
>> [ INFO  ] Stage: Environment packages setup
>> [ INFO  ] Yum Downloading: repomdPLa0LXtmp.xml (0%)
>> [ INFO  ] Stage: Programs detection
>> [ INFO  ] Stage: Environment setup
>> [ INFO  ] Stage: Environment customization
>>
>>   --== PRODUCT OPTIONS ==--
>>
>>
>>   --== PACKAGES ==--
>>
>> [ INFO  ] Checking for product updates...
>>   Setup has found updates for some packages, do you wish to
>> update them now? (Yes, No) [Yes]:
>> [ INFO  ] Checking for an update for Setup...
>>
>>   --== NETWORK CONFIGURATION ==--
>>
>> [WARNING] Failed to resolve engine01.mydomain.za using DNS, it can be
>> resolved only locally
>>   Setup can automatically configure the firewall on this system.
>>   Note: automatic configuration of the firewall may overwrite
>> current settings.
>>   Do you want Setup to configure the firewall? (Yes, No) [Yes]: no
>>
>>   --== DATABASE CONFIGURATION ==--
>>
>>
>>   --== OVIRT ENGINE CONFIGURATION ==--
>>
>>   Skipping storing options as database already prepared
>>
>>   --== PKI CONFIGURATION ==--
>>
>>   PKI is already configured
>>
>>   --== APACHE CONFIGURATION ==--
>>
>>
>>   --== SYSTEM CONFIGURATION ==--
>>
>>
>>   --== MISC CONFIGURATION ==--
>>
>>
>>   --== END OF CONFIGURATION ==--
>>
>> [ INFO  ] Stage: Setup validation
>>   During execution engine service will be stopped (OK, Cancel)
>> [OK]:
>> [WARNING] Less than 16384MB of memory is available
>> [ INFO  ] Cleaning stale zombie tasks
>>
>>   --== CONFIGURATION PREVIEW ==--
>>
>>   Engine database name: engine
>>   Engine database secured connection  : False
>>   Engine database host: localhost
>>   Engine database user name   : engine
>>   Engine database host name validation: False
>>   Engine database port: 5432
>>   Datacenter storage type : False
>>   Update Firewall : False
>>   Configure WebSocket Proxy   : True
>>   Host FQDN   : engine01.mydomain.za
>>   Upgrade packages: True
>>
>>   Please confirm installation settings (OK, Cancel) [OK]:
>> [ INFO  ] Cleaning async tasks and compensations
>> [ INFO  ] Checking the Engine database consistency
>> [ INFO  ] Stage: Transaction setup
>> [ INFO  ] Stopping engine service
>> [ INFO  ] Stopping websocket-proxy service
>> [ INFO  ] Stage: Misc configuration
>> [ INFO  ] Stage: Package installation
>> [ INFO  ] Yum Status: Downloading Packages
>> [ INFO  ] Yum Download/Verify: ovirt-engine-3.4.4-1.el6.noarch
>> [ INFO  ] Yum Downloading: (2/13): 
>> ovirt-engine-backend-3.4.4-1.el6.noarch.rpm
>> 2.0 M(19%)
>> [ INFO  ] Yum Downloading: (2/13): 
>> ovirt-engine-backend-3.4.4-1.el6.noarch.rpm
>> 4.3 M(41%)
>> [ INFO  ] Yum Downloading: (2/13): 
>> ovirt-engine-backend-3.4.4-1.el6.noarch.rpm
>> 6.3 M(60%)
>> [ INFO  ] Yum Downloading: (2/13): 
>> ovirt-engine-backend-3.4.4-1.el6.noarch.rpm
>> 8.9 M(85%)
>> [ INFO  ] Yum Download/Verify: ovirt-engine-backend-3.4.4-1.el6.noarch
>> [ INFO  ] Yum Download/Verify: ovirt-engine-dbscripts-3.4.4-1.el6.noarch
>> (I've taken out all the downloading progress)
>>
>> [ INFO  ] Yum Verify: 26/26: ovirt-engine-backend.noarch 0:3.4.0-1.el6 -
>> ud
>> [ INFO  ] Stage: Misc configuration
>> [ INFO  ] Backing up database localhost:engine to
>> '/var/lib/ovirt-engine/backups/engine-20170922143709.m_8fr_.dump'.
>> [ INFO  ] Updating Engine database schema
>> [ INFO  ] Generating post install configuration file
>> '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
>> [ INFO  ] Stage: Transaction commit
>> [ INFO  ] Stage: Closing up
>>
>>   --== SUMMARY ==--
>>
>> [WARNING] Less than 16384MB of memory is available
>>   SSH 

[ovirt-users] Task stuck at "Finalizing"

2017-09-22 Thread Wesley Stewart
Back when I first installed oVirt a couple of months ago, I tried importing
VMs through my export domain, and one of the VMs got "stuck" in the process.


Importing VM Server_BACKUP_20170807_010018 to Cluster OVIRT-Cluster Aug 13,
2017 11:59:29 PM N/A 751cb290-7cb8-4ed7-a4e7-e8b9743cf2dc
Validating: Aug 13, 2017 11:59:29 PM until Aug 13, 2017 11:59:29 PM
Executing: Aug 13, 2017 11:59:29 PM until Aug 13, 2017 11:59:37 PM
Finalizing

I tried running taskcleaner.sh:

sudo su postgres
./taskcleaner.sh -R -u postgres -d engine -s /tmp
(Found in /usr/share/ovirt-engine/setup/dbutils; oddly, it would ONLY run
with -s /tmp, so I am not sure whether it is actually running against the
correct server.)

Anyone know what I can do to clear that task?
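For reference, stuck commands of this kind are tracked in the engine database and can be inspected read-only before attempting any cleanup (a sketch; the table and column names below vary between engine versions, so treat them as assumptions and adapt to what `\d` shows):

```shell
# Run against the engine DB (user/db names are the engine-setup defaults)
# List recent commands the engine is still tracking
sudo -u postgres psql -d engine -c \
  "SELECT command_id, command_type, status FROM command_entities LIMIT 20;"

# Async tasks that may be pinning the 'Finalizing' step
sudo -u postgres psql -d engine -c \
  "SELECT task_id, status FROM async_tasks;"
```

Inspecting first makes it possible to confirm the stuck entry exists (and note its id) before pointing taskcleaner.sh or a manual DELETE at it.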


Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-22 Thread Sandro Bonazzola
2017-09-22 15:07 GMT+02:00 Neil :

>
> Thanks for the guidance everyone.
>
> I've upgraded my engine now to ovirt-engine-3.4.4-1 but I've still got the
> same error unfortunately. Below is the output of the upgrade. Should this
> have fixed the issue or do I need to upgrade to 3.5 etc?
>

I think you'll need 3.5.4 at least:
https://bugzilla.redhat.com/show_bug.cgi?id=1214860




>
>
> [ INFO  ] Stage: Initializing
> [ INFO  ] Stage: Environment setup
>   Configuration files: 
> ['/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
> '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
>   Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-
> 20170922125526-vw5khx.log
>   Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
> [ INFO  ] Stage: Environment packages setup
> [ INFO  ] Yum Downloading: repomdPLa0LXtmp.xml (0%)
> [ INFO  ] Stage: Programs detection
> [ INFO  ] Stage: Environment setup
> [ INFO  ] Stage: Environment customization
>
>   --== PRODUCT OPTIONS ==--
>
>
>   --== PACKAGES ==--
>
> [ INFO  ] Checking for product updates...
>   Setup has found updates for some packages, do you wish to update

Re: [ovirt-users] Snapshot removal time

2017-09-22 Thread Ala Hino
On Sep 22, 2017 3:54 PM, "Troels Arvin"  wrote:

Hello,

Ala wrote:
> What's the version of the manager (engine)?

4.1.1



> Could you please provide the link or the SPM and the host
> running the VM?

I don't understand that. I cannot provide intimate details about the
installation, nor a link to it.


Typo. I meant logs, not links.



--
Regards,
Troels

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-22 Thread Neil
Thanks for the guidance everyone.

I've upgraded my engine now to ovirt-engine-3.4.4-1 but I've still got the
same error unfortunately. Below is the output of the upgrade. Should this
have fixed the issue or do I need to upgrade to 3.5 etc?


[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
  Configuration files:
['/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
'/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
  Log file:
/var/log/ovirt-engine/setup/ovirt-engine-setup-20170922125526-vw5khx.log
  Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Yum Downloading: repomdPLa0LXtmp.xml (0%)
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

  --== PRODUCT OPTIONS ==--


  --== PACKAGES ==--

[ INFO  ] Checking for product updates...
  Setup has found updates for some packages, do you wish to update
them now? (Yes, No) [Yes]:
[ INFO  ] Checking for an update for Setup...

  --== NETWORK CONFIGURATION ==--

[WARNING] Failed to resolve engine01.mydomain.za using DNS, it can be
resolved only locally
  Setup can automatically configure the firewall on this system.
  Note: automatic configuration of the firewall may overwrite
current settings.
  Do you want Setup to configure the firewall? (Yes, No) [Yes]: no

  --== DATABASE CONFIGURATION ==--


  --== OVIRT ENGINE CONFIGURATION ==--

  Skipping storing options as database already prepared

  --== PKI CONFIGURATION ==--

  PKI is already configured

  --== APACHE CONFIGURATION ==--


  --== SYSTEM CONFIGURATION ==--


  --== MISC CONFIGURATION ==--


  --== END OF CONFIGURATION ==--

[ INFO  ] Stage: Setup validation
  During execution engine service will be stopped (OK, Cancel)
[OK]:
[WARNING] Less than 16384MB of memory is available
[ INFO  ] Cleaning stale zombie tasks

  --== CONFIGURATION PREVIEW ==--

  Engine database name: engine
  Engine database secured connection  : False
  Engine database host: localhost
  Engine database user name   : engine
  Engine database host name validation: False
  Engine database port: 5432
  Datacenter storage type : False
  Update Firewall : False
  Configure WebSocket Proxy   : True
  Host FQDN   : engine01.mydomain.za
  Upgrade packages: True

  Please confirm installation settings (OK, Cancel) [OK]:
[ INFO  ] Cleaning async tasks and compensations
[ INFO  ] Checking the Engine database consistency
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stopping engine service
[ INFO  ] Stopping websocket-proxy service
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Yum Status: Downloading Packages
[ INFO  ] Yum Download/Verify: ovirt-engine-3.4.4-1.el6.noarch
[ INFO  ] Yum Downloading: (2/13):
ovirt-engine-backend-3.4.4-1.el6.noarch.rpm 2.0 M(19%)
[ INFO  ] Yum Downloading: (2/13):
ovirt-engine-backend-3.4.4-1.el6.noarch.rpm 4.3 M(41%)
[ INFO  ] Yum Downloading: (2/13):
ovirt-engine-backend-3.4.4-1.el6.noarch.rpm 6.3 M(60%)
[ INFO  ] Yum Downloading: (2/13):
ovirt-engine-backend-3.4.4-1.el6.noarch.rpm 8.9 M(85%)
[ INFO  ] Yum Download/Verify: ovirt-engine-backend-3.4.4-1.el6.noarch
[ INFO  ] Yum Download/Verify: ovirt-engine-dbscripts-3.4.4-1.el6.noarch
(I've taken out all the downloading progress)

[ INFO  ] Yum Verify: 26/26: ovirt-engine-backend.noarch 0:3.4.0-1.el6 - ud
[ INFO  ] Stage: Misc configuration
[ INFO  ] Backing up database localhost:engine to
'/var/lib/ovirt-engine/backups/engine-20170922143709.m_8fr_.dump'.
[ INFO  ] Updating Engine database schema
[ INFO  ] Generating post install configuration file
'/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up

  --== SUMMARY ==--

[WARNING] Less than 16384MB of memory is available
  SSH fingerprint: 86:C7:AA:35:45:E9:83:3E:16:C9:2A:F5:68:52:68:84
  Internal CA
EE:91:B3:E7:40:D7:DD:A7:DD:77:9C:3B:D5:A1:E7:BE:E2:C9:8B:AA
  Web access is enabled at:
  http://engine01.mydomain.za:80/ovirt-engine
  https://engine01.mydomain.za:443/ovirt-engine
  In order to configure firewalld, copy the files from
  /etc/ovirt-engine/firewalld to /etc/firewalld/services
  and execute the following commands:
  firewall-cmd -service ovirt-postgres
  firewall-cmd -service ovirt-https
  firewall-cmd -service ovirt-websocket-proxy
  firewall-cmd -service ovirt-http
  The following 

Re: [ovirt-users] Snapshot removal time

2017-09-22 Thread Troels Arvin
Hello,

Ala wrote:
> What's the version of the manager (engine)?

4.1.1



> Could you please provide the link or the SPM and the host 
> running the VM?

I don't understand that. I cannot provide intimate details about the 
installation, nor a link to it.


-- 
Regards,
Troels

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot removal time

2017-09-22 Thread Ala Hino
Hello,

What's the version of the manager (engine)?

Could you please provide the link or the SPM and the host running the VM?

Thanks,
Ala

On Sep 22, 2017 1:19 PM, "Troels Arvin"  wrote:

> Hello,
>
> I have a RHV 4.1 virtualized guest-server with a number of rather large
> VirtIO virtual disks attached. The virtual disks are allocated from a
> fibre channel (block) storage domain. The hypervisor servers run RHEL 7.4.
>
> When I take a snapshot of the guest, then it takes a long time to remove
> the snapshots again, when the guest is powered off (a snapshot of a 2 TiB
> disk takes around 3 hours to remove). However, when the guest is running,
> then snapshot removal is very quick (perhaps around five minutes per
> snapshot). The involved disks have not been written much to while they
> had snapshots.
>
> I would expect the opposite: I.e., when the guest is turned off, then I
> would assume that oVirt can handle snapshot removal in a much more
> aggressive fashion than when performing a live snapshot removal?
>
> When performing offline snapshot removal, then on the hypervisor having
> the SPM role, I see the following in output from "ps xauw":
>
> vdsm 10255 8.3 0.0 389144 27196 ? S [...] qemu-img convert -p -t none -T
> none -f qcow2 /rhev/data-center/mnt/blockSD/xxx/images/yyy/zzz -O raw
> /rhev/data-center/mnt/blockSD/xxx/images/yyy/zzz_MERGE
>
> I don't see the same kind of process running on a guest's hypervisor when
> online snapshot removal is in progress.
>
> I've read most of https://www.ovirt.org/develop/release-management/
> features/storage/remove-snapshot/
> My interpretation from that document is that I should expect to see "qemu-
> img commit" commands instead of "qemu-img convert" processes. Or?
>
> The RHV system involved is somewhat old, having been upgraded many times
> from 3.x through 4.1. Could it be that it carries around old left-overs
> which result in obsolete snapshot removal behavior?
>
> --
> Regards,
> Troels Arvin
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Snapshot removal time

2017-09-22 Thread Troels Arvin
Hello,

I have a RHV 4.1 virtualized guest-server with a number of rather large 
VirtIO virtual disks attached. The virtual disks are allocated from a 
fibre channel (block) storage domain. The hypervisor servers run RHEL 7.4.

When I take a snapshot of the guest, then it takes a long time to remove 
the snapshots again, when the guest is powered off (a snapshot of a 2 TiB 
disk takes around 3 hours to remove). However, when the guest is running, 
then snapshot removal is very quick (perhaps around five minutes per 
snapshot). The involved disks have not been written much to while they 
had snapshots.

I would expect the opposite: I.e., when the guest is turned off, then I 
would assume that oVirt can handle snapshot removal in a much more 
aggressive fashion than when performing a live snapshot removal?

When performing offline snapshot removal, then on the hypervisor having 
the SPM role, I see the following in output from "ps xauw":

vdsm 10255 8.3 0.0 389144 27196 ? S [...] qemu-img convert -p -t none -T none 
-f qcow2 /rhev/data-center/mnt/blockSD/xxx/images/yyy/zzz -O raw 
/rhev/data-center/mnt/blockSD/xxx/images/yyy/zzz_MERGE

I don't see the same kind of process running on a guest's hypervisor when 
online snapshot removal is in progress.

I've read most of https://www.ovirt.org/develop/release-management/features/storage/remove-snapshot/
My interpretation from that document is that I should expect to see "qemu-
img commit" commands instead of "qemu-img convert" processes. Or?

The RHV system involved is somewhat old, having been upgraded many times 
from 3.x through 4.1. Could it be that it carries around old left-overs 
which result in obsolete snapshot removal behavior?

-- 
Regards,
Troels Arvin

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM won't start if a Cinder disk is attached

2017-09-22 Thread Luca 'remix_tj' Lorenzetto
I see only this traceback:

2017-09-22 09:23:13,094+0200 ERROR (jsonrpc/2) [jsonrpc.JsonRpcServer]
Internal server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
572, in _handle_request
res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line
198, in _dynamicMethod
result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
  File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
'current_values': v.getIoTune()}
  File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
result = self.getIoTuneResponse()
  File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
res = self._dom.blockIoTune(
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
47, in __getattr__
% self.vmid)
NotConnectedError: VM u'1fa7225c-7d60-4617-ab65-f7b6fea0357f' was not
started yet or was shut down


I've absolutely no idea what blockIoTune does or whether it is related.
Maybe someone with more experience about this setup can help.



On Fri, Sep 22, 2017 at 9:31 AM, Maxence SARTIAUX
 wrote:
> Hi,
>
> Here's the vdsm.log from the time when i started the VM.
>
> https://pastebin.com/MWdTR0Gr (I've omitted glusterfs volume & server list
> lines to have something a bit more readable)
>
> Ovirt version is 4.1.6.2-1.el7 (updated since the first mail)
> Ceph 12.2.0
> Cinder 10.0.5
>
> The cinder disk is a second disk, it's not the system.
>
> 
> De: "Luca 'remix_tj' Lorenzetto" 
> À: "Maxence SARTIAUX" 
> Cc: "users" 
> Envoyé: Jeudi 21 Septembre 2017 22:49:07
> Objet : Re: [ovirt-users] VM won't start if a Cinder disk is attached
>
>
> Hi,
>
> can you attach vdsm.log?
>
> Which version are you running? IIRC in the past booting from Ceph was not
> possible, but should be possible since 4.1.
>
> Luca
>
> On Thu, Sep 21, 2017 at 3:42 PM, Maxence SARTIAUX 
> wrote:
>>
>> Hello
>>
>> I have an oVirt 4.1.5.2-1 cluster with a Ceph Luminous & OpenStack
>> Ocata Cinder setup.
>>
>> I can create / remove / attach Cinder disks with oVirt, but when I attach a
>> disk to a VM, the VM stays in "starting" mode (grey double up-arrow) and
>> never comes up; oVirt tries every available hypervisor, ends up detaching
>> the disk, and the VM remains in the "starting up" state
>>
>> All I see in the libvirt logs is "connection timeout", nothing more; the
>> hypervisors can contact the Ceph cluster
>>
>> Nothing related in the ovirt logs & cinder
>>
>> Any ideas ?
>>
>> Thank you !
>>
>>
>>
>> Maxence Sartiaux | System & Network Engineer
>> Boulevard Initialis, 28 - 7000 Mons
>> Tel :+32 (0)65 84 23 85 (ext: 6016)
>> Fax :+32 (0)65 84 66 76
>> www.it-optics.com
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> --
> "E' assurdo impiegare gli uomini di intelligenza eccellente per fare
> calcoli che potrebbero essere affidati a chiunque se si usassero delle
> macchine"
> Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
>
> "Internet è la più grande biblioteca del mondo.
> Ma il problema è che i libri sono tutti sparsi sul pavimento"
> John Allen Paulos, Matematico (1945-vivente)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net ,
> 
>



-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-22 Thread Martin Perina
On Fri, Sep 22, 2017 at 10:58 AM, Neil  wrote:

> Thanks Martin and Piotr,
>
> Correct, this was a very old installation from the old drey repo that was
> upgraded gradually over the years.
>
> I tried engine-setup yesterday; prior to that, judging by the logs under
> /var/log/ovirt-engine/setup, it looks like it was last run in 2014.
>
> I've attached a log of the output of running it now, looks like a repo
> issue with trying to upgrade to the latest 3.4.x release, but not sure what
> else to look for?
>

Hmm, it's such an ancient version that the oVirt 3.4 mirrors are probably not
working anymore. You can either:

1. Execute engine-setup --offline to skip updates check or
2. Edit the /etc/yum.repos.d/ovirt*.repo files and switch from the mirrors to
the main site, resources.ovirt.org
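
[Editor's note] As a concrete illustration of option 2, here is what such an
edit could look like, rehearsed on a scratch repo file. The file name,
section name, and URLs below are made up for the demo and are not taken from
an actual oVirt 3.4 repo file; check your own ovirt*.repo files for the real
values:

```shell
# Write a scratch repo file resembling a mirrorlist-based layout
# (contents are illustrative, not the real oVirt 3.4 repo definition).
cat > ovirt-demo.repo <<'EOF'
[ovirt-3.4]
name=oVirt 3.4
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.4-el6
enabled=1
EOF

# Replace the dead mirrorlist with a baseurl pointing at the main site.
sed -i 's|^mirrorlist=.*|baseurl=http://resources.ovirt.org/pub/ovirt-3.4/rpm/el6/|' ovirt-demo.repo

grep '^baseurl=' ovirt-demo.repo
```

After a change like this, `yum clean all` followed by re-running
engine-setup would pick up the new location.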


> Thanks for the assistance.
>
> Regards.
>
> Neil Wilson
>
>
> On Fri, Sep 22, 2017 at 10:38 AM, Piotr Kliczewski <
> piotr.kliczew...@gmail.com> wrote:
>
>> On Fri, Sep 22, 2017 at 10:35 AM, Martin Perina 
>> wrote:
>> >
>> >
>> > On Fri, Sep 22, 2017 at 10:18 AM, Neil  wrote:
>> >>
>> >> Hi Piotr,
>> >>
>> >> Thank you for the information.
>> >>
>> >> It looks like something has expired looking in the server.log now that
>> >> debug is enabled.
>> >>
>> >> 2017-09-22 09:35:26,462 INFO  [stdout] (MSC service thread 1-4)
>>  Version:
>> >> V3
>> >> 2017-09-22 09:35:26,464 INFO  [stdout] (MSC service thread 1-4)
>>  Subject:
>> >> CN=engine01.mydomain.za, O=mydomain, C=US
>> >> 2017-09-22 09:35:26,467 INFO  [stdout] (MSC service thread 1-4)
>> >> Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5
>> >> 2017-09-22 09:35:26,471 INFO  [stdout] (MSC service thread 1-4)
>> >> 2017-09-22 09:35:26,472 INFO  [stdout] (MSC service thread 1-4)   Key:
>> >> Sun RSA public key, 1024 bits
>> >> 2017-09-22 09:35:26,474 INFO  [stdout] (MSC service thread 1-4)
>>  modulus:
>> >> 966706131850237857720016566132274169225143716493132034132811
>> 213711757321195965137528821713060454503460188878350322233731
>> 259812207539722762942035931744044702655933680916835641105243
>> 164032601213316092139626126181817086803318505413903188689260
>> 54438078223371655800890725486783860059873397983318033852172060923531
>> >> 2017-09-22 09:35:26,476 INFO  [stdout] (MSC service thread 1-4)
>>  public
>> >> exponent: 65537
>> >> 2017-09-22 09:35:26,477 INFO  [stdout] (MSC service thread 1-4)
>> >> Validity: [From: Sun Oct 14 22:26:46 SAST 2012,
>> >> 2017-09-22 09:35:26,478 INFO  [stdout] (MSC service thread 1-4)
>> >> To: Tue Sep 19 18:26:49 SAST 2017]
>> >> 2017-09-22 09:35:26,479 INFO  [stdout] (MSC service thread 1-4)
>>  Issuer:
>> >> CN=CA-engine01.mydomain.za.47472, O=mydomain, C=US
>> >>
>> >> Any idea how I can generate a new one and what cert it is that's
>> expired?
>> >
>> >
>> > It seems that your engine certificate has expired, but AFAIK this
>> > certificate should be automatically renewed during engine-setup. So
>> when did
>> > you execute engine-setup for last time? Any info/warning about this
>> shown
>> > during invocation?
>>
>> Correct, Martin was a bit faster than me :)
>>
>> >
>> > Also looking at server.log I found JBoss 7.1.1, so you are running a
>> > really ancient oVirt version, right?
>> >
>> >>
>> >> Please see the attached log for more info.
>> >>
>> >> Thank you so much for your assistance.
>> >>
>> >> Regards.
>> >>
>> >> Neil Wilson.
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> On Thu, Sep 21, 2017 at 8:41 PM, Piotr Kliczewski
>> >>  wrote:
>> >>>
>> >>> Neil,
>> >>>
>> >>> It seems that your engine certificate(s) is/are not ok. I would
>> >>> suggest to enable ssl debug in the engine by:
>> >>> - add '-Djavax.net.debug=all' to ovirt-engine.py file here [1].
>> >>> - restart your engine
>> >>> - check your server.log and check what is the issue.
>> >>>
>> >>> Hopefully we will be able to understand what happened in your setup.
>> >>>
>> >>> Thanks,
>> >>> Piotr
>> >>>
>> >>> [1]
>> >>> https://github.com/oVirt/ovirt-engine/blob/master/packaging/
>> services/ovirt-engine/ovirt-engine.py#L341
>> >>>
>> >>> On Thu, Sep 21, 2017 at 4:42 PM, Neil  wrote:
>> >>> > Further to the logs sent, on the nodes I'm also seeing the following
>> >>> > error
>> >>> > under /var/log/messages...
>> >>> >
>> >>> > Sep 20 03:43:12 node01 vdsm root ERROR invalid client certificate
>> with
>> >>> > subject "/C=US/O=UKDM/CN=engine01.mydomain.za"^C
>> >>> > Sep 20 03:43:12 node01 vdsm vds ERROR xml-rpc handler
>> >>> > exception#012Traceback
>> >>> > (most recent call last):#012  File "/usr/share/vdsm/BindingXMLRPC
>> .py",
>> >>> > line
>> >>> > 80, in threaded_start#012self.server.handle_request()#012  File
>> >>> > "/usr/lib64/python2.6/SocketServer.py", line 278, in
>> handle_request#012
>> >>> > self._handle_request_noblock()#012  File
>> >>> > "/usr/lib64/python2.6/SocketServer.py", line 288, in
>> >>> > _handle_request_noblock#012request, client_address =
>> >>> > 

Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-22 Thread Sandro Bonazzola
2017-09-21 15:26 GMT+02:00 Neil :

> Hi guys,
>
> Please could someone assist, my cluster is down and I can't access my vm's
> to switch some of them back on.
>
> I'm seeing the following error in the engine.log however I've checked my
> certs on my hosts (as some of the goolge results said to check), but the
> certs haven't expired...
>
>
> 2017-09-21 15:09:45,077 ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
> (DefaultQuartzScheduler_Worker-4) Command GetCapabilitiesVDSCommand(HostName
> = node02.mydomain.za, HostId = d2debdfe-76e7-40cf-a7fd-78a0f50f14d4,
> vds=Host[node02.mydomain.za]) execution failed. Exception:
> VDSNetworkException: javax.net.ssl.SSLHandshakeException: Received fatal
> alert: certificate_expired
> 2017-09-21 15:09:45,086 ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
> (DefaultQuartzScheduler_Worker-10) Command GetCapabilitiesVDSCommand(HostName
> = node01.mydomain.za, HostId = b108549c-1700-11e2-b936-9f5243b8ce13,
> vds=Host[node01.mydomain.za]) execution failed. Exception:
> VDSNetworkException: javax.net.ssl.SSLHandshakeException: Received fatal
> alert: certificate_expired
> 2017-09-21 15:09:48,173 ERROR
>
> My engine and host info is below...
>
> [root@engine01 ovirt-engine]# rpm -qa | grep -i ovirt
> ovirt-engine-lib-3.4.0-1.el6.noarch
> ovirt-engine-restapi-3.4.0-1.el6.noarch
> ovirt-engine-setup-plugin-ovirt-engine-3.4.0-1.el6.noarch
> ovirt-engine-3.4.0-1.el6.noarch
>

People already answered about the certificate expiration.
Please note ovirt-engine-3.4.0 is the first release in the 3.4 series which
received 4 updates in its lifecycle (latest is 3.4.4,
https://www.ovirt.org/develop/release-management/releases/3.4.4/ )

Please consider updating to a supported version as soon as possible.






> ovirt-engine-setup-plugin-websocket-proxy-3.4.0-1.el6.noarch
> ovirt-host-deploy-java-1.2.0-1.el6.noarch
> ovirt-engine-setup-3.4.0-1.el6.noarch
> ovirt-host-deploy-1.2.0-1.el6.noarch
> ovirt-engine-backend-3.4.0-1.el6.noarch
> ovirt-image-uploader-3.4.0-1.el6.noarch
> ovirt-engine-tools-3.4.0-1.el6.noarch
> ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
> ovirt-engine-webadmin-portal-3.4.0-1.el6.noarch
> ovirt-engine-cli-3.4.0.5-1.el6.noarch
> ovirt-engine-setup-base-3.4.0-1.el6.noarch
> ovirt-iso-uploader-3.4.0-1.el6.noarch
> ovirt-engine-userportal-3.4.0-1.el6.noarch
> ovirt-log-collector-3.4.1-1.el6.noarch
> ovirt-engine-websocket-proxy-3.4.0-1.el6.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-3.4.0-1.el6.noarch
> ovirt-engine-dbscripts-3.4.0-1.el6.noarch
> [root@engine01 ovirt-engine]# cat /etc/redhat-release
> CentOS release 6.5 (Final)
>
>
> [root@node02 ~]# openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem
> -enddate -noout ; date
> notAfter=May 27 08:36:17 2019 GMT
> Thu Sep 21 15:18:22 SAST 2017
> CentOS release 6.5 (Final)
> [root@node02 ~]# rpm -qa | grep vdsm
> vdsm-4.14.6-0.el6.x86_64
> vdsm-python-4.14.6-0.el6.x86_64
> vdsm-cli-4.14.6-0.el6.noarch
> vdsm-xmlrpc-4.14.6-0.el6.noarch
> vdsm-python-zombiereaper-4.14.6-0.el6.noarch
>
>
> [root@node01 ~]# openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem
> -enddate -noout ; date
> notAfter=Jun 13 16:09:41 2018 GMT
> Thu Sep 21 15:18:52 SAST 2017
> CentOS release 6.5 (Final)
> [root@node01 ~]# rpm -qa | grep -i vdsm
> vdsm-4.14.6-0.el6.x86_64
> vdsm-xmlrpc-4.14.6-0.el6.noarch
> vdsm-cli-4.14.6-0.el6.noarch
> vdsm-python-zombiereaper-4.14.6-0.el6.noarch
> vdsm-python-4.14.6-0.el6.x86_64
>
> Please could I have some assistance, I'm rather desperate.
>
> Thank you.
>
> Regards.
>
> Neil Wilson
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
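
[Editor's note] The per-certificate checks used throughout this thread can be
rehearsed on a throwaway self-signed certificate, generated here so the
example is self-contained. On a real engine you would point openssl at files
such as /etc/pki/ovirt-engine/ca.pem or /etc/pki/vdsm/certs/vdsmcert.pem
instead; the file names and subject below are only stand-ins:

```shell
# Create a throwaway self-signed cert valid for one day, standing in
# for an engine or vdsm certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.pem \
  -days 1 -subj "/CN=engine01.mydomain.za" 2>/dev/null

# Print the expiry date, as done in this thread.
openssl x509 -in demo.pem -enddate -noout

# -checkend N exits 0 only if the cert is still valid N seconds from now,
# which makes it handy in cron jobs or monitoring checks.
if openssl x509 -in demo.pem -checkend 0 -noout; then
  echo "certificate still valid"
else
  echo "certificate expired"
fi
```

Running the `-checkend` test against every certificate the engine and hosts
use would have flagged the expired engine certificate before the outage.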


[ovirt-users] Snapshot removal

2017-09-22 Thread Lionel Caignec
Hi,

I'm wondering if it is possible to delete snapshots of different VMs at the
same time, or is it necessary to do them one at a time?

--
Lionel 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-22 Thread Piotr Kliczewski
On Fri, Sep 22, 2017 at 10:35 AM, Martin Perina  wrote:
>
>
> On Fri, Sep 22, 2017 at 10:18 AM, Neil  wrote:
>>
>> Hi Piotr,
>>
>> Thank you for the information.
>>
>> It looks like something has expired looking in the server.log now that
>> debug is enabled.
>>
>> 2017-09-22 09:35:26,462 INFO  [stdout] (MSC service thread 1-4)   Version:
>> V3
>> 2017-09-22 09:35:26,464 INFO  [stdout] (MSC service thread 1-4)   Subject:
>> CN=engine01.mydomain.za, O=mydomain, C=US
>> 2017-09-22 09:35:26,467 INFO  [stdout] (MSC service thread 1-4)
>> Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5
>> 2017-09-22 09:35:26,471 INFO  [stdout] (MSC service thread 1-4)
>> 2017-09-22 09:35:26,472 INFO  [stdout] (MSC service thread 1-4)   Key:
>> Sun RSA public key, 1024 bits
>> 2017-09-22 09:35:26,474 INFO  [stdout] (MSC service thread 1-4)   modulus:
>> 96670613185023785772001656613227416922514371649313203413281121371175732119596513752882171306045450346018887835032223373125981220753972276294203593174404470265593368091683564110524316403260121331609213962612618181708680331850541390318868926054438078223371655800890725486783860059873397983318033852172060923531
>> 2017-09-22 09:35:26,476 INFO  [stdout] (MSC service thread 1-4)   public
>> exponent: 65537
>> 2017-09-22 09:35:26,477 INFO  [stdout] (MSC service thread 1-4)
>> Validity: [From: Sun Oct 14 22:26:46 SAST 2012,
>> 2017-09-22 09:35:26,478 INFO  [stdout] (MSC service thread 1-4)
>> To: Tue Sep 19 18:26:49 SAST 2017]
>> 2017-09-22 09:35:26,479 INFO  [stdout] (MSC service thread 1-4)   Issuer:
>> CN=CA-engine01.mydomain.za.47472, O=mydomain, C=US
>>
>> Any idea how I can generate a new one and what cert it is that's expired?
>
>
> It seems that your engine certificate has expired, but AFAIK this
> certificate should be automatically renewed during engine-setup. So when did
> you execute engine-setup for last time? Any info/warning about this shown
> during invocation?

Correct, Martin was a bit faster than me :)

>
> Also looking at server.log I found JBoss 7.1.1, so you are running a really
> ancient oVirt version, right?
>
>>
>> Please see the attached log for more info.
>>
>> Thank you so much for your assistance.
>>
>> Regards.
>>
>> Neil Wilson.
>>
>>
>>
>>
>>
>>
>> On Thu, Sep 21, 2017 at 8:41 PM, Piotr Kliczewski
>>  wrote:
>>>
>>> Neil,
>>>
>>> It seems that your engine certificate(s) is/are not ok. I would
>>> suggest to enable ssl debug in the engine by:
>>> - add '-Djavax.net.debug=all' to ovirt-engine.py file here [1].
>>> - restart your engine
>>> - check your server.log and check what is the issue.
>>>
>>> Hopefully we will be able to understand what happened in your setup.
>>>
>>> Thanks,
>>> Piotr
>>>
>>> [1]
>>> https://github.com/oVirt/ovirt-engine/blob/master/packaging/services/ovirt-engine/ovirt-engine.py#L341
>>>
>>> On Thu, Sep 21, 2017 at 4:42 PM, Neil  wrote:
>>> > Further to the logs sent, on the nodes I'm also seeing the following
>>> > error
>>> > under /var/log/messages...
>>> >
>>> > Sep 20 03:43:12 node01 vdsm root ERROR invalid client certificate with
>>> > subject "/C=US/O=UKDM/CN=engine01.mydomain.za"^C
>>> > Sep 20 03:43:12 node01 vdsm vds ERROR xml-rpc handler
>>> > exception#012Traceback
>>> > (most recent call last):#012  File "/usr/share/vdsm/BindingXMLRPC.py",
>>> > line
>>> > 80, in threaded_start#012self.server.handle_request()#012  File
>>> > "/usr/lib64/python2.6/SocketServer.py", line 278, in handle_request#012
>>> > self._handle_request_noblock()#012  File
>>> > "/usr/lib64/python2.6/SocketServer.py", line 288, in
>>> > _handle_request_noblock#012request, client_address =
>>> > self.get_request()#012  File "/usr/lib64/python2.6/SocketServer.py",
>>> > line
>>> > 456, in get_request#012return self.socket.accept()#012  File
>>> > "/usr/lib64/python2.6/site-packages/vdsm/SecureXMLRPCServer.py", line
>>> > 136,
>>> > in accept#012raise SSL.SSLError("%s, client %s" % (e,
>>> > address[0]))#012SSLError: no certificate returned, client 10.251.193.5
>>> >
>>> > Not sure if this is any further help in diagnosing the issue?
>>> >
>>> > Thanks, any assistance is appreciated.
>>> >
>>> > Regards.
>>> >
>>> > Neil Wilson.
>>> >
>>> >
>>> > On Thu, Sep 21, 2017 at 4:31 PM, Neil  wrote:
>>> >>
>>> >> Hi Piotr,
>>> >>
>>> >> Thank you for the reply. After sending the email I did go and check
>>> >> the
>>> >> engine one too
>>> >>
>>> >> [root@engine01 /]# openssl x509 -in /etc/pki/ovirt-engine/ca.pem
>>> >> -enddate
>>> >> -noout
>>> >> notAfter=Oct 13 16:26:46 2022 GMT
>>> >>
>>> >> I'm not sure if this one below is meant to verify or if this output is
>>> >> expected?
>>> >>
>>> >> [root@engine01 /]# openssl x509 -in
>>> >> /etc/pki/ovirt-engine/private/ca.pem
>>> >> -enddate -noout
>>> >> unable to load certificate
>> >> 140642165552968:error:0906D06C:PEM routines:PEM_read_bio:no start
>> >> line:pem_lib.c:703:Expecting: TRUSTED CERTIFICATE

Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-22 Thread Martin Perina
On Fri, Sep 22, 2017 at 10:18 AM, Neil  wrote:

> Hi Piotr,
>
> Thank you for the information.
>
> It looks like something has expired looking in the server.log now that
> debug is enabled.
>
> 2017-09-22 09:35:26,462 INFO  [stdout] (MSC service thread 1-4)   Version:
> V3
> 2017-09-22 09:35:26,464 INFO  [stdout] (MSC service thread 1-4)   Subject:
> CN=engine01.mydomain.za, O=mydomain, C=US
> 2017-09-22 09:35:26,467 INFO  [stdout] (MSC service thread 1-4)
> Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5
> 2017-09-22 09:35:26,471 INFO  [stdout] (MSC service thread 1-4)
> 2017-09-22 09:35:26,472 INFO  [stdout] (MSC service thread 1-4)   Key:
>  Sun RSA public key, 1024 bits
> 2017-09-22 09:35:26,474 INFO  [stdout] (MSC service thread 1-4)   modulus:
> 966706131850237857720016566132274169225143716493132034132811
> 213711757321195965137528821713060454503460188878350322233731
> 259812207539722762942035931744044702655933680916835641105243
> 164032601213316092139626126181817086803318505413903188689260
> 54438078223371655800890725486783860059873397983318033852172060923531
> 2017-09-22 09:35:26,476 INFO  [stdout] (MSC service thread 1-4)   public
> exponent: 65537
> 2017-09-22 09:35:26,477 INFO  [stdout] (MSC service thread 1-4)
> Validity: [From: Sun Oct 14 22:26:46 SAST 2012,
> 2017-09-22 09:35:26,478 INFO  [stdout] (MSC service thread 1-4)
>  To: Tue Sep 19 18:26:49 SAST 2017]
> 2017-09-22 09:35:26,479 INFO  [stdout] (MSC service thread 1-4)   Issuer:
> CN=CA-engine01.mydomain.za.47472, O=mydomain, C=US
>
> Any idea how I can generate a new one and what cert it is that's expired?
>

It seems that your engine certificate has expired, but AFAIK this
certificate should be automatically renewed during engine-setup. So when
did you execute engine-setup for the last time? Any info/warning about this
shown during invocation?

Also looking at server.log I found JBoss 7.1.1, so you are running a really
ancient oVirt version, right?


> Please see the attached log for more info.
>
> Thank you so much for your assistance.
>
> Regards.
>
> Neil Wilson.
>
>
>
>
>
>
> On Thu, Sep 21, 2017 at 8:41 PM, Piotr Kliczewski <
> piotr.kliczew...@gmail.com> wrote:
>
>> Neil,
>>
>> It seems that your engine certificate(s) is/are not ok. I would
>> suggest to enable ssl debug in the engine by:
>> - add '-Djavax.net.debug=all' to ovirt-engine.py file here [1].
>> - restart your engine
>> - check your server.log and check what is the issue.
>>
>> Hopefully we will be able to understand what happened in your setup.
>>
>> Thanks,
>> Piotr
>>
>> [1] https://github.com/oVirt/ovirt-engine/blob/master/packaging/
>> services/ovirt-engine/ovirt-engine.py#L341
>>
>> On Thu, Sep 21, 2017 at 4:42 PM, Neil  wrote:
>> > Further to the logs sent, on the nodes I'm also seeing the following
>> error
>> > under /var/log/messages...
>> >
>> > Sep 20 03:43:12 node01 vdsm root ERROR invalid client certificate with
>> > subject "/C=US/O=UKDM/CN=engine01.mydomain.za"^C
>> > Sep 20 03:43:12 node01 vdsm vds ERROR xml-rpc handler
>> exception#012Traceback
>> > (most recent call last):#012  File "/usr/share/vdsm/BindingXMLRPC.py",
>> line
>> > 80, in threaded_start#012self.server.handle_request()#012  File
>> > "/usr/lib64/python2.6/SocketServer.py", line 278, in handle_request#012
>> > self._handle_request_noblock()#012  File
>> > "/usr/lib64/python2.6/SocketServer.py", line 288, in
>> > _handle_request_noblock#012request, client_address =
>> > self.get_request()#012  File "/usr/lib64/python2.6/SocketServer.py",
>> line
>> > 456, in get_request#012return self.socket.accept()#012  File
>> > "/usr/lib64/python2.6/site-packages/vdsm/SecureXMLRPCServer.py", line
>> 136,
>> > in accept#012raise SSL.SSLError("%s, client %s" % (e,
>> > address[0]))#012SSLError: no certificate returned, client 10.251.193.5
>> >
>> > Not sure if this is any further help in diagnosing the issue?
>> >
>> > Thanks, any assistance is appreciated.
>> >
>> > Regards.
>> >
>> > Neil Wilson.
>> >
>> >
>> > On Thu, Sep 21, 2017 at 4:31 PM, Neil  wrote:
>> >>
>> >> Hi Piotr,
>> >>
>> >> Thank you for the reply. After sending the email I went and checked the
>> >> engine certificate too:
>> >>
>> >> [root@engine01 /]# openssl x509 -in /etc/pki/ovirt-engine/ca.pem -enddate -noout
>> >> notAfter=Oct 13 16:26:46 2022 GMT
>> >>
>> >> I'm not sure whether the one below is supposed to validate, or whether
>> >> this output is expected:
>> >>
>> >> [root@engine01 /]# openssl x509 -in /etc/pki/ovirt-engine/private/ca.pem -enddate -noout
>> >> unable to load certificate
>> >> 140642165552968:error:0906D06C:PEM routines:PEM_read_bio:no start
>> >> line:pem_lib.c:703:Expecting: TRUSTED CERTIFICATE
>> >>
>> >> My date is correct too: Thu Sep 21 16:30:15 SAST 2017
>> >>
>> >> Any ideas?
>> >>
>> >> Googling surprisingly doesn't come up with much.
>> >>
>> >> Thank you.
>> >>
>> >> Regards.
>> >>
>> >> Neil Wilson.
>> 

Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-22 Thread Neil
Hi Piotr,

Thank you for the information.

Looking at server.log now that debug is enabled, it does look like a
certificate has expired.

2017-09-22 09:35:26,462 INFO  [stdout] (MSC service thread 1-4)   Version:
V3
2017-09-22 09:35:26,464 INFO  [stdout] (MSC service thread 1-4)   Subject:
CN=engine01.mydomain.za, O=mydomain, C=US
2017-09-22 09:35:26,467 INFO  [stdout] (MSC service thread 1-4)   Signature
Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5
2017-09-22 09:35:26,471 INFO  [stdout] (MSC service thread 1-4)
2017-09-22 09:35:26,472 INFO  [stdout] (MSC service thread 1-4)   Key:  Sun
RSA public key, 1024 bits
2017-09-22 09:35:26,474 INFO  [stdout] (MSC service thread 1-4)   modulus:
96670613185023785772001656613227416922514371649313203413281121371175732119596513752882171306045450346018887835032223373125981220753972276294203593174404470265593368091683564110524316403260121331609213962612618181708680331850541390318868926054438078223371655800890725486783860059873397983318033852172060923531
2017-09-22 09:35:26,476 INFO  [stdout] (MSC service thread 1-4)   public
exponent: 65537
2017-09-22 09:35:26,477 INFO  [stdout] (MSC service thread 1-4)   Validity:
[From: Sun Oct 14 22:26:46 SAST 2012,
2017-09-22 09:35:26,478 INFO  [stdout] (MSC service thread 1-4)
   To: Tue Sep 19 18:26:49 SAST 2017]
2017-09-22 09:35:26,479 INFO  [stdout] (MSC service thread 1-4)   Issuer:
CN=CA-engine01.mydomain.za.47472, O=mydomain, C=US

Any idea how I can generate a new one, and which cert it is that's expired?
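As a first step toward answering that, it can help to list the expiry date of every certificate in the engine's PKI directory (a hedged sketch; /etc/pki/ovirt-engine is the usual default location, and the glob may need adjusting for your layout):

```shell
# Print notAfter for each certificate under the oVirt engine PKI directory.
# PKI_DIR can be overridden; it defaults to the standard oVirt location.
PKI_DIR=${PKI_DIR:-/etc/pki/ovirt-engine}
for cert in "$PKI_DIR"/ca.pem "$PKI_DIR"/certs/*.cer; do
    [ -f "$cert" ] || continue            # skip unmatched globs / missing files
    printf '%s: ' "$cert"
    openssl x509 -in "$cert" -enddate -noout
done
```

Any certificate whose notAfter is in the past is a candidate for the handshake failure.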

Please see the attached log for more info.

Thank you so much for your assistance.

Regards.

Neil Wilson.

On Thu, Sep 21, 2017 at 8:41 PM, Piotr Kliczewski <
piotr.kliczew...@gmail.com> wrote:

> Neil,
>
> It seems that your engine certificate(s) is/are not ok. I would
> suggest to enable ssl debug in the engine by:
> - add '-Djavax.net.debug=all' to ovirt-engine.py file here [1].
> - restart your engine
> - check your server.log and check what is the issue.
>
> Hopefully we will be able to understand what happened in your setup.
>
> Thanks,
> Piotr
>
> [1] https://github.com/oVirt/ovirt-engine/blob/master/packaging/services/ovirt-engine/ovirt-engine.py#L341
>
> On Thu, Sep 21, 2017 at 4:42 PM, Neil  wrote:
> > Further to the logs sent, on the nodes I'm also seeing the following
> error
> > under /var/log/messages...
> >
> > Sep 20 03:43:12 node01 vdsm root ERROR invalid client certificate with
> > subject "/C=US/O=UKDM/CN=engine01.mydomain.za"^C
> > Sep 20 03:43:12 node01 vdsm vds ERROR xml-rpc handler
> exception#012Traceback
> > (most recent call last):#012  File "/usr/share/vdsm/BindingXMLRPC.py",
> line
> > 80, in threaded_start#012self.server.handle_request()#012  File
> > "/usr/lib64/python2.6/SocketServer.py", line 278, in handle_request#012
> > self._handle_request_noblock()#012  File
> > "/usr/lib64/python2.6/SocketServer.py", line 288, in
> > _handle_request_noblock#012request, client_address =
> > self.get_request()#012  File "/usr/lib64/python2.6/SocketServer.py",
> line
> > 456, in get_request#012return self.socket.accept()#012  File
> > "/usr/lib64/python2.6/site-packages/vdsm/SecureXMLRPCServer.py", line
> 136,
> > in accept#012raise SSL.SSLError("%s, client %s" % (e,
> > address[0]))#012SSLError: no certificate returned, client 10.251.193.5
> >
> > Not sure if this is any further help in diagnosing the issue?
> >
> > Thanks, any assistance is appreciated.
> >
> > Regards.
> >
> > Neil Wilson.
> >
> >
> > On Thu, Sep 21, 2017 at 4:31 PM, Neil  wrote:
> >>
> >> Hi Piotr,
> >>
> >> Thank you for the reply. After sending the email I went and checked the
> >> engine certificate too:
> >>
> >> [root@engine01 /]# openssl x509 -in /etc/pki/ovirt-engine/ca.pem -enddate -noout
> >> notAfter=Oct 13 16:26:46 2022 GMT
> >>
> >> I'm not sure whether the one below is supposed to validate, or whether
> >> this output is expected:
> >>
> >> [root@engine01 /]# openssl x509 -in /etc/pki/ovirt-engine/private/ca.pem -enddate -noout
> >> unable to load certificate
> >> 140642165552968:error:0906D06C:PEM routines:PEM_read_bio:no start
> >> line:pem_lib.c:703:Expecting: TRUSTED CERTIFICATE
> >>
> >> My date is correct too: Thu Sep 21 16:30:15 SAST 2017
> >>
> >> Any ideas?
> >>
> >> Googling surprisingly doesn't come up with much.
> >>
> >> Thank you.
> >>
> >> Regards.
> >>
> >> Neil Wilson.
> >>
> >> On Thu, Sep 21, 2017 at 4:16 PM, Piotr Kliczewski
> >>  wrote:
> >>>
> >>> Neil,
> >>>
> >>> You checked both nodes what about the engine? Can you check engine
> certs?
> >>> You can find more info where they are located here [1].
> >>>
> >>> Thanks,
> >>> Piotr
> >>>
> >>> [1]
> >>> https://www.ovirt.org/develop/release-management/features/infra/pki/#ovirt-engine
> >>>
> >>> On Thu, Sep 21, 2017 at 3:26 PM, Neil  wrote:
> >>> > Hi guys,
> >>> >
> >>> > Please could someone assist, my cluster is down and I can't