Re: [ovirt-users] OVS not running / logwatch error after upgrade from 4.0.6 to 4.1.8

2018-01-19 Thread Derek Atkins

It is /var/run/openvswitch.
However, it will need to be recreated on every reboot.
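
If you want it recreated automatically, a minimal sketch (assuming a
systemd-based host; owner/mode here are guesses, adjust to whatever
openvswitch expects on your system) is a tmpfiles.d drop-in:

  # /etc/tmpfiles.d/openvswitch.conf  (hypothetical file)
  d /var/run/openvswitch 0755 root root -

  # apply it immediately without rebooting
  systemd-tmpfiles --create /etc/tmpfiles.d/openvswitch.conf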

-derek
Sent using my mobile device. Please excuse any typos.



On January 19, 2018 3:54:44 PM Darrell Budic  wrote:

OVS is an optional tech preview in 4.1.x; you don’t need it. The logwatch
errors are annoying, though…


I think I created the directory to avoid the errors; I forget exactly what
it was, sorry.



From: Derek Atkins 
Subject: [ovirt-users] OVS not running / logwatch error after upgrade from 
4.0.6 to 4.1.8

Date: January 19, 2018 at 10:44:56 AM CST
To: users

Hi,
I recently upgraded my 1-host ovirt deployment from 4.0.6 to 4.1.8.
Since then, the host has been reporting a cron.daily error:

/etc/cron.daily/logrotate:

logrotate_script: line 4: cd: /var/run/openvswitch: No such file or directory

This isn't surprising, since:

# systemctl status openvswitch
● openvswitch.service - Open vSwitch
  Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; disabled;
vendor preset: disabled)
  Active: inactive (dead)

The host was just upgraded by "yum update".
Was there anything special that needed to happen after the update?
Do I *NEED* OVS running?
The VMs all seem to be behaving properly.

Thanks,

-derek

--
  Derek Atkins 617-623-3745
  de...@ihtfp.com www.ihtfp.com
  Computer and Internet Security Consultant

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVS not running / logwatch error after upgrade from 4.0.6 to 4.1.8

2018-01-19 Thread Darrell Budic
OVS is an optional tech preview in 4.1.x; you don’t need it. The logwatch
errors are annoying, though…

I think I created the directory to avoid the errors; I forget exactly what
it was, sorry.

> From: Derek Atkins 
> Subject: [ovirt-users] OVS not running / logwatch error after upgrade from 
> 4.0.6 to 4.1.8
> Date: January 19, 2018 at 10:44:56 AM CST
> To: users
> 
> Hi,
> I recently upgraded my 1-host ovirt deployment from 4.0.6 to 4.1.8.
> Since then, the host has been reporting a cron.daily error:
> 
> /etc/cron.daily/logrotate:
> 
> logrotate_script: line 4: cd: /var/run/openvswitch: No such file or directory
> 
> This isn't surprising, since:
> 
> # systemctl status openvswitch
> ● openvswitch.service - Open vSwitch
>   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; disabled;
> vendor preset: disabled)
>   Active: inactive (dead)
> 
> The host was just upgraded by "yum update".
> Was there anything special that needed to happen after the update?
> Do I *NEED* OVS running?
> The VMs all seem to be behaving properly.
> 
> Thanks,
> 
> -derek
> 
> -- 
>   Derek Atkins 617-623-3745
>   de...@ihtfp.com www.ihtfp.com
>   Computer and Internet Security Consultant
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] OVS not running / logwatch error after upgrade from 4.0.6 to 4.1.8

2018-01-19 Thread Derek Atkins
Hi,
I recently upgraded my 1-host ovirt deployment from 4.0.6 to 4.1.8.
Since then, the host has been reporting a cron.daily error:

/etc/cron.daily/logrotate:

logrotate_script: line 4: cd: /var/run/openvswitch: No such file or directory

This isn't surprising, since:

# systemctl status openvswitch
● openvswitch.service - Open vSwitch
   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; disabled;
vendor preset: disabled)
   Active: inactive (dead)

The host was just upgraded by "yum update".
Was there anything special that needed to happen after the update?
Do I *NEED* OVS running?
The VMs all seem to be behaving properly.

Thanks,

-derek

-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt storage access failure from host

2018-01-19 Thread Alex K
Hi Martin,

No deployments have been done and the servers have been restarted several
times since then.

I will go to oVirt 4.2 as soon as BZ 1477589 is fixed. The false routing
error blocks migration of VMs to the affected host.

Alex

On Fri, Jan 19, 2018 at 4:29 PM, Martin Sivak  wrote:

> Hi,
>
> Have you been adding or redeploying a host lately? If yes, then try
> restarting ovirt-ha-broker service. If it helps then it might be a
> case of this bug: https://bugzilla.redhat.com/1527394
>
> The ovirt-ha-agent and brokers from oVirt 4.2 are fixed already, but
> we haven't backported the fix yet.
>
> Best regards
>
> Martin Sivak
>
> On Fri, Jan 19, 2018 at 1:01 PM, Alex K  wrote:
> > Hi All,
> >
> > I have a 3-server oVirt 4.1 self-hosted setup with gluster replica 3.
> >
> > I see that suddenly one of the hosts was reported as unresponsive and at
> > the same time /var/log/messages logged:
> >
> > ovirt-ha-broker ovirt_hosted_engine_ha.broker.listener.ConnectionHandler
> > ERROR Error handling request, data: 'set-storage-domain FilesystemBackend
> > dom_type=glusterfs
> > sd_uuid=ad7b9e2a-7ae3-46ad-9429-5f5ef452eac8'#012Traceback (most recent
> call
> > last):#012  File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/broker/listener.py",
> > line 166, in handle#012data)#012  File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/broker/listener.py",
> > line 299, in _dispatch#012.set_storage_domain(client, sd_type,
> > **options)#012  File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/broker/storage_broker.py",
> > line 66, in set_storage_domain#012self._backends[client].
> connect()#012
> > File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/lib/storage_backends.py",
> > line 462, in connect#012self._dom_type)#012  File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/lib/storage_backends.py",
> > line 107, in get_domain_path#012" in {1}".format(sd_uuid,
> > parent))#012BackendFailureException: path to storage domain
> > ad7b9e2a-7ae3-46ad-9429-5f5ef452eac8 not found in
> > /rhev/data-center/mnt/glusterSD
> > Jan 15 11:04:56 v1 journal: vdsm root ERROR failed to retrieve Hosted
> Engine
> > HA info#012Traceback (most recent call last):#012  File
> > "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in
> > _getHaInfo#012stats = instance.get_all_stats()#012  File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/client/client.py",
> > line 103, in get_all_stats#012self._configure_broker_conn(
> broker)#012
> > File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/client/client.py",
> > line 180, in _configure_broker_conn#012dom_type=dom_type)#012  File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/lib/brokerlink.py",
> > line 177, in set_storage_domain#012.format(sd_type, options,
> > e))#012RequestError: Failed to set storage domain FilesystemBackend,
> options
> > {'dom_type': 'glusterfs', 'sd_uuid':
> > 'ad7b9e2a-7ae3-46ad-9429-5f5ef452eac8'}: Request failed: <class
> > 'ovirt_hosted_engine_ha.lib.storage_backends.BackendFailureException'>
> >
> >
> > In the VDSM logs I see the following continuously logged:
> > [jsonrpc.JsonRpcServer] RPC call VM.getStats failed (error 1) in 0.00
> > seconds (__init__:539)
> >
> > No errors were seen in gluster in the same time frame.
> >
> > Any hints on what is causing this issue? It seems like a storage access
> > issue, but gluster was up and the volumes were OK. The VMs that I am
> > running on top are Windows 10 and Windows 2016 64-bit.
> >
> >
> > Thanx,
> > Alex
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt storage access failure from host

2018-01-19 Thread Martin Sivak
Hi,

Have you been adding or redeploying a host lately? If yes, then try
restarting ovirt-ha-broker service. If it helps then it might be a
case of this bug: https://bugzilla.redhat.com/1527394
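
In case it is useful, a minimal sketch of the restart/check sequence on the
affected host (standard hosted-engine service names assumed):

  systemctl restart ovirt-ha-broker ovirt-ha-agent
  systemctl status ovirt-ha-broker
  hosted-engine --vm-status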

The ovirt-ha-agent and brokers from oVirt 4.2 are fixed already, but
we haven't backported the fix yet.

Best regards

Martin Sivak

On Fri, Jan 19, 2018 at 1:01 PM, Alex K  wrote:
> Hi All,
>
> I have a 3-server oVirt 4.1 self-hosted setup with gluster replica 3.
>
> I see that suddenly one of the hosts was reported as unresponsive and at the
> same time /var/log/messages logged:
>
> ovirt-ha-broker ovirt_hosted_engine_ha.broker.listener.ConnectionHandler
> ERROR Error handling request, data: 'set-storage-domain FilesystemBackend
> dom_type=glusterfs
> sd_uuid=ad7b9e2a-7ae3-46ad-9429-5f5ef452eac8'#012Traceback (most recent call
> last):#012  File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py",
> line 166, in handle#012data)#012  File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py",
> line 299, in _dispatch#012.set_storage_domain(client, sd_type,
> **options)#012  File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> line 66, in set_storage_domain#012self._backends[client].connect()#012
> File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
> line 462, in connect#012self._dom_type)#012  File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
> line 107, in get_domain_path#012" in {1}".format(sd_uuid,
> parent))#012BackendFailureException: path to storage domain
> ad7b9e2a-7ae3-46ad-9429-5f5ef452eac8 not found in
> /rhev/data-center/mnt/glusterSD
> Jan 15 11:04:56 v1 journal: vdsm root ERROR failed to retrieve Hosted Engine
> HA info#012Traceback (most recent call last):#012  File
> "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in
> _getHaInfo#012stats = instance.get_all_stats()#012  File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
> line 103, in get_all_stats#012self._configure_broker_conn(broker)#012
> File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
> line 180, in _configure_broker_conn#012dom_type=dom_type)#012  File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
> line 177, in set_storage_domain#012.format(sd_type, options,
> e))#012RequestError: Failed to set storage domain FilesystemBackend, options
> {'dom_type': 'glusterfs', 'sd_uuid':
> 'ad7b9e2a-7ae3-46ad-9429-5f5ef452eac8'}: Request failed: <class
> 'ovirt_hosted_engine_ha.lib.storage_backends.BackendFailureException'>
>
>
> In the VDSM logs I see the following continuously logged:
> [jsonrpc.JsonRpcServer] RPC call VM.getStats failed (error 1) in 0.00
> seconds (__init__:539)
>
> No errors were seen in gluster in the same time frame.
>
> Any hints on what is causing this issue? It seems like a storage access issue,
> but gluster was up and the volumes were OK. The VMs that I am running on top
> are Windows 10 and Windows 2016 64-bit.
>
>
> Thanx,
> Alex
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] correct settings for gluster based storage domain

2018-01-19 Thread Artem Tambovskiy
OK,
Alexey, you have picked the third option, leaving host selection to the DNS
resolver.

But in general, option 2 should also work, right?

Regards,
Artem



On Fri, Jan 19, 2018 at 4:50 PM, Николаев Алексей <
alexeynikolaev.p...@yandex.ru> wrote:

> https://ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_
> Engine/
>
>
> For Gluster storage, specify the full address, using either the FQDN or IP
> address, and path name of the shared storage domain.
>
> *Important:* Only replica 3 Gluster storage is supported. Ensure the
> following configuration has been made:
>
>-
>
>In the /etc/glusterfs/glusterd.vol file on all three Gluster servers,
>set rpc-auth-allow-insecure to on.
>
>  option rpc-auth-allow-insecure on
>
>-
>
>Configure the volume as follows:
>
>  gluster volume set volume cluster.quorum-type auto
>  gluster volume set volume network.ping-timeout 10
>  gluster volume set volume auth.allow \*
>  gluster volume set volume group virt
>  gluster volume set volume storage.owner-uid 36
>  gluster volume set volume storage.owner-gid 36
>  gluster volume set volume server.allow-insecure on
>
>
>
> I have problems with hosted engine storage on gluster replica 3 arbiter
> with oVirt 4.1.
> I recommend updating oVirt to 4.2. I have no problems with 4.2.
>
>
> 19.01.2018, 16:43, "Artem Tambovskiy" :
>
>
> I'm still troubleshooting my oVirt 4.1.8 cluster and it occurred to me that
> I may have an issue with storage settings for the hosted_engine storage
> domain.
>
> But in general, if I have 2 oVirt nodes running gluster + a 3rd host as
> arbiter, what should the settings look like?
>
> Let's say I have 3 nodes:
> ovirt1.domain.com (gluster + ovirt)
> ovirt2.domain.com (gluster + ovirt)
> ovirt3.domain.com (gluster)
>
> What should the correct storage domain config look like?
>
> Option 1:
>  /etc/ovirt-hosted-engine/hosted-engine.conf
> 
> storage=ovirt1.domain.com:/engine
> mnt_options=backup-volfile-servers=ovirt2.domain.com:ovirt3.domain.com
>
> Option 2:
>  /etc/ovirt-hosted-engine/hosted-engine.conf
> 
> storage=localhost:/engine
> mnt_options=backup-volfile-servers=ovirt1.domain.com:ovirt2.domain.com:o
> virt3.domain.com
>
> Option 3:
> Set up a DNS record gluster.domain.com pointing to the IP addresses of the
> gluster nodes
>
>  /etc/ovirt-hosted-engine/hosted-engine.conf
> 
> storage=gluster.domain.com:/engine
> mnt_options=
>
> Of course this relates not only to the hosted engine domain, but to all
> gluster-based storage domains.
>
> Thank you in advance!
> Regards,
> Artem
>
> ,
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Questions about converged infrastructure setup and glusterFS sizing/performance

2018-01-19 Thread Jayme
I am attempting to narrow down choices for storage in a new oVirt build
that will eventually be used for a mix of dev and production servers.

My current space usage excluding backups sits at only about 1 TB, so I figure
3-5 TB would be more than enough for VM storage only + some room to grow.
There will be around 24 Linux VMs total, but 80% of them are VERY low-usage
and low-spec servers.

I've been considering a 3-host hyperconverged oVirt setup, replica 3
arbiter 1, with a disaster recovery plan to replicate the gluster
volume to a separate server.  I would of course do additional incremental
backups to an alternate server as well, probably with rsync or some other
method.

Some questions:

1. Is it recommended to use SSDs for GlusterFS, or can regular server/SAS
drives provide sufficient performance?  If using SSDs, is it recommended to
use enterprise SSDs, or are consumer SSDs good enough due to the redundancy
of GlusterFS?  I would love to hear of any use cases from any of you
regarding hardware specs you used in hyperconverged setups and what level
of performance you are seeing.

2. Is it recommended to RAID the drives that form the gluster bricks?  If
so, what RAID level?

3. How do I calculate how much space will be usable in a replica 3
arbiter 1 configuration?  Will it be 75% of the total drive capacity minus
what I lose from RAID (if I RAID the drives)?  (See the rough worked
example after these questions.)

4. For replication of the gluster volume, is it possible for me to
replicate the entire volume to a single drive/RAID array in an alternate
server, or does the replicated volume need to match the configuration of
the main GlusterFS volume (i.e. same number of drives/configuration, etc.)?

5. Has the Meltdown bug caused, or is it expected to cause, major issues with
oVirt hyperconverged setups due to performance loss from the patches?  I've
been reading articles suggesting up to 30% performance loss on some
converged/storage setups due to how CPU-intensive converged setups are.
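
A rough worked example for question 3 (hedged, with assumed hardware: two
data bricks plus one arbiter brick per replica set; the arbiter stores only
metadata, so it adds no capacity):

  2 data nodes, each 4 x 2 TB (8 TB raw) in RAID 10  ->  4 TB brick per node
  1 arbiter node with a small brick (metadata only)
  usable space ~= size of one data brick = 4 TB
  (16 TB raw across the data nodes -> 4 TB usable in this example)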

Thanks in advance!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] correct settings for gluster based storage domain

2018-01-19 Thread Николаев Алексей
https://ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/

For Gluster storage, specify the full address, using either the FQDN or IP
address, and path name of the shared storage domain.

Important: Only replica 3 Gluster storage is supported. Ensure the
following configuration has been made:

- In the /etc/glusterfs/glusterd.vol file on all three Gluster servers,
  set rpc-auth-allow-insecure to on.

    option rpc-auth-allow-insecure on

- Configure the volume as follows:

    gluster volume set volume cluster.quorum-type auto
    gluster volume set volume network.ping-timeout 10
    gluster volume set volume auth.allow \*
    gluster volume set volume group virt
    gluster volume set volume storage.owner-uid 36
    gluster volume set volume storage.owner-gid 36
    gluster volume set volume server.allow-insecure on

I have problems with hosted engine storage on gluster replica 3 arbiter
with oVirt 4.1.
I recommend updating oVirt to 4.2. I have no problems with 4.2.

19.01.2018, 16:43, "Artem Tambovskiy":

I'm still troubleshooting my oVirt 4.1.8 cluster and it occurred to me that
I may have an issue with storage settings for the hosted_engine storage
domain.

But in general, if I have 2 oVirt nodes running gluster + a 3rd host as
arbiter, what should the settings look like?

Let's say I have 3 nodes:
ovirt1.domain.com (gluster + ovirt)
ovirt2.domain.com (gluster + ovirt)
ovirt3.domain.com (gluster)

What should the correct storage domain config look like?

Option 1:
 /etc/ovirt-hosted-engine/hosted-engine.conf

storage=ovirt1.domain.com:/engine
mnt_options=backup-volfile-servers=ovirt2.domain.com:ovirt3.domain.com

Option 2:
 /etc/ovirt-hosted-engine/hosted-engine.conf

storage=localhost:/engine
mnt_options=backup-volfile-servers=ovirt1.domain.com:ovirt2.domain.com:ovirt3.domain.com

Option 3:
Set up a DNS record gluster.domain.com pointing to the IP addresses of the
gluster nodes

 /etc/ovirt-hosted-engine/hosted-engine.conf

storage=gluster.domain.com:/engine
mnt_options=

Of course this relates not only to the hosted engine domain, but to all
gluster-based storage domains.

Thank you in advance!
Regards,
Artem

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] correct settings for gluster based storage domain

2018-01-19 Thread Artem Tambovskiy
I'm still troubleshooting my oVirt 4.1.8 cluster and it occurred to me that
I may have an issue with storage settings for the hosted_engine storage
domain.

But in general, if I have 2 oVirt nodes running gluster + a 3rd host as
arbiter, what should the settings look like?

Let's say I have 3 nodes:
ovirt1.domain.com (gluster + ovirt)
ovirt2.domain.com (gluster + ovirt)
ovirt3.domain.com (gluster)

What should the correct storage domain config look like?

Option 1:
 /etc/ovirt-hosted-engine/hosted-engine.conf

storage=ovirt1.domain.com:/engine
mnt_options=backup-volfile-servers=ovirt2.domain.com:ovirt3.domain.com

Option 2:
 /etc/ovirt-hosted-engine/hosted-engine.conf

storage=localhost:/engine
mnt_options=backup-volfile-servers=ovirt1.domain.com:ovirt2.domain.com:o
virt3.domain.com

Option 3:
Set up a DNS record gluster.domain.com pointing to the IP addresses of the
gluster nodes

 /etc/ovirt-hosted-engine/hosted-engine.conf

storage=gluster.domain.com:/engine
mnt_options=

Of course this relates not only to the hosted engine domain, but to all
gluster-based storage domains.
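
A quick way to sanity-check Option 1/2 outside of oVirt (a hedged sketch
only; the mount point is just an example) is to mount the volume manually
with the same options:

  mkdir -p /mnt/enginetest
  mount -t glusterfs \
    -o backup-volfile-servers=ovirt2.domain.com:ovirt3.domain.com \
    ovirt1.domain.com:/engine /mnt/enginetest
  df -h /mnt/enginetest
  umount /mnt/enginetest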

Thank you in advance!
Regards,
Artem
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VirtIO-SCSI and viodiskcache custom property

2018-01-19 Thread Matthias Leopold

Hi,

is there a reason why the viodiskcache custom property isn't honored 
when using VirtIO-SCSI?


On a Cinder (Ceph) disk "viodiskcache=writeback" is ignored with 
VirtIO-SCSI and honored when using VirtIO.


On an iSCSI disk "viodiskcache=writeback" is ignored with VirtIO-SCSI 
and the VM can't be started when using VirtIO with "unsupported 
configuration: native I/O needs either no disk cache or directsync cache 
mode, QEMU will fallback to aio=threads"
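
A hedged way to see which cache mode actually ends up in the generated
libvirt domain (sketch only; "myvm" is a placeholder name) is to dump the
running definition on the host:

  virsh -r dumpxml myvm | grep cache=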


We actually want to use "viodiskcache=writeback" with Cinder (Ceph) disks.

oVirt version: 4.1.8

Thanks
Matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.2.1 Second Release Candidate is now available

2018-01-19 Thread Sandro Bonazzola
2018-01-19 13:02 GMT+01:00 Gabriel Stein :

> When will the official 4.2.1 be released? Looking forward to the gateway
> bugfix (BZ 1528906), but I will wait for it...
>
>
We are tentatively targeting January 30th, but it will depend on blockers /
regressions discovered in this release candidate testing.
Help testing the release candidates will translate into a more stable GA.
Since we already discovered a regression in a disaster recovery flow, we are
now planning another (and hopefully last) release candidate next week.



> Best Regards,
>
> Gabriel
>
> Gabriel Stein
> --
> Gabriel Ferraz Stein
> Tel.: +49 (0)  170 2881531
>
> 2018-01-19 12:53 GMT+01:00 Sandro Bonazzola :
>
>> The oVirt Project is pleased to announce the availability of the oVirt
>> 4.2.1 Second Release Candidate, as of January 18th, 2018
>>
>> This update is a release candidate of the second in a series of
>> stabilization updates to the 4.2 series.
>> This is pre-release software. This pre-release should not be used in
>> production.
>>
>> [WARNING] Right after we finished composing the release candidate, we
>> discovered a regression in a disaster recovery flow causing wrong MAC
>> addresses to be assigned to re-imported VMs.
>>
>> This release is available now for:
>> * Red Hat Enterprise Linux 7.4 or later
>> * CentOS Linux (or similar) 7.4 or later
>>
>> This release supports Hypervisor Hosts running:
>> * Red Hat Enterprise Linux 7.4 or later
>> * CentOS Linux (or similar) 7.4 or later
>> * oVirt Node 4.2
>>
>> See the release notes [1] for installation / upgrade instructions and
>> a list of new features and bugs fixed.
>>
>> Notes:
>> - oVirt Appliance is already available
>> - oVirt Node will be available soon [2]
>>
>> Additional Resources:
>> * Read more about the oVirt 4.2.1 release highlights:
>> http://www.ovirt.org/release/4.2.1/
>> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
>> * Check out the latest project news on the oVirt blog:
>> http://www.ovirt.org/blog/
>>
>> [1] http://www.ovirt.org/release/4.2.1/
>> [2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>
>> Red Hat EMEA 
>> 
>> TRIED. TESTED. TRUSTED. 
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problems with some vms

2018-01-19 Thread Endre Karlson
Does anyone have any ideas on this?
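
One thing worth checking after the brick replacement (a hedged sketch only)
is whether self-heal has caught up on each volume:

  gluster volume heal data info
  gluster volume heal engine info
  gluster volume heal iso info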

2018-01-17 12:07 GMT+01:00 Endre Karlson :

> One brick was down at one point for replacement.
>
> It has been replaced and all volumes are up:
>
> Status of volume: data
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick ovirt0:/gluster/brick3/data           49152     0          Y       22467
> Brick ovirt2:/gluster/brick3/data           49152     0          Y       20736
> Brick ovirt3:/gluster/brick3/data           49152     0          Y       23148
> Brick ovirt0:/gluster/brick4/data           49153     0          Y       22497
> Brick ovirt2:/gluster/brick4/data           49153     0          Y       20742
> Brick ovirt3:/gluster/brick4/data           49153     0          Y       23158
> Brick ovirt0:/gluster/brick5/data           49154     0          Y       22473
> Brick ovirt2:/gluster/brick5/data           49154     0          Y       20748
> Brick ovirt3:/gluster/brick5/data           49154     0          Y       23156
> Brick ovirt0:/gluster/brick6/data           49155     0          Y       22479
> Brick ovirt2:/gluster/brick6_1/data         49161     0          Y       21203
> Brick ovirt3:/gluster/brick6/data           49155     0          Y       23157
> Brick ovirt0:/gluster/brick7/data           49156     0          Y       22485
> Brick ovirt2:/gluster/brick7/data           49156     0          Y       20763
> Brick ovirt3:/gluster/brick7/data           49156     0          Y       23155
> Brick ovirt0:/gluster/brick8/data           49157     0          Y       22491
> Brick ovirt2:/gluster/brick8/data           49157     0          Y       20771
> Brick ovirt3:/gluster/brick8/data           49157     0          Y       23154
> Self-heal Daemon on localhost               N/A       N/A        Y       23238
> Bitrot Daemon on localhost                  N/A       N/A        Y       24870
> Scrubber Daemon on localhost                N/A       N/A        Y       24889
> Self-heal Daemon on ovirt2                  N/A       N/A        Y       24271
> Bitrot Daemon on ovirt2                     N/A       N/A        Y       24856
> Scrubber Daemon on ovirt2                   N/A       N/A        Y       24866
> Self-heal Daemon on ovirt0                  N/A       N/A        Y       29409
> Bitrot Daemon on ovirt0                     N/A       N/A        Y       5457
> Scrubber Daemon on ovirt0                   N/A       N/A        Y       5468
>
> Task Status of Volume data
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: engine
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick ovirt0:/gluster/brick1/engine         49158     0          Y       22511
> Brick ovirt2:/gluster/brick1/engine         49158     0          Y       20780
> Brick ovirt3:/gluster/brick1/engine         49158     0          Y       23199
> Self-heal Daemon on localhost               N/A       N/A        Y       23238
> Self-heal Daemon on ovirt0                  N/A       N/A        Y       29409
> Self-heal Daemon on ovirt2                  N/A       N/A        Y       24271
>
> Task Status of Volume engine
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: iso
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick ovirt0:/gluster/brick2/iso            49159     0          Y       22520
> Brick ovirt2:/gluster/brick2/iso            49159     0          Y       20789
> Brick ovirt3:/gluster/brick2/iso            49159     0          Y       23208
> NFS Server on localhost                     N/A       N/A        N       N/A
> Self-heal Daemon on localhost               N/A       N/A        Y       23238
> NFS Server on ovirt2                        N/A       N/A        N       N/A
> Self-heal Daemon on ovirt2                  N/A       N/A        Y       24271
> NFS Server on ovirt0                        N/A       N/A        N       N/A
> Self-heal Daemon on ovirt0                  N/A       N/A        Y       29409
>
> Task Status of Volume iso
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> 2018-01-17 8:13 GMT+01:00 Gobinda Das :
>
>> Hi,
>>  I can see some errors in the log:
>> [2018-01-14 11:19:49.886571] E [socket.c:2309:socket_connect_finish]
>> 0-engine-client-0: connection to 10.2.0.120:24007 failed (Connection
>> timed out)
>> [2018-01-14 11:20:05.630669] E [socket.c:2309:socket_connect_finish]
>> 0-engine-client-0: connection to 10.2.0.120:24007 failed (Connection
>> timed out)
>> [2018-01-14 12:01:09.089925] E [MSGID: 114058]
>> 

Re: [ovirt-users] [ANN] oVirt 4.2.1 Second Release Candidate is now available

2018-01-19 Thread Gabriel Stein
When will the official 4.2.1 be released? Looking forward to the gateway
bugfix (BZ 1528906), but I will wait for it...

Best Regards,

Gabriel

Gabriel Stein
--
Gabriel Ferraz Stein
Tel.: +49 (0)  170 2881531

2018-01-19 12:53 GMT+01:00 Sandro Bonazzola :

> The oVirt Project is pleased to announce the availability of the oVirt
> 4.2.1 Second Release Candidate, as of January 18th, 2018
>
> This update is a release candidate of the second in a series of
> stabilization updates to the 4.2 series.
> This is pre-release software. This pre-release should not be used in
> production.
>
> [WARNING] Right after we finished composing the release candidate, we
> discovered a regression in a disaster recovery flow causing wrong MAC
> addresses to be assigned to re-imported VMs.
>
> This release is available now for:
> * Red Hat Enterprise Linux 7.4 or later
> * CentOS Linux (or similar) 7.4 or later
>
> This release supports Hypervisor Hosts running:
> * Red Hat Enterprise Linux 7.4 or later
> * CentOS Linux (or similar) 7.4 or later
> * oVirt Node 4.2
>
> See the release notes [1] for installation / upgrade instructions and
> a list of new features and bugs fixed.
>
> Notes:
> - oVirt Appliance is already available
> - oVirt Node will be available soon [2]
>
> Additional Resources:
> * Read more about the oVirt 4.2.1 release highlights:
> http://www.ovirt.org/release/4.2.1/
> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt blog:
> http://www.ovirt.org/blog/
>
> [1] http://www.ovirt.org/release/4.2.1/
> [2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt storage access failure from host

2018-01-19 Thread Alex K
Hi All,

I have a 3-server oVirt 4.1 self-hosted setup with gluster replica 3.

I see that suddenly one of the hosts was reported as unresponsive and at the
same time /var/log/messages logged:

ovirt-ha-broker ovirt_hosted_engine_ha.broker.listener.ConnectionHandler
ERROR Error handling request, data: 'set-storage-domain FilesystemBackend
dom_type=glusterfs
sd_uuid=ad7b9e2a-7ae3-46ad-9429-5f5ef452eac8'#012Traceback (most recent
call last):#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py",
line 166, in handle#012data)#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py",
line 299, in _dispatch#012.set_storage_domain(client, sd_type,
**options)#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
line 66, in set_storage_domain#012self._backends[client].connect()#012
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
line 462, in connect#012self._dom_type)#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
line 107, in get_domain_path#012" in {1}".format(sd_uuid,
parent))#012BackendFailureException: path to storage domain
ad7b9e2a-7ae3-46ad-9429-5f5ef452eac8 not found in
/rhev/data-center/mnt/glusterSD
Jan 15 11:04:56 v1 journal: vdsm root ERROR failed to retrieve Hosted
Engine HA info#012Traceback (most recent call last):#012  File
"/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in
_getHaInfo#012stats = instance.get_all_stats()#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 103, in get_all_stats#012self._configure_broker_conn(broker)#012
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 180, in _configure_broker_conn#012dom_type=dom_type)#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 177, in set_storage_domain#012.format(sd_type, options,
e))#012RequestError: Failed to set storage domain FilesystemBackend,
options {'dom_type': 'glusterfs', 'sd_uuid':
'ad7b9e2a-7ae3-46ad-9429-5f5ef452eac8'}: Request failed: <class
'ovirt_hosted_engine_ha.lib.storage_backends.BackendFailureException'>


In the VDSM logs I see the following continuously logged:
[jsonrpc.JsonRpcServer] RPC call VM.getStats failed (error 1) in 0.00
seconds (__init__:539)

No errors were seen in gluster in the same time frame.

Any hints on what is causing this issue? It seems like a storage access
issue, but gluster was up and the volumes were OK. The VMs that I am running
on top are Windows 10 and Windows 2016 64-bit.


Thanx,
Alex
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ANN] oVirt 4.2.1 Second Release Candidate is now available

2018-01-19 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.2.1 Second Release Candidate, as of January 18th, 2018

This update is a release candidate of the second in a series of
stabilization updates to the 4.2 series.
This is pre-release software. This pre-release should not be used in
production.

[WARNING] Right after we finished composing the release candidate, we
discovered a regression in a disaster recovery flow causing wrong MAC
addresses to be assigned to re-imported VMs.

This release is available now for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.2

See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Node will be available soon [2]

Additional Resources:
* Read more about the oVirt 4.2.1 release highlights:
http://www.ovirt.org/release/4.2.1/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.2.1/
[2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/

-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Q: Optimal settings for DB hosting

2018-01-19 Thread Gianluca Cecchi
On Fri, Jan 19, 2018 at 11:15 AM, Yaniv Kaul  wrote:

>
>
> On Jan 19, 2018 10:52 AM, "andreil1"  wrote:
>
>
>
> Migration disabled.
>
>
Why enforce this? If the VM is so important, I see it as a limitation not
to be able to move it in case of need.


> Pass-through host CPU enabled
>
>
I don't know if this is so important.
Tested with Oracle RDBMS and not used in my case.


>
> Any idea of NUMA settings ?
>
>
> Indeed. + Huge pages, in both host and guest.
>

Do you think NUMA is so essential? It implies a non-migratable VM...
In my tests I didn't set NUMA.


>
> In short, use high performance VM. See ovirt.org feature page.
> Y.
>
>
>
In my opinion the main limitation of a "High Performance VM" is that it is
non-migratable (probably implied because you set NUMA?).
In that case, could it be possible to have NUMA as a choice, so that you can
decide whether you want a migratable or non-migratable high performance VM?
Also CPU passthrough: I don't remember if it is an included/fixed option in
high performance VMs...

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Q: Optimal settings for DB hosting

2018-01-19 Thread Yaniv Kaul
On Jan 19, 2018 10:52 AM, "andreil1"  wrote:

Hi !

What are the optimal settings for oVirt KVM guests for database hosting on a
Xeon server (2 x 4-core Xeon) (in my case this is a Firebird-based
accounting/stock control system with several active clients)?

1st of course it's a preallocated disk image.
VirtIO-SCSI enabled


- It's not clear that virtio-scsi is faster than virtio-blk in all cases.
Test.
- What's the backend storage?

Migration disabled.
Ballooning disabled.
CPU shares disabled
Pass-through host CPU enabled


What about NUMA and pinning?


What should be other CPU settings?
For example, Xeons have 2 threads per core; should I set 1 or 2 threads per
virtual CPU in oVirt?
IO Threads on or off?


On.

Any idea of NUMA settings ?


Indeed. + Huge pages, in both host and guest.
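
As a hedged sketch only (exact values, and the oVirt-side hugepages custom
property, depend on your version and workload): reserve huge pages on the
host at boot, then verify them on the host and inside the guest:

  # host: reserve 4096 x 2 MiB huge pages (= 8 GiB) on the next boot
  grubby --update-kernel=ALL --args="hugepages=4096"

  # host or guest: check what is reserved/used
  grep Huge /proc/meminfo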


The node is running 4 VMs total, CPU load is quite low, and there is enough
RAM to preallocate for each VM + 4GB for the node itself.


In short, use high performance VM. See ovirt.org feature page.
Y.


Thanks in advance !
Andrei
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt home lab hardware

2018-01-19 Thread Abdurrahman A. Ibrahim
On 19 Jan 2018 12:33 am, "Jamie Lawrence"  wrote:


I'm a bit out of date on the topic, but unless they have changed, avoid the
NUCs. I bought a couple but gave up on them because of a firmware bug (?
limitation, at least) that Intel said it won't fix. It causes a lockup on
boot if a monitor is not connected. I've been told that different models
don't have that problem, but have also heard of other weird problems. The
impression I got is that they were meant for being media servers stuck to
the back of TVs, and other use cases tend to be precluded by unexpected
limitations, bugs and general strange behavior. Perhaps this was fixed, but
I ended up with a bad taste in my mouth.

You can pick up dirt-cheap rackmount kit on Ebay. A few R710s or similar
can be had for a few $hundred and would be absolutely lovely overkill for a
home network. The downside of this approach at home is going to be power
consumption and noise, depending on local energy costs and the nature of
your dwelling. Also, some people may not be that fond of the datacenter
look for home decor.

Unless you live in a cheap energy locale/don't pay for your power/enjoy
malformed space heaters, don't underestimate the cost of running server
kit. Getting rid of rack mount machines and switches and moving everything
to a couple machines built with energy consumption in mind cut my
electricity costs by over half.

-j


> On Jan 18, 2018, at 12:52 PM, Abdurrahman A. Ibrahim <
a.rahman.at...@gmail.com> wrote:
>
> Hello,
>
> I am planning to buy home lab hardware to be used by oVirt.
>
> Any recommendations for used hardware i can buy from eBay for example?
> Also, have you tried oVirt on Intel NUC or any other SMB servers before?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt home lab hardware

2018-01-19 Thread Abdurrahman A. Ibrahim
One more thing,

Do you have power switch?

Best regards,
Ab

On 19 Jan 2018 10:44 am, "Abdurrahman A. Ibrahim" 
wrote:

> Thank you Jayme for your reply,
>
> I have some concerns regarding space, noise and power consumption.
>
> Would you mind to share your experience regarding those three parameters
> with us?
>
> Best regards,
> Ab
>
>
>
>
> On 18 Jan 2018 10:23 pm, "Jayme"  wrote:
>
> For rackmount Dell R710's are fairly popular for home labs, they have good
> specs and can be found at reasonable prices on ebay.
>
> On Thu, Jan 18, 2018 at 4:52 PM, Abdurrahman A. Ibrahim <
> a.rahman.at...@gmail.com> wrote:
>
>> Hello,
>>
>> I am planning to buy home lab hardware to be used by oVirt.
>>
>> Any recommendations for used hardware i can buy from eBay for example?
>> Also, have you tried oVirt on Intel NUC or any other SMB servers before?
>>
>> Thanks,
>> Ab
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Q: Optimal settings for DB hosting

2018-01-19 Thread andreil1
Hi !

What are the optimal settings for oVirt KVM guests for database hosting on a
Xeon server (2 x 4-core Xeon) (in my case this is a Firebird-based
accounting/stock control system with several active clients)?

1st of course it's a preallocated disk image.
VirtIO-SCSI enabled
Migration disabled.
Ballooning disabled.
CPU shares disabled
Pass-through host CPU enabled

What should be other CPU settings?
For example, Xeons have 2 threads per core; should I set 1 or 2 threads per
virtual CPU in oVirt?
IO Threads on or off?
Any idea of NUMA settings ?

The node is running 4 VMs total, CPU load is quite low, and there is enough RAM
to preallocate for each VM + 4GB for the node itself.

Thanks in advance !
Andrei
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt home lab hardware

2018-01-19 Thread Joop

On 18-1-2018 21:52, Abdurrahman A. Ibrahim wrote:

Hello,

I am planning to buy home lab hardware to be used by oVirt.

Any recommendations for used hardware i can buy from eBay for example?
Also, have you tried oVirt on Intel NUC or any other SMB servers before?

My Shuttle XH110s are doing double duty as an oVirt cluster and workstations.
They have dual NICs and up to 32G RAM. If you need something bigger, the
cube version (SZ17R8V2) has lots more storage and more memory.
If you want servers, then have a look at https://tinkertry.com/ which has
a couple of projects which are nice, but probably a bit noisier than
the Shuttles, which are quite silent most of the time.


Regards,

Joop
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt home lab hardware

2018-01-19 Thread Gianluca Cecchi
On Thu, Jan 18, 2018 at 9:52 PM, Abdurrahman A. Ibrahim <
a.rahman.at...@gmail.com> wrote:

> Hello,
>
> I am planning to buy home lab hardware to be used by oVirt.
>
> Any recommendations for used hardware i can buy from eBay for example?
> Also, have you tried oVirt on Intel NUC or any other SMB servers before?
>
> Thanks,
> Ab
>

I'm currently using:

NUC6i5SYH since 2015 without any problem
it's rock solid with 2xssd disks and 16Gb of ram
oVirt as single host with hosted engine and connected through a 24" monitor
via hdmi
I use it as my home/work station, normally working on a VM (currently
Fedora 27) inside this oVirt infra (currently 4.2.0) + other VMs
In the host (plain CentOS 7.4) I installed firefox browser and from there I
connect to the engine VM (running on itself) and from there to the F27 VM
through Spice
I use this F27 VM in full screen so it is like it is my pc itself
Probably over-complicated, but I use it to keep a constant feel for oVirt
and its functionality...
No problem at all from the usability point of view (I don't use heavy
graphic, so the setup is ok for me, it could not be for you)
The only problem is when you indeed have to power off the NUC, because if
you simply restart, it has problems (apparently) syncing with the monitor
(as Jamie noted).
My simple workaround when I have to reboot the NUC (I think 4-5 times in 3
years, basically to update oVirt and, at the same time, the BIOS) has been:
- shutdown the infra
- the NUC tends to restart and not poweroff
- keep power button pressed so the NUC powers off
- power off monitor
- detach and reconnect the hdmi cable
- power on monitor
- power on NUC

If you have to daily restart the pc this can be annoying for sure.
I didn't know about a general NUC problem, so I thought it was my particular
NUC/monitor combination (Dell U2515H) and the CentOS drivers for the graphics
adapter.
I have to say that in 2015, when I bought it, I tried to install both CentOS
7.x and Fedora 24 (as they were at that time), and the display problems were
present only in CentOS.
And after various CentOS updates (now on 7.4 updates) it seems to me it is
quite a bit better, so I think the problems discussed by Jamie could have been
resolved in CentOS too.

I have also another NUC in another site. This is NUC6i3SYH again with 2xssd
and 16Gb of ram
It has ESXi free 6.0.2 with some VMs and a nested HCI oVirt environment
(now at 4.2.1 rc1)
It is not connected through a monitor and it works ok.
I connected the monitor only when I had to install and/or reboot (probably
2-3 times in 2,5 years).
Now it is powered on since 8 months...

Both the NUCs are connected to an APC UPS

No experience on other similar hw, because I felt good with the NUCs for my
needs.

HIH for your choice,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] web 404 after reinstall ovirt-engine

2018-01-19 Thread Martin Perina
Hi,

so according to the log you have successfully configured ovirt-engine at
2018-01-19 12:56:00. But in engine.log I can see that the engine cannot
connect to the database:

  2018-01-19 12:56:00,098+08 WARN
[org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (JCA
PoolFiller) IJ000610: Unable to fill pool: java:/ENGINEDataSourceNoJTA:
javax.resource.ResourceException: IJ031084: Unable to create connection:
  ...
  Caused by: org.postgresql.util.PSQLException: FATAL: password
authentication failed for user "engine"

Is it a clean database installation or do you already have some other
database running on this PostgreSQL instance? Or have you performed any
manual configuration changes to PostgreSQL prior to engine installation?
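
If it happens again on a fresh install, a quick check (a hedged sketch,
assuming the usual setup-generated config path) is to try the same
credentials the engine uses:

  # credentials written by engine-setup
  grep -i password /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
  # then try to connect with them manually
  PGPASSWORD='<password from above>' psql -h localhost -U engine engine -c 'select 1;'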

Anyway, I see you have executed engine-cleanup at 2018-01-19 13:47:06, which
removed your installation, so we cannot perform any additional checks to
see why the engine cannot access the DB.

So if this is a new installation, then I'd recommend starting again
from a clean environment (a complete OS reinstall is the easiest method)
and executing engine-setup. If the issue still appears, please create a
bug and attach complete host logs collected with the engine-log-collector
tool.

Thanks

Martin


On Fri, Jan 19, 2018 at 8:57 AM, 董青龙  wrote:

> Here are the logs, please check.
>
> At 2018-01-19 14:43:02, "Martin Perina"  wrote:
>
> Hi,
>
> could you please share with us complete engine logs from
> /var/log/ovirt-engine and subdirectories?
>
> Thanks
>
> Martin
>
>
> On Fri, Jan 19, 2018 at 8:35 AM, Arman Khalatyan 
> wrote:
>
>> Looks like your database is not running.
>> What about re-running engine-setup?
>>
>>
>> Am 19.01.2018 7:11 vorm. schrieb "董青龙" :
>>
>>> Hi, all
>>> I installed ovirt-engine 4.1.8.2 for the second time and I got
>>> "successful" after I executed "engine-setup". But I got "404" when I tried
>>> to access the webadmin portal using "https://FQDN/ovirt-engine". By the
>>> way, I could access the web after I installed ovirt-engine for the first
>>> time. Then I executed "engine-cleanup" and "yum remove ovirt-engine" and
>>> installed ovirt-engine for the second time. I also tried to remove
>>> ovirt-engine-dwh and postgresql, but after I reinstalled ovirt-engine I
>>> still got "404".
>>> How can I fix this problem? Hope someone can help, thanks!
>>> Here are some logs in "/var/log/ovirt-engine/engine.log":
>>> ...
>>> 2018-01-19 12:56:06,288+08 ERROR [org.ovirt.engine.ui.frontend.
>>> server.dashboard.DashboardDataServlet] (ServerService Thread Pool --
>>> 61) [] Could not access engine's DWH configuration table:
>>> java.sql.SQLException: javax.resource.ResourceException: IJ000453:
>>> Unable to get managed connection for java:/ENGINEDataSource
>>> ...
>>> Caused by: javax.resource.ResourceException: IJ000453: Unable to get
>>> managed connection for java:/ENGINEDataSource
>>> ...
>>> Caused by: javax.resource.ResourceException: IJ031084: Unable to create
>>> connection
>>> ...
>>> Caused by: org.postgresql.util.PSQLException: FATAL: password
>>> authentication failed for user "engine"
>>> ...
>>> 2018-01-19 12:56:06,292+08 WARN  [org.ovirt.engine.ui.frontend
>>> .server.dashboard.DashboardDataServlet] (ServerService Thread Pool --
>>> 61) [] No valid DWH configurations were found, assuming DWH database isn't
>>> setup.
>>> 2018-01-19 12:56:06,292+08 INFO  [org.ovirt.engine.ui.frontend
>>> .server.dashboard.DashboardDataServlet] (ServerService Thread Pool --
>>> 61) [] Dashboard DB query cache has been disabled.
>>>
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
> Martin Perina
> Associate Manager, Software Engineering
> Red Hat Czech s.r.o.
>
>
>
>
>



-- 
Martin Perina
Associate Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users