[ovirt-users] Re: need network design advice for iSCSI

2019-01-20 Thread Eitan Raviv
Shani,
Can you help here with iSCSI bonding?
Thanks

On Mon, Jan 21, 2019 at 7:51 AM Uwe Laverenz  wrote:
>
> Hi John,
>
> Am 20.01.19 um 18:32 schrieb John Florian:
>
> > As for how to get there, whatever exactly that might look like, I'm also
> > having trouble figuring that out.  I figured I would transform the
> > setup described below into one where each host has:
> >
> >   * 2 NICs bonded with LACP for my ovirtmgmt and "main" net
> >   * 1 NIC for my 1st storage net
> >   * 1 NIC for my 2nd storage net
>
> This is exactly the setup I use. I have run this successfully with
> CentOS/LIO and FreeNAS iSCSI targets with good performance.
>
> In short:
>
> - 2 separate, isolated networks for iSCSI with dedicated adapters
>on hosts and iSCSI target
> - jumbo frames enabled
> - no VLAN config needed on hosts, untagged VLANs on switch
> - do _not_ use LACP, let multipathd handle failovers
>
> Same experience as Vinicius: what did _not_ work for me is the
> iSCSI-Bonding in oVirt. It seems to require that all storage IPs are
> reachable from all other IPs, which is not the case in every setup.
>
> To get multipathing to work I use multipath directly:
>
> > https://www.mail-archive.com/users@ovirt.org/msg42735.html
>
> I will post a bonnie++ result later. If you need more details please let
> me know.
>
> cu,
> Uwe


[ovirt-users] Re: need network design advice for iSCSI

2019-01-20 Thread Uwe Laverenz

Hi John,

Am 20.01.19 um 18:32 schrieb John Florian:

As for how to get there, whatever exactly that might look like, I'm also 
having trouble figuring that out.  I figured I would transform the 
setup described below into one where each host has:


  * 2 NICs bonded with LACP for my ovirtmgmt and "main" net
  * 1 NIC for my 1st storage net
  * 1 NIC for my 2nd storage net


This is exactly the setup I use. I have run this successfully with 
CentOS/LIO and FreeNAS iSCSI targets with good performance.


In short:

- 2 separate, isolated networks for iSCSI with dedicated adapters
  on hosts and iSCSI target
- jumbo frames enabled
- no VLAN config needed on hosts, untagged VLANs on switch
- do _not_ use LACP, let multipathd handle failovers

Same experience as Vinicius: what did _not_ work for me is the 
iSCSI-Bonding in oVirt. It seems to require that all storage IPs are 
reachable from all other IPs, which is not the case in every setup.


To get multipathing to work I use multipath directly:


https://www.mail-archive.com/users@ovirt.org/msg42735.html
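A minimal sketch of that approach, assuming two portals like the ones
described later in this thread (portal IPs and target names will differ
per setup):

  # discover targets on each isolated storage network
  iscsiadm -m discovery -t sendtargets -p 192.168.101.101
  iscsiadm -m discovery -t sendtargets -p 192.168.102.102
  # log in to all discovered portals
  iscsiadm -m node -L all
  # each LUN should now show one path per storage network
  multipath -ll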


I will post a bonnie++ result later. If you need more details please let 
me know.


cu,
Uwe


[ovirt-users] Re: GlusterFS and oVirt

2019-01-20 Thread Strahil
I'm not an expert, but based on my experience I can recommend the following:

1. Check time sync (NTP/chrony)
2. Check your volumes are configured as described here: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/configuring_red_hat_virtualization_with_red_hat_gluster_storage/index#Configuring_Volumes_Using_the_Command_Line_Interface (see the sketch below the quoted message)

Best Regards,
Strahil

On Jan 20, 2019 15:04, Magnus Isaksson  wrote:

Hello


I have quite some trouble getting Gluster to work.


I have 4 nodes running CentOS and oVirt. These 4 nodes are split up into 2 clusters.
I do not run Gluster via oVirt; I run it standalone to be able to use all 4 nodes in one Gluster volume.


I can add all peers successfully, and I can create a volume and start it with success, but after that it starts getting troublesome.


If I run gluster volume status after starting the volume, it times out. I have read that ping-timeout needs to be more than 0, so I set it to 30. Still the same problem.


From now on, I cannot stop a volume or remove it; I have to stop glusterd and remove it from /var/lib/gluster/vols/* on all nodes to be able to do anything with Gluster.


From time to time when I do a gluster peer status it shows "disconnected", and when I run it again directly after, it shows "connected".


I get a lot of these errors in glusterd.log

[2019-01-20 12:53:46.087848] W [rpc-clnt-ping.c:223:rpc_clnt_ping_cbk] 0-management: socket disconnected
[2019-01-20 12:53:46.087858] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed
[2019-01-20 12:53:55.091598] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed
[2019-01-20 12:53:56.094846] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed
[2019-01-20 12:53:56.097482] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed



What am I doing wrong?



//Magnus
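
For point 2 of the checklist above, the documented options for VM image
workloads can be applied in one step through the "virt" option group that
ships with Gluster; a minimal sketch, assuming a volume named myvol:

  # apply the recommended option set for virtualization workloads
  gluster volume set myvol group virt
  # review what got set
  gluster volume info myvol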





[ovirt-users] GlusterFS and oVirt

2019-01-20 Thread Magnus Isaksson
Hello


I have quite some trouble getting Gluster to work.


I have 4 nodes running CentOS and oVirt. These 4 nodes are split up into 2 
clusters.

I do not run Gluster via oVirt; I run it standalone to be able to use all 4 
nodes in one Gluster volume.


I can add all peers successfully, and I can create a volume and start it with 
success, but after that it starts getting troublesome.


If I run gluster volume status after starting the volume, it times out. I have 
read that ping-timeout needs to be more than 0, so I set it to 30. Still the 
same problem.
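
For reference, the option in question is network.ping-timeout and it is set
per volume; a minimal sketch, assuming a volume named myvol:

  gluster volume set myvol network.ping-timeout 30
  # verify the effective value
  gluster volume get myvol network.ping-timeout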


From now on, I cannot stop a volume or remove it; I have to stop glusterd 
and remove it from /var/lib/gluster/vols/* on all nodes to be able to do 
anything with Gluster.


From time to time when I do a gluster peer status it shows "disconnected", and 
when I run it again directly after, it shows "connected".


I get a lot of these errors in glusterd.log

[2019-01-20 12:53:46.087848] W [rpc-clnt-ping.c:223:rpc_clnt_ping_cbk] 
0-management: socket disconnected
[2019-01-20 12:53:46.087858] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 
0-management: RPC_CLNT_PING notify failed
[2019-01-20 12:53:55.091598] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 
0-management: RPC_CLNT_PING notify failed
[2019-01-20 12:53:56.094846] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 
0-management: RPC_CLNT_PING notify failed
[2019-01-20 12:53:56.097482] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 
0-management: RPC_CLNT_PING notify failed


What am I doing wrong?


//Magnus


[ovirt-users] Re: need network design advice for iSCSI

2019-01-20 Thread John Florian
So just to make sure I follow:

  * I will want a distinct VLAN and IP address for each NIC acting as an
iSCSI initiator.
  * In the middle the switch would be configured as basic access ports
without any LACP.
  * Do I want the same for the target?  The QNAP docs say that for MPIO
I would want to use their port trunking feature and a single IP for
both NICs on that end, which confuses me as it seems to contradict
the idea of two (or more) completely independent channels (see the
sketch below).
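
Sketch referenced above: with open-iscsi, MPIO comes from binding one iSCSI
interface per initiator NIC and logging in through each of them, which works
whether the target exposes one IP per storage net or a single trunked IP
(interface names and IPs below are placeholders):

  # create an iSCSI iface bound to each dedicated storage NIC
  iscsiadm -m iface -I storage1 --op new
  iscsiadm -m iface -I storage1 --op update -n iface.net_ifacename -v ens1
  iscsiadm -m iface -I storage2 --op new
  iscsiadm -m iface -I storage2 --op update -n iface.net_ifacename -v ens2
  # discover each portal through its matching iface, then log in;
  # multipath then sees one session per iface
  iscsiadm -m discovery -t sendtargets -p 192.168.101.101 -I storage1
  iscsiadm -m discovery -t sendtargets -p 192.168.102.102 -I storage2
  iscsiadm -m node -L all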

As for how to get there, whatever exactly that might look like, I'm also
having trouble figuring that out.  I figured I would transform the
setup described below into one where each host has:

  * 2 NICs bonded with LACP for my ovirtmgmt and "main" net
  * 1 NIC for my 1st storage net
  * 1 NIC for my 2nd storage net

To get there though, I need to remove the 4 existing logical storage
nets from my hosts, pull 2 NICs out of the existing bond and so on.  But
when I attempted that, I got things into a funky state where the
hosts become non-operational because the old storage nets are
"required".  I unchecked that setting thinking that to be the right
path.  But I could never get much further towards the new setup because
the existing storage domain has all the old connections and I see no way
to "forget" them, at least through the engine -- I didn't try to fight
it behind its back with iscsiadm to do session logouts.  Somewhere in
all this mess I got into a Catch-22 where I couldn't do anything with
the old SD because no host was suitable and no host could be made
suitable because the SD couldn't be connected.  I tried all sorts of
things of varying levels of scariness but wound up putting things back
to present for now since I clearly need some further advice.
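
For the record, the "behind its back" route mentioned above would look
roughly like this (the target IQN and portal are placeholders, and doing
this under a live storage domain is risky):

  # list active sessions
  iscsiadm -m session
  # log out of one stale portal and delete its node record
  iscsiadm -m node -T iqn.2004-04.com.qnap:ts-569pro.target0 -p 192.168.103.103 -u
  iscsiadm -m node -T iqn.2004-04.com.qnap:ts-569pro.target0 -p 192.168.103.103 --op delete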

One option that struck me as a possibility, but exceeded my risk
aversion threshold was to remove the storage domain entirely and create
a new one pointing to the same LUNs.  Is that what I need to do to
forget the old connections?  Is that safe to all my existing logical
disks, etc?  Does the engine just see a group of LUNs with oVirt
"things" and magically reconstruct it all from what's there?  I'm
guessing that's the case because I have recreated an engine before and
know that all the critical bits live in the SD, but I just want to be
sure I don't commit to something really boneheaded.

On 1/17/19 7:43 PM, Vinícius Ferrão wrote:
> MPIO by concept is when you have two dedicated paths for iSCSI.
>
> So you don’t put iSCSI inside LACP, because it won’t do the MPIO
> magic, since it’s the same path with a single IP.
>
> The right approach is two subnets, completely segregated, without
> routing. You can use the same switch; it will not be redundant on the
> switch side, but it will be on the connections, and you have two paths
> with load balancing between them.
>
> But to be honest I never got how oVirt handles MPIO. The iSCSI
> Multipath button on the interface requests that all endpoints, on
> different paths, be reachable, which doesn’t make sense to my
> understanding. In the past I’ve opened a ticket about this but I
> simply gave up. I ended up using XenServer instead for this case
> specifically, which I was trying to avoid.
>
> Sent from my iPhone
>
> On 17 Jan 2019, at 22:14, John Florian wrote:
>
>> I have an existing 4.2 setup with 2 hosts, both with a quad-gbit NIC
>> and a QNAP TS-569 Pro NAS with twin gbit NIC and five 7k2 drives.  At
>> present, I have 5 VLANs, each with its own subnet:
>>
>>  1. my "main" net (VLAN 1, 172.16.7.0/24)
>>  2. ovirtmgmt (VLAN 100, 192.168.100.0/24)
>>  3. four storage nets (VLANs 101-104, 192.168.101.0/24 -
>> 192.168.104.0/24)
>>
>> On the NAS, I enslaved both NICs into an 802.3ad LAG and then bound an
>> IP address for each of the four storage nets giving me:
>>
>>   * bond0.101@bond0: 192.168.101.101
>>   * bond0.102@bond0: 192.168.102.102
>>   * bond0.103@bond0: 192.168.103.103
>>   * bond0.104@bond0: 192.168.104.104
>>
>> The hosts are similar, but with all four NICs enslaved into an 802.3ad
>> LAG:
>>
>> Host 1:
>>
>>   * bond0.101@bond0: 192.168.101.203
>>   * bond0.102@bond0: 192.168.102.203
>>   * bond0.103@bond0: 192.168.103.203
>>   * bond0.104@bond0: 192.168.104.203
>>
>> Host 2:
>>
>>   * bond0.101@bond0: 192.168.101.204
>>   * bond0.102@bond0: 192.168.102.204
>>   * bond0.103@bond0: 192.168.103.204
>>   * bond0.104@bond0: 192.168.104.204
>>
>> I believe my performance could be better though.  While running
>> bonnie++ on a VM, the NAS reports top disk throughput around 70MB/s
>> and the network (both NICs) topping out around 90MB/s.  I suspect I'm
>> being hurt by the load balancing across the NICs.  I've played with
>> various load balancing options for the LAGs (src-dst-ip and
>> src-dst-mac) but with little difference in effect.  Watching the
>> resource monitor on the NAS, I can see that one NIC almost exclusively
>> does transmits while the other is 
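
Worth noting here: hashing on IPs or MACs pins any single flow between one
IP/MAC pair to one link, so a lone iSCSI session can never exceed one NIC;
layer3+4 hashing at least folds TCP ports into the hash. A hedged sketch,
assuming RHEL-style ifcfg files:

  # /etc/sysconfig/network-scripts/ifcfg-bond0 (fragment)
  BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

  # verify the active policy
  grep -i "hash policy" /proc/net/bonding/bond0

Even then, a single TCP connection stays on one link, which is why the rest
of this thread steers toward multipath over separate subnets instead of LACP.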

[ovirt-users] Re: How to import from another oVirt / RHV environment

2019-01-20 Thread Arik Hadas
On Fri, Jan 18, 2019 at 12:17 PM Gianluca Cecchi 
wrote:

> On Thu, Jan 17, 2019 at 6:57 PM Arik Hadas  wrote:
>
>>
>>
>> On Thu, Jan 17, 2019 at 7:54 PM Arik Hadas  wrote:
>>
>>>
>>>
>>> On Thu, Jan 17, 2019 at 6:54 PM Gianluca Cecchi <
>>> gianluca.cec...@gmail.com> wrote:
>>>
 On Thu, Jan 17, 2019 at 5:42 PM Gianluca Cecchi <
 gianluca.cec...@gmail.com> wrote:

> On Thu, Jan 17, 2019 at 4:47 PM Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Thu, Jan 17, 2019 at 4:24 PM Arik Hadas  wrote:
>>
>>>
>>>
>>> On Thu, Jan 17, 2019 at 4:53 PM Gianluca Cecchi <
>>> gianluca.cec...@gmail.com> wrote:
>>>
 Hello,
 I have two different oVirt 4.2 environments and I want to migrate
 some big VMs from one to another.
 I'm not able to detach and attach the block-based domain where the
 disks of the source are.
 And I cannot use export domain functionality.

>>>
>>> You can export them to ova on some device that can later be mounted
>>> to the destination environment.
>>> This is similar to the export domain functionality - but you didn't
>>> specify why the export domain functionality is not applicable for you.
>>>
>>
>>
> Tried, but I got an error.
> The VM from which I try to create the OVA is composed of 3 disks: 15 + 60 +
> 440 GB
>
> This is the sequence of events seen in engine:
>
> Starting to export Vm dbatest5 as a Virtual Appliance 1/17/19 5:33:35 PM
> VDSM ov200 command TeardownImageVDS failed: Cannot deactivate Logical
> Volume: ('General Storage Exception: ("5 [] [\' Logical volume
> fa33df49-b09d-4f86-9719-ede649542c21/08abaac5-ef82-4755-adc5-7341ce1cde33
> in
> use.\']\\nfa33df49-b09d-4f86-9719-ede649542c21/[\'08abaac5-ef82-4755-adc5-7341ce1cde33\']",)',)
> 1/17/19 9:48:02 PM
>
> Failed to export Vm dbatest5 as a Virtual Appliance to path
> /export/ovirt/dbatest5.ova on Host ov200 1/17/19 9:48:03 PM
>
> Disk dbatest5_Disk1 was successfully removed from domain ovsd3750 (User
> admin@internal-authz). 1/17/19 9:48:04 PM
>
> Disk dbatest5_Disk2 was successfully removed from domain ovsd3750 (User
> admin@internal-authz). 1/17/19 9:48:05 PM
>
> Disk dbatest5_Disk3 was successfully removed from domain ovsd3750 (User
> admin@internal-authz). 1/17/19 9:48:05 PM
>
> And this left this file
> [root@ov200 ~]# ll /export/ovirt/dbatest5.ova.tmp
> -rw-r--r--. 1 root root 552574404608 Jan 17 22:47
> /export/ovirt/dbatest5.ova.tmp
> [root@ov200 ~]#
>
> The ".tmp" extension worried me that the ova might not be complete... is
> this the case? Anyway, I then tried to import it, see below
>
> I have not understood which LV it tries to deactivate...
>

Hard to tell from the above analysis - can you please file a bug and attach
the full engine log?


>
>
>
>> Ah, ok, thanks.
>> I think you are referring to this feature page and I see in my 4.2.7
>> env I can do it for a powered off VM:
>>
>> https://ovirt.org/develop/release-management/features/virt/enhance-import-export-with-ova.html
>>
>
>>> Right
>>>
>>
> On the destination host I get this, but I don't know if it depends on the
> ova not being fully written; from the "Broken pipe" error I suspect so...:
>
> ./upload_ova_as_vm.py /export/ovirt/dbatest5.ova.tmp RHVDBA rhvsd3720
>
> Uploaded 69.46%
>
> Uploaded 69.70%
>
> Uploaded 69.95%
>
> Uploaded 70.21%
>
> Uploaded 70.45%
>
> Uploaded 70.71%
> Uploaded 70.72%
>
> Traceback (most recent call last):
>
>   File "./upload_ova_as_vm.py", line 227, in 
>
>proxy_connection.send(chunk)
>   File "/usr/lib64/python2.7/httplib.py", line 857, in send
>
> self.sock.sendall(data)
>
>   File "/usr/lib64/python2.7/ssl.py", line 744, in sendall
>
> v = self.send(data[count:])
>
>   File "/usr/lib64/python2.7/ssl.py", line 710, in send
>
> v = self._sslobj.write(data)
>
> socket.error: [Errno 32] Broken pipe
>
>
> [root@rhvh200 ~]#
>

Yeah, it could well be a result of having an invalid ova.
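
Since the exported OVA is a plain tar archive (an OVF descriptor plus the
disk images), a quick integrity check on the leftover file could be (path
taken from the messages above):

  # list the archive contents without extracting; errors indicate a broken ova
  tar tf /export/ovirt/dbatest5.ova.tmp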


>
>
>>
>>>

>> I will try
>> Are the two described TBD features:
>> - export as ova also a running VM
>> - stream to export to ovirt-imageio daemon
>> supposed to be included in 4.3, or is there already a planned target
>> release for them?
>>
>
>>> The first one is included in 4.3 already (in general, the ova handling
>>> is better in 4.3 compared to 4.2 in terms of speed).
>>>
>>
>> I meant to say: in general, the ova handling is better in 4.3 compared to
>> 4.2.
>>
>
> I have verified on a 4.3rc2 env that I can indeed execute "export as ova"
> for a running VM too.
> I have a CentOS Atomic 7 VM and when you export as ova, a snapshot is
> executed and the OVA file seems to be generated directly:
>
> [root@hcinode1 vdsm]# ll /export/
> total 1141632
> -rw---. 1 root root 1401305088 Jan 18 11:10 c7atomic1.ova.tmp
> [root@hcinode1 vdsm]# ll /export/
> total 1356700
> -rw---. 1 root root 1401305088 Jan 18 11:10 c7atomic1.ova
> 

[ovirt-users] Re: ETL service aggregation to hourly tables has encountered an error. Please consult the service log for more details.

2019-01-20 Thread Yedidyah Bar David
On Fri, Jan 18, 2019 at 12:35 PM Sandro Bonazzola  wrote:
>
>
>
> On Tue, Jan 15, 2019 at 08:18 Yedidyah Bar David  wrote:
>>
>> On Mon, Jan 14, 2019 at 7:06 PM  wrote:
>> >
>> > Dears,
>> > I have an a some error in Ovirt 4.2.7
>> > In dash I see:
>> > ETL service aggregation to hourly tables has encountered an error. Please 
>> > consult the service log for more details.
>> > In log ovirt engine server:
>> > 2019-01-14 
>> > 15:59:59|rwL6AB|euUXph|wfcjQ7|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2019-01-14
>> >  15:59:59| ETL service aggregation to hourly tables has encountered an 
>> > error. lastHourAgg value =Mon Jan 14 14:00:00 EET 2019 and runTime = Mon 
>> > Jan 14 15:59:59 EET 2019 .Please consult the service log for more 
>> > details.|42
>> > In some sources people said the problem is in PostgreSQL DB, but I don't 
>> > understand how I can fix this problem?
>>
>> The "service log" refers to /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log .
>> Please check/share that. Thanks.
>>
>> Adding Shirly.
>>
>> Also, we might want to change the message to mention the log location.
>
>
> Didi, is this tracked in a BZ?

Now opened: https://bugzilla.redhat.com/show_bug.cgi?id=1667726

>
>>
>>
>> Best regards,
>> --
>> Didi
>
>
>
> --
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
> sbona...@redhat.com



-- 
Didi
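
For anyone hitting the same warning, a minimal sketch for collecting the
relevant details (default log path as mentioned above; the standard service
name is assumed):

  # recent DWH service activity
  tail -n 200 /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log
  # confirm the service itself is running
  systemctl status ovirt-engine-dwhd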


[ovirt-users] Re: ETL service aggregation to hourly tables has encountered an error. Please consult the service log for more details.

2019-01-20 Thread Yedidyah Bar David
On Fri, Jan 18, 2019 at 12:35 PM Sandro Bonazzola  wrote:
>
> Didi, Shirly, can you please check these logs?

I can't see anything here explaining why it failed; I hope
Shirly can, or can tell what other information would help.
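
One data point that might help (an assumption on my part: the DWH
bookkeeping lives in the history_configuration table of the default
ovirt_engine_history database):

  # show DWH bookkeeping values such as the last aggregation times
  su - postgres -c "psql ovirt_engine_history -c 'SELECT var_name, var_value, var_datetime FROM history_configuration;'"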

>
> On Tue, Jan 15, 2019 at 09:13  wrote:
>>
>> 2018-10-22 17:58:40|ETL Service Started
>> ovirtEngineDbDriverClass|org.postgresql.Driver
>> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> hoursToKeepDaily|0
>> hoursToKeepHourly|720
>> ovirtEngineDbPassword|**
>> runDeleteTime|3
>> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> runInterleave|60
>> limitRows|limit 1000
>> ovirtEngineHistoryDbUser|ovirt_engine_history
>> ovirtEngineDbUser|engine
>> deleteIncrement|10
>> timeBetweenErrorEvents|30
>> hoursToKeepSamples|24
>> deleteMultiplier|1000
>> lastErrorSent|2011-07-03 12:46:47.00
>> etlVersion|4.2.4.3
>> dwhAggregationDebug|false
>> dwhUuid|69462636-22a6-4aae-9703-70ce55856985
>> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
>> ovirtEngineHistoryDbPassword|**
>> 2018-10-23 
>> 16:59:59|QUC3MI|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-23
>>  16:59:59| ETL service aggregation to hourly tables has encountered an 
>> error. lastHourAgg value =Tue Oct 23 15:00:00 EEST 2018 and runTime = Tue 
>> Oct 23 16:59:59 EEST 2018 .Please consult the service log for more 
>> details.|42
>> 2018-10-24 
>> 17:59:59|xMmYXu|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-24
>>  17:59:59| ETL service aggregation to hourly tables has encountered an 
>> error. lastHourAgg value =Wed Oct 24 16:00:00 EEST 2018 and runTime = Wed 
>> Oct 24 17:59:59 EEST 2018 .Please consult the service log for more 
>> details.|42
>> 2018-10-25 
>> 16:59:59|cUcnsD|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-25
>>  16:59:59| ETL service aggregation to hourly tables has encountered an 
>> error. lastHourAgg value =Thu Oct 25 15:00:00 EEST 2018 and runTime = Thu 
>> Oct 25 16:59:59 EEST 2018 .Please consult the service log for more 
>> details.|42
>> 2018-10-25 
>> 17:59:59|eJkgGv|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-25
>>  17:59:59| ETL service aggregation to hourly tables has encountered an 
>> error. lastHourAgg value =Thu Oct 25 16:00:00 EEST 2018 and runTime = Thu 
>> Oct 25 17:59:59 EEST 2018 .Please consult the service log for more 
>> details.|42
>> 2018-10-25 
>> 20:59:58|Sc5Lfp|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-25
>>  20:59:58| ETL service aggregation to hourly tables has encountered an 
>> error. lastHourAgg value =Thu Oct 25 19:00:00 EEST 2018 and runTime = Thu 
>> Oct 25 20:59:58 EEST 2018 .Please consult the service log for more 
>> details.|42
>> 2018-10-26 
>> 01:59:59|TQ4s8m|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-26
>>  01:59:59| ETL service aggregation to hourly tables has encountered an 
>> error. lastHourAgg value =Fri Oct 26 00:00:00 EEST 2018 and runTime = Fri 
>> Oct 26 01:59:59 EEST 2018 .Please consult the service log for more 
>> details.|42
>> 2018-10-26 
>> 15:59:55|Tiv1gZ|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-26
>>  15:59:55| ETL service aggregation to hourly tables has encountered an 
>> error. lastHourAgg value =Fri Oct 26 14:00:00 EEST 2018 and runTime = Fri 
>> Oct 26 15:59:55 EEST 2018 .Please consult the service log for more 
>> details.|42
>> 2018-10-27 
>> 22:59:59|tsRtk4|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-27
>>  22:59:59| ETL service aggregation to hourly tables has encountered an 
>> error. lastHourAgg value =Sat Oct 27 21:00:00 EEST 2018 and runTime = Sat 
>> Oct 27 22:59:59 EEST 2018 .Please consult the service log for more 
>> details.|42
>> 2018-10-28 
>> 03:00:00|We7dPQ|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|HourlyTimeKeepingJob|Default|5|tWarn|tWarn_1|2018-10-28
>>  03:00:00| ETL service aggregation to hourly tables has encountered an 
>> error. lastHourAgg value =Sun Oct 28 03:00:00 EET 2018 and runTime = Sun Oct 
>> 28 03:00:00 EET 2018 .Please consult the service log for more details.|42
>> 2018-10-28 
>> 03:00:14|c3NnXS|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can
>>  not sample data, oVirt Engine is not updating the statistics. Please check 
>> your oVirt Engine status.|9704
>> 2018-10-28 
>> 03:01:19|3CFZLf|Ho8HCn|yvpC89|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can
>>  not sample data, oVirt Engine is not updating the statistics. Please check 
>> your oVirt Engine status.|9704
>> 2018-10-28 
>>