Hi, 

----- On Dec 12, 2016, at 13:54, Shirly Radco <sra...@redhat.com> wrote: 

> Hi Baptiste,

> Thank you very much for your reply.

> I understand that you updated your DWH to collect every 60 seconds instead of
> 20.
> I'm the oVirt DWH maintainer and I would really appreciate it if you could
> share what led you to this decision, and some details about your setup.

> Do you have it installed on the same machine as the engine or on a remote one?
> Is your database remote or local?
> What is the scale of your environment? Number of hosts/VMs...

> This may help us with the bug Roy mentioned.

From what I recall, it was the ovirt_engine_history DB; I don't remember if 
there was one table or several that used a lot of disk space. A full vacuum 
corrected this size issue. 
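
For reference, something along these lines shows which tables use the most 
space and then reclaims it. This is only a sketch in the same style as Roy's 
command quoted below, assuming the default ovirt_engine_history database name 
and local postgres access (note that a full vacuum locks tables while it runs): 

# list tables with their on-disk sizes, then reclaim the dead space
sudo su - postgres -c "psql ovirt_engine_history -c '\dt+'"
sudo su - postgres -c "psql ovirt_engine_history -c 'vacuum full verbose'"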
Regarding the Bugzilla that was mentioned, I saw it and applied the sampling 
suggestion to see if the DB grows more slowly. 
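
For anyone who wants to apply the same change, the knob is the DWH_SAMPLING 
option mentioned further down in the thread. A minimal sketch, assuming the 
usual ovirt-engine-dwhd conf.d override directory (the file name is arbitrary): 

# /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/99-sampling.conf (assumed path)
# collect samples every 60 seconds instead of the default 20
DWH_SAMPLING=60

# restart the DWH service so the new value is picked up
systemctl restart ovirt-engine-dwhd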

For our environment, we currently have (and it keeps growing): 
* 4 DCs 
* 5 Clusters 
* 9 Storage domains (iSCSI) 
* About 360 virtual disks across the storage domains 
* 13 Hosts (growing) 
* About 250 VMs (growing) 

* The engine + DWH + DB server are all on the same server (hosted engine) 
* DB Size is about 3.2 GB (after the vacuum) 
* Since everything is on the same box, setting up the engine via the appliance 
was preferred, and since the appliance disk size could not be customized at 
install/update time, we wanted to keep the DB as small as possible while still 
retaining some history. I saw that the engine appliance size will be 
customizable soon, so we may extend the engine disk at the next update and keep 
a bit more history, or decrease the sampling interval again. 
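
If we do get more room, my understanding is that the history retention can be 
tuned in the same conf.d file. A hedged sketch only: the DWH_TABLES_KEEP_* 
names below are my assumption of the relevant settings, so check the defaults 
shipped with ovirt-engine-dwhd before relying on the exact names or units: 

# assumed retention settings for the aggregated history tables
# (values are illustrative examples, not the shipped defaults)
DWH_TABLES_KEEP_HOURLY=1440
DWH_TABLES_KEEP_DAILY=43800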

Have a nice day. 

Regards. 

> Best regards,
> Shirly Radco
> BI Software Engineer
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109

> On Thu, Dec 8, 2016 at 5:01 PM, Baptiste Agasse <
> baptiste.aga...@lyra-network.com > wrote:

>> ----- On Dec 8, 2016, at 15:18, Roy Golan < rgo...@redhat.com > wrote:

>>> Hi all,

>>> Following the thread about the vacuum tool [1], I would like to gather some
>>> feedback about your deployment's db vacuum status. The info is completely
>>> anonymous, the function running it is a read-only reporting one, and it should
>>> have little or no effect on the db.

>>> The result can be pretty verbose, but again it will not disclose sensitive
>>> info. Anyway, review it before pasting. It should look something like this (a
>>> snippet for one table):

>>> INFO: vacuuming "pg_catalog.pg_ts_template"
>>> INFO: index "pg_ts_template_tmplname_index" now contains 5 row versions in 2
>>> pages
>>> DETAIL: 0 index row versions were removed.
>>> 0 index pages have been deleted, 0 are currently reusable.
>>> CPU 0.00s/0.00u sec elapsed 0.00 sec.

>>> 1. sudo su - postgres -c "psql engine -c 'vacuum verbose'" &> 
>>> /tmp/vacuum.log

>>> 2. review the /tmp/vacuum.log

>>> 3. paste it to http://paste.fedoraproject.org/ and reply with the link here

>>> [1] http://lists.ovirt.org/pipermail/devel/2016-December/014484.html

>>> Thanks,
>>> Roy

>>> _______________________________________________
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users

>> http://paste.fedoraproject.org/501769/48120789/

>> However, we ran a full vacuum about one month ago that freed about 8 GB of
>> space, and we set DWH_SAMPLING=60 to reduce the DWH data size (the install is
>> about a year and a half old, updated from 3.5 to 3.6 to 4.0).

>> Have a nice day.

>> Regards.

>> --
>> Baptiste

>> _______________________________________________
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users

-- 
Baptiste 
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.phx.ovirt.org/mailman/listinfo/users
