Re: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4

2017-01-18 Thread Tal Nisan
guchen, eberman, any news?

On Thu, Jan 12, 2017 at 3:01 PM, Roy Golan <rgo...@redhat.com> wrote:

> +guchen,eberman
>
> Guy, Eyal, can you run this on the 4.0.x setup with many disks
> explain analyze select * from getdisksvmguid(uuid_generate_v1(), false,
> uuid_generate_v1(), false);
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4

2017-01-12 Thread Roy Golan
+guchen,eberman

Guy, Eyal, can you run this on the 4.0.x setup with many disks
explain analyze select * from getdisksvmguid(uuid_generate_v1(), false,
uuid_generate_v1(), false);
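For anyone reproducing this, a minimal sketch of running that check from psql on the
engine host is below. It assumes the default 'engine' database name and user (adjust
to your setup) and that uuid_generate_v1() is available in that database, as the
command above implies; the random UUIDs will not match any real VM, so an empty
result is expected and only the reported plan and timings matter.

-- Sketch only: connect with something like "psql -U engine -d engine"
-- (or "su - postgres -c 'psql engine'") and run the diagnostic query.
EXPLAIN ANALYZE
SELECT *
FROM getdisksvmguid(uuid_generate_v1(), false, uuid_generate_v1(), false);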

On 12 January 2017 at 14:53, Grundmann, Christian <
christian.grundm...@fabasoft.com> wrote:

> Hi,
>
> OK, I will file a bug.
>
> Setup:
>
> 8 nodes
>
> Around 100 VMs running all the time
>
> 100-200 VMs dynamically created and destroyed from a template using Vagrant
>
> Around 2 disks per VM
>
> 7 storage domains, around 1 TB each
>
> Christian
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4

2017-01-12 Thread Grundmann, Christian
Hi,
OK, I will file a bug.

Setup:
8 nodes
Around 100 VMs running all the time
100-200 VMs dynamically created and destroyed from a template using Vagrant
Around 2 disks per VM
7 storage domains, around 1 TB each
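A rough sketch for pulling those counts straight from the engine database is below,
in case exact numbers help in the bug report. It assumes the usual engine schema
tables (vds_static, vm_static, base_disks, storage_domain_static); table names can
differ between engine versions, so treat it as illustrative.

-- Sketch only, run against the 'engine' database; vm_static also holds
-- templates, so the VM figure is an upper bound.
SELECT
  (SELECT count(*) FROM vds_static)            AS hosts,
  (SELECT count(*) FROM vm_static)             AS vms_and_templates,
  (SELECT count(*) FROM base_disks)            AS disks,
  (SELECT count(*) FROM storage_domain_static) AS storage_domains;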

Christian





From: Roy Golan [mailto:rgo...@redhat.com]
Sent: Thursday, 12 January 2017 13:41
To: Grundmann, Christian <christian.grundm...@fabasoft.com>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4


On 11 January 2017 at 17:16, Grundmann, Christian 
<christian.grundm...@fabasoft.com<mailto:christian.grundm...@fabasoft.com>> 
wrote:
| select * from  getdisksvmguid($1, $2, $3, $4)


At the moment it's best that you open a bug and put all the info there.
I can tell that other setups, even big ones, didn't experience this,
so I guess some environment factor is hiding here. How big is your setup
(hosts/VMs/disks/domains)?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4

2017-01-12 Thread Roy Golan
On 11 January 2017 at 17:16, Grundmann, Christian <
christian.grundm...@fabasoft.com> wrote:

> | select * from  getdisksvmguid($1, $2, $3, $4)



At the moment it's best that you open a bug and put all the info there.
I can tell that other setups, even big ones, didn't experience this,
so I guess some environment factor is hiding here. How big is your setup
(hosts/VMs/disks/domains)?
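One thing worth attaching to that bug is a per-statement session count from
pg_stat_activity, which shows how many backends are busy in getdisksvmguid at the
same time. A minimal sketch is below; it uses only the standard catalog view, with
column names as they exist on PostgreSQL 9.2-9.5 (which still have the boolean
waiting column).

-- Sketch only: group current sessions by state and statement text.
SELECT state, waiting, left(query, 60) AS query_head, count(*) AS sessions
FROM pg_stat_activity
GROUP BY state, waiting, left(query, 60)
ORDER BY sessions DESC;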
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4

2017-01-12 Thread Grundmann, Christian
Hi,

I already did the downgrade again because this is a showstopper; 4.0.3 is the
last working version for me.

I have attached a full pg_stat_activity output from the last try.



Thx Christian



From: Roy Golan [mailto:rgo...@redhat.com]
Sent: Thursday, 12 January 2017 09:13
To: Grundmann, Christian <christian.grundm...@fabasoft.com>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4







On 11 January 2017 at 17:16, Grundmann, Christian 
<christian.grundm...@fabasoft.com<mailto:christian.grundm...@fabasoft.com>> 
wrote:

   Hi,

   I updated to 4.0.6 today and I am hitting this problem again. Can anyone please help?



backend_start                 | query_start                   | state_change                  | waiting | state               | query
------------------------------+-------------------------------+-------------------------------+---------+---------------------+--------------------------------------------------------------
2017-01-11 15:52:41.612942+01 | 2017-01-11 16:14:45.676881+01 | 2017-01-11 16:14:45.676882+01 | f       | active              | select * from getdisksvmguid($1, $2, $3, $4)
2017-01-11 15:52:35.526771+01 | 2017-01-11 16:14:45.750546+01 | 2017-01-11 16:14:45.750547+01 | f       | active              | select * from getdisksvmguid($1, $2, $3, $4)
2017-01-11 14:48:41.133303+01 | 2017-01-11 16:14:42.89794+01  | 2017-01-11 16:14:42.897991+01 | f       | idle                | SELECT 1
2017-01-11 14:48:43.504048+01 | 2017-01-11 16:14:46.794742+01 | 2017-01-11 16:14:46.794813+01 | f       | idle                | SELECT option_value FROM vdc_options WHERE option_name = 'DisconnectDwh'
2017-01-11 14:48:43.531955+01 | 2017-01-11 16:14:34.541273+01 | 2017-01-11 16:14:34.543513+01 | f       | idle                | COMMIT
2017-01-11 14:48:43.564148+01 | 2017-01-11 16:14:34.543635+01 | 2017-01-11 16:14:34.544145+01 | f       | idle                | COMMIT
2017-01-11 14:48:43.569029+01 | 2017-01-11 16:00:01.86664+01  | 2017-01-11 16:00:01.866711+01 | f       | idle in transaction | SELECT 'continueAgg', '1' FROM history_configuration WHERE var_name = 'lastHourAggr' AND var_datetime < '2017-01-11 15:00:00.00+0100'
2017-01-11 14:48:43.572644+01 | 2017-01-11 14:48:43.57571+01  | 2017-01-11 14:48:43.575736+01 | f       | idle                | SET extra_float_digits = 3
2017-01-11 14:48:43.577039+01 | 2017-01-11 14:48:43.580066+01 | 2017-01-11 14:48:43.58009+01  | f       | idle                | SET extra_float_digits = 3
2017-01-11 14:48:54.308078+01 | 2017-01-11 16:14:46.931422+01 | 2017-01-11 16:14:46.931423+01 | f       | active              | select * from getsnapshotbyleafguid($1)
2017-01-11 14:48:54.465485+01 | 2017-01-11 16:14:21.113926+01 | 2017-01-11 16:14:21.113959+01 | f       | idle                | COMMIT
2017-01-11 15:52:41.606561+01 | 2017-01-11 16:14:45.839754+01 | 2017-01-11 16:14:45.839755+01 | f       | active              | select * from getdisksvmguid($1, $2, $3, $4)
2017-01-11 14:48:56.477555+01 | 2017-01-11 16:14:45.276255+01 | 2017-01-11 16:14:45.277038+01 | f       | idle                | select * from getvdsbyvdsid($1, $2, $3)
2017-01-11 15:52:41.736304+01 | 2017-01-11 16:14:44.48134+01  | 2017-01-11 16:14:44.48134+01  | f       | active              | select * from getdisksvmguid($1, $2, $3, $4)
2017-01-11 14:48:56.489949+01 | 2017-01-11 16:14:46.40924+01  | 2017-01-11 16:14:46.409241+01 | f       | active              | select * from getdisksvmguid($1, $2, $3, $4)
2017-01-11 15:52:41.618773+01 | 2017-01-11 16:14:45.732394+01 | 2017-01-11 16:14:45.732394+01 | f       | active              | select * from getdisksvmguid($1, $2, $3, $4)
2017-01-11 14:48:56.497824+01 | 2017-01-11 16:14:46.827751+01 | 2017-01-11 16:14:46.827752+01 | f       | active              | select * from getsnapshotbyleafguid($1)
2017-01-11 14:48:56.497732+01 | 2017-01-11 16:09:04.207597+01 | 2017-01-11 16:09:04.342567+01 | f       | idle                | select * from getvdsbyvdsid($1, $2,

[ovirt-users] WG: High Database Load after updating to oVirt 4.0.4

2017-01-11 Thread Grundmann, Christian
-01-11 16:14:46.074258+01 | f       | active              | select * from getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:06:24.660881+01 | 2017-01-11 16:13:39.930702+01 | 2017-01-11 16:13:39.95559+01  | f       | idle                | select * from getvdsbyvdsid($1, $2, $3)
2017-01-11 16:06:24.690863+01 | 2017-01-11 16:07:28.763627+01 | 2017-01-11 16:07:28.763684+01 | f       | idle                | select * from getqosbyqosid($1)
2017-01-11 16:06:26.244997+01 | 2017-01-11 16:14:45.760047+01 | 2017-01-11 16:14:45.760048+01 | f       | active              | select * from getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:06:26.359194+01 | 2017-01-11 16:14:46.90043+01  | 2017-01-11 16:14:46.929003+01 | f       | idle                | select * from getvdsbyvdsid($1, $2, $3)
2017-01-11 16:06:26.377649+01 | 2017-01-11 16:14:45.035936+01 | 2017-01-11 16:14:45.035937+01 | f       | active              | select * from getdisksvmguid($1, $2, $3, $4)
2017-01-11 16:06:40.128282+01 | 2017-01-11 16:12:43.764245+01 | 2017-01-11 16:12:43.764293+01 | f       | idle                | select * from getqosbyqosid($1)
2017-01-11 16:06:40.150762+01 | 2017-01-11 16:10:54.629416+01 | 2017-01-11 16:10:54.629496+01 | f       | idle                | select * from getstoragedomainidsbystoragepoolidandstatus($1, $2)
2017-01-11 16:14:46.934168+01 | 2017-01-11 16:14:46.964807+01 | 2017-01-11 16:14:46.964809+01 | f       | active              | select backend_start,query_start,state_change,waiting,state,query from pg_stat_activity;



top - 16:13:43 up  1:36,  3 users,  load average: 41.62, 37.53, 21.37

Tasks: 286 total,  40 running, 246 sleeping,   0 stopped,   0 zombie

%Cpu(s): 99.3 us,  0.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st

KiB Mem : 16432648 total,  3626184 free,  6746368 used,  6060096 buff/cache

KiB Swap:  5242876 total,  5242876 free,        0 used.  8104244 avail Mem



  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM    TIME+ COMMAND
17328 postgres  20   0 3485020  66760  20980 R  21.5  0.4  1:37.27 postgres: engine engine 127.0.0.1(35212) SELECT
 8269 postgres  20   0 3484796  68200  22780 R  21.2  0.4  2:15.36 postgres: engine engine 127.0.0.1(47116) SELECT
 9016 postgres  20   0 3520356 279756 199452 R  21.2  1.7  7:24.29 postgres: engine engine 127.0.0.1(42992) SELECT
16878 postgres  20   0 3498100  66160  18468 R  21.2  0.4  2:00.82 postgres: engine engine 127.0.0.1(34238) SELECT
16751 postgres  20   0 3486388 215784 169404 R  20.9  1.3  1:56.38 postgres: engine engine 127.0.0.1(34008) SELECT
17868 postgres  20   0 3487860 215472 167796 R  20.9  1.3  1:07.40 postgres: engine engine 127.0.0.1(36312) SELECT
 8272 postgres  20   0 3490392  76912  25288 R  20.5  0.5  2:30.15 postgres: engine engine 127.0.0.1(47124) SELECT
 8274 postgres  20   0 3495800  83144  26100 R  20.5  0.5  2:56.66 postgres: engine engine 127.0.0.1(47130) SELECT
 9015 postgres  20   0 3523344 283388 198908 R  20.5  1.7  7:19.91 postgres: engine engine 127.0.0.1(42990) SELECT
16879 postgres  20   0 3488296  72180  23744 R  20.5  0.4  1:30.01 postgres: engine engine 127.0.0.1(34242) SELECT
17241 postgres  20   0 3486540 215716 168024 R  20.5  1.3  1:47.58 postgres: engine engine 127.0.0.1(35018) SELECT
17242 postgres  20   0 3495864  69172  20988 R  20.5  0.4  1:54.09 postgres: engine engine 127.0.0.1(35022) SELECT
17668 postgres  20   0 3488576  54484  15080 R  20.5  0.3  1:28.91 postgres: engine engine 127.0.0.1(35896) SELECT
 8266 postgres  20   0 3490688 222344 170852 R  20.2  1.4  2:58.95 postgres: engine engine 127.0.0.1(47112) SELECT
 8268 postgres  20   0 3503420 241888 177500 R  20.2  1.5  3:10.34 postgres: engine engine 127.0.0.1(47117) SELECT
 8275 postgres  20   0 3510316 253340 181688 R  20.2  1.5  4:12.02 postgres: engine engine 127.0.0.1(47132) SELECT
 9014 postgres  20   0 3523872 284636 199424 R  20.2  1.7  7:51.82 postgres: engine engine 127.0.0.1(42988) SELECT
 9027 postgres  20   0 3514872 265384 189656 R  20.2  1.6  5:21.63 postgres: engine engine 127.0.0.1(43012) SELECT
17546 postgres  20   0 3475628  55248  19108 R  20.2  0.3  1:33.40 postgres: engine engine 127.0.0.1(35668) SELECT
17669 postgres  20   0 3483284  66920  22488 R  20.2  0.4  1:28.01 postgres: engine engine 127.0.0.1(35898) SELECT
17670 postgres  20   0 3504988  78300  22032 R  20.2  0.5  1:18.96 postgres: engine engine 127.0.0.1(35900) SELECT
17865 postgres  20   0 3485084  66688  21316 R  20.2  0.4  1:14.00 postgres: engine engine 127.0.0.1(36306) SELECT
 7869 postgres  20   0 3492780 224272 171620 R  19.9  1.4  2:57.03 postgres: engine engine 127.0.0.1(46542) SELECT
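The busy PIDs above can be tied back to their SQL with a query along these lines
(a sketch, not taken from the thread; pg_stat_activity has carried a pid column
since PostgreSQL 9.2, and the PIDs listed are just the top consumers from this
capture):

-- Sketch only: map hot postgres backends from top to their statements.
SELECT pid, state, now() - query_start AS running_for, query
FROM pg_stat_activity
WHERE pid IN (17328, 8269, 9016, 16878)
ORDER BY running_for DESC;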



Thx Christian







From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
Grundmann, Christian
Sent: Thursday, 29 September 2016 15:29
To: 'users@ovirt.org' <users@ovirt.org>
Subject: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4

[ovirt-users] WG: High Database Load after updating to oVirt 4.0.4

2016-09-29 Thread Grundmann, Christian
Maybe a side effect of this bug?

https://bugzilla.redhat.com/1302752

I did a restore to 4.0.3 and the timeouts are gone

From: Grundmann, Christian
Sent: Tuesday, 27 September 2016 10:33
To: users@ovirt.org
Subject: High Database Load after updating to oVirt 4.0.4

After the 4.0.4 update we have a very high database load during startup of VMs,
so high that the API calls are getting timeouts.

I attached the output of
select * from pg_stat_activity
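A slightly more focused variant of that query is sketched below; it matches the
column set on PostgreSQL 9.2-9.5 (where the boolean waiting column still exists)
and hides idle sessions so the long-running statements stand out.

-- Sketch only: non-idle sessions, oldest statements first.
SELECT backend_start, query_start, state_change, waiting, state, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;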


Is there a way to downgrade to 4.0.3?

Thx Christian



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users