In my case it was about 130k SQL queries (before the timeout occurred) when I
wanted to schedule a remote command for all ~1600 hosts. I tried to execute it
via the API with spacecmd, and also by splitting it into a few commands
matching hosts by their first letter, but it kept hanging on building the
package cache for minutes.
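Going through the raw XML-RPC API (instead of spacecmd) should at least avoid
the package cache step. Roughly something like the sketch below, though it is
untested: the URL, credentials and batch size are placeholders, and the exact
system.scheduleScriptRun signature may differ between Spacewalk versions, so
please check the API docs first.

#!/usr/bin/python
# Sketch: schedule a remote command in small batches over the XML-RPC API
# instead of in one call for all ~1600 systems.
import xmlrpclib
from datetime import datetime

client = xmlrpclib.Server("https://spacewalk.example.com/rpc/api")  # placeholder URL
key = client.auth.login("admin", "password")                        # placeholder credentials

ids = [s["id"] for s in client.system.listSystems(key)]
script = "#!/bin/sh\nuptime > /tmp/uptime.out\n"                    # example command only
now = xmlrpclib.DateTime(datetime.now())

BATCH = 100  # keep each call small enough to finish well before the timeout
for i in range(0, len(ids), BATCH):
    # variant taking a list of server IDs; signature may vary per version
    client.system.scheduleScriptRun(key, ids[i:i + BATCH], "root", "root",
                                    600, script, now)

client.auth.logout(key)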
Sent from: Windows Mail
From: Stephan Duehr
Sent: Friday, 21 November 2014 12:14
To: [email protected]
Hi,
I don't think a Spacewalk proxy would help in this case.
One case where this hits you is scheduling config file deployment,
e.g. deploying all files of a config channel to all systems subscribed
to the channel for a larger number of systems; in my test case it
happened for > 400 systems.
See https://bugzilla.redhat.com/show_bug.cgi?id=1087844
I also don't think that Oracle would do any better; it looks like
Spacewalk is sending the SQL queries sequentially.
The missing parallelism is not the only problem, though. I've also
analyzed the SQL queries and observed, comparing 300 and 400 systems,
that the total number of queries was 2172 for 300 systems and 4021 for
400 systems.
A workaround could be an API script to schedule the config deployment
actions.
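Roughly something like the sketch below, though it is untested: it assumes
system.config.deployAll is available with this signature on your Spacewalk
release, and the URL, credentials and batch size are placeholders.

#!/usr/bin/python
# Sketch: schedule config file deployment in batches via the XML-RPC API
# instead of through the web UI.
import xmlrpclib
from datetime import datetime

client = xmlrpclib.Server("https://spacewalk.example.com/rpc/api")  # placeholder URL
key = client.auth.login("admin", "password")                        # placeholder credentials

# IDs of the systems subscribed to the config channel; here simply all systems.
ids = [s["id"] for s in client.system.listSystems(key)]
when = xmlrpclib.DateTime(datetime.now())

BATCH = 100  # stay well below the size where scheduling from the UI times out
for i in range(0, len(ids), BATCH):
    # schedules a deploy of all config files for this batch of systems
    client.system.config.deployAll(key, ids[i:i + BATCH], when)

client.auth.logout(key)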
Regards,
Stephan
On 11/21/2014 06:49 AM, Krzysztof Pawłowski wrote:
> Hello Amedeo,
> Do all Spacewalk proxies use the same database server, or am I wrong?
>
> 2014-10-29 16:56 GMT+01:00 Amedeo Salvati <[email protected]
> <mailto:[email protected]>>:
>
>     this will scale out (not scale up) your env, but you will change where
>     your clients connect -> to the Spacewalk server or to a Spacewalk proxy
>
>     From: [email protected]
>     To: [email protected]
>     Cc:
>     Date: Wed, 29 Oct 2014 15:49:09 +0100
>     Subject: Re: [Spacewalk-list] Spacewalk performance tuning for
> deployments with 1000+ hosts
>
>     > Does this solve the problem with the web UI?
>
> > Sent from my Windows Phone
>
>     ________________________________
> > From: Amedeo Salvati <mailto:[email protected]>
> > Sent: 2014-10-29 12:03
> > To: [email protected] <mailto:[email protected]>
> > Cc: [email protected] <mailto:[email protected]>
>     > Subject: Re: [Spacewalk-list] Spacewalk performance tuning for
> deployments with 1000+ hosts
>
>     > To repeat: I think you must use RHN/Spacewalk proxies
>
> > best regards
> > a
>
>
>     > From: [email protected]
>     > To: [email protected]
>     > Cc:
>     > Date: Wed, 29 Oct 2014 07:22:30 +0100
>     > Subject: Re: [Spacewalk-list] Spacewalk performance tuning for
> deployments with 1000+ hosts
>
> > > Unfortunately it doesn't help :(
>
>     > > When I want to do something with all 1500 systems I get:
>
>
> > > Service Temporarily Unavailable
>
> > > The server is temporarily unable to service your request due to
> maintenance downtime or capacity problems. Please try again later.
>
>
>
> > > 2014-10-28 17:41 GMT+01:00 Matthew Madey <[email protected]
> <mailto:[email protected]>>:
>
>
>     > > Here are some configurations you might find helpful for tuning
> Apache/Tomcat/Java/networking. Like others have mentioned, when you get
> over 1000+ clients it's a good idea to start scaling horizontally with
> Spacewalk proxies. We use 4 proxies in our production environment and are
> servicing 8000+ clients. We can actually patch 1600 clients at a time and
> the GUI is still pretty responsive. I can't guarantee this will resolve
> your issue, but this worked for us.
>
> > > Add maxThreads to /etc/tomcat6/server.xml
> > >
> > > <Connector port="8080" protocol="HTTP/1.1"
> > >            connectionTimeout="20000" redirectPort="8443" URIEncoding="UTF-8"
> > >            address="127.0.0.1" maxThreads="1024" maxKeepAliveRequests="1000"/>
> > >
> > > <!-- A "Connector" using the shared thread pool-->
> > > <!--
> > > <Connector executor="tomcatThreadPool"
> > >            port="8080" protocol="HTTP/1.1"
> > >            connectionTimeout="20000"
> > >            redirectPort="8443" />
> > > -->
> > >
> > > <!-- Define an AJP 1.3 Connector on port 8009 -->
> > > <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
> > >            URIEncoding="UTF-8" address="127.0.0.1" maxThreads="1024"/>
> > >
> > > <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
> > >            URIEncoding="UTF-8" address="::1" maxThreads="1024"/>
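>
> > > (tomcat6 needs to be restarted, e.g. "service tomcat6 restart", before
> > > the new connector settings are picked up.)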
>
> > > Tune Apache to service more requests in
> > > /etc/httpd/conf.d/zz-spacewalk-server.conf
> > >
> > > #######################################################
> > > # Authorship and versioning info
> > > # $Author$
> > > # $Date$
> > > # $URL$
> > > # $Rev$
> > > # deployment_location: /etc/httpd/conf.d/
> > > #######################################################
> > > # ** DO NOT EDIT **
> > > # Master configuration file for the rhn_server setup
> > > #
> > >
> > > ##
> > > ## Spacewalk settings
> > > ##
> > >
> > > <VirtualHost *>
> > >
> > > <IfModule mod_jk.c>
> > >     # Inherit the mod_jk settings defined in zz-spacewalk-www.conf
> > >     JkMountCopy On
> > > </IfModule>
> > >
> > > <Directory "/var/www/html/*">
> > >     AllowOverride all
> > > </Directory>
> > >
> > > RewriteEngine on
> > > RewriteOptions inherit
> > > </VirtualHost>
> > >
> > > # Override default httpd prefork settings
> > > <IfModule prefork.c>
> > > StartServers 8
> > > MinSpareServers 400
> > > MaxSpareServers 400
> > > ServerLimit 1024
> > > MaxClients 1024
> > > MaxRequestsPerChild 200
> > > </IfModule>
> > >
> > > Include /etc/rhn/satellite-httpd/conf/rhn/rhn_monitoring.conf
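>
> > > (The new ServerLimit/MaxClients values should only take effect after a
> > > full httpd restart, e.g. "service httpd restart", not just a graceful
> > > reload.)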
>
> > > Also added some network tuning to /etc/sysctl.conf
>
> > > net.ipv4.icmp_echo_ignore_broadcasts = 1
> > > net.ipv4.conf.all.secure_redirects = 0
> > > net.ipv4.tcp_max_syn_backlog = 8192
> > > net.ipv4.conf.default.secure_redirects = 0
> > > net.ipv4.tcp_syncookies = 1
> > > net.ipv4.conf.all.accept_source_route = 0
> > > net.ipv4.conf.all.rp_filter = 1
> > > net.ipv4.conf.all.send_redirects = 0
> > > net.ipv4.conf.default.accept_redirects = 0
> > > net.ipv4.conf.all.accept_redirects = 0
> > > net.ipv4.conf.default.send_redirects = 0
> > > net.core.somaxconn = 1536
> > > net.core.dev_weight = 512
> > > ## 3x normal for a queue and budget suited to networks greater than 100mbps
> > > net.core.netdev_budget = 10000
> > > net.core.netdev_max_backlog = 30000
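>
> > > (These can be loaded without a reboot by running "sysctl -p" after
> > > editing /etc/sysctl.conf.)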
>
>
>
> > > Depending on the amount of memory on your Spacewalk server, you
> may want to increase your JAVA_OPTS Xms and Xmx settings to something a
> little higher. Typically only needed if you are seeing Java Heap out of
> memory errors in your Spacewalk logs.
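>
> > > For example, on EL6 the Tomcat heap is typically set via JAVA_OPTS in
> > > /etc/sysconfig/tomcat6 (or /etc/tomcat6/tomcat6.conf); the values here
> > > are only an example, keep your existing options:
> > >
> > > JAVA_OPTS="$JAVA_OPTS -Xms512m -Xmx1024m"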
>
>
>
> > > On Tue, Oct 28, 2014 at 11:21 AM, Waldirio Manhães Pinheiro
> <[email protected] <mailto:[email protected]>> wrote:
>
> > > Dear Krzysztof
>
>     > > Have you checked your NUMA configuration?
>
>     > > Maybe you can tune your environment so that the application and
> its memory sit on the same bus (NUMA node).
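>
>     > > (If the numactl package is installed, "numactl --hardware" and
>     > > "numastat" give a quick view of the node layout and of how memory
>     > > is currently spread across the nodes.)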
>
> > > ______________
>     > > Best regards
> > > Waldirio
> > > msn: [email protected] <mailto:[email protected]>
> > > Skype: waldirio
> > > Site: www.waldirio.com.br <http://www.waldirio.com.br>
> > > Blog: blog.waldirio.com.br <http://blog.waldirio.com.br>
> > > LinkedIn:
> http://br.linkedin.com/pub/waldirio-pinheiro/22/b21/646
> > > PGP: www.waldirio.com.br/public.html
> <http://www.waldirio.com.br/public.html>
>
> > > On Tue, Oct 28, 2014 at 2:10 PM, Krzysztof Pawłowski
> <[email protected] <mailto:[email protected]>> wrote:
>
>     > > We have a dedicated machine for the PostgreSQL DB (16GB RAM, 8
> cores) and a separate one for Spacewalk (16GB RAM, 8 cores).
>
>     > > I think the problem is the enormous number of queries sent to the
> database. During such a request the DB is not 100% utilized, and neither
> is Spacewalk.
>
>
>
> > > 2014-10-28 16:25 GMT+01:00 Götz Reinicke - IT Koordinator
> <[email protected] <mailto:[email protected]>>:
>
> > > Hi,
>     > > On 28.10.14 at 12:57, Krzysztof Pawłowski wrote:
> > > > Hi,
>     > > > Is there any guide about tuning Spacewalk performance? With
> every new host Spacewalk is getting slower. Using SSM with more than
> 200-300 hosts is impossible due to timeouts. It's also not possible to
> deploy config files to all hosts.
>     > > > Standard Java tuning was done; Java GC is not the problem now.
> > > >
> > > > Any suggestions ?
>
>     > > What server hardware do you use? What is the system load while
> performing those tasks? CPU, RAM, disk subsystem, IO, network?
>
>
> > > /Götz
>
> > > --
> > > Götz Reinicke
> > > IT-Koordinator
>
> > > Tel. +49 7141 969 82 420
> <tel:%2B49%207141%20969%2082%20420>
> > > E-Mail [email protected]
> <mailto:[email protected]>
>
> > > Filmakademie Baden-Württemberg GmbH
> > > Akademiehof 10
> > > 71638 Ludwigsburg
> > > www.filmakademie.de <http://www.filmakademie.de>
>
>     > > Registered: Amtsgericht Stuttgart HRB 205016
>
>     > > Chairman of the supervisory board: Jürgen Walter MdL
>
>
> > [The entire original message is not included.]
>
>
--
Stephan Dühr [email protected]
dass IT GmbH Phone: +49.221.3565666-90
http://www.dass-IT.de Fax: +49.221.3565666-10
Registered office: Köln | Amtsgericht Köln: HRB52500
Managing directors: S. Dühr, M. Außendorf, Jörg Steffens, P. Storz
_______________________________________________
Spacewalk-list mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/spacewalk-list