We use method 1 for several reasons:
1. Centralised, searchable report and config location.
2. We know exactly how many servers we are patching per month.
3. Not all of our servers (2000+) are on the same network (dmz, edmz, idmz,
non-prod, prod, testing, etc.), so not all servers have port 80 access to the
PCA Proxy server; some only have port 22.

Since the config files are kept on the PCA Proxy server, if we need to back out a
patch set we installed over a week ago, I can regenerate the list of patches that
were installed using the original config files and run it through our uninstall script.
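Roughly, that back-out looks like the sketch below; it assumes pca's
-f/--fromfiles and -X/--xrefdir options and our uninstallpatches.sh, and the
archive path is made up:

    # Sketch only: rerun pca against the saved host files and the xref file
    # from the original request; what was "missing" then is what got installed.
    cd /export/pca/archive/myhost-20090316        # hypothetical archive location
    pca --fromfiles=myhost --xrefdir=. -l missing > uninstall_patch_order
    # hand the list to the uninstall script (assumed to walk it in reverse,
    # calling patchrm for each patch ID on the client)
    ./uninstallpatches.sh uninstall_patch_order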

Auditors can look at the number of servers patched, the number of patches per
server, and the number of servers patched up to a specific Sun Alert.

How we do it:
1. We have a directory which is exported over NFS as world-writable (within
our network only; dmz* servers cannot access NFS anyway), where the engineer
drops in a tar file containing the three files: hostname.tar -->
hostname/uname.out, hostname/showrev.out and hostname/pkginfo.out.
(A sketch of the client-side collection follows.)
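On the client, the collection is roughly this (a sketch; the pkginfo flags and
the NFS path are assumptions and should match whatever the proxy-side pca expects):

    H=`hostname`
    mkdir /tmp/$H
    uname -a   > /tmp/$H/uname.out
    showrev -p > /tmp/$H/showrev.out
    pkginfo -x > /tmp/$H/pkginfo.out
    ( cd /tmp && tar cf $H.tar $H )
    cp /tmp/$H.tar /net/pcaproxy/export/pca/incoming/    # hypothetical NFS path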

2. A wrapper script runs from cron every 20 minutes, looks for tar files in this
directory, moves them to a temporary location and works off that list. For each
request it untars the files and creates/copies the following:
a. patch_order
b. uninstall_patch_order
c. missing.patches.lst
d. missing.patches.html
e. installpatches.sh
f. uninstallpatches.sh
g. the pca script
h. patchdiag.xref
i. pca.conf
j. uname.out, showrev.out and pkginfo.out
Run from cron, this script can handle about 12 server requests in 18 minutes, with
an average of 200 patches per server. (A heavily simplified sketch of the wrapper follows.)
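A heavily simplified sketch of the wrapper (the pca options --fromfiles,
--xrefdir, -l and -L are assumptions about how the analysis is driven; all
paths and the crontab entry are hypothetical):

    # crontab entry on the proxy, every 20 minutes:
    #   0,20,40 * * * * /export/pca/bin/pca_wrapper.sh

    --- pca_wrapper.sh (sketch) ---
    #!/bin/sh
    IN=/export/pca/incoming; WORK=/export/pca/work; OUT=/export/pca/outgoing
    for t in $IN/*.tar; do
        [ -f "$t" ] || continue
        host=`basename $t .tar`
        mkdir -p $WORK/$host && mv $t $WORK/$host && cd $WORK/$host || continue
        tar xf $host.tar                       # yields $host/uname.out etc.
        cp /export/pca/xref/patchdiag.xref .   # current xref snapshot
        pca --fromfiles=$host --xrefdir=. -l missing > missing.patches.lst
        pca --fromfiles=$host --xrefdir=. -L missing > missing.patches.html
        # ...build patch_order/uninstall_patch_order, add pca, pca.conf,
        # patchdiag.xref, installpatches.sh/uninstallpatches.sh, then tar
        # the bundle into $OUT/$host.tar and mail success/failure
    done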

3. The engineer checks back after about 30 minutes for his/her patch bundle and
uses sftp or wget to retrieve it, depending on the network. Our patch team gets
an email on its success or failure. If it fails for any reason, and it often does
with new patches or hiccups from Sunsolve, we go in and fix it manually.
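The retrieval itself is just something like (hostnames and paths hypothetical):

    # from a host with port 80 access to the proxy:
    wget http://pcaproxy.example.com/pca/outgoing/myhost.tar
    # from a dmz host that only has port 22:
    sftp engg@pcaproxy.example.com:/export/pca/outgoing/myhost.tar .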

Good luck. 

-GGR 

-- 
Rajiv G Gunja 
Blog: http://osrocks.blogspot.com 

----- Original Message ----- 
From: "Fred Chagnon" <fchag...@gmail.com> 
To: "PCA (Patch Check Advanced) Discussion" <pca@lists.univie.ac.at> 
Sent: Monday, March 23, 2009 3:20:09 PM GMT -05:00 US/Canada Eastern 
Subject: Re: [pca] Reporting in a client / server scenario 

The other reason to favour method 1 would be to minimize processing on your
clients. Sure, running pca is low overhead, but the model suggests a nice, clean
approach: get just what is needed from the client systems and do all the logic
and intelligence elsewhere, behind the scenes.

Fred 


On Mon, Mar 23, 2009 at 3:12 PM, French, David < david_fre...@intuit.com > 
wrote: 

I use Method 1 as it was easy to get the data to a central location which can
then be queried later without having to access the server. For example, you can
use the centralized files to let managers and others generate reports as they
see fit via a web interface. If you used Method 2, the only reports they could
see would be the ones you pre-created on the clients. If you passed the query
through to the client, you would have to set up a process to receive requests,
run pca and respond. You also have the issue detailed in the next paragraph.

Another reason Method 1 works for me is that the clients use static xref files
based on their patch-to date. That is, I have a mechanism by which servers define
the date they should be patched up to. This way the patching from pre-prod
environments to prod stays consistent but is done at different times. Because of
this, the xref file is effectively frozen for a given server, and reports
generated there would only reflect that frozen xref file. If the processing is
done on a central host, it can always use the latest xref file for all hosts, to
show the current state based on the most recent patches rather than only the
ones the server knows about.
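As a rough sketch of that difference (assuming pca's -X/--xrefdir and
-f/--fromfiles options; the snapshot directories are hypothetical):

    # on the client: report against the xref snapshot frozen for its patch-to date
    pca --xrefdir=/opt/pca/xref/2009-02-15 -l missing
    # on the central host: always report against the most recent patchdiag.xref
    pca --fromfiles=/reports/myhost --xrefdir=/opt/pca/xref/current -l missing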

Anyway, hope this helps. 

--Dave 

From: pca-boun...@lists.univie.ac.at [mailto:pca-boun...@lists.univie.ac.at]
On Behalf Of Asif Iqbal 
Sent: Monday, March 23, 2009 11:54 AM 
To: PCA (Patch Check Advanced) Discussion 
Subject: [pca] Reporting in a client / server scenario 

I am debating which path to take for reporting the pca status of 300 clients.

method 1 

- collect the showrev, uname and pkginfo output for each client and send it to
the server as hostname_*.out files
- have the central server generate a report for each client based on the output
files it received

method 2 

- generate the report of pca -l missing for each client on the client 
- send the generated report to the server 
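For comparison, method 2 on each client would be roughly (a sketch; the report
host and path are hypothetical):

    pca -l missing > /tmp/`hostname`_missing.out
    scp /tmp/`hostname`_missing.out reportserver:/reports/pca/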


I like method 1 because it can be plugged into my Hobbit client with really
minimal changes.

Looking for suggestion(s) 

-- 
Asif Iqbal 
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu 
A: Because it messes up the order in which people normally read text. 
Q: Why is top-posting such a bad thing? 

-- 
Fred Chagnon 
fchag...@gmail.com
