Is there a list of which branches take up how much space?
I'm currently rsyncing a mirror to my laptop but I don't have a very
big HD. I've excluded isos and SRPMS because I do not need them, but
would like to know how much the rest is going to take up.
On Mon, Sep 12, 2011 at 1:24 AM, James A.
BTW, here is the mirror I am copying from, and below that is the rsync
command I am using
http://mirror.csclub.uwaterloo.ca/centos/6.0/
rsync -avSHP --delete --exclude-from rsync.excl
and here is rsync.excl file :
---snip---
SRPMS/
local/
isos/
---snip---
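A fuller version of that command might look like the following sketch; the local destination path is made up, and the rsync module path can differ per mirror:

```shell
rsync -avSHP --delete --exclude-from=rsync.excl \
    mirror.csclub.uwaterloo.ca::centos/6.0/ /srv/centos/6.0/

# per-directory sizes afterwards, to see which branches are biggest
du -sh /srv/centos/6.0/*
```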
On Fri, Nov 11, 2011 at 4:48 PM, Alan
Oh yeah, and I am only taking the 6.0 branch - none of the previous releases.
On Fri, Nov 11, 2011 at 5:07 PM, Alan McKay alan.mc...@gmail.com wrote:
BTW, here is the mirror I am copying from, and below that is the rsync
command I am using
http://mirror.csclub.uwaterloo.ca/centos/6.0/
rsync
First of all, please do not top post; write your replies below the
original text (wherever possible).
Sorry, I respectfully disagree - there are some circumstances where
top posting is more appropriate, and that was one of them -
essentially just adding a quick p.s. to a previous message of
OK, my first 2 replies - my bad, I agree with your disapproval
sigh
--
“Don't eat anything you've ever seen advertised on TV”
- Michael Pollan, author of In Defense of Food
___
CentOS mailing list
CentOS@centos.org
Hey folks,
I was just reminded of the Scientific distro, which on the surface
appears to be quite similar to CentOS, even though the developers over
there are rather coy about which Enterprise Linux distro they base
theirs on.
I wonder if anyone here has done a comparison of the two that they'd
On Thu, Nov 10, 2011 at 8:44 AM, Bob Hoffman b...@bobhoffman.com wrote:
This is a continuation of the thread about redhat vs centos and the
thought of moving from centos
due to redhats new business model.
Can someone fill me in on this new business model? Is there a thread
here on the list
And search for it. I hope nobody will start it again, but AFTER you
read the archives and have *specific* questions, feel free to ask.
OK, I'll do some googling. I have the last several years of this list
in my gmail so away I go ...
It's close to 200 replies. I'm new to CentOS so I had plenty of
emails to read ;-)
Which thread is it, I poked around but have not found it.
What is the subject?
This seems to me to be the first message in the series and provides a
really good summary of the changes at Red Hat which seem to be making
life a lot more difficult for CentOS.
Just figured I'd pull it out of that thread and change the subject line.
Below Johnny's email I've copied another
Both CentOS and Scientific Linux *aim* at 100% binary compatibility
and they are both doing their best toward that goal. However, neither
is perfect.
That's interesting. So how is it they've managed to come out with 6.1
(and so long ago at that)?
I searched the list archives and I found one answer to this which
suggested I should install the PG yum repos. I don't like that
answer for reasons which follow.
I'm running Centos 6.0 freshly installed, and I've decided that with
this box I'm sticking as much as possible to just the CentOS
Thanks for the quick responses guys - tried that and it still does not
work, which tells me I need those PG repos after all, I guess. No
biggie. My desire for a clean system has been smashed, but I'll
live :-)
Ah, OK. So, did you get the same errors as the rpm command, or
something else?
I tried both and they both seemed to do the same thing. I'm not at
work now so don't have all the details but it did not tell me which
packages I needed. In the end I just installed the PG repo and that
fixed
Hey folks,
I just went through the archives trying to find some info on this but
did not come up with much other than it seems there are a few experts
here on the list.
I have no experience with clustering and have just taken over a Stem
Cell Research Lab that has a Grid Engine cluster. I have
Hey folks,
I'm running RHEL 5.3 on 6 boxes - and on every one of them
sensors-detect finds nothing.
5 of them are Sun Fire X2250 machines, and 1 is a Sun Fire X4170.
Googling and searching this list does not seem to find anything.
When I log into the Sun hardware management interface (web
Definitely running an out of date OS won't help. Updating the kernel to
current fixed a similar problem I'd had with no usable sensors being detected.
Yeah, I'd really like to do that - but I've only been here a week now
and don't understand these systems well enough yet to know whether or
not
Hey folks,
I looked back through the list archives and there are surprisingly few
threads with SSD in the subject.
In my new job I've been handed over a number of things that were
outstanding with the previous Sys Admin, and one of them was an SSD
that was suspect. I just plugged it into a
Hey folks,
I've got a CentOS / RHEL (5.x) environment and am in the process of
migrating the 5.3 file server over to an Oracle/Sun 7120 appliance.
I want to keep my main 5.3 server as our NIS server but am moving NFS
and Samba functions over to the appliance.
NFS was a no brainer as one can
I've never heard of Samba authenticating off NIS, as Windows (SMB/CIFS)
and Unix (PAM, NIS, etc.) use different, incompatible password hashes. On
a pure Samba system that doesn't have an external authentication system
such as Active Directory, I've always had to use smbpasswd to set up the
SMB
p.s. even if I could get it to authenticate SMB from the current 5.3
box I'd be happy.
If I have to go the directory services route I can only say that I
hope it has improved a lot since the last time I installed it 18
months ago - though that was 389-ds ...
I don't know that particular NAS, but does it allow you to set up an
anonymous SMB user?
If not, then set up a normal SMB share on the NAS and mount it on the CentOS
server, then rsync the data across.
Moving the data is the easy part.
The problem here is that currently SMB runs on the 5.3 box
If you're running multiple Windows systems with a server and DON'T have
centralized authentication, you have a mess.
If you're not running Windows systems, then why are you using SMB?
NFS is the native file sharing system for Unix and Linux systems.
It is a bit of an oddball arrangement.
On Fri, Nov 25, 2011 at 8:11 PM, Fajar Priyanto fajar...@arinet.org wrote:
Hi Alan, sorry for the OT.
I'm very much interested in the 7120.
How much space do you have on it and what is the price?
I don't know the price - I've only been here a few weeks.
I'll have to check when I'm back at work
Nagios is probably the most popular, and is pretty powerful and
relatively easy to write your own plugins for.
I have to look at this in my new job in the next month or so. I'm
going to have a look at Zenoss and if that does not pan out and
nothing else turns up I'll fall back to Nagios.
Really
Hey guys and gals,
Anyone know of a half decent tool like DNSstuff.com only free?
I need to run some diag on a few domains but it is basically a 1 shot
deal and hard to justify buying.
thanks,
-Alan
man dig
man nslookup
man whois
man traceroute
Clearly you've never used DNSstuff.com
Yeah, I can do all that, but the above tool does a full diagnosis for
you and makes debugging problems really quick and painless.
I could transmit this message via RFC1149, too, but it just would take
a lot
Hey folks,
I am sure there must be an easy way to do this.
I am currently running 5.3 and yum info db4 tells me that they have
version 4.3.29.
Is that telling me that this is the version in 5.3? Or that this is
the latest version in the 5.x stream?
If the former, then how do I find out what
Normally I would have a VM for this sort of thing but I still do not
have a machine available for that and I'm hesitant to put VMWare
Server on one of my production machines. I'm new here and have
already flagged that I need a box for VMs - hoping to have something
in place by this time next
I'd be hesitant to put an EOL product on my production machines as well.
Let me rephrase that - I am hesitant to put ANY virtualization on
these production machines. Mainly because I am very new here and do
not know the environment very well yet.
Hey folks,
I'm trying to use a 5.3 box to run some JNLP apps, but all I get is a
view of XML.
I try doing some googling and don't come up with much other than this
one thread that says I may need both 32 and 64 bit Java to run JNLP.
But it is not clear to me how to do that.
thanks,
-Alan
Oh sorry, Firefox on 5.3
My Ubuntu desktop at home seems to show up to the Windows boxes on the
home LAN and vice versa, without me having to do anything to configure
it.
Something I've done in the past in small office situations is set up a
DNS server that knows the names of all the local machines and then
proxies off to a
Those are slowish times even for a 7200rpm disk. My desktop here at
home (Ubuntu) has a slow 7200 drive and hdparm reports a lot faster
than that. Well, it is a Caviar Green drive, which means that 7200 rpm
is the maximum and it actually spins slower than that.
You have not said anything yet in this thread about defragging that drive.
I just checked your original message and your drive is the exact same
as mine except yours is the 1.5 TB version and mine is 1.0.
Also - boot a live Linux CD and then from there run hdparm again and
compare results. If they differ vastly, at least you know it is
something in your running system which is the culprit. If they are
roughly the same then it is likely the drive has gone bad. Though check
the man page for hdparm to
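The live-CD comparison described above might be run as follows; the device name is an assumption:

```shell
sudo hdparm -tT /dev/sda           # buffered and cached read timings
sudo hdparm -tT --direct /dev/sda  # O_DIRECT reads, bypassing the page cache
```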
enough it has that one running. But still I go to a JNLP
app and get only XML, no app.
Anyone?
On Thu, Dec 1, 2011 at 2:25 PM, Alan McKay alan.mc...@gmail.com wrote:
Hey folks,
I'm trying to use a 5.3 box to run some JNLP apps, but all I get is a
view of XML.
I try doing some googling
That did the trick - thanks so much!
Hey folks,
I just went through the archives to see what people are doing for backups,
and here is what I found :
- amanda
- bacula
- BackupPC
- FreeNAS
Here is my situation : we have pretty much all Sun hardware with a Sun
StorageTek SL24 tape unit backing it all up. OSes are a combination of
I'm pretty sure I saw a note on the networker list that 7.6 SP3 works
with update 27, update 29, and java 7.
Well we don't have a support contract - is it a free upgrade?
Anyone have any experience with this, which just came to my attention
http://www.arkeia.com/en/solutions/open-source-solutions
I use BackupPC, but find that in order to restore one has to be the
admin user or know the admin user's password.
There appears to be no way to open this up to users to directly see and
restore from the file tree that it manages.
Huh? No. Users can do their own restores from the web interface without
My non-tape solution of choice is definitely rsync to a box with ZFS;
snapshot however often you'd like, which gives you forever incrementals.
For more redundancy and performance, add more ZFS boxes and do
replication between them.
Not sure whether ZFS now makes this OT - if so, sorry for not putting OT:
in
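A sketch of that rsync-plus-snapshot scheme; the hostnames, pool, and dataset names here are entirely made up:

```shell
# push the data to the ZFS box, then snapshot it
rsync -aH --delete /data/ backupbox:/tank/backups/data/
ssh backupbox zfs snapshot tank/backups@$(date +%Y%m%d)

# incremental replication to a second ZFS box for redundancy
ssh backupbox "zfs send -i @20120101 tank/backups@20120102 | ssh box2 zfs recv tank/backups"
```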
Hey folks,
I had some general questions and when reading through the list archives I
came across an iSCSI discussion back in February where a couple of
individuals were going back and forth about drafting up a best practices
doc and putting it into a wiki. Did that ever happen? And if so,
The Dell 6224 or 6248 switches are priced low
Hmmm, we seem to have different definitions of priced low :-)
http://search.dell.com/results.aspx?s=bsdc=cal=encs=cabsdt1k=PowerConnect+6224cat=allx=0y=0
$2000 for the 24 port.
I can get a Cisco small business switch for less than 1/4 that.
LOL! Cisco. If I told you that that particular device used to be called
Linksys, would it change your opinion of the device? I've got a Linksys
ADSL gateway that I'm quite sure couldn't keep up with the Dell. In fact,
I used to have that *exact* Linksys device and it died within 18 months
Hey folks,
I just did an update on a system that is taking the better part of a day
(5.3 -> 5.7), mainly due to file download times.
And I have 4 or 5 more systems to do.
I know I can create my own repository and then point them at it - but that
is difficult here because rsync is blocked
For CentOS 5 I've used automirror (
http://terrarum.net/administration/caching-rpms-with-automirror.html), but
it has a note that it doesn't work with CentOS 6. I have tried what that
page suggests as a replacement.
Bingo! That's exactly what I need!
Thanks!
I am just trying out Zabbix and I have to say it sure is easy to set
up (once you get beyond a few minor quirks). I'm pretty impressed so
far with my evaluation.
OK, I've had a Zabbix and a Zenoss server running now for 2 or 3 days and
would like to morph this thread into a discussion of what each of these
systems can and cannot do.
At the base of what I see so far, Zabbix is only able to monitor devices
that have the Zabbix agent on them - is that correct?
Thoughts from anyone on any of this?
Network monitoring is not trivial no matter what tool you use. Pick
something that you trust to scale to the proportions you will need so
you don't do a lot of work and then hit a wall. And if you have a
lot of systems, avoid anything that needs
OK, I'm getting ready to finally dig into replacing our backups. Lots of
good info in this thread - but so far no mention of rsnapshot.
Any comment on it? Our environment is all Linux except for Mac desktops,
which would likely have a different solution for backups.
From the little I've read it
Do Zabbix or Zenoss allow for this sort of testing that Nagios has?
yes.
OK, thanks. I'll dig more into passive checks
So going back to Amanda and Bacula ... I seem to recall that Amanda uses
standard tools on the back end like gtar and/or dump, is that right?
What does Bacula use? Does it use one of the standard tools? Or does it
have its own proprietary format that it uses?
thanks,
-Alan
Hey folks,
Is there any way to fake a yum update just to get yum to force a download
of all the files it needs, without actually installing them.
I finally have a RPM cache/proxy working and I just want to populate it.
The server I want to actually update cannot be updated until tomorrow but
I'd
Why not just mirror the CentOS repo with rsync?
Well, for one - rsync is blocked by our firewall :-( Yes, even outgoing.
On Mon, Dec 19, 2011 at 12:02 PM, cliff here c4iff...@gmail.com wrote:
Which is why you should use cobbler because it does all that for you.
I actually just installed cobbler a few weeks ago and will look into it for
this to see if it has a way to grab a repository without rsync
On Mon, Dec 19, 2011 at 12:38 PM, cliff here c4iff...@gmail.com wrote:
Alan, if you're worried about keeping an up-to-date repository locally and
consistently, then yes, cobbler is the way to go. If all you want to do is
an update and save off the RPMs once, then use the yum download-only
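On CentOS 5 the download-only route comes from a yum plugin, so a sketch would look like this (the download directory is an assumption):

```shell
yum install yum-downloadonly
yum update -y --downloadonly --downloaddir=/var/tmp/prefetched-rpms
```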
The default config won't cache large files. And yum will try to use
different mirrors every time.
Aha. I thought I had it set for no file limit, but I guess using different
mirrors is what is confounding me.
So squid will cache a specific file from a specific site, I guess? And
even if
I've got automirror working on my CentOS 5.x machines. I can't say
I'm a real expert with it, but if you post your symptoms maybe I can
help you troubleshoot it.
Thanks but I've already been chatting with the author who is stumped at
this point - so I'm just going to give up.
He said he
Disable the mirrorlist line in the .repo file and point it at one
specific mirror?
Yeah that is what I can do - should work
Though I'm thinking at this point my easiest solution will be to take my
laptop home and rsync an entire repo to it, then take it back and rsync it
to my server. Ugly
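The single-mirror idea amounts to a .repo stanza roughly like this sketch; the mirror host is an assumption, so substitute one near you:

```
[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
baseurl=http://mirror.csclub.uwaterloo.ca/centos/$releasever/os/$basearch/
gpgcheck=1
```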
Yes, the default setup really goes out of its way to defeat any
standard caching proxies and make the mirrors do extra work, although
once you accumulate the copies from 5 or 6 sources everything will
work like you expect. That used to bother me but now the mirrors seem
to be insanely fast
That is one advantage of the way automirror worked, since it was
specific to yum it didn't mind the mirror configuration.
Yes, would be nice if it worked for me :-(
One way around the mirror list issue is pointed out by Guru labs
(though I admit hijacking the DNS seems heavy handed)
http://www.gurulabs.com/goodies/guru-guides/YUM-automatic-local-mirror/
oh man, that is one nasty, dirty hack!
I'm jealous I did not think of it myself :-)
What kind of weird things?
I just finally got several boxes upgraded from 5.3 to 5.7 and so far have
not seen anything odd.
Hey guys and gals,
Anyone have any experience with getting lm-sensors to run on Sun hardware?
In particular Sunfire x2250 and x4170
I was running 5.3 on these boxes and sensors-detect would not find
anything. I did a bit of research and as I recall thanks to this list
discovered some bugs that
don't those boxes have IPMI ?
Hmmm, they have an ILOM (lights-out management hardware).
I'll look to see if there is a way to get what I need through there.
Though ultimately I'd like to get it from the linux side, maybe I can go
out the front door and in the back.
Is there a Linux tool for
On Thu, Dec 22, 2011 at 4:46 PM, John R Pierce pie...@hogranch.com wrote:
don't those boxes have IPMI ?
So I installed OpenIPMI and freeipmi and when I get the output of
ipmi-sensors I have to say it cannot be accurate. These are the same
numbers I was seeing from within the ILOM GUI and
Hey folks,
Is there a Linux tool that will monitor a disk and tell me which
directories are growing over time?
I could cobble something together myself of course, but if there is already
a good off-the-shelf solution, why bother?
Even if it only checks once per day that would be fine. Graphs
Might be overkill but cacti or Nagios+PNP would do this...
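For the cobbled-together option mentioned above, here is a minimal sketch; the function name and state-directory layout are made up:

```shell
# Record per-directory usage under $1 into a dated file in $2, and
# diff against the most recent previous run's file if one exists.
du_watch() {
    dir=$1; state=$2
    mkdir -p "$state"
    today="$state/$(date +%F)"
    du -sk "$dir"/* > "$today" 2>/dev/null
    # most recent earlier file; date-named files sort chronologically
    prev=$(ls "$state" | grep -v "^$(date +%F)$" | tail -1)
    if [ -n "$prev" ]; then diff "$state/$prev" "$today"; fi
}
```

Run it from cron once a day and the diffs show which directories grew.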
PNP? What's that? I already have Icinga installed.
That sounds good.
Would you share the munin plugin later pls?
I'm interested too.
Sure will. This is not a top priority for me so I won't likely get to it
for another week or two, but once it is done I will share.
This is very strange - has been happening the last few days. I just
upgraded this system from 5.3 to 5.7 on Monday and the problem started some
time after that (but not immediately because I know I used yum Monday
evening after the upgrade)
I get the following error from yum, but it goes away
You mean in the terminal on solexa-db you just issued the yum install
in, you can issue as the next command
wget http://fedora.mirror.nexicom.net/epel/5/x86_64/repodata/repomd.xml
and it gets the xml file?
Yup, exactly
As a quick temporary fix/test I would comment mirrorlist and
Hi folks,
I've got a bit of a different scenario than I imagine most, and have spent
the last 60 or 90 minutes searching Amanda list archives and googling, but
did not come up with anything much. Then I went browsing around the
Amanda website and found vaulting and was wondering whether this
For one thing, I think you seriously need to look at backing up to offline
hard drives instead of tapes - unless you really want/need to archive the
tapes for seven years.
Well, the scientists are talking longer than 7 years so HDs just are not
going to cut it
We back up to backup servers,
Aha, I forgot about /etc/yum.conf, and found an erroneous entry there;
fixing it has solved my problem!
I would not have it doing the alerting.
I'd have something poll it and graph the temp so you can see a good graph
of room temp over time.
And have that same something do the alerting.
But do your servers have sensors too? You really need to monitor those as
well because there can be a huge
For long term storage, you may need to be able to not just put stuff
away, but also have a policy (and the resources!) to periodically
migrate data to newer media formats.
Yes, we've already begun this process - and we are taking into account the
sorts of issues you mentioned.
I'll ask more specific questions if so :-)
Need to pull some usage data via a script, and Oracle support says it
can't be done.
I have trouble believing that.
Hey folks,
I looked at the man page and don't see any way to do this - maybe it is a
function of the compression program used, I dunno.
Is there any way to get gtar to report on the compression it achieved?
I can't just check file sizes because I'm writing data to tape.
The basic problem is
There is a --totals option, but that is before compression. I don't
think there is a way to do it.
Dang. There is a tell command on mt which tells you what block number
you are on, but according to the man page it only exists for some types of
drive. And evidently not mine :-(
That would
Is there some reason you aren't using amanda? Give it some holding
disk space and it will run multiple backups at once, buffering on
disk, and figure out how they should go on the tape for you.
I'm archiving, not backing up.
I looked at Amanda for a few days and it would be really clunky
I haven't used it for a while, but I thought it had an indexing
mechanism that would let you tell it what you want and it would tell
you the tapes you need and the order to restore them (for full +
incremental cases). And it could re-index the tapes if you lost the
disk copy. Maybe that
On Wed, Feb 1, 2012 at 11:32 AM, Les Mikesell lesmikes...@gmail.com wrote:
'Deploying' amanda is a matter of installing the rpm and editing a
couple of config files about the tape drive, tapes, targets, and
holding space. And maybe some firewall tweaking - but nothing really
complicated.
On Wed, Feb 1, 2012 at 2:10 PM, Lamar Owen lo...@pari.edu wrote:
What I would do is use the '-' special filename to pipe the uncompressed
tar to stdout, pipe to the compressor of choice, then pipe to tee, and have
one branch of the tee go to the tape and the other branch go to a program
to
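A runnable toy version of that tee pipeline, with a plain file standing in for the tape device and all paths made up:

```shell
# set up a tiny directory to archive
mkdir -p /tmp/teedemo/data && echo "sample payload" > /tmp/teedemo/data/file.txt
cd /tmp/teedemo

# tar to stdout, compress, tee one copy to the "tape",
# and count compressed bytes on the other branch
tar -cf - data | gzip | tee tape.img | wc -c
```

On a real system the `tape.img` branch would be the tape device (e.g. /dev/st0) and the byte count tells you how much compressed data actually hit the tape.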
Hey folks,
I'm reading up on gtar for tape archiving and it sounds kind of nasty and
not something I really want to rely on.
It looks like star from the schily tools is preferred. I'm using Centos
(and RHEL) 5.7 which seems to have star but not sdd.
Which leads me to believe that the Schily
Are you reading
something that favors Solaris/*bsd over GNU based systems?
No, why, are the Schily tools standard over there?
I've never had any doubts that current GNU tar would extract archives
made with it 10+ years ago - in fact I'm fairly sure I've done that.
Or that I'd be able to
I don't think so - I'm fairly sure I've seen GNUtar complain about bad
headers, say 'skipping to next header' and then find something. It
won't do that if you used the -z option because you generally can't
recover from errors in compression
Bam! As an aside to my current line of
If so, could I ask you a few questions?
I am in contact with their tech support as well but I think someone here
could be more helpful if they are using it.
My questions are technically OT for this list since they pertain to moving
from RHEL 5.7 to Ubuntu 11.10.
Though it is really about Python /
Hey folks,
It looks to me like the httpd on CentOS is stuck at 2.2.2 - what's up
with that? Even after a yum upgrade.
I need 2.2.10 or greater, and would prefer to get it via yum or at the
very least an RPM if at all possible. But I cannot even find an RPM
out there. For some reason both EPEL and
Hmmm, OK, I get it.
I know I can build the latest Apache on CentOS, and what we currently
do is put it into /usr/local - which I guess works.
I'd really prefer to have an RPM though.
Certainly the CentOS team has a way in which they produce this RPM.
Is this method public? And if so, is it
OK, here is the interesting part :-)
I'm new here as of about 4 months ago, and I just asked some coworkers
why we went with 2.2.10 instead of the 2.2.3 that comes with CentOS
Apparently at the time we'd been having some problems with mod_perl
crashing (and still are in fact - I'm working on it
Going with what CentOS ships, even if the package number indicates an
older release, you have the advantage that the upstream takes care of
security fixes by backporting.
Hmmm, I hadn't considered this but you are absolutely right!
For simple scripts I do them right inside of the kickstart file like
you do, but for more complex ones I store them up on the kickstart
server and use wget inside the kickstart script to put them in the
right place.
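A hypothetical %post fragment along those lines; the hostname and script paths are made up for illustration:

```
%post --log=/root/ks-post.log
wget -O /usr/local/sbin/site-setup.sh http://ks.example.com/scripts/site-setup.sh
chmod 0755 /usr/local/sbin/site-setup.sh
/usr/local/sbin/site-setup.sh
%end
```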
How heavy of a workload is the DB managing?
Everything I've read says you have to be very careful if virtualizing
your DB. At very least give the virtual machine a real disk
partition.
Of course, I've only read about it - never done it myself :-) But am
about to do some benchmarking soon to
Not sure if anyone mentioned this yet, but you might want to have a
look at a product called BackupPC, which is based on rsync but puts a
really nice front end on it.
Not sure if it can work over SSH though. Just read the fine manual to find out.
How about running iostat -x? Sounds like the system is doing a lot
more than you think it is.
You might want to set yourself up with a performance monitoring system
like Munin to give you more extensive data, as well.
If you get that far, you'll find the iostat plugin to be a bit lacking
-
Hey folks,
I'm setting up some kickstart files for our standard configs, and need
to install munin-node, which of course does not come from you folks.
So I set up the Dag Wieers repository, but in the repo file I set it to
disabled so that it will never get used by mistake.
Then when adding
Install yum-priorities and give dag a higher priority. This will make
sure that nothing is pulled from it unless it is not available in the
main repositories. You can use the exclude= setting on the base
repositories if there is something there that you would rather get
elsewhere.
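A sketch of that priorities arrangement; the numbers are illustrative, and lower wins:

```
# in the matching stanzas of the .repo files, after
# yum install yum-priorities
[base]
priority=1
[updates]
priority=1
[dag]
enabled=1
priority=10
```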
Hey folks,
A week or two ago someone mentioned something about using their own
home-grown RPMs for managing config info on their boxes.
I really like this idea and would like to learn more about it. Are
there some examples out there?
I have lots of custom config info and think this would be an