I almost forgot ...

FlashBake[1]: automatic git for writers.

I set up git with a post-commit hook to publish a website (via Jekyll) as
soon as my commits had been pushed into the repo. It seems the commit
hooks in git could even be set up as a pub-sub system of sorts: push to
the main repo and run a post-commit hook (via Python Fabric) to have all
endpoints pull the new files after the commit. There are a lot of
interesting possibilities with git's commit hooks[2].

[1]: http://bitbucketlabs.net/flashbake/
[2]: http://git-scm.com/book/en/Customizing-Git-Git-Hooks
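A minimal sketch of the hook wiring, in a throwaway repo (the paths,
hostnames, and hook body are all illustrative; here the hook just logs
instead of really building the site or pushing to endpoints):

```shell
#!/bin/sh
set -e

# Throwaway repo to demonstrate the hook firing.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name "You"

# Install a post-commit hook.  A real one might run something like:
#   jekyll build --source . --destination /var/www/site
#   for h in web1 web2; do ssh "$h" 'cd /srv/site && git pull --ff-only'; done
# (hosts and paths above are made up).  This sketch only appends a log line.
cat > .git/hooks/post-commit <<'EOF'
#!/bin/sh
echo "published $(git rev-parse --short HEAD)" >> .git/publish.log
EOF
chmod +x .git/hooks/post-commit

# Any commit now triggers the hook.
echo hello > page.md
git add page.md
git commit -qm "add page"
cat .git/publish.log    # one line: published <short-sha>
```

Moving the same logic server-side into a post-receive hook (and looping
over endpoints with Fabric instead of raw ssh) gives the pub-sub flavor
described above.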

On Sunday, October 7, 2012, Tom Limoncelli wrote:

> Here's a good example of RANCID in use.  Since they had a decade (?)
> of router config history in their database, they could do some
> interesting analysis:
>
> https://www.usenix.org/conference/lisa-09/analysis-network-configuration-artifacts
>
>
> On Sat, Oct 6, 2012 at 8:01 PM, Jesse Becker <[email protected]> wrote:
> > For networking gear, have you looked into RANCID?  It pulls
> > configurations from a fairly long list of devices, stuffs them into
> > CVS/SVN, and will send emails when it detects changes.
> >
> > http://www.shrubbery.net/rancid/
> >
> > On Fri, Oct 5, 2012 at 10:45 AM, Lawrence K. Chen, P.Eng.
> > <[email protected]> wrote:
> >> I should try that.  We save copies of our F5 configs for backups, but
> >> sometimes I need to look through them to see what changed and when (now
> >> that I'm not the only one making changes on it), though it's kind of a
> >> mess since it's just a big directory of dated files.  Plus it would
> >> probably be more space efficient, though if I moved the backup
> >> directory to the NAS then space wouldn't be an issue.
> >>
> >> The nightly backup is of both a ucs and the scf...the scf into revision
> >> control I think would be helpful, being ASCII and all, while it would
> >> be a bit harder with the gzip'd tar file with the ucs extension.  For
> >> now I think I'm the only one that makes changes outside of the GUI to
> >> the F5, including some that don't get into the ucs.  They made it
> >> harder to add your own files to it...and there's no guarantee that when
> >> I upgrade they won't get ignored.  That tripped me up the last time I
> >> upgraded the F5.  Plus someday we'll need to upgrade to new units.
> >> Originally they said these would be the end of the line...though it's
> >> probably more because when people's applications fail...they always
> >> blame the F5 for marking them down, or causing them to go down, etc.
> >>
> >> Like when the start-of-class rush slammed the student information
> >> system hard...I saw that the service was taking longer and longer to
> >> respond to service checks, so I bumped up the timeout in the health
> >> monitor (to the value recommended in the latest F5/PeopleSoft guide).
> >> 12 hours later they made some change, and suddenly students were seeing
> >> other people's data.  And they blamed the F5.  Wanted to know if it was
> >> caching or something.  No, we don't have that enabled anywhere.  They
> >> kept insisting that we must be caching somewhere to cause this problem.
> >> Didn't even know we had the feature.  In the aftermath, they want all
> >> the F5/PeopleSoft recommendations implemented, which include caching,
> >> compression, and use of OneConnect.  Well, we don't have a compression
> >> license...the free 5Mbps isn't going to cut it.  But the features they
> >> claimed were breaking their application are the ones they want turned
> >> on now.  Later it was revealed that the DBAs don't know how the web
> >> stuff works at all...but they'll play with its settings when they think
> >> they need playing with.  And it turns out there was a PeopleSoft bug
> >> that was causing the session overlaps.  Even though the unit isn't
> >> EoSL, it is EOL...which apparently means we can't buy licenses to add
> >> functionality to it anymore.  They want more SSL TPS, since using
> >> 2048-bit keys cuts our 5000 TPS license down to a 1000 TPS license.
> >> ________________________________
> >>
> >> In a previous job, I had an epiphany that the most critical database
> >> that the company used was actually not that big.  At close of business
> >> each day, I did a full text dump of that database and auto-committed it
> >> into svn.  This gave us a history of the database more or less in
> >> perpetuity, with a daily granularity.
> >>
> >> The idea was to protect against a situation where some bad data or
> >> corruption crept into the database but didn't get discovered for many
> >> moons.  (Given the state of the application that was feeding data in,
> >> this was not inconceivable.)  This would give us a way to go back and
> >> untangle things.
> >>
> >> --
> >> Christopher Manly
> >> Coordinator, Library Systems
> >> Cornell University Library Information Technologies
> >> [email protected]
> >> 607-255-3344
> >>
> >>
> Speaking at MacTech Conference 2012. http://mactech.com/conference
> http://EverythingSysadmin.com  -- my blog
> http://www.TomOnTime.com -- my videos
_______________________________________________
Discuss mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
 http://lopsa.org/
