On Sat, Apr 16, 2016 at 4:32 AM, Craig Skinner <[email protected]>
wrote:

> A bloated way to do that is with an SQLite database, with a table's
> unique primary key being some (job number) attribute. Another column
> could auto timestamp on row insertion, so you could query on job number
> or time added. Unless you've other data to retain, it is rather bloated.
>

Not sure I agree - SQLite is pretty lightweight.  I have a job system that
runs hundreds of jobs on many systems, each dumping results into local
daily SQLite files which are then scp'd back and consolidated for
reporting.  This gives us the ease of standardized job results and
reporting without the need to have an HA DB every system can report to,
to load DB clients all over the place, to manage DB security for remote
access, etc.  (We need to gather results somehow, so rather than invent
some custom format or use something like XML, SQLite is an easy format to
use.)  You can access SQLite from the command line in shell scripts if
need be.  DB sizes are in the MB range.
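The gather-and-consolidate step above can be sketched in Python's built-in
sqlite3 module.  The table name, columns, and filenames here are
assumptions for illustration, not the actual schema from my job system:

```python
import sqlite3

def consolidate(daily_paths, out_path):
    """Merge job-result rows from several per-host daily SQLite
    files (already scp'd back) into one consolidated reporting DB."""
    out = sqlite3.connect(out_path)
    # Hypothetical results schema; adjust to whatever the jobs record.
    out.execute("""CREATE TABLE IF NOT EXISTS results
                   (host TEXT, job TEXT, status TEXT, added TEXT)""")
    for path in daily_paths:
        src = sqlite3.connect(path)
        rows = src.execute("SELECT host, job, status, added FROM results")
        out.executemany("INSERT INTO results VALUES (?, ?, ?, ?)", rows)
        src.close()
    out.commit()
    return out
```

Because every host writes the same schema, the reporting side never cares
which system a file came from - it just appends rows and runs its queries.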

You might be calling it bloated because you're writing SQL, etc., and for a
sysadmin who's focused on systems and isn't a code-writer, that's totally
fair - SQLite is much more pleasant when you have Perl or Python and can
properly bind variables, etc.
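Craig's suggestion - a unique job-number primary key plus a column that
auto-timestamps on insert - combined with proper variable binding comes
out to only a few lines in Python.  A minimal sketch (the table and
column names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a real file path in practice

# Unique primary key on the job number; 'added' fills in
# automatically on row insertion via a column default.
conn.execute("""CREATE TABLE jobs (
    job_number INTEGER PRIMARY KEY,
    added      TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
)""")

# Bound variable (the '?' placeholder) instead of string-building SQL -
# this is the part that's painful in plain shell.
conn.execute("INSERT INTO jobs (job_number) VALUES (?)", (1042,))
conn.commit()

# Query by job number; querying on 'added' works the same way.
row = conn.execute(
    "SELECT job_number, added FROM jobs WHERE job_number = ?", (1042,)
).fetchone()
```

A duplicate job number raises an integrity error instead of silently
inserting twice, which is exactly the dedup behavior the OP seems to want.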

I'd say the OP is crossing into programming rather than scripting.  I'm
making an artificial distinction (since shell scripts are certainly
programs), but in my experience, once you start needing more complex data
structures, you've outgrown the shell and should look at something like
Perl, Python, etc.  Not saying there aren't ways to do queues in
bash/ksh/etc., just...why would you?

-- 
andrew fabbro
[email protected]
