I really like some of the new centralized logging systems like http://logstash.net/. It can handle loads of different sources and sinks, and when you throw in the full power of Elasticsearch, searching for interesting data is an order of magnitude more powerful than what we currently have on z/OS. You can throw your distributed systems into the mix for a nice holistic view of your entire stack.
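
To give a flavor of it, here is a minimal sketch of a Logstash pipeline that listens for syslog traffic and feeds it into Elasticsearch. The port and host values are just placeholders, and the exact option names vary a bit between Logstash versions:

  # Hypothetical pipeline: syslog in, Elasticsearch out.
  input {
    syslog {
      port => 5514                    # listen for syslog messages here
    }
  }
  output {
    elasticsearch {
      hosts => ["localhost:9200"]     # index events into a local ES node
    }
  }

Once the events are indexed, ad hoc queries over any field are just an Elasticsearch search away.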

On 4/12/2014 10:06 PM, John McKown wrote:
This is just my mind wandering around loose again. Your kind indulgence is
appreciated.

But I've been thinking about the z/OS syslog for some reason lately. Given
what it was originally designed for (review by a human), it is a decent
design. But is it really as helpful as it could be in today's z/OS
environment? Should z/OS have a more generalized logging facility? I will
grant that subsystems have various "logs", but they each basically have
their own structure. Is there really a need for the z/OS system log
anymore? I really don't know. And I will admit that my mind has been
corrupted by using Linux too much lately. <grin/>

So, if such a thing is even needed any more, what might it look like?
Should it go to SPOOL? Should it be more like the OPERLOG and go to a
LOGGER destination? Or should it go "somewhere else"?

So what would I like? I know most will moan, but I _like_ structured,
textual information. So I would prefer that the output be in something
like XML or JSON structure, not "column" based. And no encoded binary, OK?
Now I'm trying to come up with what sort of data should be in the "system
header" type data. These are just some fields that _I_ think would be
useful in a good, generic logging facility (see the sketch of a sample
record after this list):

- the current timestamp in ISO 8601 format, something like
  2014-12-04T07:34:03-06:00, which is the date/time as I am typing this
  near Dallas, TX. This gives us the local time plus the offset needed to
  derive UTC (here 2014-12-04T13:34:03Z) for comparison or conversion.
- the z/OS sysplex name and the system name
- the CPU serial number, LPAR number, and z/VM guest name (if applicable)
- the job name (address space name), RACF owner, step name, and proc step
  name
- the program name in the RB which issued the logging service call, and
  the program name in the first RB chained to the JS TCB (which I think
  should be the EXEC PGM=... name in most cases for batch jobs)
- the ASID number
- the UNIX process id (==0 if not dubbed, because there is no PID of 0 in
  UNIX, or maybe -1)
- the step number (as used in SMF) and the substep number (again as
  defined in some SMF records)
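
Just to make that concrete, a single log record along those lines might look something like this in JSON. This is purely illustrative; every field name and value here is my own invention, not any existing format:

  {
    "timestamp":      "2014-12-04T07:34:03-06:00",
    "sysplex":        "PLEX1",
    "system":         "SYSA",
    "cpuSerial":      "012345",
    "lparNumber":     2,
    "vmGuest":        null,
    "jobName":        "PAYROLL1",
    "racfOwner":      "JDOE",
    "stepName":       "STEP010",
    "procStepName":   "RUN",
    "rbProgram":      "MYLOGPGM",
    "jsTcbProgram":   "PAYROLL",
    "asid":           58,
    "unixPid":        0,
    "stepNumber":     1,
    "substepNumber":  0,
    "message":        "Payroll extract started"
  }

Every one of those values is machine-parseable without anyone having to know which columns mean what.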

Product-specific data would be formally encoded as designed by the product.
Preferably, if in XML, with a DTD to describe it, and done so that standard
XML facilities such as XSLT and XPath can process it. Which is one reason
that I like XML a bit better than JSON at this point in time: there are a
lot of XML utilities around.
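
For instance, if the header fields above were wrapped in XML (again, the element names are just made up for illustration), a record might look like:

  <record>
    <timestamp>2014-12-04T07:34:03-06:00</timestamp>
    <system>SYSA</system>
    <jobName>PAYROLL1</jobName>
    <message>Payroll extract started</message>
  </record>

and then pulling every message for one job out of a day's log is a one-line XPath expression:

  //record[jobName='PAYROLL1']/message

which any off-the-shelf XPath processor can evaluate, with no custom column-parsing code written anywhere.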

And, lastly, I do realize that the above would be very costly. Not
necessarily just to implement in z/OS, but to actually change z/OS code
to start using it. And that may be the real killer. IMO, one of the biggest
obstructions to designing new facilities which "enhance" existing ones is
the cost of implementing them, combined with the current emphasis on
immediate return on investment. I.e., if I invest a million dollars in
something, I expect to get back 2 million in 6 months or less.

Well, I guess that I've bored you enough with this bit of weirdness. Like
many of my ideas, this one sounds good to me until others point out that it
is just silly/stupid/unnecessary.


