Any advice, anyone?
-- Forwarded message --
From: Neil Tompkins neil.tompk...@googlemail.com
Date: Thu, May 30, 2013 at 8:27 AM
Subject: Audit Table storage for Primary Key(s)
To: [MySQL] mysql@lists.mysql.com
On 30-05-2013 09:27, Neil Tompkins wrote:
Hi,
I've created an Audit table which tracks any changed fields for multiple
tables. In my Audit table I'm using a UUID for the primary key. However I
need to have a reference back to the primary key(s) of the table audited.
At the moment I've a
Thanks for your response. We expect to use the Audit log when looking into
exceptions and/or any need to debug table updates. I don't think a CSV
table would be sufficient, as we want to use an interface to query
this data at least on a daily basis, if not weekly.
I use UUID because we have
Again: Unless you can give some idea as to the kind of lookups you will
be performing (which fields? Temporal values? etc.), it is impossible to
give advice on the table structure. I wouldn't blame anyone for not
being able to do so; saving data for debugging will always be a moving
target and
The kind of lookups will be trying to diagnose when, and by whom, an
update was applied. So the primary key of the audit is important. My question is, for
performance, should the primary key be stored as an indexed field like I
mentioned before, or should I have an actual individual field per primary key?
Based on the little information available, I would make a lookup field
consisting of tablename and primary keys.
(although I still believe that storing this information in the database
in the first place is probably the wrong approach, but to each his own)
/ Carsten
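Carsten's suggestion of a lookup field combining table name and primary key(s) could be sketched roughly as follows (all table and column names here are illustrative, not from the original thread):

```sql
-- Hypothetical audit table: one indexed lookup column that combines
-- the audited table's name and its primary key value(s).
CREATE TABLE audit_log (
  audit_id   BINARY(16)   NOT NULL,           -- UUID stored compactly
  row_ref    VARCHAR(255) NOT NULL,           -- e.g. 'address:42' or 'order_item:7:3'
  changed_at DATETIME     NOT NULL,
  changed_by VARCHAR(64)  NOT NULL,
  old_value  TEXT,
  new_value  TEXT,
  PRIMARY KEY (audit_id),
  KEY idx_row_ref (row_ref)                   -- supports lookups by audited row
) ENGINE=InnoDB;

-- Lookup: all audit entries for row 42 of the 'address' table.
SELECT changed_at, changed_by, old_value, new_value
FROM audit_log
WHERE row_ref = 'address:42'
ORDER BY changed_at;
```

A composite key like 'tablename:pk' keeps the audit table generic across any number of audited tables, at the cost of string parsing if the individual key parts are ever needed.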
On 31-05-2013 12:58,
There's been a thirst for this kind of thing for some time, but possibly
you're looking for a cheaper option? Since 5.5 there's some incarnation of
an audit plugin which can be extended for your own needs which should allow
you to perform some persistence of the results with either a log file which
UUID PRIMARY KEY (or even secondary index) --
Once the table gets big enough (bigger than RAM cache), each row INSERTed (or
SELECTed) will be a disk hit. (Rule of Thumb: only 100 hits/sec.) This is
because _random_ keys (like UUID) make caching useless. Actually, the slowdown
will be
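One common way around the random-key problem Rick describes is to give the audit table an ordered surrogate primary key, so each INSERT lands at the "hot" end of the clustered index, and demote the UUID to a secondary column. A minimal sketch (names are assumptions, not from the thread):

```sql
-- Sketch: a monotonically increasing PK keeps INSERTs cache-friendly
-- in InnoDB's clustered index; the random UUID becomes a secondary key.
CREATE TABLE audit_entries (
  audit_id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  audit_uuid BINARY(16)      NOT NULL,
  changed_at DATETIME        NOT NULL,
  PRIMARY KEY (audit_id),          -- sequential: new rows append to the same pages
  UNIQUE KEY uk_uuid (audit_uuid)  -- still random, but only one index suffers
) ENGINE=InnoDB;
```

The trade-off: external references can still use the UUID, while the on-disk row order follows insertion time, which also happens to match the temporal access pattern of most audit-log queries.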
I'll ask the dumb question.
Why not create individual history tables corresponding to your 'main'
tables? So, if you have an 'address' table, then the original record could
be written to an 'address_his' table via an update or delete trigger
(depending on whether you allow deletions or not) when
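The per-table history approach described above could look roughly like this, assuming a simple 'address' table (the table, columns, and trigger name are all hypothetical):

```sql
-- Hypothetical main table.
CREATE TABLE address (
  address_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  street     VARCHAR(255) NOT NULL,
  city       VARCHAR(100) NOT NULL
) ENGINE=InnoDB;

-- History twin: same data columns plus audit metadata.
CREATE TABLE address_his (
  his_id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  address_id     INT UNSIGNED NOT NULL,    -- original row's PK, indexed for lookups
  street         VARCHAR(255) NOT NULL,
  city           VARCHAR(100) NOT NULL,
  his_action     CHAR(6)  NOT NULL,        -- 'UPDATE' or 'DELETE'
  his_changed_at DATETIME NOT NULL,
  KEY idx_address (address_id)
) ENGINE=InnoDB;

-- Copy the pre-change row into the history table on every UPDATE.
CREATE TRIGGER address_bu BEFORE UPDATE ON address
FOR EACH ROW
  INSERT INTO address_his (address_id, street, city, his_action, his_changed_at)
  VALUES (OLD.address_id, OLD.street, OLD.city, 'UPDATE', NOW());
```

A matching BEFORE DELETE trigger would handle deletions the same way. This keeps the original primary key as a plain indexed column in the history table, avoiding the generic-audit-table problem of referencing heterogeneous keys.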
Ah-ha, excuse my earlier response, I was under the impression you were
trying to track schema changes etc.
A
On Fri, May 31, 2013 at 7:54 PM, Rick James rja...@yahoo-inc.com wrote:
UUID PRIMARY KEY (or even secondary index) --
Once the table gets big enough (bigger than RAM cache), each row
-Original Message-
From: Vikas Shukla [mailto:myfriendvi...@gmail.com]
Sent: Thursday, May 30, 2013 7:19 PM
To: Robinson, Eric; mysql@lists.mysql.com
Subject: RE: Are There Slow Queries that Don't Show in the
Slow Query Logs?
Hi,
No, it does not represent the time from