Hi Guys

Thank you very much for all the input, I appreciate it.

@ Johan

Once again, glad to deal with you. I agree that triggers are not the way to go, and I personally also avoid using them unless absolutely required. I do feel other solutions may be better; unfortunately, however, this issue has burned them so many times that they are forcing a quick solution, which they want implemented within a day or two.

This means I have had no time to implement or test proper solutions yet. I have only recently taken over these servers, and there is still so much to do to get things stable.

I have, however, presented the possible issues to management and will be making some suggestions in terms of more permanent solutions, so thank you for your input there as well.

@ Sándor

In terms of the binary logs, the problem was not that we did not know where they were saved. The problem was that they generate about 90 GB worth of binary logs per day, each file being 500 MB in size, so it takes time to sift through all these files to find out when the deletes were performed. On top of that, they only picked up the issue 4 days after it actually happened, which meant sifting through a LOT of files.

It was decided that the audit table will only contain some basic information for now. However, delete attempts are not yet being written to the audit table.




So to recap, what has been done for now:

- Triggers to insert a record into the audit table showing the table, the type of query (insert/update), and who made the relevant change.
- A trigger to prevent deletes from tables, which returns an error stating that deletes are not allowed.

What I still need to resolve:

- The trigger for deletes should still log an entry into the audit table recording which user attempted the delete.
- More permanent solutions to be implemented.
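For reference, a minimal sketch of what the triggers above could look like. All table and column names here are hypothetical, not our actual schema:

```sql
-- Hypothetical audit table; names are illustrative only.
CREATE TABLE audit_log (
    id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    table_name  VARCHAR(64) NOT NULL,
    query_type  VARCHAR(10) NOT NULL,   -- 'INSERT' or 'UPDATE'
    changed_by  VARCHAR(93) NOT NULL,   -- CURRENT_USER() returns user@host
    changed_at  TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP
);

DELIMITER $$

-- Record who inserted into the monitored table.
CREATE TRIGGER orders_ai AFTER INSERT ON orders
FOR EACH ROW
BEGIN
    INSERT INTO audit_log (table_name, query_type, changed_by)
    VALUES ('orders', 'INSERT', CURRENT_USER());
END$$

-- Block deletes outright with an error (SIGNAL needs MySQL 5.5+).
CREATE TRIGGER orders_bd BEFORE DELETE ON orders
FOR EACH ROW
BEGIN
    SIGNAL SQLSTATE '45000'
        SET MESSAGE_TEXT = 'Deletes are not allowed on this table';
END$$

DELIMITER ;
```

An equivalent AFTER UPDATE trigger logging 'UPDATE' covers the other change type.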


Regards



Quoting Johan De Meersman <vegiv...@tuxera.be>:

Triggers are not the best way to go about this. Consider:

 * What is to stop a malicious user from truncating the audit table?
 * Triggers may fail (corrupt target table, for instance) and a trigger failure may cancel the source statement
 * Triggers have a performance impact - you're basically doubling every DML action
 * Triggers get executed sequentially for multi-inserts, slowing the whole operation down

I suggest having a look at one of the available audit plugins.

 * Percona has one that can, IIRC, also be compiled against the standard (Oracle) community edition
 * Oracle have one too in newer versions (I think from 5.7?) but it's enterprise-licensed
 * MariaDB has one in the community version, but it only works against MariaDB server
 * McAfee also had one, but I'm unsure about its current status

The benefit of a plugin is that the code runs out of the "userspace", has lower performance impact (parallelism) and cannot cause originating statements to fail. Additionally, I would assume that some of these, if not all, can also log towards an external target (file, network, ...).
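As an illustration of how little setup a plugin needs, enabling the MariaDB server_audit plugin is roughly this in my.cnf (option names as in the MariaDB documentation; verify against your version, and the log path here is just an example):

```ini
[mysqld]
plugin_load_add        = server_audit
server_audit_logging   = ON
server_audit_events    = QUERY_DML          # log INSERT/UPDATE/DELETE statements
server_audit_file_path = /var/log/mysql/server_audit.log
```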


/Johan

----- Original Message -----
From: "Sándor Halász" <h...@tbbs.net>
To: "MySql" <mysql@lists.mysql.com>
Sent: Wednesday, 7 December, 2016 14:56:55
Subject: Re: audit trails

2016/12/07 01:26 ... mach...@seworx.co.za:
  well in essence the following is required.

    we need to know who made what changes to tables.

There is a machination that you can try in every trigger that will add
the user-name to the binary log:

set @asdfasdfasd = CURRENT_USER();
INSERT INTO T VALUE ( ... @asdfasdfasd, UNIX_TIMESTAMP() ... );

The value assigned to the variable @asdfasdfasd, since it is used to change
a table, will show up in the binary log. The function "UNIX_TIMESTAMP"
yields a number that matches TIMESTAMP in the binary log.
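Put together inside a trigger, the trick might look like this (trigger, table, and column names are made up for illustration; with statement-based binary logging, the user-variable value is written to the log ahead of the statement that uses it):

```sql
DELIMITER $$

CREATE TRIGGER t_bu BEFORE UPDATE ON t
FOR EACH ROW
BEGIN
    -- The variable assignment ends up in the binary log because the
    -- variable is then used in a statement that changes a table.
    SET @asdfasdfasd = CURRENT_USER();
    INSERT INTO t_audit (who, at_unix)
    VALUES (@asdfasdfasd, UNIX_TIMESTAMP());
END$$

DELIMITER ;
```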

      we recently had a case of important data being deleted, however
finding it in binary logs proved 2 things:

1. it takes very long to find as we did not know in which file the
details were.

You did not know where the binary log was saved? That is set by you in
the global variables "log_bin_basename" and "log_bin_index".
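For what it's worth, those locations can be read back from a running server without opening the config file (note that log_bin_basename and log_bin_index are read-only at runtime; they derive from the --log-bin startup option):

```sql
-- Where the binary logs live:
SHOW VARIABLES LIKE 'log_bin%';

-- Which binlog files currently exist, with their sizes:
SHOW BINARY LOGS;
```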

I have managed to figure that part out almost fully and have one or two
more kinks to work out. We will be adding an error in the trigger for
deletes; however, it should still log the delete attempts to the audit
table, and this is where I am stuck now.

I hit the error, however the attempt to delete is not being logged to
the audit table.

Only if the DELETE looks valid is the BEFORE DELETE trigger triggered.
If the deletion would yield inconsistency according to the constraints
that you set up and MySQL supports, then the deletion is aborted and
rolled back before the AFTER DELETE trigger is triggered.
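This may also explain why the delete attempt never reaches the audit table: SIGNAL aborts the whole DELETE statement, and if the audit table is InnoDB the audit insert is rolled back along with it. One common workaround (a sketch, with hypothetical names) is to make the audit table non-transactional, e.g. MyISAM, so the insert survives the rollback:

```sql
-- Non-transactional engine: this insert is NOT undone when the
-- trigger below aborts the DELETE with SIGNAL.
CREATE TABLE delete_attempts (
    table_name   VARCHAR(64) NOT NULL,
    attempted_by VARCHAR(93) NOT NULL,
    attempted_at TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE = MyISAM;

DELIMITER $$

CREATE TRIGGER orders_bd BEFORE DELETE ON orders
FOR EACH ROW
BEGIN
    INSERT INTO delete_attempts (table_name, attempted_by)
    VALUES ('orders', CURRENT_USER());
    SIGNAL SQLSTATE '45000'
        SET MESSAGE_TEXT = 'Deletes are not allowed on this table';
END$$

DELIMITER ;
```

The trade-off is that a non-transactional audit table is never rolled back, so it may also record attempts from transactions that aborted for other reasons.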

The problem now, however, is that they would like to know what query was
run, i.e. was it a straight query or was it run by calling some
procedure. I am not sure whether this is even something that can
be logged.

I use
binlog_format=STATEMENT
; then the transaction is logged--but MySQL Cluster does not support this.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql

--
Unhappiness is discouraged and will be corrected with kitten pictures.





