Re: [Dev] Clarification regarding primary keys of LOGANALYZER table

2016-09-01 Thread Inosh Goonewardena
+1 for removing the primary keys, which currently prevent all the log
entries from being stored.

Btw, you can avoid deadlocks in this kind of scenario by setting the
transaction isolation level to Read Uncommitted in the datasource
configuration (dirty reads may occur as a result, but that won't be an
issue in this use case).
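At the plain JDBC level the same setting looks roughly like the sketch
below (the JDBC URL and credentials are placeholders, not the actual DAS
datasource values; in the datasource XML the equivalent knob is the
defaultTransactionIsolation property, if I remember correctly):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ReadUncommittedExample {
        public static void main(String[] args) throws Exception {
            // Placeholder URL/credentials; in DAS these would come from
            // the datasource configuration instead.
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/das", "user", "pass");
            // Read Uncommitted takes no shared read locks, so concurrent
            // batch writers are far less likely to deadlock; the trade-off
            // is that dirty reads become possible.
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
            System.out.println("Isolation level: " + conn.getTransactionIsolation());
            conn.close();
        }
    }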




-- 
Thanks & Regards,

Inosh Goonewardena
Associate Technical Lead- WSO2 Inc.
Mobile: +94779966317


Re: [Dev] Clarification regarding primary keys of LOGANALYZER table

2016-09-01 Thread Ruwan Abeykoon
Hi Tishan,
Thanks for the findings.

+1 for option 2: remove the primary keys and just keep indexes.
I guess there will be a few issues if we ever want to do log-tail
submission, but we can find a solution for that later, as log-tail is not
going to be used as of now.

Cheers,
Ruwan






-- 

Ruwan Abeykoon
Associate Director/Architect,
WSO2, Inc. http://wso2.com
lean.enterprise.middleware.


[Dev] Clarification regarding primary keys of LOGANALYZER table

2016-09-01 Thread Tishan Dahanayakage
Hi all,

While doing performance tests on the log analyzer, I came across a DB
deadlock issue when bursting events.
The root cause of the issue is as follows.
- I have a set of distinct log entries (~10) which I loop over continuously
and publish with the current timestamp.
- Currently _class, _content, _level, and _eventTimeStamp are set as the
primary key of the LOGANALYZER table.
- So when sending the above log events at high frequency, events with the
same timestamp, and hence the same primary key, get sent to DAS.
- When DAS tries to persist these in batches, a deadlock happens (see the
sketch below).
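To show how easily the composite key collides, here is a minimal sketch
(the class name and message are made up; only the timestamp behaviour
matters):

    import java.util.Objects;

    public class KeyCollisionDemo {
        public static void main(String[] args) {
            // Two events published back to back almost always land in the
            // same millisecond...
            long t1 = System.currentTimeMillis();
            long t2 = System.currentTimeMillis();
            // ...so identical log lines produce an identical
            // (_class, _content, _level, _eventTimeStamp) tuple.
            String key1 = "org.example.Foo|Request failed|ERROR|" + t1;
            String key2 = "org.example.Foo|Request failed|ERROR|" + t2;
            System.out.println("Same primary key: " + Objects.equals(key1, key2));
        }
    }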

For the moment I got around this by appending the loop variable to the log
content, which makes sure that events with the same primary key are not
sent to DAS (see the snippet below). But I have the following questions for
clarification.
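The workaround in the test publisher is roughly this (names are
illustrative; the println is a stand-in for the actual publish call):

    public class UniqueContentWorkaround {
        public static void main(String[] args) {
            String baseContent = "Request failed";  // hypothetical log line
            for (int i = 0; i < 10; i++) {
                // Appending the loop variable makes _content unique per
                // event, so the composite primary key can no longer collide.
                String content = baseContent + " [" + i + "]";
                System.out.println(content);  // stand-in for publishing to DAS
            }
        }
    }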

1. By making the above attributes primary keys, we assume that a log with
the same class, content, and level cannot occur within the same
millisecond. Will this always hold, even for a multi-GW high-throughput
setup? Because if it does not, we will not only get this exception from
time to time but will also be replacing log entries that actually occurred
(see the sketch below).
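To make the replacement concern concrete: if DAS upserts on the primary
key, a second event with the same key silently overwrites the first, much
like putting into a map keyed on the composite tuple (the key format below
is made up):

    import java.util.HashMap;
    import java.util.Map;

    public class UpsertLossDemo {
        public static void main(String[] args) {
            // Persisting keyed on (_class, _content, _level, _eventTimeStamp)
            // behaves like a map put:
            Map<String, String> table = new HashMap<>();
            String key = "org.example.Foo|Request failed|ERROR|1472734080000";
            table.put(key, "first occurrence");
            table.put(key, "second occurrence"); // silently replaces the first
            System.out.println("Rows stored: " + table.size()); // 1, not 2
        }
    }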

2. Is there any special reason to define these primary keys? If not, shall
we remove the primary keys and just keep indexes?

Thanks
/Tishan

-- 
Tishan Dahanayakage
Senior Software Engineer
WSO2, Inc.
Mobile: +94 716481328

___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev