[ https://issues.apache.org/jira/browse/ATLAS-503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311726#comment-15311726 ]

Hemanth Yamijala commented on ATLAS-503:
----------------------------------------

I tried recreating the test multiple times on my laptop, even without setting 
{{atlas.graph.storage.cache.db-cache-time}}, and it did not produce the cache 
eviction message. That just shows it is environment-dependent, I guess. For now, 
since my theory of *why* the problem occurs is consistent with the proposed 
configuration change, I am requesting [~ssainath] to test this in both 
environments where she was able to reproduce the problem and confirm that the 
configuration works as expected. I have documented the details of these new 
configuration items in the new patch.
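
For reference, a rough sketch of what such cache settings could look like in atlas-application.properties. Apart from {{atlas.graph.storage.cache.db-cache-time}}, which is mentioned above, the property names and all values here are illustrative assumptions and not necessarily what the patch contains:

{noformat}
# Illustrative sketch only - values are assumptions, not taken from the patch.
# Storage-backend cache settings are passed to the graph layer via the
# atlas.graph. prefix.

# Disabling the database-level cache makes concurrent writers see each
# other's updates instead of stale cached entries.
atlas.graph.storage.cache.db-cache=false

# Alternatively, if the cache stays enabled, bound how long entries may be
# held before expiring (milliseconds).
atlas.graph.storage.cache.db-cache-time=120000
{noformat}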

> Lock exceptions occurring due to concurrent updates to backend stores
> ---------------------------------------------------------------------
>
>                 Key: ATLAS-503
>                 URL: https://issues.apache.org/jira/browse/ATLAS-503
>             Project: Atlas
>          Issue Type: Bug
>            Reporter: Sharmadha Sainath
>            Assignee: Hemanth Yamijala
>            Priority: Critical
>             Fix For: 0.7-incubating
>
>         Attachments: ATLAS-503-1.patch, ATLAS-503.patch, hiv2atlaslogs.rtf
>
>
> On running a file containing 100 table creation commands using beeline -f, 
> all hive tables are created. But only 81 of them are imported into Atlas 
> (HiveHook enabled) when queries like "hive_table" are searched frequently 
> while the import of the tables is still in progress.
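
A rough sketch of the reproduction scenario described above; the JDBC URL, HQL file name, Atlas host, credentials and the discovery search endpoint are placeholders/assumptions, not taken from the report:

{noformat}
# Sketch only - URLs, file names and endpoint are assumptions.

# Create 100 Hive tables in one go, with the Atlas Hive hook enabled.
beeline -u jdbc:hive2://localhost:10000 -f create_100_tables.hql

# While the hook messages are still being processed, repeatedly run a
# DSL search such as "hive_table" against Atlas to generate read traffic.
while true; do
  curl -u admin:admin "http://localhost:21000/api/atlas/discovery/search/dsl?query=hive_table"
  sleep 1
done
{noformat}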



