[ https://issues.apache.org/jira/browse/FALCON-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363162#comment-14363162 ]

pavan kumar kolamuri edited comment on FALCON-1068 at 3/16/15 1:03 PM:
-----------------------------------------------------------------------

Yes [~sowmyaramesh], what you said is correct. But the lock timeout is 5 
minutes: if a transaction holds the lock on an entity and then fails for some 
reason, no graph operations are allowed on that entity for the next 5 minutes.

We need to handle transaction failures with retries, and roll back if the 
transaction still fails after the retries.
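
For illustration, a minimal retry-with-rollback sketch around a Blueprints 
TransactionalGraph (the class name GraphTxnUtil, the MAX_RETRIES value, and 
the mutation callback are hypothetical, not Falcon's actual code):

{code}
import com.tinkerpop.blueprints.TransactionalGraph;

// Hypothetical sketch, not Falcon code: retry a graph mutation and roll
// back on every failure so the per-entity lock is released right away
// instead of being held until the 5-minute lock timeout expires.
public final class GraphTxnUtil {

    private static final int MAX_RETRIES = 3; // assumed value

    private GraphTxnUtil() {}

    public static void executeWithRetry(TransactionalGraph graph, Runnable mutation) {
        RuntimeException lastFailure = null;
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                mutation.run();   // the vertex/edge updates for the entity
                graph.commit();   // success: locks are released here
                return;
            } catch (RuntimeException e) {
                graph.rollback(); // release locks before the next attempt
                lastFailure = e;
            }
        }
        // Still failing after all retries: the transaction has already been
        // rolled back, so later operations on the entity are not blocked.
        throw lastFailure;
    }
}
{code}

The important part is the rollback() in the catch block: without it, a failed 
transaction keeps holding its entity locks until the timeout expires.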


was (Author: pavan kumar):
Yes [~sowmyaramesh], what you said is correct. But the lock timeout is 5 
minutes: if a transaction holds the lock on an entity and then fails for some 
reason, no graph operations are allowed on that entity for the next 5 minutes.

1) We need to handle transaction failures with retries, and roll back if the 
transaction still fails after the retries.

2) Also, can we do this: while adding a process instance to the GraphDB we 
perform a few operations; why can't we commit after every operation? That 
would help reduce lock contention. (See the sketch below.)
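
As an illustration of point 2, assuming a Blueprints-style TransactionalGraph 
(the steps and property names are made up for the example, not Falcon's real 
ones):

{code}
import com.tinkerpop.blueprints.TransactionalGraph;
import com.tinkerpop.blueprints.Vertex;

// Hypothetical illustration: add a process instance in several short
// transactions instead of one long one, committing after each step so
// each entity lock is held only briefly.
public static void addProcessInstance(TransactionalGraph graph, String instanceName) {
    // step 1: create the instance vertex, then commit immediately
    Vertex instance = graph.addVertex(null);
    instance.setProperty("name", instanceName);
    Object id = instance.getId();
    graph.commit(); // locks taken so far are released here

    // step 2: re-read the vertex in a fresh transaction and continue
    instance = graph.getVertex(id);
    instance.setProperty("type", "process-instance");
    graph.commit(); // each short transaction reduces lock contention
}
{code}

The trade-off is atomicity: if a later step fails, the earlier commits have 
already been persisted, so a partially written instance would need cleanup.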

> When scheduling a process, Falcon throws "Bad Request;Could not commit 
> transaction due to exception during persistence"
> -----------------------------------------------------------------------------------------------------------------------
>
>                 Key: FALCON-1068
>                 URL: https://issues.apache.org/jira/browse/FALCON-1068
>             Project: Falcon
>          Issue Type: Bug
>            Reporter: Adam Kawa
>         Attachments: falcon.application.log.FALCON-1068.rtf
>
>
> I have a simple script "manage-entity.sh process dss" that deletes, submits, 
> and schedules a Falcon process. 
> A couple of times per week, I get the "FalconCLIException: Bad Request;Could 
> not commit transaction due to exception during persistence" error when 
> submitting the process. 
> The workaround is to restart the Falcon server...
> e.g.:
> {code}
> $ ./manage-entity.sh process dss my-process.xml
> falcon/default/my-process(process) removed successfully (KILLED in ENGINE)
> Stacktrace:
> org.apache.falcon.client.FalconCLIException: Bad Request;Could not commit 
> transaction due to exception during persistence
>       at 
> org.apache.falcon.client.FalconCLIException.fromReponse(FalconCLIException.java:44)
>       at 
> org.apache.falcon.client.FalconClient.checkIfSuccessful(FalconClient.java:1162)
>       at 
> org.apache.falcon.client.FalconClient.sendEntityRequestWithObject(FalconClient.java:684)
>       at 
> org.apache.falcon.client.FalconClient.submitAndSchedule(FalconClient.java:347)
>       at org.apache.falcon.cli.FalconCLI.entityCommand(FalconCLI.java:371)
>       at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:182)
>       at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:132)
> $ ./falcon-restart.sh
> Hadoop is installed, adding hadoop classpath to falcon classpath
> Hadoop is installed, adding hadoop classpath to falcon classpath
> falcon started using hadoop version:  Hadoop 2.5.0
> $ ./manage-entity.sh process dss my-process.xml
> falcon/default/my-process(process) removed successfully (KILLED in ENGINE)
> schedule/default/my-process(process) scheduled successfully
> submit/falcon/default/Submit successful (process) my-process
> {code}



