[ https://issues.apache.org/jira/browse/FALCON-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355697#comment-14355697 ]

Sowmya Ramesh commented on FALCON-1068:
---------------------------------------

[~kawaa]: Can you confirm whether a retry is attempted or not? By default, retries 
are configured for both lock acquisition and write attempts. It would be good to 
find the root cause, but as I said, from a code walkthrough onDelete performs no 
graph operation, so I am not sure what is holding the lock. If no retry is 
attempted, we should add the retry logic.

{code}
# defaults
storage.lock-retries=3
storage.write-attempts=5
{code}
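
If the retry budget turns out to be too small, one workaround would be to raise these 
values in conf/startup.properties. A minimal sketch, assuming the falcon.graph.* 
properties are forwarded unchanged to the underlying Titan/BerkeleyDB graph store 
(property names taken from the defaults quoted above; adjust to the deployed Titan 
version):

{code}
# Hypothetical override in conf/startup.properties -- assumes Falcon passes
# falcon.graph.* settings through to the graph store configuration.
*.falcon.graph.storage.lock-retries=10
*.falcon.graph.storage.write-attempts=10
{code}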

> When scheduling a process, Falcon throws "Bad Request;Could not commit transaction due to exception during persistence"
> -----------------------------------------------------------------------------------------------------------------------
>
>                 Key: FALCON-1068
>                 URL: https://issues.apache.org/jira/browse/FALCON-1068
>             Project: Falcon
>          Issue Type: Bug
>            Reporter: Adam Kawa
>         Attachments: falcon.application.log.FALCON-1068.rtf
>
>
> I have a simple script, "manage-entity.sh process dss", that deletes, submits, 
> and schedules a Falcon process. 
> A couple of times per week I get the "FalconCLIException: Bad Request;Could not 
> commit transaction due to exception during persistence" error when submitting 
> the process. 
> The workaround is to restart the Falcon server,
> e.g.:
> {code}
> $ ./manage-entity.sh process dss my-process.xml
> falcon/default/my-process(process) removed successfully (KILLED in ENGINE)
> Stacktrace:
> org.apache.falcon.client.FalconCLIException: Bad Request;Could not commit transaction due to exception during persistence
>       at org.apache.falcon.client.FalconCLIException.fromReponse(FalconCLIException.java:44)
>       at org.apache.falcon.client.FalconClient.checkIfSuccessful(FalconClient.java:1162)
>       at org.apache.falcon.client.FalconClient.sendEntityRequestWithObject(FalconClient.java:684)
>       at org.apache.falcon.client.FalconClient.submitAndSchedule(FalconClient.java:347)
>       at org.apache.falcon.cli.FalconCLI.entityCommand(FalconCLI.java:371)
>       at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:182)
>       at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:132)
> $ ./falcon-restart.sh
> Hadoop is installed, adding hadoop classpath to falcon classpath
> Hadoop is installed, adding hadoop classpath to falcon classpath
> falcon started using hadoop version:  Hadoop 2.5.0
> $ ./manage-entity.sh process dss my-process.xml
> falcon/default/my-process(process) removed successfully (KILLED in ENGINE)
> schedule/default/my-process(process) scheduled successfully
> submit/falcon/default/Submit successful (process) my-process
> {code}
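> For context, manage-entity.sh is just a thin wrapper around the Falcon CLI; a 
> minimal sketch (the real script is not attached, so the argument handling below 
> is illustrative only):
> {code}
> #!/bin/bash
> # Hypothetical wrapper: delete the entity if present, then submit and schedule it.
> TYPE=$1        # e.g. process
> ENV=$2         # e.g. dss (environment selector, assumed)
> XML=$3         # e.g. my-process.xml
> NAME=$(basename "$XML" .xml)
> falcon entity -type "$TYPE" -name "$NAME" -delete
> falcon entity -type "$TYPE" -file "$XML" -submitAndSchedule
> {code}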



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
