[
https://issues.apache.org/jira/browse/FALCON-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358688#comment-14358688
]
pavan kumar kolamuri commented on FALCON-1068:
----------------------------------------------
I have hit the same issue. Here are the steps I followed:
1) SubmitAndSchedule a process that consumes feed data with a partition.
2) After a few instances ran successfully, delete the process and
submitAndSchedule it again. I then got
org.apache.falcon.client.FalconCLIException: Bad Request;Could not commit
transaction due to exception during persistence
One possible cause is a failure while adding the process instance to the
metadata store: if the process instance consumes a feed with a specific
partition, it throws a FalconException:
{noformat}
java.lang.StringIndexOutOfBoundsException: String index out of range: 18
    at java.lang.String.substring(String.java:1907)
    at org.apache.falcon.entity.v0.SchemaHelper.formatDateUTCToISO8601(SchemaHelper.java:64)
    at org.apache.falcon.metadata.InstanceRelationshipGraphBuilder.getFileSystemFeedInstanceName(InstanceRelationshipGraphBuilder.java:307)
    at org.apache.falcon.metadata.InstanceRelationshipGraphBuilder.getFeedInstanceName(InstanceRelationshipGraphBuilder.java:279)
    at org.apache.falcon.metadata.InstanceRelationshipGraphBuilder.addFeedInstance(InstanceRelationshipGraphBuilder.java:243)
    at org.apache.falcon.metadata.InstanceRelationshipGraphBuilder.addInputFeedInstances(InstanceRelationshipGraphBuilder.java:171)
    at org.apache.falcon.metadata.MetadataMappingService.onProcessInstanceExecuted(MetadataMappingService.java:290)
    at org.apache.falcon.metadata.MetadataMappingService.onSuccess(MetadataMappingService.java:264)
    at org.apache.falcon.workflow.WorkflowJobEndNotificationService.notifySuccess(WorkflowJobEndNotificationService.java:101)
    at org.apache.falcon.messaging.JMSMessageConsumer.onSuccess(JMSMessageConsumer.java:138)
    at org.apache.falcon.messaging.JMSMessageConsumer.onMessage(JMSMessageConsumer.java:110)
    at org.apache.activemq.ActiveMQMessageConsumer.dispatch(ActiveMQMessageConsumer.java:1229)
    at org.apache.activemq.ActiveMQSessionExecutor.dispatch(ActiveMQSessionExecutor.java:134)
    at org.apache.activemq.ActiveMQSessionExecutor.iterate(ActiveMQSessionExecutor.java:205)
    at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:122)
    at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:43)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
{noformat}
The feed instance path in this case is
hdfs://localhost:8020/data/in/2013/11/15/00/57/{cpm}. Because of this
exception, the transaction that was opened while adding the process instance
is never committed.
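To illustrate the failure mode, here is a minimal standalone sketch. The method below is an illustrative stand-in, not the actual SchemaHelper code; I am assuming the formatter trims its date pattern to the length of the incoming date string, which is what the substring call in the trace suggests. When the partition suffix {cpm} is left on the instance date string, the string is longer than the pattern and the substring call blows up with a StringIndexOutOfBoundsException:
{code}
import java.text.DateFormat;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.TimeZone;

public class PartitionSuffixRepro {

    // Illustrative stand-in for SchemaHelper.formatDateUTCToISO8601 (assumption:
    // the pattern is trimmed to the length of the incoming date string before parsing).
    static String formatDateUTCToISO8601(String dateString, String pattern) {
        try {
            DateFormat df = new SimpleDateFormat(pattern.substring(0, dateString.length()));
            df.setTimeZone(TimeZone.getTimeZone("UTC"));
            DateFormat iso = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm'Z'");
            iso.setTimeZone(TimeZone.getTimeZone("UTC"));
            return iso.format(df.parse(dateString));
        } catch (ParseException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String pattern = "yyyy/MM/dd/HH/mm";

        // Partition stripped from the instance path: parses and formats fine.
        System.out.println(formatDateUTCToISO8601("2013/11/15/00/57", pattern));

        // Partition "{cpm}" left on the instance string: it is longer than the
        // 16-character pattern, so pattern.substring(0, dateString.length())
        // throws StringIndexOutOfBoundsException, as in the trace above
        // (the exact index depends on the string lengths involved).
        System.out.println(formatDateUTCToISO8601("2013/11/15/00/57/{cpm}", pattern));
    }
}
{code}
If that assumption holds, stripping the partition from the feed instance path before deriving the date string would avoid the exception.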
> When scheduling a process, Falcon throws "Bad Request;Could not commit
> transaction due to exception during persistence"
> -----------------------------------------------------------------------------------------------------------------------
>
> Key: FALCON-1068
> URL: https://issues.apache.org/jira/browse/FALCON-1068
> Project: Falcon
> Issue Type: Bug
> Reporter: Adam Kawa
> Attachments: falcon.application.log.FALCON-1068.rtf
>
>
> I have a simple script "manage-entity.sh process dss" that deletes, submits,
> and schedules a Falcon process.
> A couple of times per week, I get "FalconCLIException: Bad Request;Could
> not commit transaction due to exception during persistence" when submitting
> the process.
> The workaround is to restart the Falcon server...
> e.g.:
> {code}
> $ ./manage-entity.sh process dss my-process.xml
> falcon/default/my-process(process) removed successfully (KILLED in ENGINE)
> Stacktrace:
> org.apache.falcon.client.FalconCLIException: Bad Request;Could not commit
> transaction due to exception during persistence
>   at org.apache.falcon.client.FalconCLIException.fromReponse(FalconCLIException.java:44)
>   at org.apache.falcon.client.FalconClient.checkIfSuccessful(FalconClient.java:1162)
>   at org.apache.falcon.client.FalconClient.sendEntityRequestWithObject(FalconClient.java:684)
>   at org.apache.falcon.client.FalconClient.submitAndSchedule(FalconClient.java:347)
> at org.apache.falcon.cli.FalconCLI.entityCommand(FalconCLI.java:371)
> at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:182)
> at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:132)
> $ ./falcon-restart.sh
> Hadoop is installed, adding hadoop classpath to falcon classpath
> Hadoop is installed, adding hadoop classpath to falcon classpath
> falcon started using hadoop version: Hadoop 2.5.0
> $ ./manage-entity.sh process dss my-process.xml
> falcon/default/my-process(process) removed successfully (KILLED in ENGINE)
> schedule/default/my-process(process) scheduled successfully
> submit/falcon/default/Submit successful (process) my-process
> {code}