Was there any failure in the operator, or a redeploy of the operator? Did
you see any killed containers before this error appeared on the
operator?

- On the first initialization of the operator, setup() correctly sets
  filePath to filePath + "/" + applicationId.

- If the operator is redeployed (because of an upstream operator failure
  or a failure of this operator), setup() is called again, which appends
  applicationId to the already-modified filePath, so applicationId ends
  up in the path twice.
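One way to make setup() safe to call repeatedly is to never mutate the configured path, and instead derive the full path from an immutable base every time. This is only a sketch of that idea, not the actual operator code; the class and method names here are hypothetical:

```java
// Hypothetical sketch: keep the configured base path immutable and rebuild
// the full path on every call, so a redeploy that re-runs setup() cannot
// append applicationId a second time.
public class PathBuilder {
    private final String basePath; // configured once, never mutated

    public PathBuilder(String basePath) {
        this.basePath = basePath;
    }

    // Idempotent: always starts from basePath, regardless of how many
    // times it is called for the same application.
    public String filePathFor(String applicationId) {
        return basePath + "/" + applicationId;
    }

    public static void main(String[] args) {
        PathBuilder b = new PathBuilder("hdfs://host/output");
        // Simulate setup() running twice after a redeploy:
        String first = b.filePathFor("application_1");
        String second = b.filePathFor("application_1");
        // Both calls produce the same path, not a doubly-appended one.
        System.out.println(first.equals(second));
    }
}
```

Inside the operator, the equivalent fix is to store the user-configured path in its own field and pass the derived value to setFilePath() in setup(), rather than reading back getFilePath() and appending to it.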

- Tushar.


On Thu, Nov 10, 2016 at 7:50 AM, Feldkamp, Brandon (CONT)
<[email protected]> wrote:
> I cut off part of the stack trace earlier. Here is the full trace:
>
>
>
> Abandoning deployment due to setup failure. java.lang.RuntimeException:
> java.io.FileNotFoundException: File does not exist:
> hdfs://.../output/application_1478724068939_0002/application_1478724068939_0002/output.txt.0.1478726546727.tmp
>     at com.datatorrent.lib.io.fs.AbstractFileOutputOperator.setup(AbstractFileOutputOperator.java:418)
>     at com.capitalone.cerberus.lazarus.operators.FileOutputOperator.setup(FileOutputOperator.java:58)
>     at com.capitalone.cerberus.lazarus.operators.FileOutputOperator.setup(FileOutputOperator.java:27)
>     at com.datatorrent.stram.engine.Node.setup(Node.java:187)
>     at com.datatorrent.stram.engine.StreamingContainer.setupNode(StreamingContainer.java:1309)
>     at com.datatorrent.stram.engine.StreamingContainer.access$100(StreamingContainer.java:130)
>     at com.datatorrent.stram.engine.StreamingContainer$2.run(StreamingContainer.java:1388)
> Caused by: java.io.FileNotFoundException: File does not exist:
> hdfs://.../output/application_1478724068939_0002/application_1478724068939_0002/output.txt.0.1478726546727.tmp
>     at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1219)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
>     at com.datatorrent.lib.io.fs.AbstractFileOutputOperator.setup(AbstractFileOutputOperator.java:411)
>     ... 6 more
>
>
>
>
>
> From: "Feldkamp, Brandon (CONT)" <[email protected]>
> Reply-To: "[email protected]" <[email protected]>
> Date: Wednesday, November 9, 2016 at 9:09 PM
> To: "[email protected]" <[email protected]>
> Subject: error with AbstractFileOutputOperator rolling files from tmp
>
>
>
> Hello,
>
>
>
> I’m seeing this error:
>
>
>
> hdfs://.../output/application_1478724068939_0002/application_1478724068939_0002/output.txt.0.1478726546727.tmp
>     at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1219)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
>     at com.datatorrent.lib.io.fs.AbstractFileOutputOperator.setup(AbstractFileOutputOperator.java:411)
>     ... 6 more
>
>
>
> For some reason “application_1478724068939_0002” is being added to the path
> twice. Any idea why this could be happening?
>
>
>
> This is how we set up the path in our FileOutputOperator, which extends
> AbstractFileOutputOperator:
>
>
>
> @Override
> public void setup(Context.OperatorContext context) {
>   …
>   // create directories based on application_id
>   String applicationId = context.getValue(Context.DAGContext.APPLICATION_ID);
>   setFilePath(getFilePath() + "/" + applicationId);
>   …
>   super.setup(context);
> }
>
>
>
>
>
>
>
> ________________________________
>
> The information contained in this e-mail is confidential and/or proprietary
> to Capital One and/or its affiliates and may only be used solely in
> performance of work or services for Capital One. The information transmitted
> herewith is intended only for use by the individual or entity to which it is
> addressed. If the reader of this message is not the intended recipient, you
> are hereby notified that any review, retransmission, dissemination,
> distribution, copying or other use of, or taking of any action in reliance
> upon this information is strictly prohibited. If you have received this
> communication in error, please contact the sender and delete the material
> from your computer.
>
