Hello, I'm running into the following issue when trying to write to IGFS
backed by s3a storage on Ignite 1.6.0:
hadoop fs -put file igfs://igfs@<node>/folder/
Throws:
2016-06-02 00:58:05,137 ERROR [igfs-igfs-ipc-#97%null%] igfs.IgfsMetaManager (Log4JLogger.java:error(495)) - File create in DUAL mode failed [path=/folder/file._COPYING_, simpleCreate=false, props={permission=0644, locWrite=false}, overwrite=true, bufferSize=131072, replication=3, blockSize=33554432]
class org.apache.ignite.IgniteCheckedException: Failed to open output stream to the file created in the secondary file system because it no longer exists: /folder/file._COPYING_
    at org.apache.ignite.internal.processors.igfs.IgfsMetaManager.fsException(IgfsMetaManager.java:2897)
    at org.apache.ignite.internal.processors.igfs.IgfsMetaManager.onSuccessCreate(IgfsMetaManager.java:1859)
    at org.apache.ignite.internal.processors.igfs.IgfsMetaManager$4.onSuccess(IgfsMetaManager.java:1949)
    at org.apache.ignite.internal.processors.igfs.IgfsMetaManager$4.onSuccess(IgfsMetaManager.java:1943)
    at org.apache.ignite.internal.processors.igfs.IgfsMetaManager.synchronizeAndExecute(IgfsMetaManager.java:2793)
    at org.apache.ignite.internal.processors.igfs.IgfsMetaManager.synchronizeAndExecute(IgfsMetaManager.java:2608)
    at org.apache.ignite.internal.processors.igfs.IgfsMetaManager.createDual(IgfsMetaManager.java:1967)
    at org.apache.ignite.internal.processors.igfs.IgfsImpl$15.call(IgfsImpl.java:1007)
    at org.apache.ignite.internal.processors.igfs.IgfsImpl$15.call(IgfsImpl.java:992)
    at org.apache.ignite.internal.processors.igfs.IgfsImpl.safeOp(IgfsImpl.java:1942)
    at org.apache.ignite.internal.processors.igfs.IgfsImpl.create0(IgfsImpl.java:992)
    at org.apache.ignite.internal.processors.igfs.IgfsImpl.create(IgfsImpl.java:965)
    at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$2.apply(IgfsIpcHandler.java:399)
    at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$2.apply(IgfsIpcHandler.java:319)
    at org.apache.ignite.igfs.IgfsUserContext.doAs(IgfsUserContext.java:54)
    at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.processPathControlRequest(IgfsIpcHandler.java:319)
    at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.execute(IgfsIpcHandler.java:240)
    at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.access$000(IgfsIpcHandler.java:56)
    at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$1.run(IgfsIpcHandler.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.igfs.IgfsException: Failed to open output stream to the file created in the secondary file system because it no longer exists: /folder/file._COPYING_
    ... 22 more
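For completeness, the Hadoop client side maps the igfs:// scheme to Ignite's
file system implementation in its core-site.xml using the standard Hadoop
Accelerator properties, i.e. something like:

    <configuration>
        <property>
            <name>fs.igfs.impl</name>
            <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
        </property>
        <property>
            <name>fs.AbstractFileSystem.igfs.impl</name>
            <value>org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem</value>
        </property>
    </configuration>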
On the Ignite side, my default-config.xml has the following block:
<property name="fileSystemConfiguration">
    <list>
        <bean class="org.apache.ignite.configuration.FileSystemConfiguration">
            <!-- IGFS name you will use to access IGFS through the Hadoop API. -->
            <property name="name" value="igfs"/>

            <!-- Caches with these names must be configured. -->
            <property name="metaCacheName" value="igfs-meta"/>
            <property name="dataCacheName" value="igfs-data"/>

            <!-- Configure TCP endpoint for communication with the file system instance. -->
            <property name="ipcEndpointConfiguration">
                <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
                    <property name="type" value="TCP"/>
                    <property name="host" value="0.0.0.0"/>
                    <property name="port" value="10500"/>
                </bean>
            </property>

            <property name="secondaryFileSystem">
                <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                    <property name="fileSystemFactory">
                        <bean class="org.apache.ignite.hadoop.fs.CachingHadoopFileSystemFactory">
                            <property name="uri" value="s3a://my-bucket"/>
                            <property name="configPaths">
                                <list>
                                    <value>/opt/ignite/core-site.xml</value>
                                </list>
                            </property>
                        </bean>
                    </property>
                </bean>
            </property>
        </bean>
    </list>
</property>
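The /opt/ignite/core-site.xml referenced by configPaths is essentially the
stock Hadoop s3a setup, something like the following (credentials elided):

    <configuration>
        <property>
            <name>fs.s3a.impl</name>
            <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
        </property>
        <property>
            <name>fs.s3a.access.key</name>
            <value>...</value>
        </property>
        <property>
            <name>fs.s3a.secret.key</name>
            <value>...</value>
        </property>
    </configuration>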
Interestingly, creating folders works fine and they show up in the underlying
s3a bucket - it's only writing files that fails (a standalone test I put
together to isolate s3a from IGFS is sketched below). Any debugging help
would be much appreciated - thanks!
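Here is that standalone test: it drives the secondary file system directly
through the Hadoop API, loading the same core-site.xml the factory points at
and bypassing IGFS entirely. The class name, test path, and the exists()
probes are just illustrative; the interesting question is whether the object
is visible before close(), since on s3a the upload typically only happens
when the stream is closed - which might be what Ignite's post-create check is
tripping on:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3aCreateCheck {
        public static void main(String[] args) throws Exception {
            // Load the same config the secondary file system factory uses.
            Configuration conf = new Configuration();
            conf.addResource(new Path("/opt/ignite/core-site.xml"));

            // Open the bucket exactly as the factory would, bypassing IGFS.
            FileSystem fs = FileSystem.get(new URI("s3a://my-bucket"), conf);

            Path p = new Path("/folder/test-file");
            FSDataOutputStream out = fs.create(p, true);

            // On s3a the object is typically only uploaded on close(), so it
            // may not be visible yet at this point.
            System.out.println("exists before close: " + fs.exists(p));

            out.write("hello".getBytes("UTF-8"));
            out.close();

            System.out.println("exists after close: " + fs.exists(p));
        }
    }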