[ 
https://issues.apache.org/jira/browse/HDDS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek reassigned HDDS-3272:
---------------------------------

    Assignee: Marton Elek

> Smoke Test: hdfs commands failing on hadoop 27 docker-compose
> -------------------------------------------------------------
>
>                 Key: HDDS-3272
>                 URL: https://issues.apache.org/jira/browse/HDDS-3272
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>    Affects Versions: 0.5.0
>            Reporter: Dinesh Chitlangia
>            Assignee: Marton Elek
>            Priority: Blocker
>
> Discovered by [~bharat] when testing 0.5.0-beta RC2.
>  
>  
> Issue when running hdfs commands on the hadoop 27 docker-compose environment. I see the same test failing when running the smoke test.
> $ docker exec -it c7fe17804044 bash
> bash-4.4$ hdfs dfs -put /opt/hadoop/NOTICE.txt o3fs://bucket1.vol1/kk
> 2020-03-22 04:40:14 WARN  NativeCodeLoader:60 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2020-03-22 04:40:15 INFO  MetricsConfig:118 - Loaded properties from hadoop-metrics2.properties
> 2020-03-22 04:40:16 INFO  MetricsSystemImpl:374 - Scheduled Metric snapshot period at 10 second(s).
> 2020-03-22 04:40:16 INFO  MetricsSystemImpl:191 - XceiverClientMetrics metrics system started
> -put: Fatal internal error
> java.lang.NullPointerException: client is null
> at java.util.Objects.requireNonNull(Objects.java:228)
> at org.apache.hadoop.hdds.scm.XceiverClientRatis.getClient(XceiverClientRatis.java:201)
> at org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequestAsync(XceiverClientRatis.java:227)
> at org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommandAsync(XceiverClientRatis.java:305)
> at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:315)
> at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:599)
> at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:452)
> at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFlush(BlockOutputStream.java:463)
> at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:486)
> at org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:144)
> at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleStreamAction(KeyOutputStream.java:481)
> at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:455)
> at org.apache.hadoop.ozone.client.io.KeyOutputStream.close(KeyOutputStream.java:508)
> at org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:56)
> at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:62)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:120)
> at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> The same command works fine when using ozone fs.
> $ docker exec -it fe5d39cf6eed bash
> bash-4.2$ ozone fs -put /opt/hadoop/NOTICE.txt o3fs://bucket1.vol1/kk
> 2020-03-22 04:41:10,999 [main] INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
> 2020-03-22 04:41:11,123 [main] INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
> 2020-03-22 04:41:11,127 [main] INFO impl.MetricsSystemImpl: XceiverClientMetrics metrics system started
> bash-4.2$ ozone fs -ls o3fs://bucket1.vol1/
> Found 1 items
> -rw-rw-rw-   3 hadoop hadoop      17540 2020-03-22 04:41 o3fs://bucket1.vol1/kk
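
For context on the top frame of the trace: `XceiverClientRatis.getClient` uses `Objects.requireNonNull` to fail fast when its cached Ratis client reference is null, which is exactly the "client is null" NPE above. A minimal standalone sketch of that failure mode (hypothetical names, not the actual Ozone source):

```java
import java.util.Objects;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the pattern behind the NPE: a lazily-connected client held in
// an AtomicReference, with a fail-fast null check on access. If connect()
// is never called (or fails silently), getClient() throws
// NullPointerException("client is null"), as seen in the stack trace.
public class ClientNullSketch {
    private final AtomicReference<Object> client = new AtomicReference<>();

    void connect() {
        // In the real code this would build a RaftClient for the pipeline.
        client.set(new Object());
    }

    Object getClient() {
        // Same fail-fast pattern as XceiverClientRatis.getClient.
        return Objects.requireNonNull(client.get(), "client is null");
    }

    public static void main(String[] args) {
        ClientNullSketch c = new ClientNullSketch();
        try {
            c.getClient(); // connect() never called -> NPE
        } catch (NullPointerException e) {
            System.out.println(e.getMessage()); // prints "client is null"
        }
    }
}
```

This suggests the hdfs client path reaches the write without the Ratis client ever being initialized, while the ozone fs path initializes it correctly.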



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
