[ https://issues.apache.org/jira/browse/HDDS-321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606885#comment-16606885 ]
Shashikant Banerjee commented on HDDS-321:
------------------------------------------
I tried the same command and it seems to work well.
{code:java}
[root@ctr-e138-1518143905142-459606-01-000002 ozone-0.2.1-SNAPSHOT]# ./bin/ozone fs -put TEST_DIR1/ /
2018-09-07 09:10:34,755 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
put: `TEST_DIR1/': No such file or directory
[root@ctr-e138-1518143905142-459606-01-000002 ozone-0.2.1-SNAPSHOT]# ./bin/ozone fs -put /tmp/TEST_DIR1/ /
2018-09-07 09:10:45,209 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-09-07 09:10:46,629 INFO conf.ConfUtils: raft.rpc.type = GRPC (default)
2018-09-07 09:10:46,641 INFO conf.ConfUtils: raft.grpc.message.size.max = 33554432 (custom)
2018-09-07 09:10:46,653 INFO conf.ConfUtils: raft.client.rpc.retryInterval = 300 ms (default)
2018-09-07 09:10:46,657 INFO conf.ConfUtils: raft.client.async.outstanding-requests.max = 100 (default)
2018-09-07 09:10:46,658 INFO conf.ConfUtils: raft.client.async.scheduler-threads = 3 (default)
2018-09-07 09:10:46,856 INFO conf.ConfUtils: raft.grpc.flow.control.window = 1MB (=1048576) (default)
2018-09-07 09:10:46,856 INFO conf.ConfUtils: raft.grpc.message.size.max = 33554432 (custom)
2018-09-07 09:10:47,187 INFO conf.ConfUtils: raft.client.rpc.request.timeout = 3000 ms (default)
2018-09-07 09:10:47,998 INFO conf.ConfUtils: raft.grpc.flow.control.window = 1MB (=1048576) (default)
2018-09-07 09:10:47,998 INFO conf.ConfUtils: raft.grpc.message.size.max = 33554432 (custom)
2018-09-07 09:10:47,999 INFO conf.ConfUtils: raft.client.rpc.request.timeout = 3000 ms (default)
[root@ctr-e138-1518143905142-459606-01-000002 ozone-0.2.1-SNAPSHOT]# ./bin/ozone fs -lsr /TEST_DIR1
2018-09-07 09:13:26,584 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
lsr: DEPRECATED: Please use 'ls -R' instead.
drwxrwxrwx - 0 2018-09-07 09:10 /TEST_DIR1/SUB_DIR1
-rw-rw-rw- 1 4659 1970-02-13 18:40 /TEST_DIR1/passwd
{code}
> ozoneFS put/copyFromLocal command does not work for a directory when the
> directory contains file(s) as well as subdirectories
> -----------------------------------------------------------------------------------------------------------------------------
>
> Key: HDDS-321
> URL: https://issues.apache.org/jira/browse/HDDS-321
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Nilotpal Nandi
> Priority: Blocker
> Fix For: 0.2.1
>
>
> Steps taken:
> ---------------------
> # Created a local directory 'TEST_DIR1' containing a subdirectory "SUB_DIR1"
> and a file "test_file1".
> # Ran "./ozone fs -put TEST_DIR1/ /". The command kept running, repeatedly
> throwing errors on the console.
> Stack trace of the error thrown on the console:
> {noformat}
> 2018-08-02 12:55:46 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB (=1048576) (default)
> 2018-08-02 12:55:46 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 (custom)
> 2018-08-02 12:55:46 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 3000 ms (default)
> Aug 02, 2018 12:55:46 PM org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: https://ozone_datanode_3.ozone_default:9858
> at java.net.URI$Parser.fail(URI.java:2848)
> at java.net.URI$Parser.parseHostname(URI.java:3387)
> at java.net.URI$Parser.parseServer(URI.java:3236)
> at java.net.URI$Parser.parseAuthority(URI.java:3155)
> at java.net.URI$Parser.parseHierarchical(URI.java:3097)
> at java.net.URI$Parser.parse(URI.java:3053)
> at java.net.URI.<init>(URI.java:673)
> at org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
> at org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
> at org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
> at org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
> at org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
> at org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
> at org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
> at org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
> at org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$4.get(ManagedChannelImpl.java:403)
> at org.apache.ratis.shaded.io.grpc.internal.ClientCallImpl.start(ClientCallImpl.java:238)
> at org.apache.ratis.shaded.io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1.start(CensusTracingModule.java:386)
> at org.apache.ratis.shaded.io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1.start(CensusStatsModule.java:679)
> at org.apache.ratis.shaded.io.grpc.stub.ClientCalls.startCall(ClientCalls.java:293)
> at org.apache.ratis.shaded.io.grpc.stub.ClientCalls.asyncStreamingRequestCall(ClientCalls.java:283)
> at org.apache.ratis.shaded.io.grpc.stub.ClientCalls.asyncBidiStreamingCall(ClientCalls.java:92)
> at org.apache.ratis.shaded.proto.grpc.RaftClientProtocolServiceGrpc$RaftClientProtocolServiceStub.append(RaftClientProtocolServiceGrpc.java:208)
> at org.apache.ratis.grpc.client.RaftClientProtocolClient.appendWithTimeout(RaftClientProtocolClient.java:139)
> at org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:109)
> at org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:88)
> at org.apache.ratis.client.impl.RaftClientImpl.sendRequest(RaftClientImpl.java:302)
> at org.apache.ratis.client.impl.RaftClientImpl.sendRequestWithRetry(RaftClientImpl.java:256)
> at org.apache.ratis.client.impl.RaftClientImpl.send(RaftClientImpl.java:192)
> at org.apache.ratis.client.impl.RaftClientImpl.send(RaftClientImpl.java:173)
> at org.apache.ratis.client.RaftClient.send(RaftClient.java:80)
> at org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequest(XceiverClientRatis.java:218)
> at org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommand(XceiverClientRatis.java:235)
> at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunk(ContainerProtocolCalls.java:219)
> at org.apache.hadoop.hdds.scm.storage.ChunkOutputStream.writeChunkToContainer(ChunkOutputStream.java:220)
> at org.apache.hadoop.hdds.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:150)
> at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:486)
> at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:326)
> at org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:57)
> at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:70)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:129)
> at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:485)
> at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:407)
> at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:342)
> at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:277)
> at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:262)
> at org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:352)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:441)
> at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:305)
> at org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:369)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
> at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:257)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
> at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:228)
> at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:295)
> at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:390)
> {noformat}
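The error visible in the trace is raised while the shaded gRPC proxy detector builds a URI for the datanode address: java.net.URI follows the RFC 2396 hostname grammar (letters, digits, '-' and '.'), so "ozone_datanode_3.ozone_default" is rejected at the first underscore, which sits at index 13 of the full URI string. Below is a minimal sketch (not taken from the report) that reproduces the same message; it assumes the URI is built via the strict multi-argument constructor, which requires a server-based authority and therefore validates the hostname, whereas the single-string constructor would fall back to a registry-based authority for this input and not throw.

{code:java}
import java.net.URI;
import java.net.URISyntaxException;

// Illustration only: '_' is not a legal character in a server-based hostname,
// so construction fails at index 13 of the assembled URI string
// "https://ozone_datanode_3.ozone_default:9858" (the first underscore).
public class UnderscoreHostnameDemo {
  public static void main(String[] args) {
    try {
      // URI(scheme, userInfo, host, port, path, query, fragment) requires a
      // server-based authority, so the hostname is validated strictly.
      new URI("https", null, "ozone_datanode_3.ozone_default", 9858, null, null, null);
    } catch (URISyntaxException e) {
      // Prints:
      // Illegal character in hostname at index 13: https://ozone_datanode_3.ozone_default:9858
      System.out.println(e.getMessage());
    }
  }
}
{code}

Note that the WARNING itself says the client proceeds without a proxy, so this exception is caught and logged rather than fatal; it is the error that keeps being printed to the console while the put continues to run.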