[
https://issues.apache.org/jira/browse/HDFS-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615538#comment-13615538
]
Colin Patrick McCabe commented on HDFS-4538:
--------------------------------------------
A new client connecting to an old server with
{{dfs.client.use.legacy.blockreader.local}} set:
{code}
10:29:25,873 DEBUG Client:598 - Connecting to localhost/127.0.0.1:6000
10:29:25,893 DEBUG Client:840 - IPC Client (1981738037) connection to
localhost/127.0.0.1:6000 from cmccabe: starting, having connections 1
10:29:25,901 DEBUG Client:901 - IPC Client (1981738037) connection to
localhost/127.0.0.1:6000 from cmccabe sending #0
10:29:25,906 DEBUG Client:956 - IPC Client (1981738037) connection to
localhost/127.0.0.1:6000 from cmccabe got value #0
10:29:25,906 DEBUG ProtobufRpcEngine:217 - Call: getFileInfo took 40ms
10:29:25,939 DEBUG Client:901 - IPC Client (1981738037) connection to
localhost/127.0.0.1:6000 from cmccabe sending #1
10:29:25,941 DEBUG Client:956 - IPC Client (1981738037) connection to
localhost/127.0.0.1:6000 from cmccabe got value #1
10:29:25,941 DEBUG ProtobufRpcEngine:217 - Call: getBlockLocations took 2ms
10:29:25,953 DEBUG DFSClient:173 - newInfo = LocatedBlocks{
fileLength=228
underConstruction=false
blocks=[LocatedBlock{BP-242405156-127.0.0.1-1363629671276:blk_8016763701977423490_1003;
getBlockSize()=228; corrupt=false; offset=0; locs=[127.0.0.1:6100]}]
lastLocatedBlock=LocatedBlock{BP-242405156-127.0.0.1-1363629671276:blk_8016763701977423490_1003;
getBlockSize()=228; corrupt=false; offset=0; locs=[127.0.0.1:6100]}
isLastBlockComplete=true}
10:29:25,954 DEBUG DFSClient:744 - Connecting to datanode 127.0.0.1:6100
10:29:25,963 DEBUG ClientDatanodeProtocolTranslatorPB:110 - Connecting to
datanode 127.0.0.1:6102 addr=/127.0.0.1:6102
10:29:25,978 DEBUG Client:300 - The ping interval is 60000 ms.
10:29:25,979 DEBUG Client:324 - RPC Server's Kerberos principal name for
protocol=org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolPB is null
10:29:25,979 DEBUG Client:341 - Use SIMPLE authentication for protocol
ClientDatanodeProtocolPB
10:29:25,979 DEBUG Client:598 - Connecting to /127.0.0.1:6102
10:29:25,980 DEBUG Client:840 - IPC Client (1981738037) connection to
/127.0.0.1:6102 from cmccabe: starting, having connections 2
10:29:25,980 DEBUG Client:901 - IPC Client (1981738037) connection to
/127.0.0.1:6102 from cmccabe sending #2
10:29:25,982 DEBUG Client:956 - IPC Client (1981738037) connection to
/127.0.0.1:6102 from cmccabe got value #2
10:29:25,982 DEBUG ProtobufRpcEngine:217 - Call: getBlockLocalPathInfo took 5ms
10:29:25,984 DEBUG DFSClient:263 - Cached location of block
BP-242405156-127.0.0.1-1363629671276:blk_8016763701977423490_1003 as
org.apache.hadoop.hdfs.protocol.BlockLocalPathInfo@79f03d7
10:29:25,984 DEBUG DFSClient:193 - New BlockReaderLocalLegacy for file
/r/data1/current/BP-242405156-127.0.0.1-1363629671276/current/finalized/blk_8016763701977423490
of size 228 startOffset 0 length 228 short circuit checksum true
foo bar baz
{code}
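For anyone reproducing the first trace: the legacy reader is selected via the client-side property named above. A sketch of the relevant {{hdfs-site.xml}} entry follows; the property name comes from this ticket, but setting it to {{true}} this way is assumed usage, not an excerpt from the patch.
{code}
<!-- Client-side hdfs-site.xml (sketch): opt in to BlockReaderLocalLegacy.
     The property name is the one referenced in this ticket; the value
     shown is assumed usage, not text taken from the patch itself. -->
<property>
  <name>dfs.client.use.legacy.blockreader.local</name>
  <value>true</value>
</property>
{code}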
A new client connecting to an old server without
{{dfs.client.use.legacy.blockreader.local}} set:
{code}
log4j:ERROR Could not find value for key log4j.appender.TRACE
log4j:ERROR Could not instantiate appender named "TRACE".
10:40:04,804 DEBUG MutableMetricsFactory:42 - field
org.apache.hadoop.metrics2.lib.MutableRate
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess
with annotation
@org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=,
value=[Rate of successful kerberos logins and latency (milliseconds)],
always=false, type=DEFAULT, sampleName=Ops)
10:40:04,807 DEBUG MutableMetricsFactory:42 - field
org.apache.hadoop.metrics2.lib.MutableRate
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure
with annotation
@org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=,
value=[Rate of failed kerberos logins and latency (milliseconds)],
always=false, type=DEFAULT, sampleName=Ops)
10:40:04,808 DEBUG MetricsSystemImpl:220 - UgiMetrics, User and group related
metrics
10:40:05,007 DEBUG Groups:180 - Creating new Groups object
10:40:05,010 DEBUG NativeCodeLoader:46 - Trying to load the custom-built
native-hadoop library...
10:40:05,010 DEBUG NativeCodeLoader:50 - Loaded the native-hadoop library
10:40:05,011 DEBUG JniBasedUnixGroupsMapping:51 - Using
JniBasedUnixGroupsMapping for Group resolution
10:40:05,012 DEBUG JniBasedUnixGroupsMappingWithFallback:44 - Group mapping
impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
10:40:05,012 DEBUG Groups:66 - Group mapping
impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback;
cacheTimeout=300000
10:40:05,023 DEBUG UserGroupInformation:177 - hadoop login
10:40:05,024 DEBUG UserGroupInformation:126 - hadoop login commit
10:40:05,028 DEBUG UserGroupInformation:156 - using local user:UnixPrincipal:
cmccabe
10:40:05,029 DEBUG UserGroupInformation:708 - UGI loginUser:cmccabe
(auth:SIMPLE)
10:40:05,169 DEBUG RetryUtils:74 - multipleLinearRandomRetry = null
10:40:05,187 DEBUG Server:245 - rpcKind=RPC_PROTOCOL_BUFFER,
rpcRequestWrapperClass=class
org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper,
rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@7971f189
10:40:05,349 DEBUG DomainSocketFactory:68 - The short-circuit local reads
feature is enabled.
10:40:05,370 DEBUG Client:300 - The ping interval is 60000 ms.
10:40:05,372 DEBUG Client:324 - RPC Server's Kerberos principal name for
protocol=org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB is null
10:40:05,372 DEBUG Client:341 - Use SIMPLE authentication for protocol
ClientNamenodeProtocolPB
10:40:05,372 DEBUG Client:598 - Connecting to localhost/127.0.0.1:6000
10:40:05,393 DEBUG Client:840 - IPC Client (1747306536) connection to
localhost/127.0.0.1:6000 from cmccabe: starting, having connections 1
10:40:05,400 DEBUG Client:901 - IPC Client (1747306536) connection to
localhost/127.0.0.1:6000 from cmccabe sending #0
10:40:05,404 DEBUG Client:956 - IPC Client (1747306536) connection to
localhost/127.0.0.1:6000 from cmccabe got value #0
10:40:05,405 DEBUG ProtobufRpcEngine:217 - Call: getFileInfo took 40ms
10:40:05,436 DEBUG Client:901 - IPC Client (1747306536) connection to
localhost/127.0.0.1:6000 from cmccabe sending #1
10:40:05,438 DEBUG Client:956 - IPC Client (1747306536) connection to
localhost/127.0.0.1:6000 from cmccabe got value #1
10:40:05,438 DEBUG ProtobufRpcEngine:217 - Call: getBlockLocations took 2ms
10:40:05,455 DEBUG DFSClient:173 - newInfo = LocatedBlocks{
fileLength=228
underConstruction=false
blocks=[LocatedBlock{BP-242405156-127.0.0.1-1363629671276:blk_8016763701977423490_1003;
getBlockSize()=228; corrupt=false; offset=0; locs=[127.0.0.1:6100]}]
lastLocatedBlock=LocatedBlock{BP-242405156-127.0.0.1-1363629671276:blk_8016763701977423490_1003;
getBlockSize()=228; corrupt=false; offset=0; locs=[127.0.0.1:6100]}
isLastBlockComplete=true}
10:40:05,456 DEBUG DFSClient:744 - Connecting to datanode 127.0.0.1:6100
10:40:05,459 WARN DomainSocketFactory:121 - error creating DomainSocket
java.net.ConnectException: connect(2) error: Connection refused when trying to connect to '/r/socks/dn_sock.6100'
        at org.apache.hadoop.net.unix.DomainSocket.connect0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocket.connect(DomainSocket.java:316)
        at org.apache.hadoop.hdfs.DomainSocketFactory.create(DomainSocketFactory.java:117)
        at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:964)
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:471)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:662)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:706)
        at java.io.DataInputStream.read(DataInputStream.java:83)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:80)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:54)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:114)
        at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:99)
        at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:94)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:310)
        at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:282)
        at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:264)
        at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:248)
        at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:305)
10:40:05,463 DEBUG Client:901 - IPC Client (1747306536) connection to
localhost/127.0.0.1:6000 from cmccabe sending #2
10:40:05,464 DEBUG Client:956 - IPC Client (1747306536) connection to
localhost/127.0.0.1:6000 from cmccabe got value #2
10:40:05,464 DEBUG ProtobufRpcEngine:217 - Call: getServerDefaults took 1ms
Log file created at: 2013/03/19 00:38:44
Running on machine: keter
foo bar baz
{code}
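Note that in the second trace the client cannot reach the datanode's domain socket, logs the WARN, and the read still succeeds over TCP ({{foo bar baz}} is printed). The recovery pattern is simply try-local-then-fall-back-to-remote; a minimal sketch of that pattern, with illustrative names rather than Hadoop's actual API:

```python
# Sketch of the fallback visible in the trace above: attempt the
# short-circuit (local) read path first, and fall back to the remote
# (TCP) path when the domain socket cannot be created. Function and
# reader names here are illustrative, not Hadoop's actual API.
def read_block(local_reader, remote_reader, log=print):
    try:
        return local_reader()  # e.g. connect(2) on the datanode's domain socket
    except OSError as e:
        # Corresponds to the WARN "error creating DomainSocket" above;
        # the failure is logged and the read proceeds over TCP.
        log("error creating DomainSocket: %s" % e)
        return remote_reader()

if __name__ == "__main__":
    def local():
        raise OSError("connect(2) error: Connection refused")
    print(read_block(local, lambda: "foo bar baz"))
```

The WARN is advisory: the local path is only an optimization, so its failure costs the short-circuit speedup, not the read itself.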
> allow use of legacy blockreader
> -------------------------------
>
> Key: HDFS-4538
> URL: https://issues.apache.org/jira/browse/HDFS-4538
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode, hdfs-client, performance
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Attachments: HDFS-4538.001.patch, HDFS-4538.002.patch,
> HDFS-4538.003.patch, HDFS-4538.004.patch
>
>
> Some users might want to use the legacy block reader because it is available
> on Windows, whereas the secure solution has not yet been implemented there.
> As discussed on the mailing list, let's enable this.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira