[ 
https://issues.apache.org/jira/browse/HDFS-15136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15136:
--------------------------------
    Summary: LOG flooding in secure mode when Cookies are not set in request 
header  (was: In secure mode when Cookies are not set in request header leads 
to exception flood in DEBUG log)

> LOG flooding in secure mode when Cookies are not set in request header
> ----------------------------------------------------------------------
>
>                 Key: HDFS-15136
>                 URL: https://issues.apache.org/jira/browse/HDFS-15136
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Renukaprasad C
>            Assignee: Renukaprasad C
>            Priority: Major
>         Attachments: HDFS-15136.0001.patch, HDFS-15136.0002.patch, 
> HDFS-15136.0003.patch
>
>
> In DEBUG mode, the exception below gets logged whenever the Cookie is not 
> set in the request header. The same stack trace is repeated over and over 
> and adds no value here. 
> Instead, log the error at DEBUG level and continue, without the 
> throw/catch/log of the exception.
> 2020-01-20 18:25:57,792 DEBUG 
> org.apache.hadoop.security.UserGroupInformation: PrivilegedAction 
> as:test/t...@hadoop.com (auth:KERBEROS) 
> from:org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:518)
> 2020-01-20 18:25:57,792 DEBUG 
> org.apache.hadoop.hdfs.web.URLConnectionFactory: open AuthenticatedURL 
> connection 
> https://IP:PORT/getJournal?jid=hacluster&segmentTxId=295&storageInfo=-64%3A39449123%3A1579244618105%3Amyhacluster&inProgressOk=true
> 2020-01-20 18:25:57,803 DEBUG 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator: JDK 
> performed authentication on our behalf.
> 2020-01-20 18:25:57,803 DEBUG 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL: Cannot 
> parse cookie header:
> java.lang.IllegalArgumentException: Empty cookie header string
>         at java.net.HttpCookie.parseInternal(HttpCookie.java:826)
>         at java.net.HttpCookie.parse(HttpCookie.java:202)
>         at java.net.HttpCookie.parse(HttpCookie.java:178)
>         at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL$AuthCookieHandler.put(AuthenticatedURL.java:99)
>         at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:390)
>         at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:197)
>         at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
>         at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.openConnection(URLConnectionFactory.java:186)
>         at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:470)
>         at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:464)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>         at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:518)
>         at 
> org.apache.hadoop.security.SecurityUtil.doAsCurrentUser(SecurityUtil.java:512)
>         at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog.getInputStream(EditLogFileInputStream.java:463)
>         at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:157)
>         at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:208)
>         at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:266)
>         at 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
>         at 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
>         at 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:198)
>         at 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
>         at 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
>         at 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:198)
>         at 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:253)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:188)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:925)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:773)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1119)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:732)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:638)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:700)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:943)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:916)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1655)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1725)
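The guard proposed in the description could look roughly like the sketch below. This is a hypothetical illustration only, not the attached patch: the class and method names are invented, and the real fix would sit inside AuthenticatedURL$AuthCookieHandler.put with a LOG.debug call rather than a plain early return.

```java
import java.net.HttpCookie;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: skip empty "Set-Cookie" values instead of letting
// HttpCookie.parse throw IllegalArgumentException ("Empty cookie header
// string"), which is what currently floods the DEBUG log.
public class CookieParseGuard {

    // Returns parsed cookies, or an empty list when the header value is
    // null or blank. In the actual fix this branch would emit a single
    // LOG.debug line instead of logging a full exception stack.
    static List<HttpCookie> parseSafely(String headerValue) {
        if (headerValue == null || headerValue.trim().isEmpty()) {
            return Collections.emptyList();
        }
        return HttpCookie.parse(headerValue);
    }

    public static void main(String[] args) {
        System.out.println(parseSafely("").size());                // 0, no exception
        System.out.println(parseSafely("hadoop.auth=abc").size()); // 1
    }
}
```

With this guard, a response carrying no cookie header is simply ignored, matching the behavior asked for above.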



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
