[ 
https://issues.apache.org/jira/browse/HADOOP-15913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-15913:
----------------------------------
    Description: 
We met this problem in a production environment; the stack trace looks like this:
{code}ERROR org.apache.hadoop.hive.ql.exec.Task: Ended Job = job_1541600895081_0580 with exception 'java.lang.NullPointerException(Inflater has been closed)'
java.lang.NullPointerException: Inflater has been closed
        at java.util.zip.Inflater.ensureOpen(Inflater.java:389)
        at java.util.zip.Inflater.inflate(Inflater.java:257)
        at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
        at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
        at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
        at java.io.InputStreamReader.read(InputStreamReader.java:184)
        at java.io.BufferedReader.fill(BufferedReader.java:154)
        at java.io.BufferedReader.readLine(BufferedReader.java:317)
        at java.io.BufferedReader.readLine(BufferedReader.java:382)
        at javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:319)
        at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255)
        at javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2524)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2501)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2407)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:983)
        at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2007)
        at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:479)
        at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:469)
        at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:188)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:601)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:599)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:599)
        at org.apache.hadoop.mapred.JobClient.getJobInner(JobClient.java:609)
        at org.apache.hadoop.mapred.JobClient.getJob(JobClient.java:639)
        at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:294)
        at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:558)
        at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:457)
        at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:141)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197){code}
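The "Inflater has been closed" NullPointerException at the top of the trace comes from {{java.util.zip.Inflater.ensureOpen}}: one thread is still reading an XML resource through an {{InflaterInputStream}} while the underlying {{Inflater}} has already been released. That failure mode can be shown with a minimal, Hadoop-free sketch (assuming JDK 8; later JDKs may report the closed state differently):

{code}
// InflaterClosedDemo.java -- illustration only, not taken from the Hadoop code.
import java.util.zip.Inflater;

public class InflaterClosedDemo {
    public static void main(String[] args) {
        Inflater inf = new Inflater();
        inf.end(); // releases the native state, as a concurrent close of the backing stream would
        try {
            inf.inflate(new byte[10]); // ensureOpen() fails: the Inflater is gone
        } catch (Exception e) {
            System.out.println("caught: " + e);
        }
    }
}
{code}

On JDK 8 this prints the same {{java.lang.NullPointerException: Inflater has been closed}} seen in the trace above.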
We can reproduce it in our test environment with the steps below:
1. set the configs:
{code}
hive.server2.async.exec.threads = 50
hive.server2.async.exec.wait.queue.size = 100
{code}
2. open 4 beeline terminals on 4 different nodes.
3. submit 30 queries in each beeline terminal, so that all 120 queries connect to one hiveserver2.
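
The contended path in the trace is {{DocumentBuilderFactory.newInstance()}}, which {{Configuration.loadResource}} calls on every config load. Outside a loaded HiveServer2 the race is hard to hit deterministically, but the same lookup path can be hammered with a JDK-only sketch (the 50-thread / 120-task shape mirrors the configs above; whether it actually trips the race depends on JDK version and classpath):

{code}
// FactoryStress.java -- JDK-only sketch of the contended factory lookup.
import javax.xml.parsers.DocumentBuilderFactory;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FactoryStress {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(50);
        CountDownLatch done = new CountDownLatch(120);
        for (int i = 0; i < 120; i++) {
            pool.submit(() -> {
                try {
                    // May trigger the JAR service-provider scan, which reads
                    // META-INF/services entries through an InflaterInputStream.
                    DocumentBuilderFactory.newInstance().newDocumentBuilder();
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        System.out.println("all tasks finished");
    }
}
{code}

As a mitigation (worth verifying for your JDK), pinning the implementation with {{-Djavax.xml.parsers.DocumentBuilderFactory=...}} makes {{newInstance()}} skip the JAR service-provider scan entirely.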



> xml parsing error in a heavily multi-threaded environment
> ---------------------------------------------------------
>
>                 Key: HADOOP-15913
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15913
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common
>    Affects Versions: 2.7.3
>            Reporter: Yeliang Cang
>            Priority: Critical
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
