[jira] [Commented] (HIVE-13564) Deprecate HIVE_STATS_COLLECT_RAWDATASIZE
[ https://issues.apache.org/jira/browse/HIVE-13564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317879#comment-15317879 ]

Lefty Leverenz commented on HIVE-13564:
---------------------------------------

Doc note: This removes *hive.stats.collect.rawdatasize* from HiveConf.java, so the wiki needs to be updated with a "Removed In" bullet item.

* [Configuration Properties -- hive.stats.collect.rawdatasize | https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.stats.collect.rawdatasize]

Added a TODOC2.1 label.

> Deprecate HIVE_STATS_COLLECT_RAWDATASIZE
> ----------------------------------------
>
>              Key: HIVE-13564
>              URL: https://issues.apache.org/jira/browse/HIVE-13564
>          Project: Hive
>       Issue Type: Sub-task
>       Components: Logical Optimizer, Statistics
> Affects Versions: 2.0.0
>         Reporter: Pengcheng Xiong
>         Assignee: Pengcheng Xiong
>         Priority: Minor
>           Labels: TODOC2.1
>          Fix For: 2.1.0
>
>      Attachments: HIVE-13564.01.patch
>
> Reasons: (1) It is only used in stats20.q. (2) We already have a "HIVESTATSAUTOGATHER" configuration to tell if we are going to collect rawDataSize and #rows.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HIVE-13760) Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running for more than the configured timeout value.
[ https://issues.apache.org/jira/browse/HIVE-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317878#comment-15317878 ]

Xuefu Zhang commented on HIVE-13760:
------------------------------------

Cool, sounds good then.

> Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running for more than the configured timeout value.
> ------------------------------------------------------------------------------------------------------------------------
>
>              Key: HIVE-13760
>              URL: https://issues.apache.org/jira/browse/HIVE-13760
>          Project: Hive
>       Issue Type: Improvement
>       Components: Configuration
> Affects Versions: 2.0.0
>         Reporter: zhihai xu
>         Assignee: zhihai xu
>      Attachments: HIVE-13760.000.patch, HIVE-13760.001.patch
>
> Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running for more than the configured timeout value. The default value will be -1, which means no timeout. This will be useful for users to manage queries with SLAs.
[jira] [Updated] (HIVE-13564) Deprecate HIVE_STATS_COLLECT_RAWDATASIZE
[ https://issues.apache.org/jira/browse/HIVE-13564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lefty Leverenz updated HIVE-13564:
----------------------------------
    Labels: TODOC2.1  (was: )
[jira] [Commented] (HIVE-13921) Fix spark on yarn tests for HoS
[ https://issues.apache.org/jira/browse/HIVE-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317859#comment-15317859 ]

Rui Li commented on HIVE-13921:
-------------------------------

For {{constprog_partitioner}}, the patch updates the golden file for HoS. The difference was introduced in HIVE-13197.
For {{index_bitmap3}}, it fails when moving files to the dest dir because the target file already exists. To fix this, I think we should first delete the existing file if we need to overwrite the dest dir. [~ashutoshc], would you mind taking a look at this? Thank you.

> Fix spark on yarn tests for HoS
> -------------------------------
>
>          Key: HIVE-13921
>          URL: https://issues.apache.org/jira/browse/HIVE-13921
>      Project: Hive
>   Issue Type: Test
>     Reporter: Rui Li
>     Assignee: Rui Li
>  Attachments: HIVE-13921.1.patch
>
> {{index_bitmap3}} and {{constprog_partitioner}} have been failing. Let's fix them here.
[jira] [Updated] (HIVE-13921) Fix spark on yarn tests for HoS
[ https://issues.apache.org/jira/browse/HIVE-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rui Li updated HIVE-13921:
--------------------------
    Attachment: HIVE-13921.1.patch
[jira] [Updated] (HIVE-13921) Fix spark on yarn tests for HoS
[ https://issues.apache.org/jira/browse/HIVE-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rui Li updated HIVE-13921:
--------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HIVE-13903) getFunctionInfo is downloading jar on every call
[ https://issues.apache.org/jira/browse/HIVE-13903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amareshwari Sriramadasu updated HIVE-13903:
-------------------------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 2.1.0
           Status: Resolved  (was: Patch Available)

Committed to master and branch-2.1. cc [~jcamachorodriguez]

> getFunctionInfo is downloading jar on every call
> ------------------------------------------------
>
>          Key: HIVE-13903
>          URL: https://issues.apache.org/jira/browse/HIVE-13903
>      Project: Hive
>   Issue Type: Bug
>     Reporter: Rajat Khandelwal
>     Assignee: Rajat Khandelwal
>      Fix For: 2.1.0
>
>  Attachments: HIVE-13903.01.patch
>
> On queries using permanent UDFs, the jar file of the UDF is downloaded multiple times, each call originating from Registry.getFunctionInfo. This increases time for the query, especially if that query is just an explain query. The jar should be downloaded once, and not downloaded again if the UDF class is accessible in the current thread.
[jira] [Commented] (HIVE-13954) Parquet logs should go to STDERR
[ https://issues.apache.org/jira/browse/HIVE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317786#comment-15317786 ]

Hive QA commented on HIVE-13954:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12808447/HIVE-13954.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10220 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_table_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/22/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/22/console
Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-22/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12808447 - PreCommit-HIVE-MASTER-Build

> Parquet logs should go to STDERR
> --------------------------------
>
>              Key: HIVE-13954
>              URL: https://issues.apache.org/jira/browse/HIVE-13954
>          Project: Hive
>       Issue Type: Bug
> Affects Versions: 2.2.0
>         Reporter: Prasanth Jayachandran
>         Assignee: Prasanth Jayachandran
>      Attachments: HIVE-13954.1.patch
>
> Parquet uses java util logging. When java logging is not configured using a default logging.properties file, parquet's default fallback handler writes to STDOUT at INFO level. Hive writes all logging to STDERR and writes only the query output to STDOUT. Writing logs to STDOUT may cause issues when comparing query results.
> If we provide a default logging.properties for parquet then we can configure it to write to a file or stderr.
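A minimal sketch of the approach HIVE-13954 describes, assuming a standard java.util.logging configuration file (this is an illustration, not the actual patch, and the `org.apache.parquet` logger prefix is an assumption about Parquet's logger hierarchy): JUL's ConsoleHandler writes to System.err, so routing Parquet's loggers through it keeps log output off STDOUT.

```properties
# Hypothetical logging.properties sketch for the HIVE-13954 idea.
# ConsoleHandler writes to System.err, so JUL output no longer pollutes STDOUT.
handlers=java.util.logging.ConsoleHandler
.level=INFO
java.util.logging.ConsoleHandler.level=INFO
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
# Assumed logger prefix for Parquet; adjust if the actual package differs.
org.apache.parquet.level=WARNING
```

Such a file would be picked up by pointing `-Djava.util.logging.config.file` at it when launching the JVM.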
[jira] [Updated] (HIVE-13960) Session timeout may happen before HIVE_SERVER2_IDLE_SESSION_TIMEOUT for back-to-back synchronous operations.
[ https://issues.apache.org/jira/browse/HIVE-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhihai xu updated HIVE-13960:
-----------------------------
    Description:

Session timeout may happen before HIVE_SERVER2_IDLE_SESSION_TIMEOUT (hive.server2.idle.session.timeout) for back-to-back synchronous operations.

This issue can happen with the following two operations, op1 and op2, where op2 is a synchronous long-running operation that runs right after op1 is closed:
1. closeOperation(op1) is called: this sets {{lastIdleTime}} to System.currentTimeMillis() because {{opHandleSet}} becomes empty after {{closeOperation}} removes op1 from {{opHandleSet}}.
2. op2 runs for a long time via {{executeStatement}} right after closeOperation(op1) is called.
If op2 runs for more than HIVE_SERVER2_IDLE_SESSION_TIMEOUT, the session will time out even while op2 is still running.

We hit this issue when using PyHive to execute non-async operations. The following is the exception we see:
{code}
File "/usr/local/lib/python2.7/dist-packages/pyhive/hive.py", line 126, in close
  _check_status(response)
File "/usr/local/lib/python2.7/dist-packages/pyhive/hive.py", line 362, in _check_status
  raise OperationalError(response)
OperationalError: TCloseSessionResp(status=TStatus(errorCode=0, errorMessage='Session does not exist!', sqlState=None, infoMessages=['*org.apache.hive.service.cli.HiveSQLException:Session does not exist!:12:11', 'org.apache.hive.service.cli.session.SessionManager:closeSession:SessionManager.java:311', 'org.apache.hive.service.cli.CLIService:closeSession:CLIService.java:221', 'org.apache.hive.service.cli.thrift.ThriftCLIService:CloseSession:ThriftCLIService.java:471', 'org.apache.hive.service.cli.thrift.TCLIService$Processor$CloseSession:getResult:TCLIService.java:1273', 'org.apache.hive.service.cli.thrift.TCLIService$Processor$CloseSession:getResult:TCLIService.java:1258', 'org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39', 'org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39', 'org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56', 'org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:285', 'java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1145', 'java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:615', 'java.lang.Thread:run:Thread.java:745'], statusCode=3))
{code}
[jira] [Commented] (HIVE-13911) load inpath fails throwing org.apache.hadoop.security.AccessControlException
[ https://issues.apache.org/jira/browse/HIVE-13911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317776#comment-15317776 ]

Ashutosh Chauhan commented on HIVE-13911:
-----------------------------------------

+1 pending tests

> load inpath fails throwing org.apache.hadoop.security.AccessControlException
> ----------------------------------------------------------------------------
>
>          Key: HIVE-13911
>          URL: https://issues.apache.org/jira/browse/HIVE-13911
>      Project: Hive
>   Issue Type: Bug
>     Reporter: Hari Sankar Sivarama Subramaniyan
>     Assignee: Hari Sankar Sivarama Subramaniyan
>  Attachments: HIVE-13911.1.patch, HIVE-13911.2.patch, HIVE-13911.3.patch, HIVE-13911.4.patch, HIVE-13911.5.patch
>
> Similar to HIVE-13857
[jira] [Commented] (HIVE-13960) Session timeout may happen before HIVE_SERVER2_IDLE_SESSION_TIMEOUT for back-to-back synchronous operations.
[ https://issues.apache.org/jira/browse/HIVE-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317774#comment-15317774 ]

zhihai xu commented on HIVE-13960:
----------------------------------

I attached a patch, HIVE-13960.000.patch, which moves {{lastIdleTime = 0;}} from {{release}} to {{acquire}} and also adds a variable {{pendingCount}} to make sure only the last {{release}} can change {{lastIdleTime}}.

> Session timeout may happen before HIVE_SERVER2_IDLE_SESSION_TIMEOUT for back-to-back synchronous operations.
> ------------------------------------------------------------------------------------------------------------
>
>          Key: HIVE-13960
>          URL: https://issues.apache.org/jira/browse/HIVE-13960
>      Project: Hive
>   Issue Type: Bug
>   Components: HiveServer2
>     Reporter: zhihai xu
>     Assignee: zhihai xu
>  Attachments: HIVE-13960.000.patch
>
> Session timeout may happen before HIVE_SERVER2_IDLE_SESSION_TIMEOUT (hive.server2.idle.session.timeout) for back-to-back synchronous operations.
> This issue can happen with the following two operations, op1 and op2, where op2 is a synchronous long-running operation that runs right after op1 is closed:
> 1. closeOperation(op1) is called: this sets {{lastIdleTime}} to System.currentTimeMillis() because {{opHandleSet}} becomes empty after {{closeOperation}} removes op1 from {{opHandleSet}}.
> 2. op2 runs for a long time via {{executeStatement}} right after closeOperation(op1) is called.
> If op2 runs for more than HIVE_SERVER2_IDLE_SESSION_TIMEOUT, the session will time out even while op2 is still running.
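The fix described in this comment can be sketched in isolation. This is a hypothetical, simplified model, not the actual SessionManager/HiveSessionImpl code (the class name `IdleTracker` and the `timedOut` helper are invented; only `acquire`, `release`, `lastIdleTime`, and `pendingCount` come from the comment): `acquire()` clears `lastIdleTime` so a running operation is never considered idle, and only the final `release()` stamps the idle time.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified model of the HIVE-13960 fix: track in-flight calls with a
// counter so only the last release() marks the session idle, and any
// acquire() immediately clears the idle mark.
public class IdleTracker {
    // 0 means "busy"; otherwise the wall-clock time the session went idle.
    private volatile long lastIdleTime = 0;
    private final AtomicInteger pendingCount = new AtomicInteger(0);

    public void acquire() {
        pendingCount.incrementAndGet();
        lastIdleTime = 0; // session is busy again
    }

    public void release() {
        // Only the last release transitions the session to idle.
        if (pendingCount.decrementAndGet() == 0) {
            lastIdleTime = System.currentTimeMillis();
        }
    }

    // What an idle-session checker would test against
    // hive.server2.idle.session.timeout.
    public boolean timedOut(long now, long timeoutMs) {
        long idleSince = lastIdleTime;
        return idleSince != 0 && now - idleSince > timeoutMs;
    }
}
```

With this model, an op2 started right after closeOperation(op1) resets `lastIdleTime` via `acquire()`, so the checker never compares against a stale idle timestamp while op2 is still running.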
[jira] [Updated] (HIVE-13960) Session timeout may happen before HIVE_SERVER2_IDLE_SESSION_TIMEOUT for back-to-back synchronous operations.
[ https://issues.apache.org/jira/browse/HIVE-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhihai xu updated HIVE-13960:
-----------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HIVE-13960) Session timeout may happen before HIVE_SERVER2_IDLE_SESSION_TIMEOUT for back-to-back synchronous operations.
[ https://issues.apache.org/jira/browse/HIVE-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhihai xu updated HIVE-13960:
-----------------------------
    Attachment: HIVE-13960.000.patch
[jira] [Updated] (HIVE-13599) LLAP: Incorrect handling of the preemption queue on finishable state updates
[ https://issues.apache.org/jira/browse/HIVE-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Siddharth Seth updated HIVE-13599:
----------------------------------
       Resolution: Fixed
    Fix Version/s: 2.1.1
           Status: Resolved  (was: Patch Available)

Committed to master and branch-2.1

> LLAP: Incorrect handling of the preemption queue on finishable state updates
> ----------------------------------------------------------------------------
>
>              Key: HIVE-13599
>              URL: https://issues.apache.org/jira/browse/HIVE-13599
>          Project: Hive
>       Issue Type: Bug
>       Components: llap
> Affects Versions: 2.1.0
>         Reporter: Prasanth Jayachandran
>         Assignee: Siddharth Seth
>         Priority: Critical
>          Fix For: 2.1.1
>
>      Attachments: HIVE-13599.01.patch, HIVE-13599.01.patch, HIVE-13599.02.patch
>
> When running some tests with pre-emption enabled, got the following exception. Looks like a race condition when removing items from the pre-emption queue.
> {code}
> 16/04/23 23:32:00 [Wait-Queue-Scheduler-0[]] ERROR impl.TaskExecutorService : Wait queue scheduler worker exited with failure!
> java.util.NoSuchElementException
>         at java.util.AbstractQueue.remove(AbstractQueue.java:117) ~[?:1.7.0_55]
>         at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.removeAndGetFromPreemptionQueue(TaskExecutorService.java:568) ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
>         at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.handleScheduleAttemptedRejection(TaskExecutorService.java:493) ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
>         at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.access$1100(TaskExecutorService.java:81) ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
>         at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$WaitQueueWorker.run(TaskExecutorService.java:285) ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[?:1.7.0_55]
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_55]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_55]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_55]
>         at java.lang.Thread.run(Thread.java:745) [?:1.7.0_55]
> 16/04/23 23:32:00 [Wait-Queue-Scheduler-0[]] INFO impl.LlapDaemon : UncaughtExceptionHandler invoked
> 16/04/23 23:32:00 [Wait-Queue-Scheduler-0[]] ERROR impl.LlapDaemon : Thread Thread[Wait-Queue-Scheduler-0,5,main] threw an Exception. Shutting down now...
> java.util.NoSuchElementException
>         (same stack trace as above)
> {code}
[jira] [Commented] (HIVE-13599) LLAP: Incorrect handling of the preemption queue on finishable state updates
[ https://issues.apache.org/jira/browse/HIVE-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317762#comment-15317762 ]

Siddharth Seth commented on HIVE-13599:
---------------------------------------

Thanks for the review. Test failures are unrelated. Committing.
[jira] [Commented] (HIVE-13953) Issues in HiveLockObject equals method
[ https://issues.apache.org/jira/browse/HIVE-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317748#comment-15317748 ] Chaoyu Tang commented on HIVE-13953: The eight failed tests are not related to this patch. The test stats_list_bucket.q fails even without this patch applied; the other seven failed tests are aged. > Issues in HiveLockObject equals method > -- > > Key: HIVE-13953 > URL: https://issues.apache.org/jira/browse/HIVE-13953 > Project: Hive > Issue Type: Bug > Components: Locking >Reporter: Chaoyu Tang >Assignee: Chaoyu Tang > Attachments: HIVE-13953.patch > > > There are two issues in the equals method in HiveLockObject: > {code} > @Override > public boolean equals(Object o) { > if (!(o instanceof HiveLockObject)) { > return false; > } > HiveLockObject tgt = (HiveLockObject) o; > return Arrays.equals(pathNames, tgt.pathNames) && > data == null ? tgt.getData() == null : > tgt.getData() != null && data.equals(tgt.getData()); > } > {code} > 1. Arrays.equals(pathNames, tgt.pathNames) might return false for the same > path in HiveLockObject since in current Hive, the pathname components might > be stored in two ways: taking a dynamic partition path db/tbl/part1/part2 as > an example, it might be stored in pathNames as an array of four elements > (db, tbl, part1, and part2) or as an array having only one element, > db/tbl/part1/part2. It would be safer to compare the pathNames using > StringUtils.equals(this.getName(), tgt.getName()). > 2. The comparison logic is not right: && binds more tightly than the ternary > operator, so the expression above is evaluated as > (Arrays.equals(...) && data == null) ? ... : ..., and two objects with > different pathNames but equal non-null data compare as equal. The corrected > method: > {code} > @Override > public boolean equals(Object o) { > if (!(o instanceof HiveLockObject)) { > return false; > } > HiveLockObject tgt = (HiveLockObject) o; > return StringUtils.equals(this.getName(), tgt.getName()) && > (data == null ? tgt.getData() == null : data.equals(tgt.getData())); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
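The precedence bug described in HIVE-13953 can be reproduced in isolation. The following is a minimal, hypothetical sketch (not Hive code): buggyEquals mirrors the shape of the original expression, fixedEquals adds the missing parentheses.

```java
import java.util.Arrays;

public class TernaryPrecedenceDemo {
    // Mirrors the original buggy expression: && binds tighter than ?:,
    // so (paths equal && dataA == null) becomes the ternary condition.
    static boolean buggyEquals(String[] a, String[] b, String dataA, String dataB) {
        return Arrays.equals(a, b) &&
            dataA == null ? dataB == null :
            dataB != null && dataA.equals(dataB);
    }

    // Parenthesized version: path equality AND data equality must both hold.
    static boolean fixedEquals(String[] a, String[] b, String dataA, String dataB) {
        return Arrays.equals(a, b) &&
            (dataA == null ? dataB == null : dataA.equals(dataB));
    }

    public static void main(String[] args) {
        String[] p = {"db", "tbl"};
        String[] q = {"other"};
        // Different paths, but both data values non-null and equal: the buggy
        // version falls into the else branch and wrongly reports equality.
        System.out.println(buggyEquals(p, q, "x", "x")); // true  (wrong)
        System.out.println(fixedEquals(p, q, "x", "x")); // false (right)
    }
}
```

The condition `Arrays.equals(a, b) && dataA == null` is false whenever the paths differ, so the buggy form compares only the data in that case, which is exactly the inconsistency the patch removes.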
[jira] [Updated] (HIVE-13959) MoveTask should only release its query associated locks
[ https://issues.apache.org/jira/browse/HIVE-13959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chaoyu Tang updated HIVE-13959: --- Status: Patch Available (was: Open) [~alangates], [~ychena], could you review the patch? Thanks > MoveTask should only release its query associated locks > --- > > Key: HIVE-13959 > URL: https://issues.apache.org/jira/browse/HIVE-13959 > Project: Hive > Issue Type: Bug > Components: Locking >Reporter: Chaoyu Tang >Assignee: Chaoyu Tang > Attachments: HIVE-13959.patch > > > releaseLocks in MoveTask releases all locks under a HiveLockObject pathNames. > But some of locks under this pathNames might be for other queries and should > not be released. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13959) MoveTask should only release its query associated locks
[ https://issues.apache.org/jira/browse/HIVE-13959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chaoyu Tang updated HIVE-13959: --- Attachment: HIVE-13959.patch The patch only releases the locks held for the MoveTask's associated query, i.e. locks that were acquired for this query and are present in ctx.getHiveLocks(), so that ctx.getHiveLocks().remove(lock) returns true. I have done manual tests with two Hive debug sessions running two different queries, and verified that a query with a MoveTask does not remove the locks acquired for the query running in the other session. > MoveTask should only release its query associated locks > --- > > Key: HIVE-13959 > URL: https://issues.apache.org/jira/browse/HIVE-13959 > Project: Hive > Issue Type: Bug > Components: Locking >Reporter: Chaoyu Tang >Assignee: Chaoyu Tang > Attachments: HIVE-13959.patch > > > releaseLocks in MoveTask releases all locks under a HiveLockObject pathNames. > But some of the locks under this pathNames might be for other queries and should > not be released. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
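The release logic described in the comment above can be sketched as follows. This is an illustrative approximation, not the actual Hive patch: locks are modeled as plain strings, and releaseOwnLocks stands in for the MoveTask release path guarded by ctx.getHiveLocks().remove(lock).

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of "release only this query's locks": a lock found
// under the shared pathNames is released only when remove() confirms it
// belongs to this query's context, so other queries' locks are untouched.
public class OwnLockRelease {
    public static List<String> releaseOwnLocks(List<String> allLocksUnderPath,
                                               List<String> ctxLocks) {
        List<String> released = new ArrayList<>();
        for (String lock : allLocksUnderPath) {
            // remove() returns true only if this query's context held the lock.
            if (ctxLocks.remove(lock)) {
                released.add(lock); // stand-in for lockMgr.unlock(lock)
            }
        }
        return released;
    }
}
```

With two sessions each holding a lock under the same path, only the lock present in the calling query's context list is released, matching the manual two-session test described above.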
[jira] [Comment Edited] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))
[ https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317698#comment-15317698 ] Gopal V edited comment on HIVE-13957 at 6/7/16 2:36 AM: Would it be better to convert the implicit conversion (as done in GenericUDF) into an explicit one if the type conversion is legal? The tangential reason is that the type of the first argument is enforced for the rest of the IN() expression - relaxing type safety and letting the UDF handle it at runtime. Similar type issues exist for concat() for instance, where the UDF internally type-casts all args to String, but the actual plan doesn't. was (Author: gopalv): Would it be better to convert the implicit conversion (as done in GenericUDF) into an explicit one if the type conversion is legal? Similar type issues exist for concat() for instance, where the UDF internally type-casts all args to String, but the actual plan doesn't. > vectorized IN is inconsistent with non-vectorized (at least for decimal in > (string)) > > > Key: HIVE-13957 > URL: https://issues.apache.org/jira/browse/HIVE-13957 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13957.patch, HIVE-13957.patch > > > The cast is applied to the column in regular IN, but vectorized IN applies it > to the IN() list -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))
[ https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317698#comment-15317698 ] Gopal V commented on HIVE-13957: Would it be better to convert the implicit conversion (as done in GenericUDF) into an explicit one if the type conversion is legal? Similar type issues exist for concat() for instance, where the UDF internally type-casts all args to String, but the actual plan doesn't. > vectorized IN is inconsistent with non-vectorized (at least for decimal in > (string)) > > > Key: HIVE-13957 > URL: https://issues.apache.org/jira/browse/HIVE-13957 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13957.patch, HIVE-13957.patch > > > The cast is applied to the column in regular IN, but vectorized IN applies it > to the IN() list -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))
[ https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13957: Attachment: HIVE-13957.patch > vectorized IN is inconsistent with non-vectorized (at least for decimal in > (string)) > > > Key: HIVE-13957 > URL: https://issues.apache.org/jira/browse/HIVE-13957 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13957.patch, HIVE-13957.patch > > > The cast is applied to the column in regular IN, but vectorized IN applies it > to the IN() list -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))
[ https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13957: Status: Patch Available (was: Open) > vectorized IN is inconsistent with non-vectorized (at least for decimal in > (string)) > > > Key: HIVE-13957 > URL: https://issues.apache.org/jira/browse/HIVE-13957 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13957.patch > > > The cast is applied to the column in regular IN, but vectorized IN applies it > to the IN() list -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))
[ https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13957: Attachment: HIVE-13957.patch The patch disables IN vectorization for such cases, since it looks like Col... specializations cannot be used if the column needs to be cast. [~mmccline] can you please review? > vectorized IN is inconsistent with non-vectorized (at least for decimal in > (string)) > > > Key: HIVE-13957 > URL: https://issues.apache.org/jira/browse/HIVE-13957 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13957.patch > > > The cast is applied to the column in regular IN, but vectorized IN applies it > to the IN() list -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13953) Issues in HiveLockObject equals method
[ https://issues.apache.org/jira/browse/HIVE-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317668#comment-15317668 ] Hive QA commented on HIVE-13953: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12808434/HIVE-13953.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10220 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_table_stats org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/21/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/21/console Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-21/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 8 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12808434 - PreCommit-HIVE-MASTER-Build > Issues in HiveLockObject equals method > -- > > Key: HIVE-13953 > URL: https://issues.apache.org/jira/browse/HIVE-13953 > Project: Hive > Issue Type: Bug > Components: Locking >Reporter: Chaoyu Tang >Assignee: Chaoyu Tang > Attachments: HIVE-13953.patch > > > There are two issues in equals method in HiveLockObject: > {code} > @Override > public boolean equals(Object o) { > if (!(o instanceof HiveLockObject)) { > return false; > } > HiveLockObject tgt = (HiveLockObject) o; > return Arrays.equals(pathNames, tgt.pathNames) && > data == null ? tgt.getData() == null : > tgt.getData() != null && data.equals(tgt.getData()); > } > {code} > 1. Arrays.equals(pathNames, tgt.pathNames) might return false for the same > path in HiveLockObject since in current Hive, the pathname components might > be stored in two ways, taking a dynamic partition path db/tbl/part1/part2 as > an example, it might be stored in the pathNames as an array of four elements, > db, tbl, part1, and part2 or as an array only having one element > db/tbl/part1/part2. It will be safer to comparing the pathNames using > StringUtils.equals(this.getName(), tgt.getName()) > 2. The comparison logic is not right. > {code} > @Override > public boolean equals(Object o) { > if (!(o instanceof HiveLockObject)) { > return false; > } > HiveLockObject tgt = (HiveLockObject) o; > return StringUtils.equals(this.getName(), tgt.getName()) && > (data == null ? tgt.getData() == null : data.equals(tgt.getData())); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13675) LLAP: add HMAC signatures to LLAPIF splits
[ https://issues.apache.org/jira/browse/HIVE-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317647#comment-15317647 ] Siddharth Seth commented on HIVE-13675: --- That would explain the failure. llap-client included, but llap-common isn't - which happens to be available on the client classpath. > LLAP: add HMAC signatures to LLAPIF splits > -- > > Key: HIVE-13675 > URL: https://issues.apache.org/jira/browse/HIVE-13675 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13675.01.patch, HIVE-13675.02.patch, > HIVE-13675.03.patch, HIVE-13675.04.patch, HIVE-13675.05.patch, > HIVE-13675.06.patch, HIVE-13675.07.patch, HIVE-13675.WIP.patch, > HIVE-13675.wo.13444.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13760) Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running for more than the configured timeout value.
[ https://issues.apache.org/jira/browse/HIVE-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317638#comment-15317638 ] zhihai xu commented on HIVE-13760: -- Thanks for the review [~xuefuz]! The timeout is set inside the function {{SQLOperation.prepare}}. Currently {{SQLOperation.prepare}} is only called inside the class {{SQLOperation}} by {{SQLOperation.runInternal}}; no one calls {{SQLOperation.prepare}} from outside the class {{SQLOperation}}. So it looks like the timeout will include both compiling and running time. > Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running > for more than the configured timeout value. > > > Key: HIVE-13760 > URL: https://issues.apache.org/jira/browse/HIVE-13760 > Project: Hive > Issue Type: Improvement > Components: Configuration >Affects Versions: 2.0.0 >Reporter: zhihai xu >Assignee: zhihai xu > Attachments: HIVE-13760.000.patch, HIVE-13760.001.patch > > > Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running > for more than the configured timeout value. The default value will be -1, > which means no timeout. This will be useful for users to manage queries with > SLAs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
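A timeout of the shape discussed in HIVE-13760 is commonly implemented by bounding the wait on the running work and killing it on expiry. The sketch below is generic and hypothetical, not the actual patch or Hive's SQLOperation API; it only illustrates the "-1 means no timeout" default and the kill-on-timeout behavior.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Generic sketch of a query timeout: wait at most N seconds for the query,
// then cancel it; a timeout of -1 means "no timeout", matching the default
// proposed in the issue.
public class QueryTimeoutDemo {
    static String runWithTimeout(Callable<String> query, long timeoutSeconds)
            throws Exception {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        Future<String> f = exec.submit(query);
        try {
            if (timeoutSeconds < 0) {
                return f.get();                      // -1: wait indefinitely
            }
            return f.get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            f.cancel(true);                          // stand-in for killing the query
            return "KILLED";
        } finally {
            exec.shutdownNow();
        }
    }
}
```

Because the comment above notes the timeout is armed in {{SQLOperation.prepare}} and prepare is only reached via runInternal, the bounded wait in a real implementation would cover compilation as well as execution.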
[jira] [Comment Edited] (HIVE-13760) Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running for more than the configured timeout value.
[ https://issues.apache.org/jira/browse/HIVE-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317586#comment-15317586 ] Xuefu Zhang edited comment on HIVE-13760 at 6/7/16 1:04 AM: I'm not sure if the patch works as expected. The existing variable queryTimeout in SQLOperation is to time out query submission rather than query execution. I could be wrong, though. Testing should help. was (Author: xuefuz): It seems to me that the patch doesn't work as expected. The existing variable queryTimeout in SQLOperation is to time out query submission rather than query execution. I could be wrong though. > Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running > for more than the configured timeout value. > > > Key: HIVE-13760 > URL: https://issues.apache.org/jira/browse/HIVE-13760 > Project: Hive > Issue Type: Improvement > Components: Configuration >Affects Versions: 2.0.0 >Reporter: zhihai xu >Assignee: zhihai xu > Attachments: HIVE-13760.000.patch, HIVE-13760.001.patch > > > Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running > for more than the configured timeout value. The default value will be -1, > which means no timeout. This will be useful for users to manage queries with > SLAs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13760) Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running for more than the configured timeout value.
[ https://issues.apache.org/jira/browse/HIVE-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317586#comment-15317586 ] Xuefu Zhang commented on HIVE-13760: It seems to me that the patch doesn't work as expected. The existing variable queryTimeout in SQLOperation is to time out query submission rather than query execution. I could be wrong though. > Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running > for more than the configured timeout value. > > > Key: HIVE-13760 > URL: https://issues.apache.org/jira/browse/HIVE-13760 > Project: Hive > Issue Type: Improvement > Components: Configuration >Affects Versions: 2.0.0 >Reporter: zhihai xu >Assignee: zhihai xu > Attachments: HIVE-13760.000.patch, HIVE-13760.001.patch > > > Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running > for more than the configured timeout value. The default value will be -1, > which means no timeout. This will be useful for users to manage queries with > SLAs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13098) Add a strict check for when the decimal gets converted to null due to insufficient width
[ https://issues.apache.org/jira/browse/HIVE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317581#comment-15317581 ] Sergey Shelukhin commented on HIVE-13098: - This is especially problematic for implicit conversions... > Add a strict check for when the decimal gets converted to null due to > insufficient width > > > Key: HIVE-13098 > URL: https://issues.apache.org/jira/browse/HIVE-13098 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin > > When e.g. 99 is selected as decimal(5,0), the result is null. This can be > problematic, esp. if the data is written to a table and lost without the user > realizing it. There should be an option to error out in such cases instead; > it should probably be on by default and the error message should instruct the > user on how to disable it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
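The silent-null behavior described in HIVE-13098 comes from enforcing a maximum precision after conversion. The following is a hypothetical sketch using plain BigDecimal, not Hive's actual HiveDecimal implementation; it only illustrates how a value that does not fit decimal(precision, scale) degenerates to null.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalWidthCheck {
    // Returns null when the value does not fit decimal(precision, scale),
    // mimicking the silent data loss described in the issue.
    static BigDecimal enforce(BigDecimal v, int precision, int scale) {
        BigDecimal scaled = v.setScale(scale, RoundingMode.HALF_UP);
        if (scaled.precision() > precision) {
            return null; // too wide for the declared type: value is lost
        }
        return scaled;
    }
}
```

A strict check of the kind proposed would raise an error at the `return null` point instead, so that data written to a table cannot vanish without the user noticing.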
[jira] [Updated] (HIVE-13958) hive.strict.checks.type.safety should apply to decimals, as well as IN... and BETWEEN... ops
[ https://issues.apache.org/jira/browse/HIVE-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13958: Description: String to decimal auto-casts should be prohibited for compares > hive.strict.checks.type.safety should apply to decimals, as well as IN... and > BETWEEN... ops > > > Key: HIVE-13958 > URL: https://issues.apache.org/jira/browse/HIVE-13958 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin > > String to decimal auto-casts should be prohibited for compares -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13911) load inpath fails throwing org.apache.hadoop.security.AccessControlException
[ https://issues.apache.org/jira/browse/HIVE-13911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317570#comment-15317570 ] Ashutosh Chauhan commented on HIVE-13911: - seems like srcGroup assignment is reversed. > load inpath fails throwing org.apache.hadoop.security.AccessControlException > > > Key: HIVE-13911 > URL: https://issues.apache.org/jira/browse/HIVE-13911 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: HIVE-13911.1.patch, HIVE-13911.2.patch, > HIVE-13911.3.patch, HIVE-13911.4.patch, HIVE-13911.5.patch > > > Similar to HIVE-13857 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13911) load inpath fails throwing org.apache.hadoop.security.AccessControlException
[ https://issues.apache.org/jira/browse/HIVE-13911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-13911: - Attachment: HIVE-13911.5.patch > load inpath fails throwing org.apache.hadoop.security.AccessControlException > > > Key: HIVE-13911 > URL: https://issues.apache.org/jira/browse/HIVE-13911 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: HIVE-13911.1.patch, HIVE-13911.2.patch, > HIVE-13911.3.patch, HIVE-13911.4.patch, HIVE-13911.5.patch > > > Similar to HIVE-13857 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13911) load inpath fails throwing org.apache.hadoop.security.AccessControlException
[ https://issues.apache.org/jira/browse/HIVE-13911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-13911: - Status: Patch Available (was: Open) > load inpath fails throwing org.apache.hadoop.security.AccessControlException > > > Key: HIVE-13911 > URL: https://issues.apache.org/jira/browse/HIVE-13911 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: HIVE-13911.1.patch, HIVE-13911.2.patch, > HIVE-13911.3.patch, HIVE-13911.4.patch, HIVE-13911.5.patch > > > Similar to HIVE-13857 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13911) load inpath fails throwing org.apache.hadoop.security.AccessControlException
[ https://issues.apache.org/jira/browse/HIVE-13911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-13911: - Status: Open (was: Patch Available) > load inpath fails throwing org.apache.hadoop.security.AccessControlException > > > Key: HIVE-13911 > URL: https://issues.apache.org/jira/browse/HIVE-13911 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: HIVE-13911.1.patch, HIVE-13911.2.patch, > HIVE-13911.3.patch, HIVE-13911.4.patch, HIVE-13911.5.patch > > > Similar to HIVE-13857 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13957) vectorized IN is inconsistent with non-vectorized if the types mismatch
[ https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13957: Summary: vectorized IN is inconsistent with non-vectorized if the types mismatch (was: vectorized IN is inconsistent with non-vectorized) > vectorized IN is inconsistent with non-vectorized if the types mismatch > --- > > Key: HIVE-13957 > URL: https://issues.apache.org/jira/browse/HIVE-13957 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > The cast is applied to the column in regular IN, but vectorized IN applies it > to the IN() list -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))
[ https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13957: Summary: vectorized IN is inconsistent with non-vectorized (at least for decimal in (string)) (was: vectorized IN is inconsistent with non-vectorized if the types mismatch) > vectorized IN is inconsistent with non-vectorized (at least for decimal in > (string)) > > > Key: HIVE-13957 > URL: https://issues.apache.org/jira/browse/HIVE-13957 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > The cast is applied to the column in regular IN, but vectorized IN applies it > to the IN() list -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13957) vectorized IN is inconsistent with non-vectorized
[ https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317567#comment-15317567 ] Sergey Shelukhin commented on HIVE-13957: - [~mmccline] fyi... I will take a look > vectorized IN is inconsistent with non-vectorized > - > > Key: HIVE-13957 > URL: https://issues.apache.org/jira/browse/HIVE-13957 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > The cast is applied to the column in regular IN, but vectorized IN applies it > to the IN() list -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13392) disable speculative execution for ACID Compactor
[ https://issues.apache.org/jira/browse/HIVE-13392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-13392: -- Status: Patch Available (was: Open) > disable speculative execution for ACID Compactor > > > Key: HIVE-13392 > URL: https://issues.apache.org/jira/browse/HIVE-13392 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-13392.patch > > > https://developer.yahoo.com/hadoop/tutorial/module4.html > Speculative execution is enabled by default. You can disable speculative > execution for the mappers and reducers by setting the > mapred.map.tasks.speculative.execution and > mapred.reduce.tasks.speculative.execution JobConf options to false, > respectively. > CompactorMR is currently not set up to handle speculative execution and may > lead to something like > {code} > 2016-02-08 22:56:38,256 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): > Failed to CREATE_FILE > /apps/hive/warehouse/service_logs_v2/ds=2016-01-20/_tmp_6cf08b9f-c2e2-4182-bc81-e032801b147f/base_13858600/bucket_4 > for DFSClient_attempt_1454628390210_27756_m_01_1_131224698_1 on > 172.18.129.12 because this file lease is currently owned by > DFSClient_attempt_1454628390210_27756_m_01_0_-2027182532_1 on > 172.18.129.18 > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2937) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2562) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2451) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2335) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:688) > at > 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151) > {code} > Short term: disable speculative execution for this job > Longer term perhaps make each task write to dir with UUID... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
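The short-term fix described in HIVE-13392 (disabling speculative execution for the compactor job) amounts to setting the two properties quoted in the issue description before submitting the MR job. The sketch below uses java.util.Properties as a stand-in for a Hadoop JobConf, since it is meant only to show the settings, not Hive's CompactorMR code.

```java
import java.util.Properties;

// Stand-in for a Hadoop JobConf: the compactor would set these two
// properties to false before submitting its job, so only one attempt
// ever writes a given bucket file (avoiding the lease conflict above).
public class CompactorJobConfSketch {
    static Properties disableSpeculation(Properties conf) {
        conf.setProperty("mapred.map.tasks.speculative.execution", "false");
        conf.setProperty("mapred.reduce.tasks.speculative.execution", "false");
        return conf;
    }
}
```

The longer-term alternative mentioned in the issue, giving each task attempt a UUID-named output directory, would make the job safe even with speculation enabled.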
[jira] [Commented] (HIVE-13675) LLAP: add HMAC signatures to LLAPIF splits
[ https://issues.apache.org/jira/browse/HIVE-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317547#comment-15317547 ] Sergey Shelukhin commented on HIVE-13675: - Nope, not part of epic jar, only client is. We cannot really remove it as long as QL depends on it, unless we are very careful about where things get imported. In this case, spark initializes the UDFs (I guess), and that pulls in UDFGetSplits, which references LLAP classes. I am assuming Spark also uses exec jar as an epic upload-all-of-hive Jar... we cannot expect it (or anyone) to pull all transitive dependencies when localizing jars to some executor or whatnot. > LLAP: add HMAC signatures to LLAPIF splits > -- > > Key: HIVE-13675 > URL: https://issues.apache.org/jira/browse/HIVE-13675 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13675.01.patch, HIVE-13675.02.patch, > HIVE-13675.03.patch, HIVE-13675.04.patch, HIVE-13675.05.patch, > HIVE-13675.06.patch, HIVE-13675.07.patch, HIVE-13675.WIP.patch, > HIVE-13675.wo.13444.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13392) disable speculative execution for ACID Compactor
[ https://issues.apache.org/jira/browse/HIVE-13392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-13392: -- Attachment: HIVE-13392.patch > disable speculative execution for ACID Compactor > > > Key: HIVE-13392 > URL: https://issues.apache.org/jira/browse/HIVE-13392 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-13392.patch > > > https://developer.yahoo.com/hadoop/tutorial/module4.html > Speculative execution is enabled by default. You can disable speculative > execution for the mappers and reducers by setting the > mapred.map.tasks.speculative.execution and > mapred.reduce.tasks.speculative.execution JobConf options to false, > respectively. > CompactorMR is currently not set up to handle speculative execution and may > lead to something like > {code} > 2016-02-08 22:56:38,256 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): > Failed to CREATE_FILE > /apps/hive/warehouse/service_logs_v2/ds=2016-01-20/_tmp_6cf08b9f-c2e2-4182-bc81-e032801b147f/base_13858600/bucket_4 > for DFSClient_attempt_1454628390210_27756_m_01_1_131224698_1 on > 172.18.129.12 because this file lease is currently owned by > DFSClient_attempt_1454628390210_27756_m_01_0_-2027182532_1 on > 172.18.129.18 > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2937) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2562) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2451) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2335) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:688) > at > 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151) > {code} > Short term: disable speculative execution for this job > Longer term perhaps make each task write to dir with UUID... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13264) JDBC driver makes 2 Open Session Calls for every open session
[ https://issues.apache.org/jira/browse/HIVE-13264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] NITHIN MAHESH updated HIVE-13264: - Attachment: HIVE-13264.7.patch Fixed tests to check for the updated error message. > JDBC driver makes 2 Open Session Calls for every open session > - > > Key: HIVE-13264 > URL: https://issues.apache.org/jira/browse/HIVE-13264 > Project: Hive > Issue Type: Bug > Components: JDBC >Reporter: NITHIN MAHESH >Assignee: NITHIN MAHESH > Labels: jdbc > Attachments: HIVE-13264.1.patch, HIVE-13264.2.patch, > HIVE-13264.3.patch, HIVE-13264.4.patch, HIVE-13264.5.patch, > HIVE-13264.6.patch, HIVE-13264.6.patch, HIVE-13264.7.patch, HIVE-13264.patch > > > When HTTP is used as the transport mode by the Hive JDBC driver, we noticed > that there is an additional open/close session just to validate the > connection. > > TCLIService.Iface client = new TCLIService.Client(new > TBinaryProtocol(transport)); > TOpenSessionResp openResp = client.OpenSession(new TOpenSessionReq()); > if (openResp != null) { > client.CloseSession(new > TCloseSessionReq(openResp.getSessionHandle())); > } > > The open session call is a costly one and should not be used to test > transport. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13911) load inpath fails throwing org.apache.hadoop.security.AccessControlException
[ https://issues.apache.org/jira/browse/HIVE-13911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317535#comment-15317535 ] Hive QA commented on HIVE-13911: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12808431/HIVE-13911.4.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10220 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_table_stats org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/20/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/20/console Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-20/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 8 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12808431 - PreCommit-HIVE-MASTER-Build > load inpath fails throwing org.apache.hadoop.security.AccessControlException > > > Key: HIVE-13911 > URL: https://issues.apache.org/jira/browse/HIVE-13911 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: HIVE-13911.1.patch, HIVE-13911.2.patch, > HIVE-13911.3.patch, HIVE-13911.4.patch > > > Similar to HIVE-13857 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13675) LLAP: add HMAC signatures to LLAPIF splits
[ https://issues.apache.org/jira/browse/HIVE-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317532#comment-15317532 ] Siddharth Seth commented on HIVE-13675: --- Thought llap-common was already part of the epic jar. llap-client used to be part of it (and I wanted to remove it). > LLAP: add HMAC signatures to LLAPIF splits > -- > > Key: HIVE-13675 > URL: https://issues.apache.org/jira/browse/HIVE-13675 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13675.01.patch, HIVE-13675.02.patch, > HIVE-13675.03.patch, HIVE-13675.04.patch, HIVE-13675.05.patch, > HIVE-13675.06.patch, HIVE-13675.07.patch, HIVE-13675.WIP.patch, > HIVE-13675.wo.13444.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13942) Correctness of CASE folding in the presence of NULL values
[ https://issues.apache.org/jira/browse/HIVE-13942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317395#comment-15317395 ] Ashutosh Chauhan commented on HIVE-13942: - +1 > Correctness of CASE folding in the presence of NULL values > -- > > Key: HIVE-13942 > URL: https://issues.apache.org/jira/browse/HIVE-13942 > Project: Hive > Issue Type: Sub-task > Components: CBO >Affects Versions: 2.2.0, 2.1.1 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13942.01.patch, HIVE-13942.patch > > > Introduced in HIVE-13068. > {{(case when key<'90' then 2 else 4 end) > 3}} should not fold to {{key >= > '90'}}, as these two expressions are not equivalent (consider _null_ values). > Instead, it should fold to {{not NVL((key < '90'),false)}}. > This is caused by 1) some methods still calling original _RexUtil.simplify_ > method where the bug was originally present, and 2) further improvements > needed in _HiveRexUtil.simplify_. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
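The NULL-handling argument in the issue above can be checked with a small model of SQL three-valued logic, with Python's None standing in for SQL NULL. The helper names below are illustrative only, not Hive code:

```python
def sql_lt(a, b):
    # SQL '<' under three-valued logic: NULL if either operand is NULL.
    if a is None or b is None:
        return None
    return a < b

def original_predicate(key):
    # (case when key < '90' then 2 else 4 end) > 3
    # CASE takes the ELSE branch unless the WHEN condition is exactly TRUE.
    branch = 2 if sql_lt(key, '90') is True else 4
    return branch > 3

def buggy_fold(key):
    # key >= '90': NULL in, NULL out -- rows with a NULL key get filtered.
    if key is None:
        return None
    return key >= '90'

def correct_fold(key):
    # not NVL(key < '90', false): NVL maps NULL to FALSE before negation.
    cond = sql_lt(key, '90')
    return not (False if cond is None else cond)
```

For key = NULL the original predicate evaluates to TRUE (the ELSE branch yields 4, and 4 > 3), the buggy fold yields NULL, and the proposed fold yields TRUE, which is why the two folds are not equivalent.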
[jira] [Commented] (HIVE-13942) Correctness of CASE folding in the presence of NULL values
[ https://issues.apache.org/jira/browse/HIVE-13942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317381#comment-15317381 ] Hive QA commented on HIVE-13942: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12808421/HIVE-13942.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 13 failed/errored test(s), 10220 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_table_stats org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_mult_tables_compact org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_orig_table_use_metadata org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_vector_dynpart_hashjoin_1 org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/19/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/19/console Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-19/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase 
Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 13 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12808421 - PreCommit-HIVE-MASTER-Build > Correctness of CASE folding in the presence of NULL values > -- > > Key: HIVE-13942 > URL: https://issues.apache.org/jira/browse/HIVE-13942 > Project: Hive > Issue Type: Sub-task > Components: CBO >Affects Versions: 2.2.0, 2.1.1 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13942.01.patch, HIVE-13942.patch > > > Introduced in HIVE-13068. > {{(case when key<'90' then 2 else 4 end) > 3}} should not fold to {{key >= > '90'}}, as these two expressions are not equivalent (consider _null_ values). > Instead, it should fold to {{not NVL((key < '90'),false)}}. > This is caused by 1) some methods still calling original _RexUtil.simplify_ > method where the bug was originally present, and 2) further improvements > needed in _HiveRexUtil.simplify_. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13956) LLAP: external client output is writing to channel before it is writable again
[ https://issues.apache.org/jira/browse/HIVE-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317351#comment-15317351 ] Prasanth Jayachandran commented on HIVE-13956: -- LGTM, +1. Pending tests. > LLAP: external client output is writing to channel before it is writable again > -- > > Key: HIVE-13956 > URL: https://issues.apache.org/jira/browse/HIVE-13956 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Jason Dere >Assignee: Jason Dere > Attachments: HIVE-13956.1.patch > > > Rows are being written/flushed on the output channel without checking if the > channel is writable. Introduce a writability check/wait. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13948) Incorrect timezone handling in Writable results in wrong dates in queries
[ https://issues.apache.org/jira/browse/HIVE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317338#comment-15317338 ] Jason Dere commented on HIVE-13948: --- +1 > Incorrect timezone handling in Writable results in wrong dates in queries > - > > Key: HIVE-13948 > URL: https://issues.apache.org/jira/browse/HIVE-13948 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13948.patch, HIVE-13948.patch > > > Modifying TestDateWritable to cover 200 years, adding all timezones to the > set, and making it accumulate errors, results in the following set (I bet > many are duplicates via different names, but there's enough). > This ONLY logs errors where YMD date mismatches. There are many more where > YMD is the same but the time mismatches, omitted for brevity. > Queries as simple as "select date(...);" reproduce the error (if Java tz is > set to a problematic tz) > I was investigating some case for a specific date and it seems like the > conversion from dates to ms, namely offset calculation that takes the offset > at UTC midnight and the offset at arbitrary time derived from that, is > completely bogus and it's not clear why it would work. > I think we either need to derive date from UTC and then create local date > from YMD if needed (for many cases e.g. toString for sinks, it would not be > needed at all), and/or add a lookup table for timezone used (for popular > dates, e.g. 1900-present, it would be 40k-odd entries, although the price of > building it is another question). 
> Format: tz-expected-actual > {noformat} > 2016-06-04T18:33:57,499 ERROR [main[]]: io.TestDateWritable > (TestDateWritable.java:testDaylightSavingsTime(234)) - > DATE MISMATCH: > Africa/Abidjan: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Accra: 1918-01-01 00:00:52 != 1918-12-31 23:59:08 > Africa/Bamako: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Banjul: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Bissau: 1912-01-01 00:02:20 != 1912-12-31 23:57:40 > Africa/Bissau: 1975-01-01 01:00:00 != 1975-12-31 23:00:00 > Africa/Casablanca: 1913-10-26 00:30:20 != 1913-10-25 23:29:40 > Africa/Ceuta: 1901-01-01 00:21:16 != 1901-12-31 23:38:44 > Africa/Conakry: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Dakar: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/El_Aaiun: 1976-04-14 01:00:00 != 1976-04-13 23:00:00 > Africa/Freetown: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Lome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Monrovia: 1972-05-01 00:44:30 != 1972-04-30 23:15:30 > Africa/Nouakchott: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Ouagadougou: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Sao_Tome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Timbuktu: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > America/Anguilla: 1912-03-02 00:06:04 != 1912-03-01 23:53:56 > America/Antigua: 1951-01-01 01:00:00 != 1951-12-31 23:00:00 > America/Araguaina: 1914-01-01 00:12:48 != 1914-12-31 23:47:12 > America/Araguaina: 1932-10-03 01:00:00 != 1932-10-02 23:00:00 > America/Araguaina: 1949-12-01 01:00:00 != 1949-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1920-05-01 00:16:48 != 1920-04-30 23:43:12 > America/Argentina/Buenos_Aires: 1930-12-01 01:00:00 != 1930-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1931-10-15 01:00:00 != 1931-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1932-11-01 01:00:00 != 1932-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1933-11-01 01:00:00 != 1933-10-31 23:00:00 > 
America/Argentina/Buenos_Aires: 1934-11-01 01:00:00 != 1934-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1935-11-01 01:00:00 != 1935-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1936-11-01 01:00:00 != 1936-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1937-11-01 01:00:00 != 1937-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1938-11-01 01:00:00 != 1938-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1939-11-01 01:00:00 != 1939-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1940-07-01 01:00:00 != 1940-06-30 23:00:00 > America/Argentina/Buenos_Aires: 1941-10-15 01:00:00 != 1941-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1943-10-15 01:00:00 != 1943-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1946-10-01 01:00:00 != 1946-09-30 23:00:00 > America/Argentina/Buenos_Aires: 1963-12-15 01:00:00 != 1963-12-14 23:00:00 > America/Argentina/Buenos_Aires: 1964-10-15 01:00:00 != 1964-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1965-10-15 01:00:00 != 1965-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1966-10-15 01:00:00 != 1966-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1967-10-01 01:00:00 != 1967-09-30 23:00:00 > America/Argentina/Buenos_Aires:
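The fix direction suggested in the description above, deriving the calendar date from UTC days rather than via local-midnight offset arithmetic, can be sketched as follows. The function name is hypothetical, not Hive's DateWritable API:

```python
from datetime import date, timedelta

EPOCH = date(1970, 1, 1)

def days_since_epoch_to_ymd(days):
    # Map days-since-epoch straight to a calendar date in UTC terms.
    # No timezone offset is consulted, so historical LMT offsets and DST
    # transitions (the source of the mismatches listed above) cannot shift
    # the resulting year/month/day.
    return EPOCH + timedelta(days=days)
```

A local representation, where one is needed at all (e.g. for toString in sinks), would then be constructed from these Y/M/D fields, rather than by applying an offset taken at UTC midnight to a millisecond value.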
[jira] [Updated] (HIVE-13956) LLAP: external client output is writing to channel before it is writable again
[ https://issues.apache.org/jira/browse/HIVE-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-13956: -- Status: Patch Available (was: Open) > LLAP: external client output is writing to channel before it is writable again > -- > > Key: HIVE-13956 > URL: https://issues.apache.org/jira/browse/HIVE-13956 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Jason Dere >Assignee: Jason Dere > Attachments: HIVE-13956.1.patch > > > Rows are being written/flushed on the output channel without checking if the > channel is writable. Introduce a writability check/wait. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13675) LLAP: add HMAC signatures to LLAPIF splits
[ https://issues.apache.org/jira/browse/HIVE-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13675: Attachment: HIVE-13675.07.patch Adding llap-common to the epic JAR; looks like spark cannot find it when registering the UDF that now depends on it. > LLAP: add HMAC signatures to LLAPIF splits > -- > > Key: HIVE-13675 > URL: https://issues.apache.org/jira/browse/HIVE-13675 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13675.01.patch, HIVE-13675.02.patch, > HIVE-13675.03.patch, HIVE-13675.04.patch, HIVE-13675.05.patch, > HIVE-13675.06.patch, HIVE-13675.07.patch, HIVE-13675.WIP.patch, > HIVE-13675.wo.13444.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13956) LLAP: external client output is writing to channel before it is writable again
[ https://issues.apache.org/jira/browse/HIVE-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-13956: -- Attachment: HIVE-13956.1.patch > LLAP: external client output is writing to channel before it is writable again > -- > > Key: HIVE-13956 > URL: https://issues.apache.org/jira/browse/HIVE-13956 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Jason Dere >Assignee: Jason Dere > Attachments: HIVE-13956.1.patch > > > Rows are being written/flushed on the output channel without checking if the > channel is writable. Introduce a writability check/wait. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13909) upgrade ACLs in LLAP registry when the cluster is upgraded to secure
[ https://issues.apache.org/jira/browse/HIVE-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13909: Fix Version/s: (was: 2.2.0) 2.1.0 > upgrade ACLs in LLAP registry when the cluster is upgraded to secure > > > Key: HIVE-13909 > URL: https://issues.apache.org/jira/browse/HIVE-13909 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 2.1.0 > > Attachments: HIVE-13909.01.patch, HIVE-13909.patch > > > ZK model has authentication and authorization mixed together, so it's > impossible to set up acls that would carry over between unsecure and secure > clusters in the normal case (i.e. work for a specific users no matter the > authentication method). > To support cluster updates from unsecure to secure, we'd need to change the > ACLs ourselves. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13909) upgrade ACLs in LLAP registry when the cluster is upgraded to secure
[ https://issues.apache.org/jira/browse/HIVE-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13909: Target Version/s: (was: 2.1.1) > upgrade ACLs in LLAP registry when the cluster is upgraded to secure > > > Key: HIVE-13909 > URL: https://issues.apache.org/jira/browse/HIVE-13909 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 2.1.0 > > Attachments: HIVE-13909.01.patch, HIVE-13909.patch > > > ZK model has authentication and authorization mixed together, so it's > impossible to set up acls that would carry over between unsecure and secure > clusters in the normal case (i.e. work for a specific users no matter the > authentication method). > To support cluster updates from unsecure to secure, we'd need to change the > ACLs ourselves. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13909) upgrade ACLs in LLAP registry when the cluster is upgraded to secure
[ https://issues.apache.org/jira/browse/HIVE-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317307#comment-15317307 ] Sergey Shelukhin commented on HIVE-13909: - Thanks! Done > upgrade ACLs in LLAP registry when the cluster is upgraded to secure > > > Key: HIVE-13909 > URL: https://issues.apache.org/jira/browse/HIVE-13909 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 2.1.0 > > Attachments: HIVE-13909.01.patch, HIVE-13909.patch > > > ZK model has authentication and authorization mixed together, so it's > impossible to set up acls that would carry over between unsecure and secure > clusters in the normal case (i.e. work for a specific users no matter the > authentication method). > To support cluster updates from unsecure to secure, we'd need to change the > ACLs ourselves. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13909) upgrade ACLs in LLAP registry when the cluster is upgraded to secure
[ https://issues.apache.org/jira/browse/HIVE-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317282#comment-15317282 ] Jesus Camacho Rodriguez commented on HIVE-13909: [~sershe], you can push to branch-2.1 and set fix version to 2.1.0; I am waiting for HIVE-13955 to spin a new RC. Thanks > upgrade ACLs in LLAP registry when the cluster is upgraded to secure > > > Key: HIVE-13909 > URL: https://issues.apache.org/jira/browse/HIVE-13909 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 2.2.0 > > Attachments: HIVE-13909.01.patch, HIVE-13909.patch > > > ZK model has authentication and authorization mixed together, so it's > impossible to set up acls that would carry over between unsecure and secure > clusters in the normal case (i.e. work for a specific users no matter the > authentication method). > To support cluster updates from unsecure to secure, we'd need to change the > ACLs ourselves. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13909) upgrade ACLs in LLAP registry when the cluster is upgraded to secure
[ https://issues.apache.org/jira/browse/HIVE-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13909: Resolution: Fixed Fix Version/s: 2.2.0 Target Version/s: 2.1.1 Status: Resolved (was: Patch Available) Committed to master. Pending 2.1 commit... > upgrade ACLs in LLAP registry when the cluster is upgraded to secure > > > Key: HIVE-13909 > URL: https://issues.apache.org/jira/browse/HIVE-13909 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 2.2.0 > > Attachments: HIVE-13909.01.patch, HIVE-13909.patch > > > ZK model has authentication and authorization mixed together, so it's > impossible to set up acls that would carry over between unsecure and secure > clusters in the normal case (i.e. work for a specific users no matter the > authentication method). > To support cluster updates from unsecure to secure, we'd need to change the > ACLs ourselves. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13909) upgrade ACLs in LLAP registry when the cluster is upgraded to secure
[ https://issues.apache.org/jira/browse/HIVE-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317261#comment-15317261 ] Sergey Shelukhin commented on HIVE-13909: - [~jcamachorodriguez] what is the state of branch-2.1 right now? Can I commit there and mark for 2.1.1? > upgrade ACLs in LLAP registry when the cluster is upgraded to secure > > > Key: HIVE-13909 > URL: https://issues.apache.org/jira/browse/HIVE-13909 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13909.01.patch, HIVE-13909.patch > > > ZK model has authentication and authorization mixed together, so it's > impossible to set up acls that would carry over between unsecure and secure > clusters in the normal case (i.e. work for a specific users no matter the > authentication method). > To support cluster updates from unsecure to secure, we'd need to change the > ACLs ourselves. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13912) DbTxnManager.commitTxn(): ORA-00918: column ambiguously defined
[ https://issues.apache.org/jira/browse/HIVE-13912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-13912: -- Resolution: Fixed Fix Version/s: 1.3.0 Status: Resolved (was: Patch Available) committed to master and branch-1 Thanks Alan for the review > DbTxnManager.commitTxn(): ORA-00918: column ambiguously defined > --- > > Key: HIVE-13912 > URL: https://issues.apache.org/jira/browse/HIVE-13912 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.3.0, 2.1.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Blocker > Fix For: 1.3.0, 2.1.0 > > Attachments: HIVE-13912.patch > > > {noformat} > Caused by: MetaException(message:Unable to update transaction database > java.sql.SQLSyntaxErrorException: ORA-00918: column ambiguously defined > at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440) > at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396) > at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837) > at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445) > at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191) > at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523) > at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193) > at > oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:852) > at > oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1153) > at > oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1275) > at > oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1477) > at > oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:392) > at > com.jolbox.bonecp.StatementHandle.executeQuery(StatementHandle.java:464) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.commitTxn(TxnHandler.java:662) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.commit_txn(HiveMetaStore.java:5864) > at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) > at com.sun.proxy.$Proxy49.commit_txn(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.commitTxn(HiveMetaStoreClient.java:2090) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154) > at com.sun.proxy.$Proxy50.commitTxn(Unknown Source) > at > org.apache.hadoop.hive.ql.lockmgr.DbTxnManager$SynchronizedMetaStoreClient.commitTxn(DbTxnManager.java:655) > at > org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.commitTxn(DbTxnManager.java:356) > at > org.apache.hadoop.hive.ql.Driver.releaseLocksAndCommitOrRollback(Driver.java:1024) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1321) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1083) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1071) > {noformat} > caused by > {noformat} > (sqlGenerator.addLimitClause(1, "committed.ws_txnid, > committed.ws_commit_id, committed.ws_database," + > "committed.ws_table, committed.ws_partition, cur.ws_commit_id " > + > "from WRITE_SET committed INNER JOIN WRITE_SET cur " + > "ON committed.ws_database=cur.ws_database and > committed.ws_table=cur.ws_table " + > //For partitioned table we always track writes at 
partition > level (never at table) > //and for non partitioned - always at table level, thus the > same table should never > //have entries with partition key and w/o > "and (committed.ws_partition=cur.ws_partition or > (committed.ws_partition is null and cur.ws_partition is null)) " + > "where cur.ws_txnid <= committed.ws_commit_id" + //txns > overlap; could replace ws_txnid > // with txnid,
[jira] [Updated] (HIVE-13955) Include service-rpc and llap-ext-client in packaging files
[ https://issues.apache.org/jira/browse/HIVE-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13955: --- Priority: Blocker (was: Major) > Include service-rpc and llap-ext-client in packaging files > -- > > Key: HIVE-13955 > URL: https://issues.apache.org/jira/browse/HIVE-13955 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Blocker > Attachments: HIVE-13955.patch > > > Include info in packaging/pom.xml, packaging/src/main/assembly/src.xml, and > packaging/src/main/assembly/bin.xml -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13955) Include service-rpc and llap-ext-client in packaging files
[ https://issues.apache.org/jira/browse/HIVE-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317247#comment-15317247 ] Jesus Camacho Rodriguez commented on HIVE-13955: [~vgumashta], could you confirm that we do not need to include anything for service-rpc in packaging/src/main/assembly/bin.xml? Could you review the patch? Thanks Cc [~alangates], this is the reason why those two modules were missing in the _src_ of the RC. > Include service-rpc and llap-ext-client in packaging files > -- > > Key: HIVE-13955 > URL: https://issues.apache.org/jira/browse/HIVE-13955 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13955.patch > > > Include info in packaging/pom.xml, packaging/src/main/assembly/src.xml, and > packaging/src/main/assembly/bin.xml -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13955) Include service-rpc and llap-ext-client in packaging files
[ https://issues.apache.org/jira/browse/HIVE-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13955: --- Summary: Include service-rpc and llap-ext-client in packaging files (was: Include service-rpc and llap-ext-client in packaging XML files) > Include service-rpc and llap-ext-client in packaging files > -- > > Key: HIVE-13955 > URL: https://issues.apache.org/jira/browse/HIVE-13955 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13955.patch > > > Include info in packaging/pom.xml, packaging/src/main/assembly/src.xml, and > packaging/src/main/assembly/bin.xml -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13955) Include service-rpc and llap-ext-client in packaging XML files
[ https://issues.apache.org/jira/browse/HIVE-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13955: --- Description: Include info in packaging/pom.xml, packaging/src/main/assembly/src.xml, and packaging/src/main/assembly/bin.xml (was: Include info in src/main/assembly/src.xml and src/main/assembly/bin.xml) > Include service-rpc and llap-ext-client in packaging XML files > -- > > Key: HIVE-13955 > URL: https://issues.apache.org/jira/browse/HIVE-13955 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13955.patch > > > Include info in packaging/pom.xml, packaging/src/main/assembly/src.xml, and > packaging/src/main/assembly/bin.xml -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13955) Include service-rpc and llap-ext-client in packaging XML files
[ https://issues.apache.org/jira/browse/HIVE-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13955: --- Attachment: HIVE-13955.patch > Include service-rpc and llap-ext-client in packaging XML files > -- > > Key: HIVE-13955 > URL: https://issues.apache.org/jira/browse/HIVE-13955 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13955.patch > > > Include info in src/main/assembly/src.xml and src/main/assembly/bin.xml -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13955) Include service-rpc and llap-ext-client in packaging XML files
[ https://issues.apache.org/jira/browse/HIVE-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13955: --- Status: Patch Available (was: In Progress) > Include service-rpc and llap-ext-client in packaging XML files > -- > > Key: HIVE-13955 > URL: https://issues.apache.org/jira/browse/HIVE-13955 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > > Include info in src/main/assembly/src.xml and src/main/assembly/bin.xml -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HIVE-13955) Include service-rpc and llap-ext-client in packaging XML files
[ https://issues.apache.org/jira/browse/HIVE-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-13955 started by Jesus Camacho Rodriguez. -- > Include service-rpc and llap-ext-client in packaging XML files > -- > > Key: HIVE-13955 > URL: https://issues.apache.org/jira/browse/HIVE-13955 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > > Include info in src/main/assembly/src.xml and src/main/assembly/bin.xml -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13790) log4j2 syslog appender not taking "LoggerFields" and "KeyValuePair" options
[ https://issues.apache.org/jira/browse/HIVE-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317218#comment-15317218 ] Prasanth Jayachandran commented on HIVE-13790: -- Shouldn't the loggerFields need the keys to be set in MDC? AFAIK hive does not set any keys to MDC. > log4j2 syslog appender not taking "LoggerFields" and "KeyValuePair" options > --- > > Key: HIVE-13790 > URL: https://issues.apache.org/jira/browse/HIVE-13790 > Project: Hive > Issue Type: Bug > Components: HiveServer2, Metastore >Affects Versions: 2.0.0 > Environment: Hive 2.0.0, Hadoop 2.7.2, Spark 1.6.1, HBase 1.1.2 >Reporter: Alexandre Linte > > I'm trying to use the Syslog appender with log4j2 in Hive 2.0.0. The syslog > appender is configured on my hiveserver2 and my metastore. > With a simple configuration, the logs are well written in the logfile with a > generic pattern layout: > {noformat} > May 19 10:12:16 myhiveserver2.fr Starting HiveServer2 > May 19 10:12:18 myhiveserver2.fr Connected to metastore. > May 19 10:12:20 myhiveserver2.fr Service: CLIService is inited. > May 19 10:12:20 myhiveserver2.fr Service: ThriftBinaryCLIService is inited. > {noformat} > I tried to customize this pattern layout by adding the loggerFields parameter > in my hive-log4j2.properties. 
At the end, the configuration file is: > {noformat} > status = TRACE > name = HiveLog4j2 > packages = org.apache.hadoop.hive.ql.log > property.hive.log.level = INFO > property.hive.root.logger = SYSLOG > property.hive.query.id = hadoop > property.hive.log.dir = /var/log/bigdata > property.hive.log.file = bigdata.log > appenders = console, SYSLOG > appender.console.type = Console > appender.console.name = console > appender.console.target = SYSTEM_ERR > appender.console.layout.type = PatternLayout > appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} [%t]: %p %c{2}: %m%n > appender.SYSLOG.type = Syslog > appender.SYSLOG.name = SYSLOG > appender.SYSLOG.host = 127.0.0.1 > appender.SYSLOG.port = 514 > appender.SYSLOG.protocol = UDP > appender.SYSLOG.facility = LOCAL1 > appender.SYSLOG.layout.type = loggerFields > appender.SYSLOG.layout.sdId = test > appender.SYSLOG.layout.enterpriseId = 18060 > appender.SYSLOG.layout.pairs.type = KeyValuePair > appender.SYSLOG.layout.pairs.key = service > appender.SYSLOG.layout.pairs.value = hiveserver2 > appender.SYSLOG.layout.pairs.key = loglevel > appender.SYSLOG.layout.pairs.value = %p > appender.SYSLOG.layout.pairs.key = message > appender.SYSLOG.layout.pairs.value = %c%m%n > loggers = NIOServerCnxn, ClientCnxnSocketNIO, DataNucleus, Datastore, JPOX > logger.NIOServerCnxn.name = org.apache.zookeeper.server.NIOServerCnxn > logger.NIOServerCnxn.level = WARN > logger.ClientCnxnSocketNIO.name = org.apache.zookeeper.ClientCnxnSocketNIO > logger.ClientCnxnSocketNIO.level = WARN > logger.DataNucleus.name = DataNucleus > logger.DataNucleus.level = ERROR > logger.Datastore.name = Datastore > logger.Datastore.level = ERROR > logger.JPOX.name = JPOX > logger.JPOX.level = ERROR > rootLogger.level = ${sys:hive.log.level} > rootLogger.appenderRefs = root > rootLogger.appenderRef.root.ref = ${sys:hive.root.logger} > {noformat} > Unfortunately, the logs are still written in a generic pattern layout. The > KeyValuePairs are not used. 
The log4j logs are: > {noformat} > 2016-05-19 10:36:14,866 main DEBUG Initializing configuration > org.apache.logging.log4j.core.config.properties.PropertiesConfiguration@5433a329 > 2016-05-19 10:36:16,575 main DEBUG Took 1.706004 seconds to load 3 plugins > from package org.apache.hadoop.hive.ql.log > 2016-05-19 10:36:16,575 main DEBUG PluginManager 'Core' found 80 plugins > 2016-05-19 10:36:16,576 main DEBUG PluginManager 'Level' found 0 plugins > 2016-05-19 10:36:16,578 main DEBUG Building Plugin[name=property, > class=org.apache.logging.log4j.core.config.Property]. Searching for builder > factory method... > 2016-05-19 10:36:16,583 main DEBUG No builder factory method found in class > org.apache.logging.log4j.core.config.Property. Going to try finding a factory > method instead. > 2016-05-19 10:36:16,583 main DEBUG Still building Plugin[name=property, > class=org.apache.logging.log4j.core.config.Property]. Searching for factory > method... > 2016-05-19 10:36:16,584 main DEBUG Found factory method [createProperty]: > public static org.apache.logging.log4j.core.config.Property > org.apache.logging.log4j.core.config.Property.createProperty(java.lang.String,java.lang.String). > 2016-05-19 10:36:16,611 main DEBUG TypeConverterRegistry initializing. > 2016-05-19 10:36:16,611 main DEBUG PluginManager 'TypeConverter' found 21 > plugins > 2016-05-19 10:36:16,636 main DEBUG Calling createProperty on class > org.apache.logging.log4j.core.config.Property for element
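Independent of the MDC question raised above, the posted configuration repeats the same property keys (appender.SYSLOG.layout.pairs.key and .value) three times; a Java properties file keeps only one value per key, so at most one KeyValuePair can survive parsing. A hedged sketch of distinct component identifiers (the names pairs1/pairs2 are arbitrary placeholders; the exact LoggerFields nesting for the Syslog appender should be verified against the Log4j2 configuration manual):

```properties
# Each nested component needs its own identifier. Repeating
# "appender.SYSLOG.layout.pairs.key" overwrites earlier values, because a
# properties file cannot hold duplicate keys.
appender.SYSLOG.layout.pairs1.type = KeyValuePair
appender.SYSLOG.layout.pairs1.key = service
appender.SYSLOG.layout.pairs1.value = hiveserver2
appender.SYSLOG.layout.pairs2.type = KeyValuePair
appender.SYSLOG.layout.pairs2.key = loglevel
appender.SYSLOG.layout.pairs2.value = %p
```

Note that, as the comment above points out, LoggerFields values that reference MDC keys only resolve if those keys were actually put into the MDC (ThreadContext) by the application before logging.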
[jira] [Comment Edited] (HIVE-11431) Vectorization: select * Left Semi Join projections NPE
[ https://issues.apache.org/jira/browse/HIVE-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317190#comment-15317190 ] Sergey Shelukhin edited comment on HIVE-11431 at 6/6/16 8:42 PM: - This (or a very similar top of callstack) also happens for simpler queries - e.g. select * from foo where decimal_id in (1234, 134535, 4545) (in has integers, the in column is a decimal) was (Author: sershe): This (or a very similar top of callstack) also happens for simpler queries - e.g. select * from foo where string_id in (1234, 134535, 4545) (in has integers, the in column is a string) > Vectorization: select * Left Semi Join projections NPE > -- > > Key: HIVE-11431 > URL: https://issues.apache.org/jira/browse/HIVE-11431 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 1.3.0, 1.2.1 >Reporter: Gopal V >Assignee: Matt McCline > Attachments: left-semi-bug.sql > > > The "select *" is meant to only apply to the left most table, not the right > most - the unprojected "d" from tmp1 triggers this NPE. 
> {code} > select * from tmp2 left semi join tmp1 where c1 = id and c0 = q; > {code} > {code} > Caused by: java.lang.NullPointerException > at java.lang.System.arraycopy(Native Method) > at org.apache.hadoop.io.Text.set(Text.java:225) > at > org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow$StringExtractorByValue.extract(VectorExtractRow.java:472) > at > org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRow(VectorExtractRow.java:732) > at > org.apache.hadoop.hive.ql.exec.vector.VectorFileSinkOperator.process(VectorFileSinkOperator.java:96) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > at > org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:136) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > at > org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:117) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11431) Vectorization: select * Left Semi Join projections NPE
[ https://issues.apache.org/jira/browse/HIVE-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317190#comment-15317190 ] Sergey Shelukhin commented on HIVE-11431: - This (or a very similar top of callstack) also happens for simpler queries - e.g. select * from foo where string_id in (1234, 134535, 4545) (in has integers, the in column is a string) > Vectorization: select * Left Semi Join projections NPE > -- > > Key: HIVE-11431 > URL: https://issues.apache.org/jira/browse/HIVE-11431 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 1.3.0, 1.2.1 >Reporter: Gopal V >Assignee: Matt McCline > Attachments: left-semi-bug.sql > > > The "select *" is meant to only apply to the left most table, not the right > most - the unprojected "d" from tmp1 triggers this NPE. > {code} > select * from tmp2 left semi join tmp1 where c1 = id and c0 = q; > {code} > {code} > Caused by: java.lang.NullPointerException > at java.lang.System.arraycopy(Native Method) > at org.apache.hadoop.io.Text.set(Text.java:225) > at > org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow$StringExtractorByValue.extract(VectorExtractRow.java:472) > at > org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRow(VectorExtractRow.java:732) > at > org.apache.hadoop.hive.ql.exec.vector.VectorFileSinkOperator.process(VectorFileSinkOperator.java:96) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > at > org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:136) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > at > org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:117) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HIVE-13948) Incorrect timezone handling in Writable results in wrong dates in queries
[ https://issues.apache.org/jira/browse/HIVE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317173#comment-15317173 ] Sergey Shelukhin edited comment on HIVE-13948 at 6/6/16 8:31 PM: - Minor fixes, mostly to comments. The patch seems to work end-to-end to fix problematic queries. q files need to be run in specific timezones to reproduce original issue (I was setting it via JAVA_TOOL_OPTIONS="-Duser.timezone=... ..."), so no q files are added. was (Author: sershe): Minor fixes, mostly to comments. The patch seems to work end-to-end to fix problematic queries. q files need to be run in specific timezones to reproduce original issue (I was setting it via JAVA_TOOL_OPTIONS="-Duser.timezone=Canada/Eastern ..."), so no q files are added. > Incorrect timezone handling in Writable results in wrong dates in queries > - > > Key: HIVE-13948 > URL: https://issues.apache.org/jira/browse/HIVE-13948 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13948.patch, HIVE-13948.patch > > > Modifying TestDateWritable to cover 200 years, adding all timezones to the > set, and making it accumulate errors, results in the following set (I bet > many are duplicates via different names, but there's enough). > This ONLY logs errors where YMD date mismatches. There are many more where > YMD is the same but the time mismatches, omitted for brevity. > Queries as simple as "select date(...);" reproduce the error (if Java tz is > set to a problematic tz) > I was investigating some case for a specific date and it seems like the > conversion from dates to ms, namely offset calculation that takes the offset > at UTC midnight and the offset at arbitrary time derived from that, is > completely bogus and it's not clear why it would work. > I think we either need to derive date from UTC and then create local date > from YMD if needed (for many cases e.g. 
toString for sinks, it would not be > needed at all), and/or add a lookup table for timezone used (for popular > dates, e.g. 1900-present, it would be 40k-odd entries, although the price of > building it is another question). > Format: tz-expected-actual > {noformat} > 2016-06-04T18:33:57,499 ERROR [main[]]: io.TestDateWritable > (TestDateWritable.java:testDaylightSavingsTime(234)) - > DATE MISMATCH: > Africa/Abidjan: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Accra: 1918-01-01 00:00:52 != 1918-12-31 23:59:08 > Africa/Bamako: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Banjul: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Bissau: 1912-01-01 00:02:20 != 1912-12-31 23:57:40 > Africa/Bissau: 1975-01-01 01:00:00 != 1975-12-31 23:00:00 > Africa/Casablanca: 1913-10-26 00:30:20 != 1913-10-25 23:29:40 > Africa/Ceuta: 1901-01-01 00:21:16 != 1901-12-31 23:38:44 > Africa/Conakry: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Dakar: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/El_Aaiun: 1976-04-14 01:00:00 != 1976-04-13 23:00:00 > Africa/Freetown: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Lome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Monrovia: 1972-05-01 00:44:30 != 1972-04-30 23:15:30 > Africa/Nouakchott: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Ouagadougou: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Sao_Tome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Timbuktu: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > America/Anguilla: 1912-03-02 00:06:04 != 1912-03-01 23:53:56 > America/Antigua: 1951-01-01 01:00:00 != 1951-12-31 23:00:00 > America/Araguaina: 1914-01-01 00:12:48 != 1914-12-31 23:47:12 > America/Araguaina: 1932-10-03 01:00:00 != 1932-10-02 23:00:00 > America/Araguaina: 1949-12-01 01:00:00 != 1949-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1920-05-01 00:16:48 != 1920-04-30 23:43:12 > America/Argentina/Buenos_Aires: 1930-12-01 01:00:00 != 1930-11-30 23:00:00 > 
America/Argentina/Buenos_Aires: 1931-10-15 01:00:00 != 1931-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1932-11-01 01:00:00 != 1932-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1933-11-01 01:00:00 != 1933-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1934-11-01 01:00:00 != 1934-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1935-11-01 01:00:00 != 1935-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1936-11-01 01:00:00 != 1936-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1937-11-01 01:00:00 != 1937-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1938-11-01 01:00:00 != 1938-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1939-11-01 01:00:00 != 1939-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1940-07-01 01:00:00 != 1940-06-30 23:00:00 >
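The description suggests deriving the calendar date from UTC fields rather than adjusting by a single offset sampled at UTC midnight. A minimal sketch of that idea (a hypothetical helper, not the attached patch):

```java
import java.util.Calendar;
import java.util.TimeZone;

public class DateConversionSketch {
    // Hypothetical helper: read Y/M/D from a UTC calendar, then rebuild the
    // instant in the local zone from those fields. This avoids sampling the
    // zone offset at UTC midnight, which can be stale across historical
    // offset changes and DST transitions.
    static long daysToLocalMillis(int epochDays, TimeZone local) {
        Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        utc.clear();
        utc.setTimeInMillis(epochDays * 86400000L);
        Calendar loc = Calendar.getInstance(local);
        loc.clear();
        loc.set(utc.get(Calendar.YEAR), utc.get(Calendar.MONTH),
                utc.get(Calendar.DAY_OF_MONTH));
        return loc.getTimeInMillis();
    }

    public static void main(String[] args) {
        // Epoch day 0 is 1970-01-01; local midnight in New York (EST, UTC-5
        // on that date) is five hours after the UTC epoch.
        System.out.println(daysToLocalMillis(0, TimeZone.getTimeZone("America/New_York")));
        // 18000000
    }
}
```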
[jira] [Updated] (HIVE-13948) Incorrect timezone handling in Writable results in wrong dates in queries
[ https://issues.apache.org/jira/browse/HIVE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13948: Attachment: HIVE-13948.patch Minor fixes, mostly to comments. The patch seems to work end-to-end to fix problematic queries. q files need to be run in specific timezones to reproduce original issue (I was setting it via JAVA_TOOL_OPTIONS="-Duser.timezone=Canada/Eastern ..."), so no q files are added. > Incorrect timezone handling in Writable results in wrong dates in queries > - > > Key: HIVE-13948 > URL: https://issues.apache.org/jira/browse/HIVE-13948 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13948.patch, HIVE-13948.patch > > > Modifying TestDateWritable to cover 200 years, adding all timezones to the > set, and making it accumulate errors, results in the following set (I bet > many are duplicates via different names, but there's enough). > This ONLY logs errors where YMD date mismatches. There are many more where > YMD is the same but the time mismatches, omitted for brevity. > Queries as simple as "select date(...);" reproduce the error (if Java tz is > set to a problematic tz) > I was investigating some case for a specific date and it seems like the > conversion from dates to ms, namely offset calculation that takes the offset > at UTC midnight and the offset at arbitrary time derived from that, is > completely bogus and it's not clear why it would work. > I think we either need to derive date from UTC and then create local date > from YMD if needed (for many cases e.g. toString for sinks, it would not be > needed at all), and/or add a lookup table for timezone used (for popular > dates, e.g. 1900-present, it would be 40k-odd entries, although the price of > building it is another question). 
> Format: tz-expected-actual > {noformat} > 2016-06-04T18:33:57,499 ERROR [main[]]: io.TestDateWritable > (TestDateWritable.java:testDaylightSavingsTime(234)) - > DATE MISMATCH: > Africa/Abidjan: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Accra: 1918-01-01 00:00:52 != 1918-12-31 23:59:08 > Africa/Bamako: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Banjul: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Bissau: 1912-01-01 00:02:20 != 1912-12-31 23:57:40 > Africa/Bissau: 1975-01-01 01:00:00 != 1975-12-31 23:00:00 > Africa/Casablanca: 1913-10-26 00:30:20 != 1913-10-25 23:29:40 > Africa/Ceuta: 1901-01-01 00:21:16 != 1901-12-31 23:38:44 > Africa/Conakry: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Dakar: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/El_Aaiun: 1976-04-14 01:00:00 != 1976-04-13 23:00:00 > Africa/Freetown: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Lome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Monrovia: 1972-05-01 00:44:30 != 1972-04-30 23:15:30 > Africa/Nouakchott: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Ouagadougou: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Sao_Tome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Timbuktu: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > America/Anguilla: 1912-03-02 00:06:04 != 1912-03-01 23:53:56 > America/Antigua: 1951-01-01 01:00:00 != 1951-12-31 23:00:00 > America/Araguaina: 1914-01-01 00:12:48 != 1914-12-31 23:47:12 > America/Araguaina: 1932-10-03 01:00:00 != 1932-10-02 23:00:00 > America/Araguaina: 1949-12-01 01:00:00 != 1949-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1920-05-01 00:16:48 != 1920-04-30 23:43:12 > America/Argentina/Buenos_Aires: 1930-12-01 01:00:00 != 1930-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1931-10-15 01:00:00 != 1931-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1932-11-01 01:00:00 != 1932-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1933-11-01 01:00:00 != 1933-10-31 23:00:00 > 
America/Argentina/Buenos_Aires: 1934-11-01 01:00:00 != 1934-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1935-11-01 01:00:00 != 1935-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1936-11-01 01:00:00 != 1936-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1937-11-01 01:00:00 != 1937-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1938-11-01 01:00:00 != 1938-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1939-11-01 01:00:00 != 1939-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1940-07-01 01:00:00 != 1940-06-30 23:00:00 > America/Argentina/Buenos_Aires: 1941-10-15 01:00:00 != 1941-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1943-10-15 01:00:00 != 1943-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1946-10-01 01:00:00 != 1946-09-30 23:00:00 > America/Argentina/Buenos_Aires: 1963-12-15 01:00:00 != 1963-12-14 23:00:00 > America/Argentina/Buenos_Aires: 1964-10-15 01:00:00 !=
[jira] [Updated] (HIVE-13948) Incorrect timezone handling in Writable results in wrong dates in queries
[ https://issues.apache.org/jira/browse/HIVE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13948: Attachment: (was: HIVE-13948.patch) > Incorrect timezone handling in Writable results in wrong dates in queries > - > > Key: HIVE-13948 > URL: https://issues.apache.org/jira/browse/HIVE-13948 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13948.patch > > > Modifying TestDateWritable to cover 200 years, adding all timezones to the > set, and making it accumulate errors, results in the following set (I bet > many are duplicates via different names, but there's enough). > This ONLY logs errors where YMD date mismatches. There are many more where > YMD is the same but the time mismatches, omitted for brevity. > Queries as simple as "select date(...);" reproduce the error (if Java tz is > set to a problematic tz) > I was investigating some case for a specific date and it seems like the > conversion from dates to ms, namely offset calculation that takes the offset > at UTC midnight and the offset at arbitrary time derived from that, is > completely bogus and it's not clear why it would work. > I think we either need to derive date from UTC and then create local date > from YMD if needed (for many cases e.g. toString for sinks, it would not be > needed at all), and/or add a lookup table for timezone used (for popular > dates, e.g. 1900-present, it would be 40k-odd entries, although the price of > building it is another question). 
> Format: tz-expected-actual > {noformat} > 2016-06-04T18:33:57,499 ERROR [main[]]: io.TestDateWritable > (TestDateWritable.java:testDaylightSavingsTime(234)) - > DATE MISMATCH: > Africa/Abidjan: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Accra: 1918-01-01 00:00:52 != 1918-12-31 23:59:08 > Africa/Bamako: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Banjul: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Bissau: 1912-01-01 00:02:20 != 1912-12-31 23:57:40 > Africa/Bissau: 1975-01-01 01:00:00 != 1975-12-31 23:00:00 > Africa/Casablanca: 1913-10-26 00:30:20 != 1913-10-25 23:29:40 > Africa/Ceuta: 1901-01-01 00:21:16 != 1901-12-31 23:38:44 > Africa/Conakry: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Dakar: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/El_Aaiun: 1976-04-14 01:00:00 != 1976-04-13 23:00:00 > Africa/Freetown: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Lome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Monrovia: 1972-05-01 00:44:30 != 1972-04-30 23:15:30 > Africa/Nouakchott: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Ouagadougou: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Sao_Tome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Timbuktu: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > America/Anguilla: 1912-03-02 00:06:04 != 1912-03-01 23:53:56 > America/Antigua: 1951-01-01 01:00:00 != 1951-12-31 23:00:00 > America/Araguaina: 1914-01-01 00:12:48 != 1914-12-31 23:47:12 > America/Araguaina: 1932-10-03 01:00:00 != 1932-10-02 23:00:00 > America/Araguaina: 1949-12-01 01:00:00 != 1949-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1920-05-01 00:16:48 != 1920-04-30 23:43:12 > America/Argentina/Buenos_Aires: 1930-12-01 01:00:00 != 1930-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1931-10-15 01:00:00 != 1931-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1932-11-01 01:00:00 != 1932-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1933-11-01 01:00:00 != 1933-10-31 23:00:00 > 
America/Argentina/Buenos_Aires: 1934-11-01 01:00:00 != 1934-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1935-11-01 01:00:00 != 1935-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1936-11-01 01:00:00 != 1936-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1937-11-01 01:00:00 != 1937-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1938-11-01 01:00:00 != 1938-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1939-11-01 01:00:00 != 1939-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1940-07-01 01:00:00 != 1940-06-30 23:00:00 > America/Argentina/Buenos_Aires: 1941-10-15 01:00:00 != 1941-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1943-10-15 01:00:00 != 1943-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1946-10-01 01:00:00 != 1946-09-30 23:00:00 > America/Argentina/Buenos_Aires: 1963-12-15 01:00:00 != 1963-12-14 23:00:00 > America/Argentina/Buenos_Aires: 1964-10-15 01:00:00 != 1964-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1965-10-15 01:00:00 != 1965-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1966-10-15 01:00:00 != 1966-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1967-10-01 01:00:00 != 1967-09-30 23:00:00 > America/Argentina/Buenos_Aires:
[jira] [Updated] (HIVE-13948) Incorrect timezone handling in Writable results in wrong dates in queries
[ https://issues.apache.org/jira/browse/HIVE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13948: Attachment: HIVE-13948.patch > Incorrect timezone handling in Writable results in wrong dates in queries > - > > Key: HIVE-13948 > URL: https://issues.apache.org/jira/browse/HIVE-13948 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13948.patch > > > Modifying TestDateWritable to cover 200 years, adding all timezones to the > set, and making it accumulate errors, results in the following set (I bet > many are duplicates via different names, but there's enough). > This ONLY logs errors where YMD date mismatches. There are many more where > YMD is the same but the time mismatches, omitted for brevity. > Queries as simple as "select date(...);" reproduce the error (if Java tz is > set to a problematic tz) > I was investigating some case for a specific date and it seems like the > conversion from dates to ms, namely offset calculation that takes the offset > at UTC midnight and the offset at arbitrary time derived from that, is > completely bogus and it's not clear why it would work. > I think we either need to derive date from UTC and then create local date > from YMD if needed (for many cases e.g. toString for sinks, it would not be > needed at all), and/or add a lookup table for timezone used (for popular > dates, e.g. 1900-present, it would be 40k-odd entries, although the price of > building it is another question). 
> Format: tz-expected-actual > {noformat} > 2016-06-04T18:33:57,499 ERROR [main[]]: io.TestDateWritable > (TestDateWritable.java:testDaylightSavingsTime(234)) - > DATE MISMATCH: > Africa/Abidjan: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Accra: 1918-01-01 00:00:52 != 1918-12-31 23:59:08 > Africa/Bamako: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Banjul: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Bissau: 1912-01-01 00:02:20 != 1912-12-31 23:57:40 > Africa/Bissau: 1975-01-01 01:00:00 != 1975-12-31 23:00:00 > Africa/Casablanca: 1913-10-26 00:30:20 != 1913-10-25 23:29:40 > Africa/Ceuta: 1901-01-01 00:21:16 != 1901-12-31 23:38:44 > Africa/Conakry: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Dakar: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/El_Aaiun: 1976-04-14 01:00:00 != 1976-04-13 23:00:00 > Africa/Freetown: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Lome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Monrovia: 1972-05-01 00:44:30 != 1972-04-30 23:15:30 > Africa/Nouakchott: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Ouagadougou: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Sao_Tome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Timbuktu: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > America/Anguilla: 1912-03-02 00:06:04 != 1912-03-01 23:53:56 > America/Antigua: 1951-01-01 01:00:00 != 1951-12-31 23:00:00 > America/Araguaina: 1914-01-01 00:12:48 != 1914-12-31 23:47:12 > America/Araguaina: 1932-10-03 01:00:00 != 1932-10-02 23:00:00 > America/Araguaina: 1949-12-01 01:00:00 != 1949-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1920-05-01 00:16:48 != 1920-04-30 23:43:12 > America/Argentina/Buenos_Aires: 1930-12-01 01:00:00 != 1930-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1931-10-15 01:00:00 != 1931-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1932-11-01 01:00:00 != 1932-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1933-11-01 01:00:00 != 1933-10-31 23:00:00 > 
America/Argentina/Buenos_Aires: 1934-11-01 01:00:00 != 1934-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1935-11-01 01:00:00 != 1935-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1936-11-01 01:00:00 != 1936-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1937-11-01 01:00:00 != 1937-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1938-11-01 01:00:00 != 1938-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1939-11-01 01:00:00 != 1939-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1940-07-01 01:00:00 != 1940-06-30 23:00:00 > America/Argentina/Buenos_Aires: 1941-10-15 01:00:00 != 1941-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1943-10-15 01:00:00 != 1943-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1946-10-01 01:00:00 != 1946-09-30 23:00:00 > America/Argentina/Buenos_Aires: 1963-12-15 01:00:00 != 1963-12-14 23:00:00 > America/Argentina/Buenos_Aires: 1964-10-15 01:00:00 != 1964-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1965-10-15 01:00:00 != 1965-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1966-10-15 01:00:00 != 1966-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1967-10-01 01:00:00 != 1967-09-30 23:00:00 > America/Argentina/Buenos_Aires: 1968-10-06 01:00:00
[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP
[ https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317149#comment-15317149 ] Hive QA commented on HIVE-13931: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12808222/HIVE-13931.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 13 failed/errored test(s), 10220 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_table_stats org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_orig_table_use_metadata org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_create_with_constraints_duplicate_name org.apache.hadoop.hive.llap.security.TestLlapSignerImpl.testSigning org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/18/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/18/console Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-18/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase 
Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 13 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12808222 - PreCommit-HIVE-MASTER-Build > Add support for HikariCP and replace BoneCP usage with HikariCP > --- > > Key: HIVE-13931 > URL: https://issues.apache.org/jira/browse/HIVE-13931 > Project: Hive > Issue Type: Bug > Components: Metastore >Reporter: Sushanth Sowmyan >Assignee: Sushanth Sowmyan > Attachments: HIVE-13931.2.patch, HIVE-13931.patch > > > Currently, we use BoneCP as our primary connection pooling mechanism > (overridable by users). However, BoneCP is no longer being actively > developed, and is considered deprecated, replaced by HikariCP. > Thus, we should add support for HikariCP, and try to replace our primary > usage of BoneCP with it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HIVE-13953) Issues in HiveLockObject equals method
[ https://issues.apache.org/jira/browse/HIVE-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317062#comment-15317062 ] Yongzhi Chen edited comment on HIVE-13953 at 6/6/16 8:18 PM: - The fix looks good, +1 pending the testing. http://introcs.cs.princeton.edu/java/11precedence/ was (Author: ychena): The fix looks good, +1 pending the testing. > Issues in HiveLockObject equals method > -- > > Key: HIVE-13953 > URL: https://issues.apache.org/jira/browse/HIVE-13953 > Project: Hive > Issue Type: Bug > Components: Locking >Reporter: Chaoyu Tang >Assignee: Chaoyu Tang > Attachments: HIVE-13953.patch > > > There are two issues in the equals method of HiveLockObject: > {code} > @Override > public boolean equals(Object o) { > if (!(o instanceof HiveLockObject)) { > return false; > } > HiveLockObject tgt = (HiveLockObject) o; > return Arrays.equals(pathNames, tgt.pathNames) && > data == null ? tgt.getData() == null : > tgt.getData() != null && data.equals(tgt.getData()); > } > {code} > 1. Arrays.equals(pathNames, tgt.pathNames) might return false for the same > path in HiveLockObject since in current Hive, the pathname components might > be stored in two ways, taking a dynamic partition path db/tbl/part1/part2 as > an example, it might be stored in the pathNames as an array of four elements, > db, tbl, part1, and part2 or as an array only having one element > db/tbl/part1/part2. It will be safer to compare the pathNames using > StringUtils.equals(this.getName(), tgt.getName()) > 2. The comparison logic is not right. > {code} > @Override > public boolean equals(Object o) { > if (!(o instanceof HiveLockObject)) { > return false; > } > HiveLockObject tgt = (HiveLockObject) o; > return StringUtils.equals(this.getName(), tgt.getName()) && > (data == null ? tgt.getData() == null : data.equals(tgt.getData())); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
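The precedence issue the description identifies can be shown in isolation: in Java, && binds tighter than the conditional operator, so `x && cond ? a : b` parses as `(x && cond) ? a : b`. A standalone sketch with booleans standing in for the real comparisons (not Hive code):

```java
public class TernaryPrecedenceSketch {
    // Mirrors the original equals(): a false path comparison does not
    // short-circuit the method to false; it merely selects the ternary's
    // else branch, which can still evaluate to true.
    static boolean buggy(boolean pathsEqual, boolean dataNull,
                         boolean tgtDataNull, boolean dataEqual) {
        return pathsEqual && dataNull ? tgtDataNull : !tgtDataNull && dataEqual;
    }

    // Mirrors the corrected version: explicit parentheses keep the path
    // comparison as a mandatory conjunct.
    static boolean fixed(boolean pathsEqual, boolean dataNull,
                         boolean tgtDataNull, boolean dataEqual) {
        return pathsEqual && (dataNull ? tgtDataNull : dataEqual);
    }

    public static void main(String[] args) {
        // Paths differ (first argument false), yet the buggy form returns true:
        System.out.println(buggy(false, true, false, true)); // true (wrong)
        System.out.println(fixed(false, true, false, true)); // false
    }
}
```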
[jira] [Commented] (HIVE-13904) Ignore case when retrieving ColumnInfo from RowResolver
[ https://issues.apache.org/jira/browse/HIVE-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317143#comment-15317143 ] Ashutosh Chauhan commented on HIVE-13904: - +1 > Ignore case when retrieving ColumnInfo from RowResolver > --- > > Key: HIVE-13904 > URL: https://issues.apache.org/jira/browse/HIVE-13904 > Project: Hive > Issue Type: Bug > Components: Parser >Affects Versions: 2.1.0, 2.0.1, 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13904.01.patch, HIVE-13904.01.patch, > HIVE-13904.02.patch, HIVE-13904.patch > > > To reproduce: > {noformat} > -- upper case in subq > explain > select * from src b > where exists > (select a.key from src a > where b.VALUE = a.VALUE > ); > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13905) optimize ColumnStatsTask::constructColumnStatsFromPackedRows to have lesser number of getTable calls
[ https://issues.apache.org/jira/browse/HIVE-13905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13905: --- Fix Version/s: (was: 2.1.1) (was: 2.2.0) 2.1.0 > optimize ColumnStatsTask::constructColumnStatsFromPackedRows to have lesser > number of getTable calls > > > Key: HIVE-13905 > URL: https://issues.apache.org/jira/browse/HIVE-13905 > Project: Hive > Issue Type: Sub-task > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Fix For: 2.1.0 > > Attachments: HIVE-13905.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13806) Extension to folding NOT expressions in CBO
[ https://issues.apache.org/jira/browse/HIVE-13806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13806: --- Fix Version/s: (was: 2.1.1) (was: 2.2.0) 2.1.0 > Extension to folding NOT expressions in CBO > --- > > Key: HIVE-13806 > URL: https://issues.apache.org/jira/browse/HIVE-13806 > Project: Hive > Issue Type: Sub-task > Components: CBO >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Fix For: 2.1.0 > > Attachments: HIVE-13806.01.patch, HIVE-13806.patch > > > Follow-up of HIVE-13068. > Extension to folding expressions for NOT. > Currently, simplification is performed only if NOT is applied on a simple > operation (e.g. IS NOT NULL, =, <>, etc.). We should take advantage of NOT > distributivity when it is applied on OR/AND operations to try to simplify > predicates further. > Ex. ql/src/test/results/clientpositive/folder_predicate.q.out > {noformat} > explain > SELECT * FROM predicate_fold_tb WHERE not(value IS NOT NULL AND value = 3) > {noformat} > Plan: > {noformat} > STAGE DEPENDENCIES: > Stage-1 is a root stage > Stage-0 depends on stages: Stage-1 > STAGE PLANS: > Stage: Stage-1 > Map Reduce > Map Operator Tree: > TableScan > alias: predicate_fold_tb > Statistics: Num rows: 6 Data size: 7 Basic stats: COMPLETE Column > stats: NONE > Filter Operator > predicate: (not (value is not null and (value = 3))) (type: > boolean) > Statistics: Num rows: 3 Data size: 3 Basic stats: COMPLETE > Column stats: NONE > Select Operator > expressions: value (type: int) > outputColumnNames: _col0 > Statistics: Num rows: 3 Data size: 3 Basic stats: COMPLETE > Column stats: NONE > File Output Operator > compressed: false > Statistics: Num rows: 3 Data size: 3 Basic stats: COMPLETE > Column stats: NONE > table: > input format: > org.apache.hadoop.mapred.SequenceFileInputFormat > output format: > org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat > serde: > 
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > ListSink > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
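The simplification the ticket asks for is De Morgan's law applied to the predicate tree: `NOT(a AND b)` becomes `NOT(a) OR NOT(b)`, turning `not(value IS NOT NULL AND value = 3)` into `value IS NULL OR value <> 3`. A toy sketch of pushing NOT through AND/OR nodes, using a made-up expression tree rather than Hive's actual Calcite RexNode representation:

```java
import java.util.List;
import java.util.stream.Collectors;

// Toy predicate tree illustrating De Morgan-style NOT push-down.
// Names are illustrative; Hive/Calcite operate on RexNode trees.
abstract class Expr {
    abstract Expr pushNot(boolean negated);
}

class Leaf extends Expr {
    final String text;
    Leaf(String text) { this.text = text; }
    Expr pushNot(boolean negated) { return negated ? new Leaf("NOT " + text) : this; }
    public String toString() { return text; }
}

class Bool extends Expr {
    final boolean isAnd;
    final List<Expr> kids;
    Bool(boolean isAnd, List<Expr> kids) { this.isAnd = isAnd; this.kids = kids; }
    Expr pushNot(boolean negated) {
        boolean op = negated ? !isAnd : isAnd; // De Morgan: NOT flips AND <-> OR
        return new Bool(op, kids.stream()
                .map(k -> k.pushNot(negated))
                .collect(Collectors.toList()));
    }
    public String toString() {
        return kids.stream().map(Object::toString)
                .collect(Collectors.joining(isAnd ? " AND " : " OR ", "(", ")"));
    }
}

class Not extends Expr {
    final Expr kid;
    Not(Expr kid) { this.kid = kid; }
    Expr pushNot(boolean negated) { return kid.pushNot(!negated); } // flip polarity
    public String toString() { return "NOT " + kid; }
}

public class DeMorganDemo {
    public static void main(String[] args) {
        Expr e = new Not(new Bool(true,
                List.of(new Leaf("value IS NOT NULL"), new Leaf("value = 3"))));
        System.out.println(e.pushNot(false));
        // (NOT value IS NOT NULL OR NOT value = 3)
    }
}
```

A real implementation would additionally fold the negated leaves (e.g. `NOT value IS NOT NULL` into `value IS NULL`), which is the simple-operation folding HIVE-13068 already handles.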
[jira] [Updated] (HIVE-13922) Optimize the code path that analyzes/updates col stats
[ https://issues.apache.org/jira/browse/HIVE-13922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13922: --- Fix Version/s: (was: 2.1.1) (was: 2.2.0) 2.1.0 > Optimize the code path that analyzes/updates col stats > -- > > Key: HIVE-13922 > URL: https://issues.apache.org/jira/browse/HIVE-13922 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Fix For: 2.1.0 > > Attachments: HIVE-13922.1.patch > > > 1. Depending on the number of partitions, > HiveMetastore::update_partition_column_statistics::getPartValsFromName > obtains the same table several times. > 2. In ObjectStore, number of get calls to obtain the column stats can be > considerably reduced when writing table/partition stats. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
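Both items above boil down to fetch-once-and-reuse: the same table object is retrieved repeatedly within one operation. As a hedged sketch (the class and loader names are hypothetical, not Hive's metastore API), a per-task memoizing lookup avoids the repeated round trips:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the fetch-once-and-reuse pattern behind reducing
// repeated getTable() metastore calls: memoize lookups for one task's lifetime.
public class TableLookupCache<T> {
    private final Map<String, T> cache = new HashMap<>();
    private final Function<String, T> loader; // stands in for the metastore call
    private int loads = 0;

    public TableLookupCache(Function<String, T> loader) { this.loader = loader; }

    public T get(String tableName) {
        // Only the first lookup per table name pays the remote-call cost.
        return cache.computeIfAbsent(tableName, n -> { loads++; return loader.apply(n); });
    }

    public int loadCount() { return loads; }

    public static void main(String[] args) {
        TableLookupCache<String> c = new TableLookupCache<>(n -> "Table:" + n);
        for (int i = 0; i < 1000; i++) {
            c.get("db.tbl"); // e.g. one call per partition being updated
        }
        System.out.println(c.loadCount()); // 1
    }
}
```

The cache must be scoped to a single task or request so it never serves stale metadata across DDL operations.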
[jira] [Updated] (HIVE-13933) Add an option to turn off parallel file moves
[ https://issues.apache.org/jira/browse/HIVE-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13933: --- Fix Version/s: (was: 2.1.1) (was: 2.2.0) 2.1.0 > Add an option to turn off parallel file moves > - > > Key: HIVE-13933 > URL: https://issues.apache.org/jira/browse/HIVE-13933 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Affects Versions: 2.1.0 >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan > Fix For: 2.1.0 > > Attachments: HIVE-13933.patch > > > Since this is a new feature, it makes sense to have the ability to turn it off. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13948) Incorrect timezone handling in Writable results in wrong dates in queries
[ https://issues.apache.org/jira/browse/HIVE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13948: Status: Patch Available (was: Open) [~jdere] [~gopalv] [~ashutoshc] can someone take a look? > Incorrect timezone handling in Writable results in wrong dates in queries > - > > Key: HIVE-13948 > URL: https://issues.apache.org/jira/browse/HIVE-13948 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13948.patch > > > Modifying TestDateWritable to cover 200 years, adding all timezones to the > set, and making it accumulate errors, results in the following set (I bet > many are duplicates via different names, but there's enough). > This ONLY logs errors where YMD date mismatches. There are many more where > YMD is the same but the time mismatches, omitted for brevity. > Queries as simple as "select date(...);" reproduce the error (if Java tz is > set to a problematic tz) > I was investigating some case for a specific date and it seems like the > conversion from dates to ms, namely offset calculation that takes the offset > at UTC midnight and the offset at arbitrary time derived from that, is > completely bogus and it's not clear why it would work. > I think we either need to derive date from UTC and then create local date > from YMD if needed (for many cases e.g. toString for sinks, it would not be > needed at all), and/or add a lookup table for timezone used (for popular > dates, e.g. 1900-present, it would be 40k-odd entries, although the price of > building it is another question). 
> Format: tz-expected-actual > {noformat} > 2016-06-04T18:33:57,499 ERROR [main[]]: io.TestDateWritable > (TestDateWritable.java:testDaylightSavingsTime(234)) - > DATE MISMATCH: > Africa/Abidjan: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Accra: 1918-01-01 00:00:52 != 1918-12-31 23:59:08 > Africa/Bamako: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Banjul: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Bissau: 1912-01-01 00:02:20 != 1912-12-31 23:57:40 > Africa/Bissau: 1975-01-01 01:00:00 != 1975-12-31 23:00:00 > Africa/Casablanca: 1913-10-26 00:30:20 != 1913-10-25 23:29:40 > Africa/Ceuta: 1901-01-01 00:21:16 != 1901-12-31 23:38:44 > Africa/Conakry: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Dakar: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/El_Aaiun: 1976-04-14 01:00:00 != 1976-04-13 23:00:00 > Africa/Freetown: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Lome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Monrovia: 1972-05-01 00:44:30 != 1972-04-30 23:15:30 > Africa/Nouakchott: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Ouagadougou: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Sao_Tome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Timbuktu: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > America/Anguilla: 1912-03-02 00:06:04 != 1912-03-01 23:53:56 > America/Antigua: 1951-01-01 01:00:00 != 1951-12-31 23:00:00 > America/Araguaina: 1914-01-01 00:12:48 != 1914-12-31 23:47:12 > America/Araguaina: 1932-10-03 01:00:00 != 1932-10-02 23:00:00 > America/Araguaina: 1949-12-01 01:00:00 != 1949-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1920-05-01 00:16:48 != 1920-04-30 23:43:12 > America/Argentina/Buenos_Aires: 1930-12-01 01:00:00 != 1930-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1931-10-15 01:00:00 != 1931-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1932-11-01 01:00:00 != 1932-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1933-11-01 01:00:00 != 1933-10-31 23:00:00 > 
America/Argentina/Buenos_Aires: 1934-11-01 01:00:00 != 1934-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1935-11-01 01:00:00 != 1935-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1936-11-01 01:00:00 != 1936-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1937-11-01 01:00:00 != 1937-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1938-11-01 01:00:00 != 1938-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1939-11-01 01:00:00 != 1939-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1940-07-01 01:00:00 != 1940-06-30 23:00:00 > America/Argentina/Buenos_Aires: 1941-10-15 01:00:00 != 1941-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1943-10-15 01:00:00 != 1943-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1946-10-01 01:00:00 != 1946-09-30 23:00:00 > America/Argentina/Buenos_Aires: 1963-12-15 01:00:00 != 1963-12-14 23:00:00 > America/Argentina/Buenos_Aires: 1964-10-15 01:00:00 != 1964-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1965-10-15 01:00:00 != 1965-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1966-10-15 01:00:00 != 1966-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1967-10-01 01:00:00 !=
[jira] [Commented] (HIVE-13948) Incorrect timezone handling in Writable results in wrong dates in queries
[ https://issues.apache.org/jira/browse/HIVE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317109#comment-15317109 ] Sergey Shelukhin commented on HIVE-13948: - q file test changes are not intended > Incorrect timezone handling in Writable results in wrong dates in queries > - > > Key: HIVE-13948 > URL: https://issues.apache.org/jira/browse/HIVE-13948 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13948.patch > > > Modifying TestDateWritable to cover 200 years, adding all timezones to the > set, and making it accumulate errors, results in the following set (I bet > many are duplicates via different names, but there's enough). > This ONLY logs errors where YMD date mismatches. There are many more where > YMD is the same but the time mismatches, omitted for brevity. > Queries as simple as "select date(...);" reproduce the error (if Java tz is > set to a problematic tz) > I was investigating some case for a specific date and it seems like the > conversion from dates to ms, namely offset calculation that takes the offset > at UTC midnight and the offset at arbitrary time derived from that, is > completely bogus and it's not clear why it would work. > I think we either need to derive date from UTC and then create local date > from YMD if needed (for many cases e.g. toString for sinks, it would not be > needed at all), and/or add a lookup table for timezone used (for popular > dates, e.g. 1900-present, it would be 40k-odd entries, although the price of > building it is another question). 
> Format: tz-expected-actual > {noformat} > 2016-06-04T18:33:57,499 ERROR [main[]]: io.TestDateWritable > (TestDateWritable.java:testDaylightSavingsTime(234)) - > DATE MISMATCH: > Africa/Abidjan: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Accra: 1918-01-01 00:00:52 != 1918-12-31 23:59:08 > Africa/Bamako: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Banjul: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Bissau: 1912-01-01 00:02:20 != 1912-12-31 23:57:40 > Africa/Bissau: 1975-01-01 01:00:00 != 1975-12-31 23:00:00 > Africa/Casablanca: 1913-10-26 00:30:20 != 1913-10-25 23:29:40 > Africa/Ceuta: 1901-01-01 00:21:16 != 1901-12-31 23:38:44 > Africa/Conakry: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Dakar: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/El_Aaiun: 1976-04-14 01:00:00 != 1976-04-13 23:00:00 > Africa/Freetown: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Lome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Monrovia: 1972-05-01 00:44:30 != 1972-04-30 23:15:30 > Africa/Nouakchott: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Ouagadougou: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Sao_Tome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Timbuktu: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > America/Anguilla: 1912-03-02 00:06:04 != 1912-03-01 23:53:56 > America/Antigua: 1951-01-01 01:00:00 != 1951-12-31 23:00:00 > America/Araguaina: 1914-01-01 00:12:48 != 1914-12-31 23:47:12 > America/Araguaina: 1932-10-03 01:00:00 != 1932-10-02 23:00:00 > America/Araguaina: 1949-12-01 01:00:00 != 1949-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1920-05-01 00:16:48 != 1920-04-30 23:43:12 > America/Argentina/Buenos_Aires: 1930-12-01 01:00:00 != 1930-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1931-10-15 01:00:00 != 1931-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1932-11-01 01:00:00 != 1932-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1933-11-01 01:00:00 != 1933-10-31 23:00:00 > 
America/Argentina/Buenos_Aires: 1934-11-01 01:00:00 != 1934-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1935-11-01 01:00:00 != 1935-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1936-11-01 01:00:00 != 1936-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1937-11-01 01:00:00 != 1937-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1938-11-01 01:00:00 != 1938-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1939-11-01 01:00:00 != 1939-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1940-07-01 01:00:00 != 1940-06-30 23:00:00 > America/Argentina/Buenos_Aires: 1941-10-15 01:00:00 != 1941-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1943-10-15 01:00:00 != 1943-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1946-10-01 01:00:00 != 1946-09-30 23:00:00 > America/Argentina/Buenos_Aires: 1963-12-15 01:00:00 != 1963-12-14 23:00:00 > America/Argentina/Buenos_Aires: 1964-10-15 01:00:00 != 1964-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1965-10-15 01:00:00 != 1965-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1966-10-15 01:00:00 != 1966-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1967-10-01 01:00:00 != 1967-09-30 23:00:00 >
[jira] [Updated] (HIVE-13948) Incorrect timezone handling in Writable results in wrong dates in queries
[ https://issues.apache.org/jira/browse/HIVE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13948: Attachment: HIVE-13948.patch A patch. > Incorrect timezone handling in Writable results in wrong dates in queries > - > > Key: HIVE-13948 > URL: https://issues.apache.org/jira/browse/HIVE-13948 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13948.patch > > > Modifying TestDateWritable to cover 200 years, adding all timezones to the > set, and making it accumulate errors, results in the following set (I bet > many are duplicates via different names, but there's enough). > This ONLY logs errors where YMD date mismatches. There are many more where > YMD is the same but the time mismatches, omitted for brevity. > Queries as simple as "select date(...);" reproduce the error (if Java tz is > set to a problematic tz) > I was investigating some case for a specific date and it seems like the > conversion from dates to ms, namely offset calculation that takes the offset > at UTC midnight and the offset at arbitrary time derived from that, is > completely bogus and it's not clear why it would work. > I think we either need to derive date from UTC and then create local date > from YMD if needed (for many cases e.g. toString for sinks, it would not be > needed at all), and/or add a lookup table for timezone used (for popular > dates, e.g. 1900-present, it would be 40k-odd entries, although the price of > building it is another question). 
> Format: tz-expected-actual > {noformat} > 2016-06-04T18:33:57,499 ERROR [main[]]: io.TestDateWritable > (TestDateWritable.java:testDaylightSavingsTime(234)) - > DATE MISMATCH: > Africa/Abidjan: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Accra: 1918-01-01 00:00:52 != 1918-12-31 23:59:08 > Africa/Bamako: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Banjul: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Bissau: 1912-01-01 00:02:20 != 1912-12-31 23:57:40 > Africa/Bissau: 1975-01-01 01:00:00 != 1975-12-31 23:00:00 > Africa/Casablanca: 1913-10-26 00:30:20 != 1913-10-25 23:29:40 > Africa/Ceuta: 1901-01-01 00:21:16 != 1901-12-31 23:38:44 > Africa/Conakry: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Dakar: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/El_Aaiun: 1976-04-14 01:00:00 != 1976-04-13 23:00:00 > Africa/Freetown: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Lome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Monrovia: 1972-05-01 00:44:30 != 1972-04-30 23:15:30 > Africa/Nouakchott: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Ouagadougou: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Sao_Tome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Timbuktu: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > America/Anguilla: 1912-03-02 00:06:04 != 1912-03-01 23:53:56 > America/Antigua: 1951-01-01 01:00:00 != 1951-12-31 23:00:00 > America/Araguaina: 1914-01-01 00:12:48 != 1914-12-31 23:47:12 > America/Araguaina: 1932-10-03 01:00:00 != 1932-10-02 23:00:00 > America/Araguaina: 1949-12-01 01:00:00 != 1949-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1920-05-01 00:16:48 != 1920-04-30 23:43:12 > America/Argentina/Buenos_Aires: 1930-12-01 01:00:00 != 1930-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1931-10-15 01:00:00 != 1931-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1932-11-01 01:00:00 != 1932-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1933-11-01 01:00:00 != 1933-10-31 23:00:00 > 
America/Argentina/Buenos_Aires: 1934-11-01 01:00:00 != 1934-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1935-11-01 01:00:00 != 1935-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1936-11-01 01:00:00 != 1936-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1937-11-01 01:00:00 != 1937-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1938-11-01 01:00:00 != 1938-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1939-11-01 01:00:00 != 1939-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1940-07-01 01:00:00 != 1940-06-30 23:00:00 > America/Argentina/Buenos_Aires: 1941-10-15 01:00:00 != 1941-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1943-10-15 01:00:00 != 1943-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1946-10-01 01:00:00 != 1946-09-30 23:00:00 > America/Argentina/Buenos_Aires: 1963-12-15 01:00:00 != 1963-12-14 23:00:00 > America/Argentina/Buenos_Aires: 1964-10-15 01:00:00 != 1964-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1965-10-15 01:00:00 != 1965-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1966-10-15 01:00:00 != 1966-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1967-10-01 01:00:00 != 1967-09-30 23:00:00 > America/Argentina/Buenos_Aires:
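The "derive date from UTC" alternative the description proposes can be sketched with `java.time`, which resolves historical offsets (including pre-1920 local-mean-time offsets such as Africa/Abidjan's -00:16:08) from the tz database rather than computing them from the offset at UTC midnight. This is an illustration of the approach, not Hive's DateWritable code:

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZonedDateTime;

// Sketch: let java.time resolve the historical zone offset for the actual
// instant, instead of deriving it from the offset at UTC midnight.
public class TzDemo {
    static LocalDate localDateFromEpochMillis(long epochMillis, ZoneId zone) {
        // atZone() consults the full tz database for the offset in force
        // at this instant, including odd pre-standardization offsets.
        return Instant.ofEpochMilli(epochMillis).atZone(zone).toLocalDate();
    }

    public static void main(String[] args) {
        ZoneId abidjan = ZoneId.of("Africa/Abidjan");
        // Local midnight on one of the dates the test flags as mismatching.
        ZonedDateTime localMidnight = LocalDate.of(1912, 1, 1).atStartOfDay(abidjan);
        long millis = localMidnight.toInstant().toEpochMilli();
        // Round-tripping through UTC millis preserves the calendar date.
        System.out.println(localDateFromEpochMillis(millis, abidjan)); // 1912-01-01
    }
}
```

The round trip stays on the correct calendar date precisely because the offset is looked up for the instant being converted, not assumed constant across the day.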
[jira] [Commented] (HIVE-13749) Memory leak in Hive Metastore
[ https://issues.apache.org/jira/browse/HIVE-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317100#comment-15317100 ] Naveen Gangam commented on HIVE-13749: -- Thanks [~daijy] I have been running with some added instrumentation in the HMS code to figure out the cache sizes before and after. But your idea seems better, seeking info from the Hadoop end. There are 3 general areas that seem to be adding objects to the cache. 1) The compactor.Initiator and CompactorThread create about ~420k objects. These seem to be addressed in HIVE-13151. This environment is not running with this fix. 2) The Warehouse.getFs() and Warehouse.getFileStatusesForLocation() are invoked about ~900k times, but not all calls result in a new object in the cache. 3) A small % of the calls are from drop_table_core. I will try to see other areas that use these FS apis that could be adding to this cache. Thejas, the fix from HIVE-3098 no longer exists in the codebase. It has been replaced by the fix in HIVE-8228 (similar intent). The root cause could very well be the initiator thread. I will check their configuration to affirm this and use HIVE-13151 if needed. Thanks > Memory leak in Hive Metastore > - > > Key: HIVE-13749 > URL: https://issues.apache.org/jira/browse/HIVE-13749 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 1.1.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam > Attachments: HIVE-13749.patch, Top_Consumers7.html > > > Looking at a heap dump of 10GB, a large number of Configuration objects (> 66k > instances) are being retained. These objects along with their retained sets are > occupying about 95% of the heap space. This leads to HMS crashes every few > days. > I will attach an exported snapshot from the eclipse MAT. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
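Background on why the cache discussed above grows: Hadoop's `FileSystem.get()` caches instances keyed by (scheme, authority, UGI), and entries are only evicted by `closeAll()`/`closeAllForUGI()`, so every distinct caller identity pins a new FileSystem (and the Configuration it holds) indefinitely. A toy model of that keying (not Hadoop's actual code):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of Hadoop's FileSystem.Cache: entries are keyed by
// (scheme, authority, UGI identity) and are never evicted until closeAll().
// Plain Object stands in for both UGI and FileSystem here.
public class FsCacheModel {
    record Key(String scheme, String authority, Object ugi) {}

    private final Map<Key, Object> map = new HashMap<>();

    Object get(String scheme, String authority, Object ugi) {
        // Same (scheme, authority, ugi) -> cache hit; anything else -> new entry.
        return map.computeIfAbsent(new Key(scheme, authority, ugi),
                k -> new Object() /* stands in for a FileSystem */);
    }

    int size() { return map.size(); }

    public static void main(String[] args) {
        FsCacheModel cache = new FsCacheModel();
        Object sharedUgi = new Object();
        cache.get("hdfs", "nn:8020", sharedUgi);
        cache.get("hdfs", "nn:8020", sharedUgi);    // hit: same identity
        System.out.println(cache.size());           // 1
        cache.get("hdfs", "nn:8020", new Object()); // fresh UGI per request...
        cache.get("hdfs", "nn:8020", new Object()); // ...grows the cache
        System.out.println(cache.size());           // 3
    }
}
```

This is why code paths that construct a fresh UGI (e.g. per-request proxy users) inflate the cache even when they always talk to the same namenode.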
[jira] [Commented] (HIVE-13904) Ignore case when retrieving ColumnInfo from RowResolver
[ https://issues.apache.org/jira/browse/HIVE-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317082#comment-15317082 ] Jesus Camacho Rodriguez commented on HIVE-13904: I had to regenerate a couple of q files. [~ashutoshc], could you review it please? Thanks > Ignore case when retrieving ColumnInfo from RowResolver > --- > > Key: HIVE-13904 > URL: https://issues.apache.org/jira/browse/HIVE-13904 > Project: Hive > Issue Type: Bug > Components: Parser >Affects Versions: 2.1.0, 2.0.1, 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13904.01.patch, HIVE-13904.01.patch, > HIVE-13904.02.patch, HIVE-13904.patch > > > To reproduce: > {noformat} > -- upper case in subq > explain > select * from src b > where exists > (select a.key from src a > where b.VALUE = a.VALUE > ); > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HIVE-13904) Ignore case when retrieving ColumnInfo from RowResolver
[ https://issues.apache.org/jira/browse/HIVE-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-13904 started by Jesus Camacho Rodriguez. -- > Ignore case when retrieving ColumnInfo from RowResolver > --- > > Key: HIVE-13904 > URL: https://issues.apache.org/jira/browse/HIVE-13904 > Project: Hive > Issue Type: Bug > Components: Parser >Affects Versions: 2.1.0, 2.0.1, 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13904.01.patch, HIVE-13904.01.patch, > HIVE-13904.02.patch, HIVE-13904.patch > > > To reproduce: > {noformat} > -- upper case in subq > explain > select * from src b > where exists > (select a.key from src a > where b.VALUE = a.VALUE > ); > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13904) Ignore case when retrieving ColumnInfo from RowResolver
[ https://issues.apache.org/jira/browse/HIVE-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13904: --- Attachment: HIVE-13904.02.patch > Ignore case when retrieving ColumnInfo from RowResolver > --- > > Key: HIVE-13904 > URL: https://issues.apache.org/jira/browse/HIVE-13904 > Project: Hive > Issue Type: Bug > Components: Parser >Affects Versions: 2.1.0, 2.0.1, 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13904.01.patch, HIVE-13904.01.patch, > HIVE-13904.02.patch, HIVE-13904.patch > > > To reproduce: > {noformat} > -- upper case in subq > explain > select * from src b > where exists > (select a.key from src a > where b.VALUE = a.VALUE > ); > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13904) Ignore case when retrieving ColumnInfo from RowResolver
[ https://issues.apache.org/jira/browse/HIVE-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13904: --- Status: Patch Available (was: In Progress) > Ignore case when retrieving ColumnInfo from RowResolver > --- > > Key: HIVE-13904 > URL: https://issues.apache.org/jira/browse/HIVE-13904 > Project: Hive > Issue Type: Bug > Components: Parser >Affects Versions: 2.0.1, 2.1.0, 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13904.01.patch, HIVE-13904.01.patch, > HIVE-13904.02.patch, HIVE-13904.patch > > > To reproduce: > {noformat} > -- upper case in subq > explain > select * from src b > where exists > (select a.key from src a > where b.VALUE = a.VALUE > ); > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13836) DbNotifications giving an error = Invalid state. Transaction has already started
[ https://issues.apache.org/jira/browse/HIVE-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317080#comment-15317080 ] Nachiket Vaidya commented on HIVE-13836: Thank you [~sushanth], [~ashutoshc] and [~spena] for your help. > DbNotifications giving an error = Invalid state. Transaction has already > started > > > Key: HIVE-13836 > URL: https://issues.apache.org/jira/browse/HIVE-13836 > Project: Hive > Issue Type: Bug >Reporter: Nachiket Vaidya >Assignee: Nachiket Vaidya >Priority: Critical > Labels: patch-available > Fix For: 2.2.0 > > Attachments: HIVE-13836.2.patch, HIVE-13836.patch > > > I used the pyhs2 python client to create tables/partitions in hive. It was working > fine until I moved to multithreaded scripts which created 8 connections and > ran DDL queries concurrently. > I got the error as > {noformat} > 2016-05-04 17:49:26,226 ERROR > org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-4-thread-194]: > HMSHandler Fatal error: Invalid state. Transaction has already started > org.datanucleus.transaction.NucleusTransactionException: Invalid state. 
> Transaction has already started > at > org.datanucleus.transaction.TransactionManager.begin(TransactionManager.java:47) > at org.datanucleus.TransactionImpl.begin(TransactionImpl.java:131) > at > org.datanucleus.api.jdo.JDOTransaction.internalBegin(JDOTransaction.java:88) > at > org.datanucleus.api.jdo.JDOTransaction.begin(JDOTransaction.java:80) > at > org.apache.hadoop.hive.metastore.ObjectStore.openTransaction(ObjectStore.java:463) > at > org.apache.hadoop.hive.metastore.ObjectStore.addNotificationEvent(ObjectStore.java:7522) > at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114) > at com.sun.proxy.$Proxy10.addNotificationEvent(Unknown Source) > at > org.apache.hive.hcatalog.listener.DbNotificationListener.enqueue(DbNotificationListener.java:261) > at > org.apache.hive.hcatalog.listener.DbNotificationListener.onCreateTable(DbNotificationListener.java:123) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1483) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1502) > at sun.reflect.GeneratedMethodAccessor57.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) > at > com.sun.proxy.$Proxy14.create_table_with_environment_context(Unknown Source) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:9267) 
> {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13904) Ignore case when retrieving ColumnInfo from RowResolver
[ https://issues.apache.org/jira/browse/HIVE-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13904: --- Status: Open (was: Patch Available) > Ignore case when retrieving ColumnInfo from RowResolver > --- > > Key: HIVE-13904 > URL: https://issues.apache.org/jira/browse/HIVE-13904 > Project: Hive > Issue Type: Bug > Components: Parser >Affects Versions: 2.0.1, 2.1.0, 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13904.01.patch, HIVE-13904.01.patch, > HIVE-13904.patch > > > To reproduce: > {noformat} > -- upper case in subq > explain > select * from src b > where exists > (select a.key from src a > where b.VALUE = a.VALUE > ); > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13953) Issues in HiveLockObject equals method
[ https://issues.apache.org/jira/browse/HIVE-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317062#comment-15317062 ] Yongzhi Chen commented on HIVE-13953: - The fix looks good, +1 pending the testing. > Issues in HiveLockObject equals method > -- > > Key: HIVE-13953 > URL: https://issues.apache.org/jira/browse/HIVE-13953 > Project: Hive > Issue Type: Bug > Components: Locking >Reporter: Chaoyu Tang >Assignee: Chaoyu Tang > Attachments: HIVE-13953.patch > > > There are two issues in the equals method in HiveLockObject: > {code} > @Override > public boolean equals(Object o) { > if (!(o instanceof HiveLockObject)) { > return false; > } > HiveLockObject tgt = (HiveLockObject) o; > return Arrays.equals(pathNames, tgt.pathNames) && > data == null ? tgt.getData() == null : > tgt.getData() != null && data.equals(tgt.getData()); > } > {code} > 1. Arrays.equals(pathNames, tgt.pathNames) might return false for the same > path in HiveLockObject since in current Hive, the pathname components might > be stored in two ways, taking a dynamic partition path db/tbl/part1/part2 as > an example, it might be stored in the pathNames as an array of four elements, > db, tbl, part1, and part2 or as an array only having one element > db/tbl/part1/part2. It would be safer to compare the pathNames using > StringUtils.equals(this.getName(), tgt.getName()) > 2. The comparison logic is not right. > {code} > @Override > public boolean equals(Object o) { > if (!(o instanceof HiveLockObject)) { > return false; > } > HiveLockObject tgt = (HiveLockObject) o; > return StringUtils.equals(this.getName(), tgt.getName()) && > (data == null ? tgt.getData() == null : data.equals(tgt.getData())); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13954) Parquet logs should go to STDERR
[ https://issues.apache.org/jira/browse/HIVE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317008#comment-15317008 ] Gunther Hagleitner commented on HIVE-13954: --- LGTM +1 > Parquet logs should go to STDERR > > > Key: HIVE-13954 > URL: https://issues.apache.org/jira/browse/HIVE-13954 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-13954.1.patch > > > Parquet uses java util logging. When java logging is not configured using > default logging.properties file, parquet's default fallback handler writes to > STDOUT at INFO level. Hive writes all logging to STDERR and writes only the > query output to STDOUT. Writing logs to STDOUT may cause issues when > comparing query results. > If we provide default logging.properties for parquet then we can configure it > to write to file or stderr. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
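As a rough illustration (not the actual patch): java.util.logging's stock ConsoleHandler already targets System.err, so attaching one explicitly — or shipping a logging.properties that does — keeps library logs off STDOUT. The handler and logger classes below are standard j.u.l API; the logger name is made up.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.logging.ConsoleHandler;
import java.util.logging.Logger;

public class JulToStderrDemo {
    // Logs a message through a ConsoleHandler and reports whether it landed
    // on System.err (ConsoleHandler's target) rather than System.out.
    static boolean goesToStderr() {
        PrintStream realErr = System.err;
        ByteArrayOutputStream captured = new ByteArrayOutputStream();
        System.setErr(new PrintStream(captured));
        try {
            Logger logger = Logger.getLogger("parquet.demo");
            logger.setUseParentHandlers(false);       // detach root handlers
            ConsoleHandler handler = new ConsoleHandler(); // binds to System.err
            logger.addHandler(handler);
            logger.info("hello from j.u.l");
            handler.flush();
        } finally {
            System.setErr(realErr);
        }
        return captured.toString().contains("hello from j.u.l");
    }

    public static void main(String[] args) {
        System.out.println("went to stderr: " + goesToStderr());
    }
}
```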
[jira] [Commented] (HIVE-13749) Memory leak in Hive Metastore
[ https://issues.apache.org/jira/browse/HIVE-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317001#comment-15317001 ] Daniel Dai commented on HIVE-13749: --- [~ngangam], what I used to do before to diagnose is to use a patched hadoop client library to catch the stack of every invocation of FileSystem.get, and understand exactly where the leak is coming from. I don't want to blindly remove it in shutdown; plus, the UGI object might already be lost at that time and you might not be able to remove it. Here is how I patch Hadoop: {code} --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java @@ -20,6 +20,8 @@ import java.io.Closeable; import java.io.FileNotFoundException; import java.io.IOException; +import java.io.StringWriter; +import java.io.PrintWriter; import java.lang.ref.WeakReference; import java.net.URI; import java.net.URISyntaxException; @@ -2699,6 +2701,10 @@ private FileSystem getInternal(URI uri, Configuration conf, Key key) throws IOEx } fs.key = key; map.put(key, fs); +StringWriter sw = new StringWriter(); +new Throwable("").printStackTrace(new PrintWriter(sw)); +LOG.info("calling context for getInternal:" + sw.toString()); +LOG.info("# of maps:" + map.size()); if (conf.getBoolean("fs.automatic.close", true)) { toAutoClose.add(key); } @@ -2752,6 +2758,7 @@ synchronized void closeAll(boolean onlyAutomatic) throws IOException { if (!exceptions.isEmpty()) { throw MultipleIOException.createIOException(exceptions); } + LOG.info("map size after closeAll:" + map.size()); } private class ClientFinalizer implements Runnable { @@ -2789,6 +2796,7 @@ synchronized void closeAll(UserGroupInformation ugi) throws IOException { if (!exceptions.isEmpty()) { throw MultipleIOException.createIOException(exceptions); } + LOG.info("map size after closeAll:" + map.size()); } /** FileSystem.Cache.Key */ {code} Here is how to 
instruct Hive to use this jar: 1. Put attached hadoop-common.jar into $HIVE_HOME/lib 2. export HADOOP_USER_CLASSPATH_FIRST=true 3. Make sure fs cache is enabled 4. Restart hivemetastore 5. Collect hivemetastore.log > Memory leak in Hive Metastore > - > > Key: HIVE-13749 > URL: https://issues.apache.org/jira/browse/HIVE-13749 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 1.1.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam > Attachments: HIVE-13749.patch, Top_Consumers7.html > > > Looking a heap dump of 10GB, a large number of Configuration objects(> 66k > instances) are being retained. These objects along with its retained set is > occupying about 95% of the heap space. This leads to HMS crashes every few > days. > I will attach an exported snapshot from the eclipse MAT. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
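The core trick in the patch above — capturing the calling context of each FileSystem cache insertion — is just printing a fresh Throwable's stack trace into a string. A minimal standalone version (hypothetical class name):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class StackCaptureDemo {
    // Renders the current call stack as a string, the same way the patched
    // FileSystem.Cache#getInternal does before logging it.
    static String currentStack() {
        StringWriter sw = new StringWriter();
        new Throwable("calling context").printStackTrace(new PrintWriter(sw, true));
        return sw.toString();
    }

    public static void main(String[] args) {
        String stack = currentStack();
        // Each cache insertion logged this way reveals exactly which code
        // path created a new FileSystem instance.
        System.out.println(stack.contains("currentStack"));
        System.out.println(stack.contains("main"));
    }
}
```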
[jira] [Commented] (HIVE-13954) Parquet logs should go to STDERR
[ https://issues.apache.org/jira/browse/HIVE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316994#comment-15316994 ] Prasanth Jayachandran commented on HIVE-13954: -- This writes to $TMPDIR/parquet-%u.log where %u is a unique number to resolve conflicts. The default location for the hive log is $TMPDIR/$USER/hive.log. We cannot use the same file for logging because with java file logging we cannot access the user name in the properties file, hence the above pattern for the parquet log file. > Parquet logs should go to STDERR > > > Key: HIVE-13954 > URL: https://issues.apache.org/jira/browse/HIVE-13954 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-13954.1.patch > > > Parquet uses java util logging. When java logging is not configured using > default logging.properties file, parquet's default fallback handler writes to > STDOUT at INFO level. Hive writes all logging to STDERR and writes only the > query output to STDOUT. Writing logs to STDOUT may cause issues when > comparing query results. > If we provide default logging.properties for parquet then we can configure it > to write to file or stderr. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13954) Parquet logs should go to STDERR
[ https://issues.apache.org/jira/browse/HIVE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-13954: - Status: Patch Available (was: Open) > Parquet logs should go to STDERR > > > Key: HIVE-13954 > URL: https://issues.apache.org/jira/browse/HIVE-13954 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-13954.1.patch > > > Parquet uses java util logging. When java logging is not configured using > default logging.properties file, parquet's default fallback handler writes to > STDOUT at INFO level. Hive writes all logging to STDERR and writes only the > query output to STDOUT. Writing logs to STDOUT may cause issues when > comparing query results. > If we provide default logging.properties for parquet then we can configure it > to write to file or stderr. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13954) Parquet logs should go to STDERR
[ https://issues.apache.org/jira/browse/HIVE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-13954: - Attachment: HIVE-13954.1.patch [~hagleitn] Can you please review this patch? > Parquet logs should go to STDERR > > > Key: HIVE-13954 > URL: https://issues.apache.org/jira/browse/HIVE-13954 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-13954.1.patch > > > Parquet uses java util logging. When java logging is not configured using > default logging.properties file, parquet's default fallback handler writes to > STDOUT at INFO level. Hive writes all logging to STDERR and writes only the > query output to STDOUT. Writing logs to STDOUT may cause issues when > comparing query results. > If we provide default logging.properties for parquet then we can configure it > to write to file or stderr. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-13948) Incorrect timezone handling in Writable results in wrong dates in queries
[ https://issues.apache.org/jira/browse/HIVE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-13948: --- Assignee: Sergey Shelukhin > Incorrect timezone handling in Writable results in wrong dates in queries > - > > Key: HIVE-13948 > URL: https://issues.apache.org/jira/browse/HIVE-13948 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Blocker > > Modifying TestDateWritable to cover 200 years, adding all timezones to the > set, and making it accumulate errors, results in the following set (I bet > many are duplicates via different names, but there's enough). > This ONLY logs errors where YMD date mismatches. There are many more where > YMD is the same but the time mismatches, omitted for brevity. > Queries as simple as "select date(...);" reproduce the error (if Java tz is > set to a problematic tz) > I was investigating some case for a specific date and it seems like the > conversion from dates to ms, namely offset calculation that takes the offset > at UTC midnight and the offset at arbitrary time derived from that, is > completely bogus and it's not clear why it would work. > I think we either need to derive date from UTC and then create local date > from YMD if needed (for many cases e.g. toString for sinks, it would not be > needed at all), and/or add a lookup table for timezone used (for popular > dates, e.g. 1900-present, it would be 40k-odd entries, although the price of > building it is another question). 
> Format: tz-expected-actual > {noformat} > 2016-06-04T18:33:57,499 ERROR [main[]]: io.TestDateWritable > (TestDateWritable.java:testDaylightSavingsTime(234)) - > DATE MISMATCH: > Africa/Abidjan: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Accra: 1918-01-01 00:00:52 != 1918-12-31 23:59:08 > Africa/Bamako: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Banjul: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Bissau: 1912-01-01 00:02:20 != 1912-12-31 23:57:40 > Africa/Bissau: 1975-01-01 01:00:00 != 1975-12-31 23:00:00 > Africa/Casablanca: 1913-10-26 00:30:20 != 1913-10-25 23:29:40 > Africa/Ceuta: 1901-01-01 00:21:16 != 1901-12-31 23:38:44 > Africa/Conakry: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Dakar: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/El_Aaiun: 1976-04-14 01:00:00 != 1976-04-13 23:00:00 > Africa/Freetown: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Lome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Monrovia: 1972-05-01 00:44:30 != 1972-04-30 23:15:30 > Africa/Nouakchott: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Ouagadougou: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Sao_Tome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > Africa/Timbuktu: 1912-01-01 00:16:08 != 1912-12-31 23:43:52 > America/Anguilla: 1912-03-02 00:06:04 != 1912-03-01 23:53:56 > America/Antigua: 1951-01-01 01:00:00 != 1951-12-31 23:00:00 > America/Araguaina: 1914-01-01 00:12:48 != 1914-12-31 23:47:12 > America/Araguaina: 1932-10-03 01:00:00 != 1932-10-02 23:00:00 > America/Araguaina: 1949-12-01 01:00:00 != 1949-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1920-05-01 00:16:48 != 1920-04-30 23:43:12 > America/Argentina/Buenos_Aires: 1930-12-01 01:00:00 != 1930-11-30 23:00:00 > America/Argentina/Buenos_Aires: 1931-10-15 01:00:00 != 1931-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1932-11-01 01:00:00 != 1932-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1933-11-01 01:00:00 != 1933-10-31 23:00:00 > 
America/Argentina/Buenos_Aires: 1934-11-01 01:00:00 != 1934-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1935-11-01 01:00:00 != 1935-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1936-11-01 01:00:00 != 1936-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1937-11-01 01:00:00 != 1937-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1938-11-01 01:00:00 != 1938-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1939-11-01 01:00:00 != 1939-10-31 23:00:00 > America/Argentina/Buenos_Aires: 1940-07-01 01:00:00 != 1940-06-30 23:00:00 > America/Argentina/Buenos_Aires: 1941-10-15 01:00:00 != 1941-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1943-10-15 01:00:00 != 1943-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1946-10-01 01:00:00 != 1946-09-30 23:00:00 > America/Argentina/Buenos_Aires: 1963-12-15 01:00:00 != 1963-12-14 23:00:00 > America/Argentina/Buenos_Aires: 1964-10-15 01:00:00 != 1964-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1965-10-15 01:00:00 != 1965-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1966-10-15 01:00:00 != 1966-10-14 23:00:00 > America/Argentina/Buenos_Aires: 1967-10-01 01:00:00 != 1967-09-30 23:00:00 > America/Argentina/Buenos_Aires: 1968-10-06 01:00:00 != 1968-10-05 23:00:00 >
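One of the fixes suggested above — deriving year/month/day from UTC instead of composing local-midnight offsets — can be sketched as follows (illustrative only; not the DateWritable/TestDateWritable code, and the method name is made up):

```java
import java.util.Calendar;
import java.util.TimeZone;

public class UtcDateDemo {
    // Interprets a count of days since the epoch in UTC, so the resulting
    // year/month/day never shifts with the JVM's default timezone, even for
    // historical dates where local offsets changed by odd amounts.
    static int[] ymdFromEpochDays(long epochDays) {
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        cal.setTimeInMillis(epochDays * 86_400_000L);
        return new int[] {
            cal.get(Calendar.YEAR),
            cal.get(Calendar.MONTH) + 1,  // Calendar months are 0-based
            cal.get(Calendar.DAY_OF_MONTH)
        };
    }

    public static void main(String[] args) {
        // One of the zones from the mismatch list above; the UTC-derived
        // date is unaffected by its historical offset changes.
        TimeZone.setDefault(TimeZone.getTimeZone("Africa/Monrovia"));
        int[] ymd = ymdFromEpochDays(0);
        System.out.println(ymd[0] + "-" + ymd[1] + "-" + ymd[2]);
    }
}
```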
[jira] [Commented] (HIVE-13221) expose metastore APIs from HS2
[ https://issues.apache.org/jira/browse/HIVE-13221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316926#comment-15316926 ] Sergey Shelukhin commented on HIVE-13221: - Well, to the original question, the threads are not started because this ticket is about exposing APIs ;) I just added the comment to avoid confusion as the init is kind of convoluted. > expose metastore APIs from HS2 > -- > > Key: HIVE-13221 > URL: https://issues.apache.org/jira/browse/HIVE-13221 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13221.01.patch, HIVE-13221.patch > > > I was always wondering why we don't do that, for the people who run HS2 and > also need metastore due to it being used externally; they don't need to run a > standalone metastore. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13599) LLAP: Incorrect handling of the preemption queue on finishable state updates
[ https://issues.apache.org/jira/browse/HIVE-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316920#comment-15316920 ] Prasanth Jayachandran commented on HIVE-13599: -- LGTM, +1 > LLAP: Incorrect handling of the preemption queue on finishable state updates > > > Key: HIVE-13599 > URL: https://issues.apache.org/jira/browse/HIVE-13599 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Siddharth Seth >Priority: Critical > Attachments: HIVE-13599.01.patch, HIVE-13599.01.patch, > HIVE-13599.02.patch > > > When running some tests with pre-emption enabled, got the following exception > Looks like a race condition when removing items from pre-emption queue. > {code} > 16/04/23 23:32:00 [Wait-Queue-Scheduler-0[]] ERROR impl.TaskExecutorService : > Wait queue scheduler worker exited with failure! > java.util.NoSuchElementException > at java.util.AbstractQueue.remove(AbstractQueue.java:117) > ~[?:1.7.0_55] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.removeAndGetFromPreemptionQueue(TaskExecutorService.java:568) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.handleScheduleAttemptedRejection(TaskExecutorService.java:493) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.access$1100(TaskExecutorService.java:81) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$WaitQueueWorker.run(TaskExecutorService.java:285) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > ~[?:1.7.0_55] > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > [?:1.7.0_55] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > [?:1.7.0_55] > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [?:1.7.0_55] > at java.lang.Thread.run(Thread.java:745) [?:1.7.0_55] > 16/04/23 23:32:00 [Wait-Queue-Scheduler-0[]] INFO impl.LlapDaemon : > UncaughtExceptionHandler invoked > 16/04/23 23:32:00 [Wait-Queue-Scheduler-0[]] ERROR impl.LlapDaemon : Thread > Thread[Wait-Queue-Scheduler-0,5,main] threw an Exception. Shutting down now... > java.util.NoSuchElementException > at java.util.AbstractQueue.remove(AbstractQueue.java:117) > ~[?:1.7.0_55] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.removeAndGetFromPreemptionQueue(TaskExecutorService.java:568) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.handleScheduleAttemptedRejection(TaskExecutorService.java:493) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.access$1100(TaskExecutorService.java:81) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$WaitQueueWorker.run(TaskExecutorService.java:285) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > ~[?:1.7.0_55] > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > [?:1.7.0_55] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > [?:1.7.0_55] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [?:1.7.0_55] > at java.lang.Thread.run(Thread.java:745) [?:1.7.0_55] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
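The stack trace above bottoms out in AbstractQueue.remove(), which throws NoSuchElementException on an empty queue — unlike poll(), which returns null. A tiny sketch of that contract (illustrative; not the TaskExecutorService code — a racing finishable-state update can empty the queue between a caller's non-empty check and its remove() call):

```java
import java.util.NoSuchElementException;
import java.util.concurrent.PriorityBlockingQueue;

public class PreemptionQueueDemo {
    // remove() inherits AbstractQueue's throwing behavior; poll() is the
    // non-throwing variant.
    static String removeFromQueue(PriorityBlockingQueue<String> queue) {
        try {
            queue.remove();
            return "removed";
        } catch (NoSuchElementException e) {
            return "NoSuchElementException";
        }
    }

    public static void main(String[] args) {
        PriorityBlockingQueue<String> preemptionQueue = new PriorityBlockingQueue<>();
        preemptionQueue.add("fragment-1");
        System.out.println(removeFromQueue(preemptionQueue)); // removed
        System.out.println(removeFromQueue(preemptionQueue)); // NoSuchElementException
        System.out.println(preemptionQueue.poll());           // null, no throw
    }
}
```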
[jira] [Commented] (HIVE-13904) Ignore case when retrieving ColumnInfo from RowResolver
[ https://issues.apache.org/jira/browse/HIVE-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316917#comment-15316917 ] Hive QA commented on HIVE-13904: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12808407/HIVE-13904.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 10205 tests executed *Failed tests:* {noformat} TestMiniTezCliDriver-vector_acid3.q-union2.q-bucket4.q-and-12-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_table_stats org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_orig_table_use_metadata org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_subquery_exists org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_subquery_exists org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation org.apache.hadoop.hive.ql.TestTxnCommands.testSimpleAcidInsert {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/17/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/17/console Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-17/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 15 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12808407 - PreCommit-HIVE-MASTER-Build > Ignore case when retrieving ColumnInfo from RowResolver > --- > > Key: HIVE-13904 > URL: https://issues.apache.org/jira/browse/HIVE-13904 > Project: Hive > Issue Type: Bug > Components: Parser >Affects Versions: 2.1.0, 2.0.1, 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13904.01.patch, HIVE-13904.01.patch, > HIVE-13904.patch > > > To reproduce: > {noformat} > -- upper case in subq > explain > select * from src b > where exists > (select a.key from src a > where b.VALUE = a.VALUE > ); > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13953) Issues in HiveLockObject equals method
[ https://issues.apache.org/jira/browse/HIVE-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chaoyu Tang updated HIVE-13953: --- Attachment: HIVE-13953.patch > Issues in HiveLockObject equals method > -- > > Key: HIVE-13953 > URL: https://issues.apache.org/jira/browse/HIVE-13953 > Project: Hive > Issue Type: Bug > Components: Locking >Reporter: Chaoyu Tang >Assignee: Chaoyu Tang > Attachments: HIVE-13953.patch > > > There are two issues in the equals method in HiveLockObject: > {code} > @Override > public boolean equals(Object o) { > if (!(o instanceof HiveLockObject)) { > return false; > } > HiveLockObject tgt = (HiveLockObject) o; > return Arrays.equals(pathNames, tgt.pathNames) && > data == null ? tgt.getData() == null : > tgt.getData() != null && data.equals(tgt.getData()); > } > {code} > 1. Arrays.equals(pathNames, tgt.pathNames) might return false for the same > path in HiveLockObject since in current Hive, the pathname components might > be stored in two ways, taking a dynamic partition path db/tbl/part1/part2 as > an example, it might be stored in the pathNames as an array of four elements, > db, tbl, part1, and part2 or as an array only having one element > db/tbl/part1/part2. It will be safer to compare the pathNames using > StringUtils.equals(this.getName(), tgt.getName()) > 2. The comparison logic is not right. > {code} > @Override > public boolean equals(Object o) { > if (!(o instanceof HiveLockObject)) { > return false; > } > HiveLockObject tgt = (HiveLockObject) o; > return StringUtils.equals(this.getName(), tgt.getName()) && > (data == null ? tgt.getData() == null : data.equals(tgt.getData())); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)