[jira] [Commented] (HIVE-16091) Support subqueries in project/select
[ https://issues.apache.org/jira/browse/HIVE-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925572#comment-15925572 ]

Hive QA commented on HIVE-16091:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12858779/HIVE-16091.5.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.
{color:green}SUCCESS:{color} +1 due to 10351 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4140/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4140/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4140/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12858779 - PreCommit-HIVE-Build

> Support subqueries in project/select
> ------------------------------------
>
>                 Key: HIVE-16091
>                 URL: https://issues.apache.org/jira/browse/HIVE-16091
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Logical Optimizer
>            Reporter: Vineet Garg
>            Assignee: Vineet Garg
>         Attachments: HIVE-16091.1.patch, HIVE-16091.2.patch, HIVE-16091.3.patch, HIVE-16091.4.patch, HIVE-16091.5.patch
>
> Currently scalar subqueries are supported in filter only (WHERE/HAVING).

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[jira] [Commented] (HIVE-16178) corr/covar_samp UDAF standard compliance
[ https://issues.apache.org/jira/browse/HIVE-16178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925545#comment-15925545 ]

Ashutosh Chauhan commented on HIVE-16178:
-----------------------------------------

This needs to be fixed. It is currently returning wrong results.

> corr/covar_samp UDAF standard compliance
> ----------------------------------------
>
>                 Key: HIVE-16178
>                 URL: https://issues.apache.org/jira/browse/HIVE-16178
>             Project: Hive
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Zoltan Haindrich
>            Priority: Minor
>
> h3. corr
> The standard defines corner cases in which it should return null, but the current result is NaN:
> "If N * SUMX2 equals SUMX * SUMX, then the result is the null value."
> and
> "If N * SUMY2 equals SUMY * SUMY, then the result is the null value."
> h3. covar_samp
> Returns 0 instead of the null value:
> "If N is 1 (one), then the result is the null value."
> h3. check (x,y) vs (y,x) args in docs
> The standard uses (y,x) argument order, and some of the function names also contain X and Y, so the order does matter. Currently at least corr uses (x,y) order, which is okay because it is symmetric; but it would be great to have the same order everywhere (check the others).
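The corner cases quoted from the standard can be sketched numerically. The following is an illustrative Python sketch (not Hive's actual UDAF code) of corr and covar_samp with the null semantics the issue asks for; a naive implementation would divide by zero and produce NaN instead:

```python
import math

def corr(pairs):
    """Pearson correlation with the SQL-standard corner cases: return
    None (SQL null) when N*SUMX2 == SUMX*SUMX or N*SUMY2 == SUMY*SUMY."""
    n = len(pairs)
    if n == 0:
        return None
    sumx = sum(x for x, _ in pairs)
    sumy = sum(y for _, y in pairs)
    sumx2 = sum(x * x for x, _ in pairs)
    sumy2 = sum(y * y for _, y in pairs)
    sumxy = sum(x * y for x, y in pairs)
    if n * sumx2 == sumx * sumx or n * sumy2 == sumy * sumy:
        return None  # the standard mandates null here; 0/0 would give NaN
    return (n * sumxy - sumx * sumy) / (
        math.sqrt(n * sumx2 - sumx * sumx) * math.sqrt(n * sumy2 - sumy * sumy))

def covar_samp(pairs):
    """Sample covariance; the standard requires null (not 0) when N is 1."""
    n = len(pairs)
    if n <= 1:
        return None
    sumx = sum(x for x, _ in pairs)
    sumy = sum(y for _, y in pairs)
    sumxy = sum(x * y for x, y in pairs)
    return (sumxy - sumx * sumy / n) / (n - 1)
```

For example, corr over points with a constant x coordinate hits the first corner case and yields null rather than NaN.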
[jira] [Commented] (HIVE-15978) Support regr_* functions
[ https://issues.apache.org/jira/browse/HIVE-15978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925544#comment-15925544 ]

Ashutosh Chauhan commented on HIVE-15978:
-----------------------------------------

+1 pending tests

> Support regr_* functions
> ------------------------
>
>                 Key: HIVE-15978
>                 URL: https://issues.apache.org/jira/browse/HIVE-15978
>             Project: Hive
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Carter Shanklin
>            Assignee: Zoltan Haindrich
>         Attachments: HIVE-15978.1.patch, HIVE-15978.2.patch, HIVE-15978.2.patch
>
> Support the standard regr_* functions: regr_slope, regr_intercept, regr_r2, regr_sxx, regr_syy, regr_sxy, regr_avgx, regr_avgy, regr_count. SQL reference section 10.9.
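As a rough illustration of what part of the regr_* family computes, here is a hedged Python sketch (not Hive's implementation) over (y, x) pairs, dependent variable first as in the standard; in the real aggregates, rows where either argument is null are skipped before these sums:

```python
def regr(pairs):
    """Compute several regr_* aggregates over (y, x) pairs at once."""
    n = len(pairs)
    if n == 0:
        return None
    avgy = sum(y for y, _ in pairs) / n
    avgx = sum(x for _, x in pairs) / n
    # centered sums of squares / cross products
    sxx = sum((x - avgx) ** 2 for _, x in pairs)
    sxy = sum((y - avgy) * (x - avgx) for y, x in pairs)
    # slope/intercept are null when x has no variance
    slope = None if sxx == 0 else sxy / sxx
    intercept = None if slope is None else avgy - slope * avgx
    return {"regr_count": n, "regr_avgx": avgx, "regr_avgy": avgy,
            "regr_sxx": sxx, "regr_sxy": sxy,
            "regr_slope": slope, "regr_intercept": intercept}
```

For points lying exactly on y = 2x, this yields slope 2 and intercept 0.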
[jira] [Commented] (HIVE-16206) Make Codahale metrics reporters pluggable
[ https://issues.apache.org/jira/browse/HIVE-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925527#comment-15925527 ]

Sunitha Beeram commented on HIVE-16206:
---------------------------------------

I have an initial patch available. This is my first contribution to Apache projects, so please do let me know if anything is amiss or non-conformant. I have updated the current test case to work with the new conf that the patch introduces; once I get some feedback on the change, I plan to add tests for the existing conf to ensure backward compatibility is covered as well.

> Make Codahale metrics reporters pluggable
> -----------------------------------------
>
>                 Key: HIVE-16206
>                 URL: https://issues.apache.org/jira/browse/HIVE-16206
>             Project: Hive
>          Issue Type: Improvement
>          Components: Metastore
>            Reporter: Sunitha Beeram
>            Assignee: Sunitha Beeram
>         Attachments: HIVE-16206.patch
>
> Hive metrics code currently allows pluggable metrics handlers, i.e. handlers that take care of providing interfaces for metrics collection as well as reporting; one of the handlers is CodahaleMetrics. Codahale can work with different reporters; the currently supported ones are Console, JMX, JSON file, and the hadoop2 sink. However, adding a new reporter involves changing that class. We would like to make this conf-driven, just the way MetricsFactory handles configurable Metrics classes.
> Scope of work:
> - Provide a new configuration option, HIVE_CODAHALE_REPORTER_CLASSES, that enumerates classes (like HIVE_METRICS_CLASS and unlike HIVE_METRICS_REPORTER).
> - Move JsonFileReporter into its own class.
> - Update CodahaleMetrics.java to read the new config option and, if the new option is not present, look for the old option and instantiate accordingly, i.e. make the code backward compatible.
> - Update and add new tests.
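The "conf-driven" scheme in the scope of work, including the backward-compatible fallback, can be sketched as follows. This is an illustrative Python analogue, not the Java patch itself, and the configuration key strings are hypothetical stand-ins for the HIVE_CODAHALE_REPORTER_CLASSES and legacy reporter options mentioned above:

```python
import importlib

def load_reporters(conf, registry=None):
    """Instantiate reporters reflectively from a comma-separated class list.

    conf is a plain dict standing in for HiveConf; the key names below are
    assumptions for illustration, not Hive's actual property names.
    """
    names = conf.get("hive.service.metrics.codahale.reporter.classes")
    if not names:
        # fall back to the legacy option for backward compatibility
        names = conf.get("hive.service.metrics.reporter", "")
    reporters = []
    for name in filter(None, (n.strip() for n in names.split(","))):
        module, _, cls = name.rpartition(".")
        # reflective lookup: adding a new reporter needs no code change here
        reporter_cls = getattr(importlib.import_module(module), cls)
        reporters.append(reporter_cls(registry))
    return reporters
```

The point of the design is that adding a reporter becomes a configuration change plus a new class on the classpath, instead of an edit to CodahaleMetrics itself.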
[jira] [Updated] (HIVE-16206) Make Codahale metrics reporters pluggable
[ https://issues.apache.org/jira/browse/HIVE-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sunitha Beeram updated HIVE-16206:
----------------------------------
    Attachment: HIVE-16206.patch
[jira] [Updated] (HIVE-16205) Improving type safety in Objectstore
[ https://issues.apache.org/jira/browse/HIVE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vihang Karajgaonkar updated HIVE-16205:
---------------------------------------
    Attachment: HIVE-16205.03.patch

> Improving type safety in Objectstore
> ------------------------------------
>
>                 Key: HIVE-16205
>                 URL: https://issues.apache.org/jira/browse/HIVE-16205
>             Project: Hive
>          Issue Type: Improvement
>          Components: Metastore
>            Reporter: Vihang Karajgaonkar
>            Assignee: Vihang Karajgaonkar
>         Attachments: HIVE-16205.01.patch, HIVE-16205.02.patch, HIVE-16205.03.patch
>
> Modify the queries in ObjectStore for better type safety.
[jira] [Commented] (HIVE-16205) Improving type safety in Objectstore
[ https://issues.apache.org/jira/browse/HIVE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925522#comment-15925522 ]

Vihang Karajgaonkar commented on HIVE-16205:
--------------------------------------------

The test failures seem unrelated; the tests pass for me locally. Another pre-commit job run immediately after this one had TestSparkCliDriver failures too: https://builds.apache.org/job/PreCommit-HIVE-Build/4138/ Reattaching the same patch to make sure.
[jira] [Work started] (HIVE-16206) Make Codahale metrics reporters pluggable
[ https://issues.apache.org/jira/browse/HIVE-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HIVE-16206 started by Sunitha Beeram.
[jira] [Commented] (HIVE-15947) Enhance Templeton service job operations reliability
[ https://issues.apache.org/jira/browse/HIVE-15947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925510#comment-15925510 ]

Hive QA commented on HIVE-15947:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12858773/HIVE-15947.9.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.
{color:green}SUCCESS:{color} +1 due to 10361 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4139/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4139/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4139/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12858773 - PreCommit-HIVE-Build

> Enhance Templeton service job operations reliability
> ----------------------------------------------------
>
>                 Key: HIVE-15947
>                 URL: https://issues.apache.org/jira/browse/HIVE-15947
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Subramanyam Pattipaka
>            Assignee: Subramanyam Pattipaka
>         Attachments: HIVE-15947.2.patch, HIVE-15947.3.patch, HIVE-15947.4.patch, HIVE-15947.6.patch, HIVE-15947.7.patch, HIVE-15947.8.patch, HIVE-15947.9.patch, HIVE-15947.patch
>
> Currently the Templeton service doesn't restrict the number of job operation requests. It simply accepts and tries to run all operations. If a large number of concurrent job-submission requests come in, the time to submit job operations can increase significantly. Templeton uses HDFS to store the staging file for a job. If HDFS storage can't respond to a large number of requests and throttles, then job submission can take a very long time, in the order of minutes.
> This behavior may not be suitable for all applications. Client applications may be looking for a predictably low response time for successful requests, or for a throttle response telling the client to wait for some time before re-requesting the job operation.
> In this JIRA, I am trying to address the following job operations:
> 1) Submit new job
> 2) Get job status
> 3) List jobs
> These three operations have different complexity due to variance in their use of cluster resources like YARN/HDFS.
> The idea is to introduce a new config, templeton.job.submit.exec.max-procs, which controls the maximum number of concurrent active job submissions within Templeton, and to use this config to achieve better response times. If a new job-submission request sees that there are already templeton.job.submit.exec.max-procs jobs being submitted concurrently, then the request will fail with HTTP error 503 and the reason
> "Too many concurrent job submission requests received. Please wait for some time before retrying."
> The client is expected to catch this response and retry after waiting for some time. The default value for templeton.job.submit.exec.max-procs is '0', which means that by default job-submission requests are always accepted; the behavior needs to be enabled based on requirements.
> We can have similar behavior for the status and list operations with the configs templeton.job.status.exec.max-procs and templeton.list.job.exec.max-procs respectively.
> Once a job operation is started, it can take a long time, and the client which requested it may not be willing to wait for an indefinite amount of time. This work introduces the configurations
> templeton.exec.job.submit.timeout
> templeton.exec.job.status.timeout
> templeton.exec.job.list.timeout
> to specify the maximum amount of time a job operation can execute. If a timeout happens, then list and status requests return to the client with the message
> "List job request got timed out. Please retry the operation after waiting for some time."
> If a submit job request gets timed out, then
> i) the job-submit request thread which receives the timeout checks whether a valid job id was generated for the request;
> ii) if it was, it issues a kill-job request on the cancel thread pool, does not wait for the operation to complete, and returns to the client with the timeout message.
> Side effects of enabling the timeout for submit operations:
> 1) The job may stay active for some time after the client gets the response, so a list operation from the client could show the newly created job before it gets killed.
> 2) We make a best effort to kill the job, with no guarantees. This means there is a possibility of a duplicate job being created, for example in a case where the job is created, the operation times out, and the kill request then fails.
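The max-procs scheme described above is essentially a bounded admission pool: reject immediately with 503 when full, rather than queueing. A minimal Python sketch of that idea (an analogue for illustration, not WebHCat's actual Java code):

```python
import threading

class JobOpThrottle:
    """Bound the number of concurrent job operations; reject extras with 503."""

    def __init__(self, max_procs):
        # max_procs == 0 mirrors the documented default: throttling disabled
        self.sem = threading.Semaphore(max_procs) if max_procs > 0 else None

    def run(self, op):
        if self.sem is None:
            return (200, op())
        # non-blocking acquire: a full pool means immediate rejection,
        # so the client can back off and retry instead of piling up
        if not self.sem.acquire(blocking=False):
            return (503, "Too many concurrent job submission requests "
                         "received. Please wait for some time before retrying.")
        try:
            return (200, op())
        finally:
            self.sem.release()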
[jira] [Commented] (HIVE-16188) beeline should block the connection if given invalid database name.
[ https://issues.apache.org/jira/browse/HIVE-16188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925482#comment-15925482 ]

Hive QA commented on HIVE-16188:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12858772/HIVE-16188.2.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 13 failed/errored test(s), 10343 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucket_map_join_spark2] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketmapjoin2] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[cbo_udf_udaf] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[groupby_cube1] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join_vc] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ptf_decimal] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[sample3] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_19] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[stats16] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union23] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union31] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkNegativeCliDriver.org.apache.hadoop.hive.cli.TestSparkNegativeCliDriver (batchId=232)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4138/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4138/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4138/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 13 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12858772 - PreCommit-HIVE-Build

> beeline should block the connection if given invalid database name.
> -------------------------------------------------------------------
>
>                 Key: HIVE-16188
>                 URL: https://issues.apache.org/jira/browse/HIVE-16188
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive
>            Reporter: Pavas Garg
>            Assignee: Sahil Takiar
>            Priority: Minor
>         Attachments: HIVE-16188.1.patch, HIVE-16188.2.patch
>
> When using the beeline shell to connect to HS2 or impalad as below:
> Connection to HS2 using the beeline tool on port 1:
> beeline -u "jdbc:hive2://HS2-host-name:1/default;principal=hive/hs2-host-n...@domain.example.com"
> Connection to impalad using the beeline tool on port 21050:
> beeline -u "jdbc:hive2://impalad-host-name.com:21050/XXX;principal=impala/impalad-host-name@domain.example.com"
> When providing an invalid database name such as XXX, the connection is still made. It should ideally stop the connection from being successful; even so, the beeline tool does not allow you to move forward unless you provide a valid DB name, like
> Use <database name>;
[jira] [Updated] (HIVE-16176) SchemaTool should exit with non-zero exit code when one or more validator's fail.
[ https://issues.apache.org/jira/browse/HIVE-16176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Naveen Gangam updated HIVE-16176:
---------------------------------
    Status: Patch Available  (was: Open)

> SchemaTool should exit with non-zero exit code when one or more validator's fail.
> ---------------------------------------------------------------------------------
>
>                 Key: HIVE-16176
>                 URL: https://issues.apache.org/jira/browse/HIVE-16176
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Hive
>    Affects Versions: 2.2.0
>            Reporter: Naveen Gangam
>            Assignee: Naveen Gangam
>            Priority: Minor
>         Attachments: HIVE-16176.patch, HIVE-16176.patch
>
> Currently schematool exits with a code of 0 when one or more schema tool validations fail. Ideally, it should return a non-zero exit code when any of the validators fail.
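The intent of the fix is simple to state: aggregate the validator results and turn "any failed" into a non-zero process exit code. A minimal sketch of that logic (in Python for illustration; SchemaTool itself is Java, and the validator names below are hypothetical):

```python
import sys

def run_validators(validators):
    """Run (name, check) pairs; return 0 only if every validator passes."""
    ok = True
    for name, validate in validators:
        passed = validate()
        print(("PASS" if passed else "FAIL") + ": " + name)
        ok = ok and passed  # keep running all validators, but remember failures
    return 0 if ok else 1

# a caller would do: sys.exit(run_validators([...]))
```

With this shape, scripts that wrap schematool can rely on the shell convention of checking the exit status instead of parsing the validation output.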
[jira] [Updated] (HIVE-16176) SchemaTool should exit with non-zero exit code when one or more validator's fail.
[ https://issues.apache.org/jira/browse/HIVE-16176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Naveen Gangam updated HIVE-16176:
---------------------------------
    Attachment: HIVE-16176.patch

Re-attaching the same patch, as the pre-commit test failure was from a missing file from another commit. Re-trying.
[jira] [Updated] (HIVE-16176) SchemaTool should exit with non-zero exit code when one or more validator's fail.
[ https://issues.apache.org/jira/browse/HIVE-16176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Naveen Gangam updated HIVE-16176:
---------------------------------
    Status: Open  (was: Patch Available)
[jira] [Updated] (HIVE-12274) Increase width of columns used for general configuration in the metastore.
[ https://issues.apache.org/jira/browse/HIVE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Naveen Gangam updated HIVE-12274:
---------------------------------
    Attachment: HIVE-12274.4.patch

The PerfCliDriver test failures are due to incorrect bulk loading of data into the Derby database for the TABLE_PARAMS and TAB_COL_STATS tables. Because TAB_COL_STATS wasn't being loaded, the CBO optimizer was being disabled, causing the query plans and explain outputs to differ. I am changing the Derby bulk-import API call: IMPORT_TABLE_LOB_FROM_EXT_FILE is not appropriate unless you split the LOB data out into separate files from the main import file, so I am changing it to just the IMPORT_TABLE API call.

> Increase width of columns used for general configuration in the metastore.
> --------------------------------------------------------------------------
>
>                 Key: HIVE-12274
>                 URL: https://issues.apache.org/jira/browse/HIVE-12274
>             Project: Hive
>          Issue Type: Improvement
>          Components: Metastore
>    Affects Versions: 2.0.0
>            Reporter: Elliot West
>            Assignee: Naveen Gangam
>              Labels: metastore
>         Attachments: HIVE-12274.2.patch, HIVE-12274.3.patch, HIVE-12274.4.patch, HIVE-12274.example.ddl.hql, HIVE-12274.patch
>
> h2. Overview
> This issue is very similar in principle to HIVE-1364. We are hitting a limit when processing JSON data that has a large nested schema. The struct definition is truncated when inserted into the metastore database column {{COLUMNS_V2.TYPE_NAME}} as it is greater than 4000 characters in length.
> Given that the purpose of these columns is to hold very loosely defined configuration values, it seems rather limiting to impose such a relatively low length bound. One can imagine that valid use cases will arise where reasonable parameter/property values exceed the current limit.
> h2. Context
> These limitations were put in place by the [patch attributed|https://github.com/apache/hive/commit/c21a526b0a752df2a51d20a2729cc8493c228799] to HIVE-1364, which mentions the _"max length on Oracle 9i/10g/11g"_ as the reason. However, nowadays the limit can be increased because:
> * Oracle DB's {{varchar2}} supports 32767 bytes now, by setting the configuration parameter {{MAX_STRING_SIZE}} to {{EXTENDED}}. ([source|http://docs.oracle.com/database/121/SQLRF/sql_elements001.htm#SQLRF55623])
> * Postgres supports a max of 1GB for the {{character}} datatype. ([source|http://www.postgresql.org/docs/8.3/static/datatype-character.html])
> * MySQL can support up to 65535 bytes for the entire row. So long as the {{PARAM_KEY}} value + {{PARAM_VALUE}} is less than 65535, we should be good. ([source|http://dev.mysql.com/doc/refman/5.0/en/char.html])
> * SQL Server's {{varchar}} max length is 8000 and can go beyond that using "varchar(max)", with the same limitation as MySQL of 65535 bytes for the entire row. ([source|http://dev.mysql.com/doc/refman/5.0/en/char.html])
> * Derby's {{varchar}} can be up to 32672 bytes. ([source|https://db.apache.org/derby/docs/10.7/ref/rrefsqlj41207.html])
> h2. Proposal
> Can these columns not use CLOB-like types, as for example used by {{TBLS.VIEW_EXPANDED_TEXT}}? It would seem that suitable type equivalents exist for all targeted database platforms:
> * MySQL: {{mediumtext}}
> * Postgres: {{text}}
> * Oracle: {{CLOB}}
> * Derby: {{LONG VARCHAR}}
> I'd suggest that the candidates for type change are:
> * {{COLUMNS_V2.TYPE_NAME}}
> * {{TABLE_PARAMS.PARAM_VALUE}}
> * {{SERDE_PARAMS.PARAM_VALUE}}
> * {{SD_PARAMS.PARAM_VALUE}}
> After updating the maximum length, the metastore database needs to be configured and restarted with the new settings. Altering {{MAX_STRING_SIZE}} will update database objects and possibly invalidate them, as follows:
> * Tables with virtual columns will be updated with new data type metadata for virtual columns of {{VARCHAR2(4000)}}, 4000-byte {{NVARCHAR2}}, or {{RAW(2000)}} type.
> * Functional indexes will become unusable if a change to their associated virtual columns causes the index key to exceed index key length limits. Attempts to rebuild such indexes will fail with {{ORA-01450: maximum key length exceeded}}.
> * Views will be invalidated if they contain {{VARCHAR2(4000)}}, 4000-byte {{NVARCHAR2}}, or {{RAW(2000)}} typed expression columns.
> * Materialized views will be updated with new metadata {{VARCHAR2(4000)}}, 4000-byte {{NVARCHAR2}}, and {{RAW(2000)}} typed expression columns.
> * So the limitation could be raised to 32672 bytes, with the caveat that MySQL and SQL Server limit the row length to 65535 bytes, so that should also be validated to provide consistency.
> Finally, will this limitation persist in the work resulting from HIVE-9452?
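The MySQL caveat above (row limit of 65535 bytes shared by all columns) can be made concrete with a small back-of-the-envelope check. This is an assumption-laden sketch, not metastore code; the overhead constant is a hypothetical allowance for length prefixes and the table's other columns:

```python
MYSQL_ROW_LIMIT = 65535  # approximate per-row byte budget in MySQL

def fits_mysql_row(param_key, param_value, overhead=20):
    """Rough check: would PARAM_KEY + PARAM_VALUE fit the row budget
    if both stayed VARCHAR instead of moving to a TEXT/CLOB type?"""
    used = len(param_key.encode("utf-8")) + len(param_value.encode("utf-8"))
    return used + overhead <= MYSQL_ROW_LIMIT
```

This also illustrates why the proposal favors {{mediumtext}}-style types on MySQL: TEXT columns are stored mostly off-row, so large parameter values stop competing for the 65535-byte row budget.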
[jira] [Updated] (HIVE-12274) Increase width of columns used for general configuration in the metastore.
[ https://issues.apache.org/jira/browse/HIVE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Naveen Gangam updated HIVE-12274:
---------------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HIVE-12274) Increase width of columns used for general configuration in the metastore.
[ https://issues.apache.org/jira/browse/HIVE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naveen Gangam updated HIVE-12274: - Status: Open (was: Patch Available) > Increase width of columns used for general configuration in the metastore. > -- > > Key: HIVE-12274 > URL: https://issues.apache.org/jira/browse/HIVE-12274 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 2.0.0 >Reporter: Elliot West >Assignee: Naveen Gangam > Labels: metastore > Attachments: HIVE-12274.2.patch, HIVE-12274.3.patch, > HIVE-12274.example.ddl.hql, HIVE-12274.patch > > > h2. Overview > This issue is very similar in principle to HIVE-1364. We are hitting a limit > when processing JSON data that has a large nested schema. The struct > definition is truncated when inserted into the metastore database column > {{COLUMNS_V2.YPE_NAME}} as it is greater than 4000 characters in length. > Given that the purpose of these columns is to hold very loosely defined > configuration values it seems rather limiting to impose such a relatively low > length bound. One can imagine that valid use cases will arise where > reasonable parameter/property values exceed the current limit. > h2. Context > These limitations were in by the [patch > attributed|https://github.com/apache/hive/commit/c21a526b0a752df2a51d20a2729cc8493c228799] > to HIVE-1364 which mentions the _"max length on Oracle 9i/10g/11g"_ as the > reason. However, nowadays the limit can be increased because: > * Oracle DB's {{varchar2}} supports 32767 bytes now, by setting the > configuration parameter {{MAX_STRING_SIZE}} to {{EXTENDED}}. > ([source|http://docs.oracle.com/database/121/SQLRF/sql_elements001.htm#SQLRF55623]) > * Postgres supports a max of 1GB for {{character}} datatype. > ([source|http://www.postgresql.org/docs/8.3/static/datatype-character.html]) > * MySQL can support upto 65535 bytes for the entire row. 
So long as the > {{PARAM_KEY}} value + {{PARAM_VALUE}} is less than 65535, we should be good. > ([source|http://dev.mysql.com/doc/refman/5.0/en/char.html]) > * SQL Server's {{varchar}} max length is 8000 and can go beyond that using > "varchar(max)", with the same limitation as MySQL of 65535 bytes for the > entire row. ([source|http://dev.mysql.com/doc/refman/5.0/en/char.html]) > * Derby's {{varchar}} can be up to 32672 bytes. > ([source|https://db.apache.org/derby/docs/10.7/ref/rrefsqlj41207.html]) > h2. Proposal > Could these columns not use CLOB-like types, as used for example by > {{TBLS.VIEW_EXPANDED_TEXT}}? It would seem that suitable type equivalents > exist for all targeted database platforms: > * MySQL: {{mediumtext}} > * Postgres: {{text}} > * Oracle: {{CLOB}} > * Derby: {{LONG VARCHAR}} > I'd suggest that the candidates for the type change are: > * {{COLUMNS_V2.TYPE_NAME}} > * {{TABLE_PARAMS.PARAM_VALUE}} > * {{SERDE_PARAMS.PARAM_VALUE}} > * {{SD_PARAMS.PARAM_VALUE}} > After updating the maximum length, the metastore database needs to be > configured and restarted with the new settings. Altering {{MAX_STRING_SIZE}} > will update database objects and possibly invalidate them, as follows: > * Tables with virtual columns will be updated with new data type metadata for > virtual columns of {{VARCHAR2(4000)}}, 4000-byte {{NVARCHAR2}}, or > {{RAW(2000)}} type. > * Functional indexes will become unusable if a change to their associated > virtual columns causes the index key to exceed index key length limits. > Attempts to rebuild such indexes will fail with {{ORA-01450: maximum key > length exceeded}}. > * Views will be invalidated if they contain {{VARCHAR2(4000)}}, 4000-byte > {{NVARCHAR2}}, or {{RAW(2000)}} typed expression columns. 
> * Materialized views will be updated with new metadata for {{VARCHAR2(4000)}}, > 4000-byte {{NVARCHAR2}}, and {{RAW(2000)}} typed expression columns. > * The limit could therefore be raised to 32672 bytes, with the caveat that > MySQL and SQL Server restrict the entire row length to 65535 bytes, which > should also be validated for consistency. > Finally, will this limitation persist in the work resulting from HIVE-9452? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
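For illustration, the proposed widening might be sketched with DDL like the following. This is only a sketch for the MySQL case, assuming the {{mediumtext}} mapping listed above; real upgrade scripts would need per-database variants and validation of the 65535-byte row limit noted in the proposal:

```sql
-- Sketch: widen the metastore parameter/type columns to MEDIUMTEXT on MySQL.
-- Equivalent types per the proposal: Postgres TEXT, Oracle CLOB, Derby LONG VARCHAR.
ALTER TABLE COLUMNS_V2   MODIFY TYPE_NAME   MEDIUMTEXT;
ALTER TABLE TABLE_PARAMS MODIFY PARAM_VALUE MEDIUMTEXT;
ALTER TABLE SERDE_PARAMS MODIFY PARAM_VALUE MEDIUMTEXT;
ALTER TABLE SD_PARAMS    MODIFY PARAM_VALUE MEDIUMTEXT;
```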
[jira] [Commented] (HIVE-16216) update trunk/content/people.mdtext
[ https://issues.apache.org/jira/browse/HIVE-16216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925463#comment-15925463 ] Eugene Koifman commented on HIVE-16216: --- [~ashutoshc] could you review please > update trunk/content/people.mdtext > -- > > Key: HIVE-16216 > URL: https://issues.apache.org/jira/browse/HIVE-16216 > Project: Hive > Issue Type: New Feature > Components: Documentation >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-16216.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16216) update trunk/content/people.mdtext
[ https://issues.apache.org/jira/browse/HIVE-16216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-16216: -- Attachment: HIVE-16216.01.patch > update trunk/content/people.mdtext > -- > > Key: HIVE-16216 > URL: https://issues.apache.org/jira/browse/HIVE-16216 > Project: Hive > Issue Type: New Feature > Components: Documentation >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-16216.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HIVE-16216) update trunk/content/people.mdtext
[ https://issues.apache.org/jira/browse/HIVE-16216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman reassigned HIVE-16216: - > update trunk/content/people.mdtext > -- > > Key: HIVE-16216 > URL: https://issues.apache.org/jira/browse/HIVE-16216 > Project: Hive > Issue Type: New Feature > Components: Documentation >Reporter: Eugene Koifman >Assignee: Eugene Koifman > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16213) ObjectStore can leak Queries when rollbackTransaction
[ https://issues.apache.org/jira/browse/HIVE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925459#comment-15925459 ] Alexander Kolbasov commented on HIVE-16213: --- It seems that DataNucleus exceptions inherit from NucleusException, which extends RuntimeException, an unchecked exception - that's why the IDE doesn't complain when it isn't documented. Looking at the source code I see a few exceptions that can be thrown: * NucleusUserException * NucleusDataStoreException * TransactionNotActiveException - probably shouldn't get this one in this particular case * NucleusTransactionException may be something else - difficult to tell. It seems that none of these are documented. > ObjectStore can leak Queries when rollbackTransaction > - > > Key: HIVE-16213 > URL: https://issues.apache.org/jira/browse/HIVE-16213 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Alexander Kolbasov >Assignee: Vihang Karajgaonkar > > In ObjectStore.java there are a few places with code similar to: > {code} > Query query = null; > try { > openTransaction(); > query = pm.newQuery(Something.class); > ... > commited = commitTransaction(); > } finally { > if (!commited) { > rollbackTransaction(); > } > if (query != null) { > query.closeAll(); > } > } > {code} > The problem is that rollbackTransaction() may throw an exception, in which > case query.closeAll() wouldn't be executed. > The fix would be to wrap rollbackTransaction in its own try-catch block. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
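The proposed fix can be sketched as follows. This is a minimal, self-contained illustration of the control flow only: the {{Query}} class and {{rollbackTransaction()}} below are hypothetical stand-ins, not the Hive or DataNucleus classes, and the simulated failure stands in for the unchecked exceptions discussed in the comments.

```java
// Sketch of the leak-proof pattern: rollbackTransaction() is wrapped in its
// own try/catch, so an exception thrown by the rollback can no longer skip
// query.closeAll(). All classes here are stand-ins, not Hive code.
public class RollbackLeakSketch {

    // Stand-in for javax.jdo.Query; only tracks whether it was closed.
    static class Query {
        boolean closed = false;
        void closeAll() { closed = true; }
    }

    // Simulates a rollback failing with an unchecked exception, as a
    // NucleusException or JDOUserException would.
    static void rollbackTransaction() {
        throw new RuntimeException("simulated rollback failure");
    }

    // The corrected cleanup pattern from the issue description.
    static Query runAndCleanUp() {
        boolean committed = false;
        Query query = null;
        try {
            query = new Query();
            // ... query work that fails before commitTransaction() ...
        } finally {
            if (!committed) {
                try {
                    rollbackTransaction();
                } catch (RuntimeException e) {
                    // Log and swallow so the cleanup below still executes.
                }
            }
            if (query != null) {
                query.closeAll();
            }
        }
        return query;
    }

    public static void main(String[] args) {
        Query q = runAndCleanUp();
        if (!q.closed) {
            throw new AssertionError("query leaked");
        }
        System.out.println("closeAll ran despite rollback failure");
    }
}
```

With the original pattern, the exception from the simulated rollback would propagate out of the {{finally}} block before {{closeAll()}} ran; here it cannot.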
[jira] [Commented] (HIVE-16183) Fix potential thread safety issues with static variables
[ https://issues.apache.org/jira/browse/HIVE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925457#comment-15925457 ] Rui Li commented on HIVE-16183: --- Thanks Xuefu for working on this. I also left some minor comments on RB. Looks good to me overall. +1 > Fix potential thread safety issues with static variables > > > Key: HIVE-16183 > URL: https://issues.apache.org/jira/browse/HIVE-16183 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Xuefu Zhang >Assignee: Xuefu Zhang > Attachments: HIVE-16183.1.patch, HIVE-16183.2.patch, HIVE-16183.patch > > > Many concurrency issues (HIVE-12768, HIVE-16175, HIVE-16060) have been found > with respect to class static variable usages. Given that HS2 supports > concurrent compilation and task execution, and that some backend engines > (such as Spark) run multiple tasks in a single JVM, the traditional > assumption (or mindset) of single-threaded execution needs to be abandoned. > The purpose of this JIRA is to do a global scan of static variables in the Hive > code base, and correct potential thread-safety issues. However, it's not > meant to be exhaustive. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16205) Improving type safety in Objectstore
[ https://issues.apache.org/jira/browse/HIVE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925456#comment-15925456 ] Hive QA commented on HIVE-16205: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12858767/HIVE-16205.02.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10348 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join_without_localtask] (batchId=96) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_3] (batchId=96) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[date_join1] (batchId=96) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[groupby6_noskew] (batchId=96) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[mapjoin_test_outer] (batchId=96) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[merge2] (batchId=96) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_11] (batchId=96) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorization_9] (batchId=96) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4137/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4137/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4137/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 8 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12858767 - PreCommit-HIVE-Build > Improving type safety in Objectstore > > > Key: HIVE-16205 > URL: https://issues.apache.org/jira/browse/HIVE-16205 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar > Attachments: HIVE-16205.01.patch, HIVE-16205.02.patch > > > Modify the queries in ObjectStore for better type safety -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-15979) Support character_length and octet_length
[ https://issues.apache.org/jira/browse/HIVE-15979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925427#comment-15925427 ] Teddy Choi commented on HIVE-15979: --- Fixed all failures. Thank you, [~ashutoshc]. > Support character_length and octet_length > - > > Key: HIVE-15979 > URL: https://issues.apache.org/jira/browse/HIVE-15979 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Teddy Choi > Attachments: HIVE-15979.1.patch, HIVE-15979.2.patch, > HIVE-15979.3.patch, HIVE-15979.4.patch, HIVE-15979.5.patch, HIVE-15979.6.patch > > > SQL defines standard ways to get number of characters and octets. SQL > reference: section 6.28. Example: > vagrant=# select character_length('欲速则不达'); > character_length > -- > 5 > (1 row) > vagrant=# select octet_length('欲速则不达'); > octet_length > -- >15 > (1 row) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15979) Support character_length and octet_length
[ https://issues.apache.org/jira/browse/HIVE-15979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Teddy Choi updated HIVE-15979: -- Status: Patch Available (was: Open) > Support character_length and octet_length > - > > Key: HIVE-15979 > URL: https://issues.apache.org/jira/browse/HIVE-15979 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Teddy Choi > Attachments: HIVE-15979.1.patch, HIVE-15979.2.patch, > HIVE-15979.3.patch, HIVE-15979.4.patch, HIVE-15979.5.patch, HIVE-15979.6.patch > > > SQL defines standard ways to get number of characters and octets. SQL > reference: section 6.28. Example: > vagrant=# select character_length('欲速则不达'); > character_length > -- > 5 > (1 row) > vagrant=# select octet_length('欲速则不达'); > octet_length > -- >15 > (1 row) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15979) Support character_length and octet_length
[ https://issues.apache.org/jira/browse/HIVE-15979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Teddy Choi updated HIVE-15979: -- Attachment: HIVE-15979.6.patch > Support character_length and octet_length > - > > Key: HIVE-15979 > URL: https://issues.apache.org/jira/browse/HIVE-15979 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Teddy Choi > Attachments: HIVE-15979.1.patch, HIVE-15979.2.patch, > HIVE-15979.3.patch, HIVE-15979.4.patch, HIVE-15979.5.patch, HIVE-15979.6.patch > > > SQL defines standard ways to get number of characters and octets. SQL > reference: section 6.28. Example: > vagrant=# select character_length('欲速则不达'); > character_length > -- > 5 > (1 row) > vagrant=# select octet_length('欲速则不达'); > octet_length > -- >15 > (1 row) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16195) MM tables: mm_conversions test is broken
[ https://issues.apache.org/jira/browse/HIVE-16195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925420#comment-15925420 ] Sergey Shelukhin commented on HIVE-16195: - The reason the converted tables do not work is that mapred.input.dir.recursive is set to false; it's true by default in Tez so it works on Tez/LLAP. Either the table needs to be marked "recursive" (a new feature) and mapred.input.dir.recursive should be used for it after conversion, or we need to get rid of directories during conversion. Probably the latter. > MM tables: mm_conversions test is broken > > > Key: HIVE-16195 > URL: https://issues.apache.org/jira/browse/HIVE-16195 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > It worked at some point (per the current .out file), but now bunch of > conversions produce incorrect results. Needs to be fixed. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
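As a reference for the first option mentioned above, the session-level setting would look like this in Hive (a sketch of a workaround only, not a fix for the conversion itself):

```sql
-- Make MR input-directory listing recursive, matching the Tez/LLAP default.
SET mapred.input.dir.recursive=true;
```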
[jira] [Commented] (HIVE-15983) Support the named columns join
[ https://issues.apache.org/jira/browse/HIVE-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925412#comment-15925412 ] Hive QA commented on HIVE-15983: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12858758/HIVE-15983.03.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10347 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr] (batchId=141) org.apache.hive.hcatalog.pig.TestTextFileHCatStorer.testDateCharTypes (batchId=174) org.apache.hive.hcatalog.pig.TestTextFileHCatStorer.testWriteSmallint (batchId=174) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4136/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4136/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4136/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12858758 - PreCommit-HIVE-Build > Support the named columns join > -- > > Key: HIVE-15983 > URL: https://issues.apache.org/jira/browse/HIVE-15983 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Pengcheng Xiong > Attachments: HIVE-15983.01.patch, HIVE-15983.02.patch, > HIVE-15983.03.patch > > > The named columns join is a common shortcut allowing joins on identically > named keys. Example: select * from t1 join t2 using (c1) is equivalent to > select * from t1 join t2 on t1.c1 = t2.c1. SQL standard reference: Section 7.7 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15665) LLAP: OrcFileMetadata objects in cache can impact heap usage
[ https://issues.apache.org/jira/browse/HIVE-15665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-15665: Attachment: (was: HIVE-15665.WIP.patch) > LLAP: OrcFileMetadata objects in cache can impact heap usage > > > Key: HIVE-15665 > URL: https://issues.apache.org/jira/browse/HIVE-15665 > Project: Hive > Issue Type: Improvement > Components: llap >Reporter: Rajesh Balamohan >Assignee: Sergey Shelukhin > Attachments: HIVE-15665.patch > > > OrcFileMetadata internally has filestats, stripestats etc which are allocated > in heap. On large data sets, this could have an impact on the heap usage and > the memory usage by different executors in LLAP. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15665) LLAP: OrcFileMetadata objects in cache can impact heap usage
[ https://issues.apache.org/jira/browse/HIVE-15665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-15665: Attachment: HIVE-15665.patch Preliminary patch that probably doesn't work. I will test it in due course (or after seeing the list of HiveQA failures) > LLAP: OrcFileMetadata objects in cache can impact heap usage > > > Key: HIVE-15665 > URL: https://issues.apache.org/jira/browse/HIVE-15665 > Project: Hive > Issue Type: Improvement > Components: llap >Reporter: Rajesh Balamohan >Assignee: Sergey Shelukhin > Attachments: HIVE-15665.patch > > > OrcFileMetadata internally has filestats, stripestats etc which are allocated > in heap. On large data sets, this could have an impact on the heap usage and > the memory usage by different executors in LLAP. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16189) Table column stats might be invalidated in a failed table rename
[ https://issues.apache.org/jira/browse/HIVE-16189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chaoyu Tang updated HIVE-16189: --- Attachment: HIVE-16189.1.patch 1. Fixed the failed tests. 2. Added a test based on [~pxiong]'s suggestion; the test scenario is as follows (see encryption_move_tbl.q): when renaming a table fails to move its data from one encryption zone to another due to EZ incompatibility, the rename fails but its column stats are nevertheless invalidated. When we describe the formatted table columns, all the column stats are gone. > Table column stats might be invalidated in a failed table rename > > > Key: HIVE-16189 > URL: https://issues.apache.org/jira/browse/HIVE-16189 > Project: Hive > Issue Type: Bug >Reporter: Chaoyu Tang >Assignee: Chaoyu Tang > Attachments: HIVE-16189.1.patch, HIVE-16189.patch > > > If the table rename does not succeed due to its failure in moving the data to > the new renamed table folder, the changes in TAB_COL_STATS are not rolled > back, which leads to invalid column stats. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (HIVE-16189) Table column stats might be invalidated in a failed table rename
[ https://issues.apache.org/jira/browse/HIVE-16189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925409#comment-15925409 ] Chaoyu Tang edited comment on HIVE-16189 at 3/15/17 1:57 AM: - 1. Fixed the failed tests. 2. Added a test based on [~pxiong]'s suggestion; the test scenario is as follows (see encryption_move_tbl.q): when renaming a table fails to move its data from one encryption zone to another due to EZ incompatibility, the rename fails but its column stats are nevertheless invalidated. When we describe the formatted table columns, all the column stats are gone. [~pxiong] could you review it to see if it makes sense? Thanks. was (Author: ctang.ma): 1. Fixed the failed tests. 2. Added a test based on [~pxiong]'s suggestion; the test scenario is as follows (see encryption_move_tbl.q): when renaming a table fails to move its data from one encryption zone to another due to EZ incompatibility, the rename fails but its column stats are nevertheless invalidated. When we describe the formatted table columns, all the column stats are gone. > Table column stats might be invalidated in a failed table rename > > > Key: HIVE-16189 > URL: https://issues.apache.org/jira/browse/HIVE-16189 > Project: Hive > Issue Type: Bug >Reporter: Chaoyu Tang >Assignee: Chaoyu Tang > Attachments: HIVE-16189.1.patch, HIVE-16189.patch > > > If the table rename does not succeed due to its failure in moving the data to > the new renamed table folder, the changes in TAB_COL_STATS are not rolled > back, which leads to invalid column stats. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15665) LLAP: OrcFileMetadata objects in cache can impact heap usage
[ https://issues.apache.org/jira/browse/HIVE-15665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-15665: Status: Patch Available (was: Open) > LLAP: OrcFileMetadata objects in cache can impact heap usage > > > Key: HIVE-15665 > URL: https://issues.apache.org/jira/browse/HIVE-15665 > Project: Hive > Issue Type: Improvement > Components: llap >Reporter: Rajesh Balamohan >Assignee: Sergey Shelukhin > Attachments: HIVE-15665.patch > > > OrcFileMetadata internally has filestats, stripestats etc which are allocated > in heap. On large data sets, this could have an impact on the heap usage and > the memory usage by different executors in LLAP. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16213) ObjectStore can leak Queries when rollbackTransaction
[ https://issues.apache.org/jira/browse/HIVE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925388#comment-15925388 ] Vihang Karajgaonkar commented on HIVE-16213: I looked at the method definition but did not look into the documentation. My IDE tells me that the currentTransaction.rollback() method can throw a {{JDOUserException}}. Not sure why Java does not enforce a compile-time check. {noformat} /** * Rolls back the current transaction if it is active */ @Override public void rollbackTransaction() { if (openTrasactionCalls < 1) { debugLog("rolling back transaction: no open transactions: " + openTrasactionCalls); return; } debugLog("Rollback transaction, isActive: " + currentTransaction.isActive()); try { if (currentTransaction.isActive() && transactionStatus != TXN_STATUS.ROLLBACK) { currentTransaction.rollback(); } } finally { openTrasactionCalls = 0; transactionStatus = TXN_STATUS.ROLLBACK; // remove all detached objects from the cache, since the transaction is // being rolled back they are no longer relevant, and this prevents them // from reattaching in future transactions pm.evictAll(); } } {noformat} > ObjectStore can leak Queries when rollbackTransaction > - > > Key: HIVE-16213 > URL: https://issues.apache.org/jira/browse/HIVE-16213 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Alexander Kolbasov >Assignee: Vihang Karajgaonkar > > In ObjectStore.java there are a few places with code similar to: > {code} > Query query = null; > try { > openTransaction(); > query = pm.newQuery(Something.class); > ... > commited = commitTransaction(); > } finally { > if (!commited) { > rollbackTransaction(); > } > if (query != null) { > query.closeAll(); > } > } > {code} > The problem is that rollbackTransaction() may throw an exception, in which > case query.closeAll() wouldn't be executed. > The fix would be to wrap rollbackTransaction in its own try-catch block. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16213) ObjectStore can leak Queries when rollbackTransaction
[ https://issues.apache.org/jira/browse/HIVE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925383#comment-15925383 ] Alexander Kolbasov commented on HIVE-16213: --- I am not 100% sure, but I think I saw exceptions from rollbackTransaction() some time back. There is http://www.datanucleus.org/javadocs/core/2.2/org/datanucleus/exceptions/RollbackStateTransitionException.html as well. I noticed that several DataNucleus methods do not document the exceptions that can be thrown from them. Is there explicit documentation that states that rollbackTransaction() will never throw an exception? > ObjectStore can leak Queries when rollbackTransaction > - > > Key: HIVE-16213 > URL: https://issues.apache.org/jira/browse/HIVE-16213 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Alexander Kolbasov >Assignee: Vihang Karajgaonkar > > In ObjectStore.java there are a few places with code similar to: > {code} > Query query = null; > try { > openTransaction(); > query = pm.newQuery(Something.class); > ... > commited = commitTransaction(); > } finally { > if (!commited) { > rollbackTransaction(); > } > if (query != null) { > query.closeAll(); > } > } > {code} > The problem is that rollbackTransaction() may throw an exception, in which > case query.closeAll() wouldn't be executed. > The fix would be to wrap rollbackTransaction in its own try-catch block. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16191) simplify thread usage in TaskExecutorService
[ https://issues.apache.org/jira/browse/HIVE-16191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925369#comment-15925369 ] Sergey Shelukhin commented on HIVE-16191: - Yeah, so it's 3 useless objects (also the future); plus it uses an existing executor for the callback. That's not the LOC I had in mind - that's just object cruft. One way to cut down on lines would be to remove the entire callback class and replace it with a little-known Java feature known as "a finally block" :) What I meant was something like this http://thedailywtf.com/articles/The-Enterprise-User-Agent ; the fact that it "does the job" doesn't mean everything else is a style choice... Anyway, it doesn't matter here. > simplify thread usage in TaskExecutorService > > > Key: HIVE-16191 > URL: https://issues.apache.org/jira/browse/HIVE-16191 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-16191.patch > > > Remove executors, futures, decorators etc. where not needed -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16210) Use jvm temporary tmp dir by default
[ https://issues.apache.org/jira/browse/HIVE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925362#comment-15925362 ] Hive QA commented on HIVE-16210: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12858753/HIVE-16210.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 10346 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4135/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4135/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4135/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12858753 - PreCommit-HIVE-Build > Use jvm temporary tmp dir by default > > > Key: HIVE-16210 > URL: https://issues.apache.org/jira/browse/HIVE-16210 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: slim bouguerra >Assignee: slim bouguerra > Attachments: HIVE-16210.patch > > > Instead of using "/tmp" by default, it makes more sense to use the JVM > default tmp dir. This can have dramatic consequences if the indexed files are > huge. For instance, applications run in containers can be provisioned with > a dedicated tmp dir. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16213) ObjectStore can leak Queries when rollbackTransaction
[ https://issues.apache.org/jira/browse/HIVE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925360#comment-15925360 ] Vihang Karajgaonkar commented on HIVE-16213: Hi [~akolb], {{rollbackTransaction()}} does not throw an exception according to its definition. Am I missing something? > ObjectStore can leak Queries when rollbackTransaction > - > > Key: HIVE-16213 > URL: https://issues.apache.org/jira/browse/HIVE-16213 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Alexander Kolbasov >Assignee: Vihang Karajgaonkar > > In ObjectStore.java there are a few places with code similar to: > {code} > Query query = null; > try { > openTransaction(); > query = pm.newQuery(Something.class); > ... > commited = commitTransaction(); > } finally { > if (!commited) { > rollbackTransaction(); > } > if (query != null) { > query.closeAll(); > } > } > {code} > The problem is that rollbackTransaction() may throw an exception, in which > case query.closeAll() wouldn't be executed. > The fix would be to wrap rollbackTransaction in its own try-catch block. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16215) counter recording for text cache may not fully work
[ https://issues.apache.org/jira/browse/HIVE-16215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-16215: Attachment: HIVE-16215.patch A simple patch. [~prasanth_j] can you take a look? > counter recording for text cache may not fully work > --- > > Key: HIVE-16215 > URL: https://issues.apache.org/jira/browse/HIVE-16215 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-16215.patch > > > StatsRecordingThreadPool is too specific -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16215) counter recording for text cache may not fully work
[ https://issues.apache.org/jira/browse/HIVE-16215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-16215: Status: Patch Available (was: Open) > counter recording for text cache may not fully work > --- > > Key: HIVE-16215 > URL: https://issues.apache.org/jira/browse/HIVE-16215 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-16215.patch > > > StatsRecordingThreadPool is too specific -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16212) MM tables: suspicious ORC HDFS counter changes
[ https://issues.apache.org/jira/browse/HIVE-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-16212: Fix Version/s: hive-14535 > MM tables: suspicious ORC HDFS counter changes > -- > > Key: HIVE-16212 > URL: https://issues.apache.org/jira/browse/HIVE-16212 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: hive-14535 > > > org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters1] > (batchId=136) > org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters] > (batchId=139) > org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] > (batchId=136) > org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_schema_evol_3a] > (batchId=137) > HDFS counters for operation counts go up (which I can repro locally). -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Resolved] (HIVE-16212) MM tables: suspicious ORC HDFS counter changes
[ https://issues.apache.org/jira/browse/HIVE-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin resolved HIVE-16212. - Resolution: Fixed Turned out to be an mkdir call that was a no-op for non-MM tables but nonetheless increased the number of ops reported in counters a lot (reads went from 2 to 10, and writes from 2 to 4). > MM tables: suspicious ORC HDFS counter changes > -- > > Key: HIVE-16212 > URL: https://issues.apache.org/jira/browse/HIVE-16212 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters1] > (batchId=136) > org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters] > (batchId=139) > org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] > (batchId=136) > org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_schema_evol_3a] > (batchId=137) > HDFS counters for operation counts go up (which I can repro locally). -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15978) Support regr_* functions
[ https://issues.apache.org/jira/browse/HIVE-15978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-15978: Attachment: HIVE-15978.2.patch > Support regr_* functions > > > Key: HIVE-15978 > URL: https://issues.apache.org/jira/browse/HIVE-15978 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Zoltan Haindrich > Attachments: HIVE-15978.1.patch, HIVE-15978.2.patch, > HIVE-15978.2.patch > > > Support the standard regr_* functions, regr_slope, regr_intercept, regr_r2, > regr_sxx, regr_syy, regr_sxy, regr_avgx, regr_avgy, regr_count. SQL reference > section 10.9 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15978) Support regr_* functions
[ https://issues.apache.org/jira/browse/HIVE-15978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-15978: Attachment: HIVE-15978.2.patch > Support regr_* functions > > > Key: HIVE-15978 > URL: https://issues.apache.org/jira/browse/HIVE-15978 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Zoltan Haindrich > Attachments: HIVE-15978.1.patch, HIVE-15978.2.patch, > HIVE-15978.2.patch > > > Support the standard regr_* functions, regr_slope, regr_intercept, regr_r2, > regr_sxx, regr_syy, regr_sxy, regr_avgx, regr_avgy, regr_count. SQL reference > section 10.9 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15978) Support regr_* functions
[ https://issues.apache.org/jira/browse/HIVE-15978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-15978: Attachment: (was: HIVE-15978.2.patch) > Support regr_* functions > > > Key: HIVE-15978 > URL: https://issues.apache.org/jira/browse/HIVE-15978 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Zoltan Haindrich > Attachments: HIVE-15978.1.patch > > > Support the standard regr_* functions, regr_slope, regr_intercept, regr_r2, > regr_sxx, regr_syy, regr_sxy, regr_avgx, regr_avgy, regr_count. SQL reference > section 10.9 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15978) Support regr_* functions
[ https://issues.apache.org/jira/browse/HIVE-15978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-15978: Attachment: HIVE-15978.2.patch #2) use decimal averaging aggregator in avgx/avgy > Support regr_* functions > > > Key: HIVE-15978 > URL: https://issues.apache.org/jira/browse/HIVE-15978 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Zoltan Haindrich > Attachments: HIVE-15978.1.patch, HIVE-15978.2.patch > > > Support the standard regr_* functions, regr_slope, regr_intercept, regr_r2, > regr_sxx, regr_syy, regr_sxy, regr_avgx, regr_avgy, regr_count. SQL reference > section 10.9 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
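Each of the standard regr_* aggregates reduces to a handful of running sums over the input pairs, which is what makes a one-pass UDAF implementation possible. As a hedged illustration — plain Java with hypothetical names, not Hive's actual GenericUDAF classes — regr_slope(y, x) in the standard's (y, x) argument order looks like this:

```java
// Illustrative only: SQL-standard regr_slope expressed as running sums
// (section 10.9 semantics). Class and method names are hypothetical.
public class RegrSketch {
    // regr_slope(y, x) = (n*Sxy - Sx*Sy) / (n*Sxx - Sx*Sx);
    // null when the denominator is 0 (per the standard's corner case)
    public static Double slope(double[] y, double[] x) {
        long n = 0;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < x.length; i++) {
            n++;
            sx += x[i];
            sy += y[i];
            sxx += x[i] * x[i];
            sxy += x[i] * y[i];
        }
        double den = n * sxx - sx * sx;
        return den == 0 ? null : (n * sxy - sx * sy) / den;
    }

    public static void main(String[] args) {
        // y = 2x + 1 over x = 1..3 gives slope 2.0
        Double s = slope(new double[]{3, 5, 7}, new double[]{1, 2, 3});
        System.out.println(s); // prints 2.0
    }
}
```

The null-vs-NaN corner cases discussed for corr/covar_samp elsewhere in this thread apply here too: the standard wants null, not NaN, when the denominator sums cancel.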
[jira] [Assigned] (HIVE-16215) counter recording for text cache may not fully work
[ https://issues.apache.org/jira/browse/HIVE-16215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-16215: --- > counter recording for text cache may not fully work > --- > > Key: HIVE-16215 > URL: https://issues.apache.org/jira/browse/HIVE-16215 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > StatsRecordingThreadPool is too specific -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16211) MERGE statement failing with ClassCastException
[ https://issues.apache.org/jira/browse/HIVE-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-16211: -- Attachment: HIVE-16211.2.patch Added a unit test. > MERGE statement failing with ClassCastException > --- > > Key: HIVE-16211 > URL: https://issues.apache.org/jira/browse/HIVE-16211 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > Attachments: HIVE-16211.1.patch, HIVE-16211.2.patch > > > Issuing a merge statement gives this error, > hive> 2017-03-14T18:34:02,945 ERROR [17d1c728-8865-47f5-a6fd-2b156d183d0f > main] ql.Driver: FAILED: ClassCastException > org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc cannot be cast to > org.apache.hadoop.hive.ql.plan.ExprNodeColumnDesc > java.lang.ClassCastException: > org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc cannot be cast to > org.apache.hadoop.hive.ql.plan.ExprNodeColumnDesc > at > org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan(DynamicPartitionPruningOptimization.java:410) > at > org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.process(DynamicPartitionPruningOptimization.java:226) > at > org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90) > at > org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105) > at > org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89) > at > org.apache.hadoop.hive.ql.lib.ForwardWalker.walk(ForwardWalker.java:74) > at > org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120) > at > org.apache.hadoop.hive.ql.parse.TezCompiler.runDynamicPartitionPruning(TezCompiler.java:359) > at > org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:91) > at > org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:138) > 
at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11159) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10708) > at > org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeInternal(UpdateDeleteSemanticAnalyzer.java:70) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:257) > at > org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeMerge(UpdateDeleteSemanticAnalyzer.java:729) > at > org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeInternal(UpdateDeleteSemanticAnalyzer.java:84) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:257) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:455) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1197) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1290) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1123) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.util.RunJar.run(RunJar.java:233) > at org.apache.hadoop.util.RunJar.main(RunJar.java:148) -- This 
message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16211) MERGE statement failing with ClassCastException
[ https://issues.apache.org/jira/browse/HIVE-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925303#comment-15925303 ] Hive QA commented on HIVE-16211: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12858755/HIVE-16211.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 10346 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] (batchId=95) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4134/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4134/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4134/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12858755 - PreCommit-HIVE-Build > MERGE statement failing with ClassCastException > --- > > Key: HIVE-16211 > URL: https://issues.apache.org/jira/browse/HIVE-16211 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > Attachments: HIVE-16211.1.patch > > > Issuing a merge statement gives this error, > hive> 2017-03-14T18:34:02,945 ERROR [17d1c728-8865-47f5-a6fd-2b156d183d0f > main] ql.Driver: FAILED: ClassCastException > org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc cannot be cast to > org.apache.hadoop.hive.ql.plan.ExprNodeColumnDesc > java.lang.ClassCastException: > org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc cannot be cast to > org.apache.hadoop.hive.ql.plan.ExprNodeColumnDesc > at > org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan(DynamicPartitionPruningOptimization.java:410) > at > org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.process(DynamicPartitionPruningOptimization.java:226) > at > org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90) > at > org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105) > at > org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89) > at > org.apache.hadoop.hive.ql.lib.ForwardWalker.walk(ForwardWalker.java:74) > at > org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120) > at > org.apache.hadoop.hive.ql.parse.TezCompiler.runDynamicPartitionPruning(TezCompiler.java:359) > at > org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:91) > at > org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:138) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11159) > at > 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10708) > at > org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeInternal(UpdateDeleteSemanticAnalyzer.java:70) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:257) > at > org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeMerge(UpdateDeleteSemanticAnalyzer.java:729) > at > org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeInternal(UpdateDeleteSemanticAnalyzer.java:84) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:257) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:455) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1197) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1290) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1123) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686) > at
[jira] [Updated] (HIVE-16214) Explore the possibility of introducing a service-client module
[ https://issues.apache.org/jira/browse/HIVE-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-16214: Status: Patch Available (was: Open) > Explore the possibility of introducing a service-client module > --- > > Key: HIVE-16214 > URL: https://issues.apache.org/jira/browse/HIVE-16214 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich > Attachments: HIVE-16214.1.patch, HIVE-16214.2.patch > > > The jdbc driver pulls in a lot of things from hive...and that may affect the > jdbc driver user. > In this ticket I experiment with the extraction of the relevant parts of > service (wrt the jdbc driver) into a service-client module. > I've opened a PR...to enable a commit-by-commit view: > https://github.com/apache/hive/pull/158 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16214) Explore the possibility of introducing a service-client module
[ https://issues.apache.org/jira/browse/HIVE-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-16214: Attachment: HIVE-16214.2.patch #2) it's also possible to relatively easily remove the metastore from play (wrt the jdbc driver), by moving 3 files to some other module ...a metastore-rpc or metastore-api module would be good > Explore the possibility of introducing a service-client module > --- > > Key: HIVE-16214 > URL: https://issues.apache.org/jira/browse/HIVE-16214 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich > Attachments: HIVE-16214.1.patch, HIVE-16214.2.patch > > > The jdbc driver pulls in a lot of things from hive...and that may affect the > jdbc driver user. > In this ticket I experiment with the extraction of the relevant parts of > service (wrt the jdbc driver) into a service-client module. > I've opened a PR...to enable a commit-by-commit view: > https://github.com/apache/hive/pull/158 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16191) simplify thread usage in TaskExecutorService
[ https://issues.apache.org/jira/browse/HIVE-16191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925253#comment-15925253 ] Siddharth Seth commented on HIVE-16191: --- There's one executor, and one wrapper class over that, and a callback. If you want to cut LOC - do that in one line. However, I think that is bad practice. bq. you prefer the "enterprise" low-quality style in small details Not sure what this means. > simplify thread usage in TaskExecutorService > > > Key: HIVE-16191 > URL: https://issues.apache.org/jira/browse/HIVE-16191 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-16191.patch > > > Remove executors, futures, decorators etc where not needed -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (HIVE-15616) Improve contents of qfile test output
[ https://issues.apache.org/jira/browse/HIVE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925244#comment-15925244 ] Pengcheng Xiong edited comment on HIVE-15616 at 3/14/17 11:34 PM: -- I had a discussion with [~ashutoshc]. Previously, if we set "-Dtest.output.overwrite=true", we should see the test pass and the golden file be overwritten. Now, what I saw is {code} < Stage: Stage-8 < Column Stats Work < Column Stats Desc: < Columns: key, value < Col Output was too long and had to be truncated... Tests run: 5, Failures: 5, Errors: 0, Skipped: 0 {code} This changes the original behavior. What we expect is that if we set "-Dtest.output.overwrite=true", it is OK to print the diff information as added by this patch, but please pass the test if the failure is just due to a golden file change. This patch is going to be reverted. Could you please address the problem and resubmit the patch? Thanks. was (Author: pxiong): I had a discussion with [~ashutoshc]. Previously, if we set "-Dtest.output.overwrite=true", we should see the test pass and the golden file be overwritten. Now, what I saw is {code} < Stage: Stage-8 < Column Stats Work < Column Stats Desc: < Columns: key, value < Col Output was too long and had to be truncated... Tests run: 5, Failures: 5, Errors: 0, Skipped: 0 {code} This changes the original behavior. What we expect is that if we set "-Dtest.output.overwrite=true", it is OK to print the diff information as added by this patch, but please pass the test. This patch is going to be reverted. Could you please address the problem and resubmit the patch? Thanks. 
> Improve contents of qfile test output > - > > Key: HIVE-15616 > URL: https://issues.apache.org/jira/browse/HIVE-15616 > Project: Hive > Issue Type: Improvement > Components: Tests >Affects Versions: 2.1.1 >Reporter: Barna Zsombor Klara >Assignee: Barna Zsombor Klara >Priority: Minor > Fix For: 2.2.0 > > Attachments: HIVE-15616.1.patch, HIVE-15616.2.patch, > HIVE-15616.3.patch, HIVE-15616.4.patch, HIVE-15616.patch > > > The current output of the failed qtests has a less than ideal signal to noise > ratio. > We have duplicated stack traces and messages between the error message/stack > trace/error out. > For diff errors the actual difference is missing from the error message and > can be found only in the standard out. > I would like to simplify this output by removing duplications, moving > relevant information to the top. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Reopened] (HIVE-15616) Improve contents of qfile test output
[ https://issues.apache.org/jira/browse/HIVE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong reopened HIVE-15616: > Improve contents of qfile test output > - > > Key: HIVE-15616 > URL: https://issues.apache.org/jira/browse/HIVE-15616 > Project: Hive > Issue Type: Improvement > Components: Tests >Affects Versions: 2.1.1 >Reporter: Barna Zsombor Klara >Assignee: Barna Zsombor Klara >Priority: Minor > Fix For: 2.2.0 > > Attachments: HIVE-15616.1.patch, HIVE-15616.2.patch, > HIVE-15616.3.patch, HIVE-15616.4.patch, HIVE-15616.patch > > > The current output of the failed qtests has a less than ideal signal to noise > ratio. > We have duplicated stack traces and messages between the error message/stack > trace/error out. > For diff errors the actual difference is missing from the error message and > can be found only in the standard out. > I would like to simplify this output by removing duplications, moving > relevant information to the top. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-15616) Improve contents of qfile test output
[ https://issues.apache.org/jira/browse/HIVE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925244#comment-15925244 ] Pengcheng Xiong commented on HIVE-15616: I had a discussion with [~ashutoshc]. Previously, if we set "-Dtest.output.overwrite=true", we should see the test pass and the golden file be overwritten. Now, what I saw is {code} < Stage: Stage-8 < Column Stats Work < Column Stats Desc: < Columns: key, value < Col Output was too long and had to be truncated... Tests run: 5, Failures: 5, Errors: 0, Skipped: 0 {code} This changes the original behavior. What we expect is that if we set "-Dtest.output.overwrite=true", it is OK to print the diff information as added by this patch, but please pass the test. This patch is going to be reverted. Could you please address the problem and resubmit the patch? Thanks. > Improve contents of qfile test output > - > > Key: HIVE-15616 > URL: https://issues.apache.org/jira/browse/HIVE-15616 > Project: Hive > Issue Type: Improvement > Components: Tests >Affects Versions: 2.1.1 >Reporter: Barna Zsombor Klara >Assignee: Barna Zsombor Klara >Priority: Minor > Fix For: 2.2.0 > > Attachments: HIVE-15616.1.patch, HIVE-15616.2.patch, > HIVE-15616.3.patch, HIVE-15616.4.patch, HIVE-15616.patch > > > The current output of the failed qtests has a less than ideal signal to noise > ratio. > We have duplicated stack traces and messages between the error message/stack > trace/error out. > For diff errors the actual difference is missing from the error message and > can be found only in the standard out. > I would like to simplify this output by removing duplications, moving > relevant information to the top. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16213) ObjectStore can leak Queries when rollbackTransaction
[ https://issues.apache.org/jira/browse/HIVE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925241#comment-15925241 ] Alexander Kolbasov commented on HIVE-16213: --- [~vihangk1] I am not working on it. Thank you for looking at this. > ObjectStore can leak Queries when rollbackTransaction > - > > Key: HIVE-16213 > URL: https://issues.apache.org/jira/browse/HIVE-16213 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Alexander Kolbasov >Assignee: Vihang Karajgaonkar > > In ObjectStore.java there are a few places with code similar to: > {code} > Query query = null; > try { > openTransaction(); > query = pm.newQuery(Something.class); > ... > commited = commitTransaction(); > } finally { > if (!commited) { > rollbackTransaction(); > } > if (query != null) { > query.closeAll(); > } > } > {code} > The problem is that rollbackTransaction() may throw an exception in which > case query.closeAll() wouldn't be executed. > The fix would be to wrap rollbackTransaction in its own try-catch block. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16208) Vectorization: ProcessingModeHashAggregate::sumBatchSize is never reset
[ https://issues.apache.org/jira/browse/HIVE-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-16208: --- Affects Version/s: 2.2.0 Status: Patch Available (was: Open) > Vectorization: ProcessingModeHashAggregate::sumBatchSize is never reset > --- > > Key: HIVE-16208 > URL: https://issues.apache.org/jira/browse/HIVE-16208 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Gopal V >Assignee: Gopal V >Priority: Minor > Attachments: HIVE-16208.1.patch > > > When processing >2x the hash-table size in the vectorized group-by, the check > for fall-back to streaming is wrong because {{sumBatchSize*minReduction}} is > not reset when processing the next split. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16208) Vectorization: ProcessingModeHashAggregate::sumBatchSize is never reset
[ https://issues.apache.org/jira/browse/HIVE-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-16208: --- Attachment: HIVE-16208.1.patch > Vectorization: ProcessingModeHashAggregate::sumBatchSize is never reset > --- > > Key: HIVE-16208 > URL: https://issues.apache.org/jira/browse/HIVE-16208 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Gopal V >Assignee: Gopal V >Priority: Minor > Attachments: HIVE-16208.1.patch > > > When processing >2x the hash-table size in the vectorized group-by, the check > for fall-back to streaming is wrong because {{sumBatchSize*minReduction}} is > not reset when processing the next split. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HIVE-16208) Vectorization: ProcessingModeHashAggregate::sumBatchSize is never reset
[ https://issues.apache.org/jira/browse/HIVE-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V reassigned HIVE-16208: -- Assignee: Gopal V > Vectorization: ProcessingModeHashAggregate::sumBatchSize is never reset > --- > > Key: HIVE-16208 > URL: https://issues.apache.org/jira/browse/HIVE-16208 > Project: Hive > Issue Type: Bug >Reporter: Gopal V >Assignee: Gopal V > > When processing >2x the hash-table size in the vectorized group-by, the check > for fall-back to streaming is wrong because {{sumBatchSize*minReduction}} is > not reset when processing the next split. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16208) Vectorization: ProcessingModeHashAggregate::sumBatchSize is never reset
[ https://issues.apache.org/jira/browse/HIVE-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-16208: --- Description: When processing >2x the hash-table size in the vectorized group-by, the check for fall-back to streaming is wrong because {{sumBatchSize*minReduction}} is not reset when processing the next split. (was: When processing >2x the hash-table size in the vectorized group-by, the check for fall-back to streaming is wrong because {{sumBatchSize*minReduction}} is not modified by a partial flush or a full flush.) > Vectorization: ProcessingModeHashAggregate::sumBatchSize is never reset > --- > > Key: HIVE-16208 > URL: https://issues.apache.org/jira/browse/HIVE-16208 > Project: Hive > Issue Type: Bug >Reporter: Gopal V > > When processing >2x the hash-table size in the vectorized group-by, the check > for fall-back to streaming is wrong because {{sumBatchSize*minReduction}} is > not reset when processing the next split. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16208) Vectorization: ProcessingModeHashAggregate::sumBatchSize is never reset
[ https://issues.apache.org/jira/browse/HIVE-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-16208: --- Priority: Minor (was: Major) > Vectorization: ProcessingModeHashAggregate::sumBatchSize is never reset > --- > > Key: HIVE-16208 > URL: https://issues.apache.org/jira/browse/HIVE-16208 > Project: Hive > Issue Type: Bug >Reporter: Gopal V >Assignee: Gopal V >Priority: Minor > > When processing >2x the hash-table size in the vectorized group-by, the check > for fall-back to streaming is wrong because {{sumBatchSize*minReduction}} is > not reset when processing the next split. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
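The HIVE-16208 bug is easiest to see as a sketch. The field and method names below are illustrative stand-ins, not the actual ProcessingModeHashAggregate members; the point is that the fall-back-to-streaming check compares hash-table entries against {{sumBatchSize*minReduction}}, so sumBatchSize must be reset when a new split starts:

```java
// Hedged sketch of the hash-aggregate reduction check; names are illustrative.
public class ReductionCheck {
    long sumBatchSize = 0;    // rows fed into the hash aggregate so far
    long hashEntries = 0;     // distinct keys currently held in the hash table
    final float minReduction = 0.5f;

    // fall back to streaming when hashing is not reducing the row count enough
    boolean shouldStream() {
        return hashEntries > sumBatchSize * minReduction;
    }

    void processBatch(int rows, int newKeys) {
        sumBatchSize += rows;
        hashEntries += newKeys;
    }

    // the fix: reset per split — without this, sumBatchSize keeps growing
    // across splits and the reduction estimate for the current split is wrong
    void startNewSplit() {
        sumBatchSize = 0;
        hashEntries = 0;
    }

    public static void main(String[] args) {
        ReductionCheck c = new ReductionCheck();
        c.processBatch(1000, 900);            // poor reduction: 900 keys from 1000 rows
        System.out.println(c.shouldStream()); // true
        c.startNewSplit();
        c.processBatch(1000, 100);            // good reduction on the next split
        System.out.println(c.shouldStream()); // false
    }
}
```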
[jira] [Commented] (HIVE-16130) Remove jackson classes from hive-jdbc standalone jar
[ https://issues.apache.org/jira/browse/HIVE-16130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925216#comment-15925216 ] Hive QA commented on HIVE-16130: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12858754/HIVE-16130.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 10346 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr] (batchId=141) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4133/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4133/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4133/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12858754 - PreCommit-HIVE-Build > Remove jackson classes from hive-jdbc standalone jar > > > Key: HIVE-16130 > URL: https://issues.apache.org/jira/browse/HIVE-16130 > Project: Hive > Issue Type: Bug >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-16130.1.patch, HIVE-16130.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-13780) Allow user to update AVRO table schema via command even if table's definition was defined through schema file
[ https://issues.apache.org/jira/browse/HIVE-13780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925211#comment-15925211 ] Adam Szita commented on HIVE-13780: --- Thanks for the review [~aihuaxu] > Allow user to update AVRO table schema via command even if table's definition > was defined through schema file > - > > Key: HIVE-13780 > URL: https://issues.apache.org/jira/browse/HIVE-13780 > Project: Hive > Issue Type: Improvement > Components: CLI >Affects Versions: 2.0.0 >Reporter: Eric Lin >Assignee: Adam Szita >Priority: Minor > Fix For: 2.2.0 > > Attachments: HIVE-13780.0.patch, HIVE-13780.1.patch, > HIVE-13780.3.patch > > > If a table is defined as below: > {code} > CREATE TABLE test > STORED AS AVRO > TBLPROPERTIES ('avro.schema.url'='/tmp/schema.json'); > {code} > if user tries to run command: > {code} > ALTER TABLE test CHANGE COLUMN col1 col1 STRING COMMENT 'test comment'; > {code} > The query will return without any warning, but has no affect to the table. > It would be good if we can allow user to ALTER table (add/change column, > update comment etc) even though the schema is defined through schema file. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HIVE-16213) ObjectStore can leak Queries when rollbackTransaction
[ https://issues.apache.org/jira/browse/HIVE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar reassigned HIVE-16213: -- Assignee: Vihang Karajgaonkar > ObjectStore can leak Queries when rollbackTransaction > - > > Key: HIVE-16213 > URL: https://issues.apache.org/jira/browse/HIVE-16213 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Alexander Kolbasov >Assignee: Vihang Karajgaonkar > > In ObjectStore.java there are a few places with code similar to: > {code} > Query query = null; > try { > openTransaction(); > query = pm.newQuery(Something.class); > ... > commited = commitTransaction(); > } finally { > if (!commited) { > rollbackTransaction(); > } > if (query != null) { > query.closeAll(); > } > } > {code} > The problem is that rollbackTransaction() may throw an exception in which > case query.closeAll() wouldn't be executed. > The fix would be to wrap rollbackTransaction in its own try-catch block. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16213) ObjectStore can leak Queries when rollbackTransaction
[ https://issues.apache.org/jira/browse/HIVE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925196#comment-15925196 ] Vihang Karajgaonkar commented on HIVE-16213: Thanks for creating this [~akolb]. If you are not actively working on this I can take it up. Let me know. > ObjectStore can leak Queries when rollbackTransaction > - > > Key: HIVE-16213 > URL: https://issues.apache.org/jira/browse/HIVE-16213 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Alexander Kolbasov > > In ObjectStore.java there are a few places with code similar to: > {code} > Query query = null; > try { > openTransaction(); > query = pm.newQuery(Something.class); > ... > commited = commitTransaction(); > } finally { > if (!commited) { > rollbackTransaction(); > } > if (query != null) { > query.closeAll(); > } > } > {code} > The problem is that rollbackTransaction() may throw an exception in which > case query.closeAll() wouldn't be executed. > The fix would be to wrap rollbackTransaction in its own try-catch block. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
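The fix the ticket proposes can be sketched as follows. The Query interface and method names below are simplified stand-ins for the JDO/ObjectStore API; the essential change is isolating rollbackTransaction() in its own try/catch so query.closeAll() always runs:

```java
// Sketch of the safer cleanup pattern: rollback is isolated in its own
// try/catch so an exception there cannot skip query.closeAll().
// Query and the transaction methods are stand-ins, not the real JDO API.
public class RollbackSafety {
    interface Query { void closeAll(); }

    static boolean closed = false;

    static void rollbackTransaction() {
        // simulate the failure mode described in the ticket
        throw new RuntimeException("rollback failed");
    }

    // corresponds to the finally block in the ObjectStore snippet above
    static void finallyBlock(boolean committed, Query query) {
        if (!committed) {
            try {
                rollbackTransaction();
            } catch (Exception e) {
                // log and continue; do not let a rollback failure leak the query
            }
        }
        if (query != null) {
            query.closeAll();
        }
    }

    public static void main(String[] args) {
        finallyBlock(false, () -> closed = true);
        System.out.println(closed); // true — closed even though rollback threw
    }
}
```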
[jira] [Updated] (HIVE-16164) Provide mechanism for passing HMS notification ID between transactional and non-transactional listeners.
[ https://issues.apache.org/jira/browse/HIVE-16164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-16164: --- Attachment: HIVE-16164.2.patch > Provide mechanism for passing HMS notification ID between transactional and > non-transactional listeners. > > > Key: HIVE-16164 > URL: https://issues.apache.org/jira/browse/HIVE-16164 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: Sergio Peña >Assignee: Sergio Peña > Attachments: HIVE-16164.1.patch, HIVE-16164.2.patch > > > The HMS DB notification listener currently stores an event ID on the HMS > backend DB so that external applications (such as backup apps) can request > incremental notifications based on the last event ID requested. > The HMS DB notification and backup applications are asynchronous. However, > there are times when applications may be required to be in sync with the > latest HMS event in order to process an action. These applications will > provide a listener implementation that is called by the HMS after an HMS > transaction happens. > The problem is that the listener running after the transaction (or during the > non-transactional context) may need the DB event ID in order to sync all > events that happened prior to that event ID, but this ID is never passed to the > non-transactional listeners. > We can pass this event information through the EnvironmentContext found on > each ListenerEvent implementation (such as CreateTableEvent), and send the > EnvironmentContext to the non-transactional listeners to get the event ID. > The DbNotificationListener already knows the event ID after calling > ObjectStore.addNotificationEvent(). We just need to set this event ID on the > EnvironmentContext for each of the event notifications and make sure that > this EnvironmentContext is sent to the non-transactional listeners. 
> Here's the code example when creating a table on {{create_table_core}}: > {noformat} > ms.createTable(tbl); > if (transactionalListeners.size() > 0) { > CreateTableEvent createTableEvent = new CreateTableEvent(tbl, true, this); > createTableEvent.setEnvironmentContext(envContext); > for (MetaStoreEventListener transactionalListener : > transactionalListeners) { > transactionalListener.onCreateTable(createTableEvent); // <- > Here the notification ID is generated > } > } > success = ms.commitTransaction(); > } finally { > if (!success) { > ms.rollbackTransaction(); > if (madeDir) { > wh.deleteDir(tblPath, true); > } > } > for (MetaStoreEventListener listener : listeners) { > CreateTableEvent createTableEvent = > new CreateTableEvent(tbl, success, this); > createTableEvent.setEnvironmentContext(envContext); > listener.onCreateTable(createTableEvent);// <- > Here we would like to consume notification ID > } > {noformat} > We could use a specific key name that will be used on the EnvironmentContext, > such as DB_NOTIFICATION_EVENT_ID. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
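The hand-off described above can be sketched as follows. This is a hypothetical illustration, not the committed API: the `EnvironmentContext` class here is a minimal stand-in for the Thrift-generated one, and `DB_NOTIFICATION_EVENT_ID` is the key name suggested in the description.

```java
import java.util.HashMap;
import java.util.Map;

public class EventIdPassingDemo {
    /** Key name suggested in the description (hypothetical constant). */
    static final String DB_NOTIFICATION_EVENT_ID = "DB_NOTIFICATION_EVENT_ID";

    /** Minimal stand-in for the Thrift EnvironmentContext (a string map). */
    static class EnvironmentContext {
        final Map<String, String> properties = new HashMap<>();
        void putToProperties(String k, String v) { properties.put(k, v); }
        String get(String k) { return properties.get(k); }
    }

    public static void main(String[] args) {
        EnvironmentContext envContext = new EnvironmentContext();

        // Transactional listener side: once the notification log assigns an
        // event ID, record it on the shared context.
        long eventId = 42L;  // would come from ObjectStore.addNotificationEvent()
        envContext.putToProperties(DB_NOTIFICATION_EVENT_ID, Long.toString(eventId));

        // Non-transactional listener side: the same context object is attached
        // to the ListenerEvent, so the ID can simply be read back out.
        long seen = Long.parseLong(envContext.get(DB_NOTIFICATION_EVENT_ID));
        System.out.println("non-transactional listener sees event id " + seen);
    }
}
```

Because both listener lists receive the same `EnvironmentContext` instance via `createTableEvent.setEnvironmentContext(envContext)`, anything the transactional listener stores under the agreed key is visible to the non-transactional listeners afterwards.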
[jira] [Updated] (HIVE-16214) Explore the possibillity of introducing a service-client module
[ https://issues.apache.org/jira/browse/HIVE-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-16214: Description: The jdbc driver pulls in a lot of things from hive...and that may affect the jdbc driver user. In this ticket I experiment with the extraction of the relevant parts of service(wrt to the jdbc driver) into a service-client module. I've opened a PR...to enable commit by commit view: https://github.com/apache/hive/pull/158 was: The jdbc driver pulls in a lot of things from hive...and that may affect the jdbc driver user. In this ticket I experiment with the extraction of the relevant parts of service(wrt to the jdbc driver) into a service-client module. > Explore the possibillity of introducing a service-client module > --- > > Key: HIVE-16214 > URL: https://issues.apache.org/jira/browse/HIVE-16214 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich > Attachments: HIVE-16214.1.patch > > > The jdbc driver pulls in a lot of things from hive...and that may affect the > jdbc driver user. > In this ticket I experiment with the extraction of the relevant parts of > service(wrt to the jdbc driver) into a service-client module. > I've opened a PR...to enable commit by commit view: > https://github.com/apache/hive/pull/158 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16214) Explore the possibillity of introducing a service-client module
[ https://issues.apache.org/jira/browse/HIVE-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925162#comment-15925162 ] ASF GitHub Bot commented on HIVE-16214: --- GitHub user kgyrtkirk opened a pull request: https://github.com/apache/hive/pull/158 HIVE-16214 Service client experiment You can merge this pull request into a Git repository by running: $ git pull https://github.com/kgyrtkirk/hive service-client Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hive/pull/158.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #158 commit 90dc6386d033041737c8c4b14ca59a983ed806f2 Author: Zoltan Haindrich Date: 2017-03-14T21:24:16Z cripple embedded support commit 2e36af2e1c602970ded201598eb2072cc502c620 Author: Zoltan Haindrich Date: 2017-03-14T21:25:10Z move around files commit e3f443cee12102e579216a603f94deb8f902e87e Author: Zoltan Haindrich Date: 2017-03-14T21:30:51Z add reflection to use Hive.class commit a0ec66108acf167d35c36225fa277b94b814a020 Author: Zoltan Haindrich Date: 2017-03-14T21:37:18Z use TCLIService.Iface instead of implementation commit 2b1040dbf3093242686bd622bdd85414fb40aac0 Author: Zoltan Haindrich Date: 2017-03-14T21:59:54Z possibly re-enable embedded mode commit d44603edc987ccc9839d49d0b3dfcb6c6e8c4ebc Author: Zoltan Haindrich Date: 2017-03-14T22:01:12Z remove unneeded deps from service-client commit ede65c5cf7d1b83ebea04707d7c4853ad73b5dd6 Author: Zoltan Haindrich Date: 2017-03-14T22:05:29Z make service depend on client instead rpc > Explore the possibillity of introducing a service-client module > --- > > Key: HIVE-16214 > URL: https://issues.apache.org/jira/browse/HIVE-16214 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich > Attachments: HIVE-16214.1.patch > > > The jdbc driver pulls in a lot of things from hive...and that may affect the 
> jdbc driver user. > In this ticket I experiment with the extraction of the relevant parts of > service(wrt to the jdbc driver) into a service-client module. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16181) Make logic for hdfs directory location extraction more generic, in webhcat test driver
[ https://issues.apache.org/jira/browse/HIVE-16181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925160#comment-15925160 ] Aswathy Chellammal Sreekumar commented on HIVE-16181: - Thanks for the review [~daijy] > Make logic for hdfs directory location extraction more generic, in webhcat > test driver > -- > > Key: HIVE-16181 > URL: https://issues.apache.org/jira/browse/HIVE-16181 > Project: Hive > Issue Type: Test > Components: WebHCat >Reporter: Aswathy Chellammal Sreekumar >Assignee: Aswathy Chellammal Sreekumar >Priority: Minor > Fix For: 2.2.0 > > Attachments: HIVE-16181.1.patch > > > Patch to make regular expression for directory location lookup in > setLocationPermGroup of TestDriverCurl more generic to accommodate patterns > without port number like hdfs://mycluster/hive/warehouse/ -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Resolved] (HIVE-16181) Make logic for hdfs directory location extraction more generic, in webhcat test driver
[ https://issues.apache.org/jira/browse/HIVE-16181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai resolved HIVE-16181. --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.2.0 I see. Patch looks good and committed to master. Thanks Aswathy! > Make logic for hdfs directory location extraction more generic, in webhcat > test driver > -- > > Key: HIVE-16181 > URL: https://issues.apache.org/jira/browse/HIVE-16181 > Project: Hive > Issue Type: Test > Components: WebHCat >Reporter: Aswathy Chellammal Sreekumar >Assignee: Aswathy Chellammal Sreekumar >Priority: Minor > Fix For: 2.2.0 > > Attachments: HIVE-16181.1.patch > > > Patch to make regular expression for directory location lookup in > setLocationPermGroup of TestDriverCurl more generic to accommodate patterns > without port number like hdfs://mycluster/hive/warehouse/ -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16214) Explore the possibillity of introducing a service-client module
[ https://issues.apache.org/jira/browse/HIVE-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-16214: Attachment: HIVE-16214.1.patch #1 - wip patch - see how well the tests like this change :) > Explore the possibillity of introducing a service-client module > --- > > Key: HIVE-16214 > URL: https://issues.apache.org/jira/browse/HIVE-16214 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich > Attachments: HIVE-16214.1.patch > > > The jdbc driver pulls in a lot of things from hive...and that may affect the > jdbc driver user. > In this ticket I experiment with the extraction of the relevant parts of > service(wrt to the jdbc driver) into a service-client module. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16166) HS2 may still waste up to 15% of memory on duplicate strings
[ https://issues.apache.org/jira/browse/HIVE-16166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925147#comment-15925147 ] Sergio Peña commented on HIVE-16166: The patch looks good. +1 > HS2 may still waste up to 15% of memory on duplicate strings > > > Key: HIVE-16166 > URL: https://issues.apache.org/jira/browse/HIVE-16166 > Project: Hive > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev > Attachments: ch_2_excerpt.txt, HIVE-16166.01.patch > > > A heap dump obtained from one of our users shows that 15% of memory is wasted > on duplicate strings, despite the recent optimizations that I made. The > problematic strings just come from different sources this time. See the > excerpt from the jxray (www.jxray.com) analysis attached. > Adding String.intern() calls in the appropriate places reduces the overhead > of duplicate strings with this workload to ~6%. The remaining duplicates come > mostly from JDK internal and MapReduce data structures, and thus are more > difficult to fix. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
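The deduplication effect of `String.intern()` mentioned above can be seen in a few lines of plain Java; this illustrates the idea rather than the specific Hive call sites:

```java
public class InternDemo {
    public static void main(String[] args) {
        // Two equal strings built at runtime occupy two separate heap
        // objects -- exactly the kind of duplication the heap dump shows.
        String a = new String("x".concat("y"));
        String b = new String("x".concat("y"));
        System.out.println(a == b);                    // false: two copies

        // intern() maps both onto a single canonical copy in the JVM's
        // string pool, so equal strings share one object.
        System.out.println(a.intern() == b.intern());  // true: deduplicated
    }
}
```

Interning only pays off for strings that are long-lived and frequently duplicated; the remaining ~6% overhead cited above sits in JDK and MapReduce internals where such calls cannot be added.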
[jira] [Updated] (HIVE-16091) Support subqueries in project/select
[ https://issues.apache.org/jira/browse/HIVE-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-16091: --- Status: Open (was: Patch Available) > Support subqueries in project/select > > > Key: HIVE-16091 > URL: https://issues.apache.org/jira/browse/HIVE-16091 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-16091.1.patch, HIVE-16091.2.patch, > HIVE-16091.3.patch, HIVE-16091.4.patch, HIVE-16091.5.patch > > > Currently scalar subqueries are supported in filter only (WHERE/HAVING). -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16091) Support subqueries in project/select
[ https://issues.apache.org/jira/browse/HIVE-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-16091: --- Status: Patch Available (was: Open) Patch 5 fixes failing tests > Support subqueries in project/select > > > Key: HIVE-16091 > URL: https://issues.apache.org/jira/browse/HIVE-16091 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-16091.1.patch, HIVE-16091.2.patch, > HIVE-16091.3.patch, HIVE-16091.4.patch, HIVE-16091.5.patch > > > Currently scalar subqueries are supported in filter only (WHERE/HAVING). -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HIVE-16214) Explore the possibillity of introducing a service-client module
[ https://issues.apache.org/jira/browse/HIVE-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich reassigned HIVE-16214: --- > Explore the possibillity of introducing a service-client module > --- > > Key: HIVE-16214 > URL: https://issues.apache.org/jira/browse/HIVE-16214 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich > > The jdbc driver pulls in a lot of things from hive...and that may affect the > jdbc driver user. > In this ticket I experiment with the extraction of the relevant parts of > service(wrt to the jdbc driver) into a service-client module. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16091) Support subqueries in project/select
[ https://issues.apache.org/jira/browse/HIVE-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-16091: --- Attachment: HIVE-16091.5.patch > Support subqueries in project/select > > > Key: HIVE-16091 > URL: https://issues.apache.org/jira/browse/HIVE-16091 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-16091.1.patch, HIVE-16091.2.patch, > HIVE-16091.3.patch, HIVE-16091.4.patch, HIVE-16091.5.patch > > > Currently scalar subqueries are supported in filter only (WHERE/HAVING). -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16164) Provide mechanism for passing HMS notification ID between transactional and non-transactional listeners.
[ https://issues.apache.org/jira/browse/HIVE-16164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925135#comment-15925135 ] Hive QA commented on HIVE-16164: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12858749/HIVE-16164.1.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 10345 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.metastore.TestHiveMetaStoreWithEnvironmentContext.testEnvironmentContext (batchId=200) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4132/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4132/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4132/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12858749 - PreCommit-HIVE-Build > Provide mechanism for passing HMS notification ID between transactional and > non-transactional listeners. > > > Key: HIVE-16164 > URL: https://issues.apache.org/jira/browse/HIVE-16164 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: Sergio Peña >Assignee: Sergio Peña > Attachments: HIVE-16164.1.patch > > > The HMS DB notification listener currently stores an event ID on the HMS > backend DB so that external applications (such as backup apps) can request > incremental notifications based on the last event ID requested. > The HMS DB notification and backup applications are asynchronous. 
However, > there are times when applications may be required to be in sync with the > latest HMS event in order to process an action. These applications will > provide a listener implementation that is called by the HMS after an HMS > transaction happens. > The problem is that the listener running after the transaction (or during the > non-transactional context) may need the DB event ID in order to sync all > events that happened prior to that event ID, but this ID is never passed to the > non-transactional listeners. > We can pass this event information through the EnvironmentContext found on > each ListenerEvent implementation (such as CreateTableEvent), and send the > EnvironmentContext to the non-transactional listeners to get the event ID. > The DbNotificationListener already knows the event ID after calling > ObjectStore.addNotificationEvent(). We just need to set this event ID on the > EnvironmentContext for each of the event notifications and make sure that > this EnvironmentContext is sent to the non-transactional listeners. 
> Here's the code example when creating a table on {{create_table_core}}: > {noformat} > ms.createTable(tbl); > if (transactionalListeners.size() > 0) { > CreateTableEvent createTableEvent = new CreateTableEvent(tbl, true, this); > createTableEvent.setEnvironmentContext(envContext); > for (MetaStoreEventListener transactionalListener : > transactionalListeners) { > transactionalListener.onCreateTable(createTableEvent); // <- > Here the notification ID is generated > } > } > success = ms.commitTransaction(); > } finally { > if (!success) { > ms.rollbackTransaction(); > if (madeDir) { > wh.deleteDir(tblPath, true); > } > } > for (MetaStoreEventListener listener : listeners) { > CreateTableEvent createTableEvent = > new CreateTableEvent(tbl, success, this); > createTableEvent.setEnvironmentContext(envContext); > listener.onCreateTable(createTableEvent);// <- > Here we would like to consume notification ID > } > {noformat} > We could use a specific key name that will be used on the EnvironmentContext, > such as DB_NOTIFICATION_EVENT_ID. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15947) Enhance Templeton service job operations reliability
[ https://issues.apache.org/jira/browse/HIVE-15947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subramanyam Pattipaka updated HIVE-15947: - Attachment: HIVE-15947.9.patch Minor code comment fixes. > Enhance Templeton service job operations reliability > > > Key: HIVE-15947 > URL: https://issues.apache.org/jira/browse/HIVE-15947 > Project: Hive > Issue Type: Bug >Reporter: Subramanyam Pattipaka >Assignee: Subramanyam Pattipaka > Attachments: HIVE-15947.2.patch, HIVE-15947.3.patch, > HIVE-15947.4.patch, HIVE-15947.6.patch, HIVE-15947.7.patch, > HIVE-15947.8.patch, HIVE-15947.9.patch, HIVE-15947.patch > > > Currently the Templeton service doesn't restrict the number of job operation > requests. It simply accepts and tries to run all operations. If many > concurrent job submit requests arrive, the time to submit job > operations can increase significantly. Templeton uses HDFS to store the staging > file for a job. If the HDFS storage can't respond to a large number of requests and > throttles, then job submission can take very long, on the order of > minutes. > This behavior may not be suitable for all applications; client > applications may want a predictable, low response time for a successful > request, or a throttle response telling the client to wait for some time before > re-requesting the job operation. > In this JIRA, I am trying to address the following job operations > 1) Submit new Job > 2) Get Job Status > 3) List jobs > These three operations have different complexity due to variance in use of > cluster resources like YARN/HDFS. > The idea is to introduce a new config templeton.job.submit.exec.max-procs > which controls the maximum number of concurrent active job submissions within > Templeton and use this config to achieve better response times. 
If a new job > submission request sees that there are already > templeton.job.submit.exec.max-procs jobs getting submitted concurrently, then > the request will fail with HTTP error 503 with reason >“Too many concurrent job submission requests received. Please wait for > some time before retrying.” > > The client is expected to catch this response and retry after waiting for > some time. The default value for the config > templeton.job.submit.exec.max-procs is ‘0’. This means that by default job > submission requests are always accepted. The behavior needs to be enabled > based on requirements. > We can have similar behavior for Status and List operations with the configs > templeton.job.status.exec.max-procs and templeton.list.job.exec.max-procs > respectively. > Once a job operation is started, the operation can take a long time. The > client which requested the job operation may not wait for an > indefinite amount of time. This work introduces the configurations > templeton.exec.job.submit.timeout > templeton.exec.job.status.timeout > templeton.exec.job.list.timeout > to specify the maximum amount of time a job operation can execute. If a timeout > happens, then list and status job requests return to the client with the message > "List job request got timed out. Please retry the operation after waiting for > some time." > If a submit job request gets timed out, then > i) The job submit request thread which receives the timeout will check whether a > valid job id was generated for the job request. > ii) If it was generated, then issue a kill job request on the cancel thread > pool. Don't wait for the operation to complete; return to the client with the timeout > message. > Side effects of enabling timeout for submit operations: > 1) The job may remain active for some time after the client > gets the timeout response, and a list operation from the client could potentially show the newly > created job before it gets killed. > 2) We make a best effort to kill the job, with no guarantees. 
This means there is a > possibility of a duplicate job being created. One possible reason for this could be a > case where the job is created and then the operation timed out, but the kill request > failed due to resource manager unavailability. When the resource manager > restarts, it will restart the job which got created. > Fixing this scenario is not in the scope of this JIRA. The job operation > functionality should be enabled only if the above side effects are acceptable. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
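The max-procs throttle described above can be sketched with a counting semaphore. This is an illustration of the intended behavior only, not the Templeton implementation; the class and method names are hypothetical:

```java
import java.util.concurrent.Semaphore;

public class SubmitThrottleDemo {
    static final int HTTP_OK = 200;
    static final int HTTP_UNAVAILABLE = 503;

    // Stand-in for templeton.job.submit.exec.max-procs (0 would mean
    // "no limit" per the description; here we assume a positive value).
    final Semaphore slots;

    SubmitThrottleDemo(int maxProcs) {
        this.slots = new Semaphore(maxProcs);
    }

    /** Fails fast with 503 when all submission slots are already busy. */
    int submitJob(Runnable job) {
        if (!slots.tryAcquire()) {
            // "Too many concurrent job submission requests received.
            //  Please wait for some time before retrying."
            return HTTP_UNAVAILABLE;
        }
        try {
            job.run();          // the actual (potentially slow) submission
            return HTTP_OK;
        } finally {
            slots.release();    // free the slot even if submission fails
        }
    }

    public static void main(String[] args) {
        SubmitThrottleDemo service = new SubmitThrottleDemo(1);
        System.out.println(service.submitJob(() -> {}));  // a slot was free
        service.slots.tryAcquire();                       // occupy the only slot
        System.out.println(service.submitJob(() -> {}));  // throttled
    }
}
```

`tryAcquire()` rather than `acquire()` is the key design choice: the request is rejected immediately with a retryable status instead of queueing, which keeps response times predictable under load.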
[jira] [Updated] (HIVE-16188) beeline should block the connection if given invalid database name.
[ https://issues.apache.org/jira/browse/HIVE-16188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-16188: Attachment: HIVE-16188.2.patch Attaching updated patch: * Fixed failing unit tests * Added a new test that makes sure an exception is thrown when connecting to a database that does not exist > beeline should block the connection if given invalid database name. > --- > > Key: HIVE-16188 > URL: https://issues.apache.org/jira/browse/HIVE-16188 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Pavas Garg >Assignee: Sahil Takiar >Priority: Minor > Attachments: HIVE-16188.1.patch, HIVE-16188.2.patch > > > When using the beeline shell to connect to HS2 or impalaD as below - > Connection to HS2 using the beeline tool on port 1 - > beeline -u > "jdbc:hive2://HS2-host-name:1/default;principal=hive/hs2-host-n...@domain.example.com" > Connection to ImpalaD using the beeline tool on port 21050 - > beeline -u > "jdbc:hive2://impalad-host-name.com:21050/XXX;principal=impala/impalad-host-name@domain.example.com" > > Providing an invalid database name such as XXX - the connection is still made. > It should ideally stop the connection from being successful. > Even though the beeline tool does not allow you to move forward unless you > provide a valid DB name, like > use <database>; -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16130) Remove jackson classes from hive-jdbc standalone jar
[ https://issues.apache.org/jira/browse/HIVE-16130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925095#comment-15925095 ] Vaibhav Gumashta commented on HIVE-16130: - +1 > Remove jackson classes from hive-jdbc standalone jar > > > Key: HIVE-16130 > URL: https://issues.apache.org/jira/browse/HIVE-16130 > Project: Hive > Issue Type: Bug >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-16130.1.patch, HIVE-16130.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-13780) Allow user to update AVRO table schema via command even if table's definition was defined through schema file
[ https://issues.apache.org/jira/browse/HIVE-13780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-13780: Resolution: Fixed Fix Version/s: 2.2.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks Adam for the work. > Allow user to update AVRO table schema via command even if table's definition > was defined through schema file > - > > Key: HIVE-13780 > URL: https://issues.apache.org/jira/browse/HIVE-13780 > Project: Hive > Issue Type: Improvement > Components: CLI >Affects Versions: 2.0.0 >Reporter: Eric Lin >Assignee: Adam Szita >Priority: Minor > Fix For: 2.2.0 > > Attachments: HIVE-13780.0.patch, HIVE-13780.1.patch, > HIVE-13780.3.patch > > > If a table is defined as below: > {code} > CREATE TABLE test > STORED AS AVRO > TBLPROPERTIES ('avro.schema.url'='/tmp/schema.json'); > {code} > if a user tries to run the command: > {code} > ALTER TABLE test CHANGE COLUMN col1 col1 STRING COMMENT 'test comment'; > {code} > The query will return without any warning, but has no effect on the table. > It would be good if we could allow the user to ALTER the table (add/change column, > update comment etc.) even though the schema is defined through a schema file. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16080) Add parquet to possible values for hive.default.fileformat and hive.default.fileformat.managed
[ https://issues.apache.org/jira/browse/HIVE-16080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925075#comment-15925075 ] Sergio Peña commented on HIVE-16080: +1. Looks good. > Add parquet to possible values for hive.default.fileformat and > hive.default.fileformat.managed > -- > > Key: HIVE-16080 > URL: https://issues.apache.org/jira/browse/HIVE-16080 > Project: Hive > Issue Type: Bug >Reporter: Sahil Takiar >Assignee: Sahil Takiar > Attachments: HIVE-16080.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16205) Improving type safety in Objectstore
[ https://issues.apache.org/jira/browse/HIVE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-16205: --- Attachment: HIVE-16205.02.patch Fixed the test failure. Also, added a couple of tests in TestJdbcDriver2 to cover the {{show tables in <db_name>}} command > Improving type safety in Objectstore > > > Key: HIVE-16205 > URL: https://issues.apache.org/jira/browse/HIVE-16205 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar > Attachments: HIVE-16205.01.patch, HIVE-16205.02.patch > > > Modify the queries in ObjectStore for better type safety -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16181) Make logic for hdfs directory location extraction more generic, in webhcat test driver
[ https://issues.apache.org/jira/browse/HIVE-16181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aswathy Chellammal Sreekumar updated HIVE-16181: Description: Patch to make regular expression for directory location lookup in setLocationPermGroup of TestDriverCurl more generic to accommodate patterns without port number like hdfs://mycluster/hive/warehouse/ (was: Patch to make regular expression for directory location lookup in setLocationPermGroup of TestDriverCurl more generic to accommodate patterns without port number like hdfs://mycluster//hive/warehouse/) > Make logic for hdfs directory location extraction more generic, in webhcat > test driver > -- > > Key: HIVE-16181 > URL: https://issues.apache.org/jira/browse/HIVE-16181 > Project: Hive > Issue Type: Test > Components: WebHCat >Reporter: Aswathy Chellammal Sreekumar >Assignee: Aswathy Chellammal Sreekumar >Priority: Minor > Attachments: HIVE-16181.1.patch > > > Patch to make regular expression for directory location lookup in > setLocationPermGroup of TestDriverCurl more generic to accommodate patterns > without port number like hdfs://mycluster/hive/warehouse/ -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16181) Make logic for hdfs directory location extraction more generic, in webhcat test driver
[ https://issues.apache.org/jira/browse/HIVE-16181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925040#comment-15925040 ] Aswathy Chellammal Sreekumar commented on HIVE-16181: - With that, the match goes all the way to the last part of the hdfs location, retrieving just the folder name instead of the whole path > Make logic for hdfs directory location extraction more generic, in webhcat > test driver > -- > > Key: HIVE-16181 > URL: https://issues.apache.org/jira/browse/HIVE-16181 > Project: Hive > Issue Type: Test > Components: WebHCat >Reporter: Aswathy Chellammal Sreekumar >Assignee: Aswathy Chellammal Sreekumar >Priority: Minor > Attachments: HIVE-16181.1.patch > > > Patch to make regular expression for directory location lookup in > setLocationPermGroup of TestDriverCurl more generic to accommodate patterns > without port number like hdfs://mycluster//hive/warehouse/ -- This message was sent by Atlassian JIRA (v6.3.15#6346)
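TestDriverCurl itself is a Perl test driver, so the following is only a Java illustration of the regex generalization being discussed (the pattern is an assumption, not the one in the patch): the `:port` portion of the authority is made optional so that both `hdfs://host:8020/path` and `hdfs://mycluster/path` yield the full path.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HdfsLocationRegexDemo {
    // Hypothetical pattern: the authority may or may not carry a ":port",
    // so the port group is optional and the path is captured separately.
    static final Pattern LOCATION =
        Pattern.compile("hdfs://([^:/]+)(?::(\\d+))?(/.*)");

    /** Returns the path component of an hdfs:// location, or null if no match. */
    static String pathOf(String location) {
        Matcher m = LOCATION.matcher(location);
        return m.matches() ? m.group(3) : null;
    }

    public static void main(String[] args) {
        // Without a port (HA nameservice style), and with an explicit port:
        System.out.println(pathOf("hdfs://mycluster/hive/warehouse/"));
        System.out.println(pathOf("hdfs://namenode.example.com:8020/hive/warehouse/"));
    }
}
```

The key change is `(?::(\d+))?`: a regex that assumes `host:port` would stop matching at an HA nameservice URI like `hdfs://mycluster/...`, which is exactly the failure mode the ticket describes.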
[jira] [Commented] (HIVE-14016) Vectorization: Add support for Grouping Sets
[ https://issues.apache.org/jira/browse/HIVE-14016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925020#comment-15925020 ] Hive QA commented on HIVE-14016: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12858748/HIVE-14016.07.patch {color:green}SUCCESS:{color} +1 due to 22 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10358 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_count] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_empty_where] (batchId=22) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_grouping_sets] (batchId=77) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_limit] (batchId=34) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr] (batchId=141) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4131/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4131/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4131/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 5 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12858748 - PreCommit-HIVE-Build > Vectorization: Add support for Grouping Sets > > > Key: HIVE-14016 > URL: https://issues.apache.org/jira/browse/HIVE-14016 > Project: Hive > Issue Type: Improvement > Components: Vectorization >Reporter: Gopal V >Assignee: Matt McCline > Attachments: HIVE-14016.01.patch, HIVE-14016.02.patch, > HIVE-14016.03.patch, HIVE-14016.04.patch, HIVE-14016.05.patch, > HIVE-14016.06.patch, HIVE-14016.07.patch > > > Rollup and Cube queries are not vectorized today due to the miss of > grouping-sets inside vector group by. > The cube and rollup operators can be shimmed onto the end of the pipeline by > converting a single row writer into a multiple row writer. > The corresponding non-vec loop is as follows > {code} > if (groupingSetsPresent) { > Object[] newKeysArray = newKeys.getKeyArray(); > Object[] cloneNewKeysArray = new Object[newKeysArray.length]; > for (int keyPos = 0; keyPos < groupingSetsPosition; keyPos++) { > cloneNewKeysArray[keyPos] = newKeysArray[keyPos]; > } > for (int groupingSetPos = 0; groupingSetPos < groupingSets.size(); > groupingSetPos++) { > for (int keyPos = 0; keyPos < groupingSetsPosition; keyPos++) { > newKeysArray[keyPos] = null; > } > FastBitSet bitset = groupingSetsBitSet[groupingSetPos]; > // Some keys need to be left to null corresponding to that grouping > set. > for (int keyPos = bitset.nextSetBit(0); keyPos >= 0; > keyPos = bitset.nextSetBit(keyPos+1)) { > newKeysArray[keyPos] = cloneNewKeysArray[keyPos]; > } > newKeysArray[groupingSetsPosition] = > newKeysGroupingSets[groupingSetPos]; > processKey(row, rowInspector); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
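The non-vectorized loop quoted above expands each input key row into one output row per grouping set, nulling out the key columns that the set excludes. A standalone sketch of that expansion follows; it is illustrative only, not the Hive vectorized implementation, and it assumes a simple bitmask convention (bit i set means key i is kept) which may differ from Hive's internal grouping-set encoding:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class GroupingSetsExpandDemo {
    /**
     * Expands one key row into one row per grouping set, nulling the key
     * positions the set excludes (bit i set => keep key i in this sketch).
     */
    static List<Object[]> expand(Object[] keys, long[] groupingSets) {
        List<Object[]> out = new ArrayList<>();
        for (long bits : groupingSets) {
            Object[] row = new Object[keys.length];
            for (int pos = 0; pos < keys.length; pos++) {
                row[pos] = ((bits >> pos) & 1) == 1 ? keys[pos] : null;
            }
            out.add(row);
        }
        return out;
    }

    public static void main(String[] args) {
        // ROLLUP(a, b) corresponds to the grouping sets {a,b}, {a}, {}.
        Object[] keys = {"a1", "b1"};
        for (Object[] row : expand(keys, new long[]{0b11, 0b01, 0b00})) {
            System.out.println(Arrays.toString(row));
        }
    }
}
```

This is the "single row writer into a multiple row writer" idea from the description: vectorizing it means doing the same expansion on a whole batch of rows at once instead of row by row.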
[jira] [Updated] (HIVE-15983) Support the named columns join
[ https://issues.apache.org/jira/browse/HIVE-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-15983: --- Status: Open (was: Patch Available) > Support the named columns join > -- > > Key: HIVE-15983 > URL: https://issues.apache.org/jira/browse/HIVE-15983 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Pengcheng Xiong > Attachments: HIVE-15983.01.patch, HIVE-15983.02.patch, > HIVE-15983.03.patch > > > The named columns join is a common shortcut allowing joins on identically > named keys. Example: select * from t1 join t2 using (c1) is equivalent to > select * from t1 join t2 on t1.c1 = t2.c1. SQL standard reference: Section 7.7 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
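The equivalence stated in the description can be expressed mechanically: a USING column list expands into a conjunction of equality predicates over the two join inputs. The helper below is a hypothetical illustration of that rewrite (it is not part of the HIVE-15983 patch, which operates on the parse tree, not on SQL strings):

```java
public class NamedColumnsJoin {
    // Expand USING (c1, c2, ...) into "t1.c1 = t2.c1 AND t1.c2 = t2.c2 ...".
    static String usingToOn(String left, String right, String... columns) {
        StringBuilder on = new StringBuilder();
        for (String col : columns) {
            if (on.length() > 0) {
                on.append(" AND ");
            }
            on.append(left).append('.').append(col)
              .append(" = ")
              .append(right).append('.').append(col);
        }
        return on.toString();
    }

    public static void main(String[] args) {
        // select * from t1 join t2 using (c1)  is equivalent to
        // select * from t1 join t2 on t1.c1 = t2.c1
        System.out.println(usingToOn("t1", "t2", "c1"));
        // prints: t1.c1 = t2.c1
    }
}
```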
[jira] [Updated] (HIVE-15983) Support the named columns join
[ https://issues.apache.org/jira/browse/HIVE-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-15983: --- Status: Patch Available (was: Open) > Support the named columns join > -- > > Key: HIVE-15983 > URL: https://issues.apache.org/jira/browse/HIVE-15983 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Pengcheng Xiong > Attachments: HIVE-15983.01.patch, HIVE-15983.02.patch, > HIVE-15983.03.patch > > > The named columns join is a common shortcut allowing joins on identically > named keys. Example: select * from t1 join t2 using (c1) is equivalent to > select * from t1 join t2 on t1.c1 = t2.c1. SQL standard reference: Section 7.7 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-15983) Support the named columns join
[ https://issues.apache.org/jira/browse/HIVE-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-15983: --- Attachment: HIVE-15983.03.patch > Support the named columns join > -- > > Key: HIVE-15983 > URL: https://issues.apache.org/jira/browse/HIVE-15983 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Pengcheng Xiong > Attachments: HIVE-15983.01.patch, HIVE-15983.02.patch, > HIVE-15983.03.patch > > > The named columns join is a common shortcut allowing joins on identically > named keys. Example: select * from t1 join t2 using (c1) is equivalent to > select * from t1 join t2 on t1.c1 = t2.c1. SQL standard reference: Section 7.7 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16211) MERGE statement failing with ClassCastException
[ https://issues.apache.org/jira/browse/HIVE-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15924936#comment-15924936 ] Eugene Koifman commented on HIVE-16211: --- This probably needs a UT > MERGE statement failing with ClassCastException > --- > > Key: HIVE-16211 > URL: https://issues.apache.org/jira/browse/HIVE-16211 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > Attachments: HIVE-16211.1.patch > > > Issuing a merge statement gives this error, > hive> 2017-03-14T18:34:02,945 ERROR [17d1c728-8865-47f5-a6fd-2b156d183d0f > main] ql.Driver: FAILED: ClassCastException > org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc cannot be cast to > org.apache.hadoop.hive.ql.plan.ExprNodeColumnDesc > java.lang.ClassCastException: > org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc cannot be cast to > org.apache.hadoop.hive.ql.plan.ExprNodeColumnDesc > at > org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan(DynamicPartitionPruningOptimization.java:410) > at > org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.process(DynamicPartitionPruningOptimization.java:226) > at > org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90) > at > org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105) > at > org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89) > at > org.apache.hadoop.hive.ql.lib.ForwardWalker.walk(ForwardWalker.java:74) > at > org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120) > at > org.apache.hadoop.hive.ql.parse.TezCompiler.runDynamicPartitionPruning(TezCompiler.java:359) > at > org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:91) > at > org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:138) > at > 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11159) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10708) > at > org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeInternal(UpdateDeleteSemanticAnalyzer.java:70) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:257) > at > org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeMerge(UpdateDeleteSemanticAnalyzer.java:729) > at > org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeInternal(UpdateDeleteSemanticAnalyzer.java:84) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:257) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:455) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1197) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1290) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1123) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.util.RunJar.run(RunJar.java:233) > at org.apache.hadoop.util.RunJar.main(RunJar.java:148) -- This 
message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16211) MERGE statement failing with ClassCastException
[ https://issues.apache.org/jira/browse/HIVE-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15924930#comment-15924930 ] Deepak Jaiswal commented on HIVE-16211: --- [~jdere] can you please review? > MERGE statement failing with ClassCastException > --- > > Key: HIVE-16211 > URL: https://issues.apache.org/jira/browse/HIVE-16211 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > Attachments: HIVE-16211.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16211) MERGE statement failing with ClassCastException
[ https://issues.apache.org/jira/browse/HIVE-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-16211: -- Attachment: HIVE-16211.1.patch Find the ExprNodeColumnDesc correctly. > MERGE statement failing with ClassCastException > --- > > Key: HIVE-16211 > URL: https://issues.apache.org/jira/browse/HIVE-16211 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > Attachments: HIVE-16211.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
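The ClassCastException in this issue is the classic failure mode of blindly casting an expression node to a concrete subclass. The sketch below illustrates the defensive alternative, guarding with instanceof and walking children to locate the column reference. The classes here are hypothetical stand-ins for Hive's ExprNodeDesc hierarchy, and this is not the actual code of the attached patch:

```java
public class GuardedCast {
    // Hypothetical stand-ins for Hive's ExprNodeDesc hierarchy.
    static class ExprNodeDesc {}
    static class ExprNodeColumnDesc extends ExprNodeDesc {
        final String column;
        ExprNodeColumnDesc(String column) { this.column = column; }
    }
    static class ExprNodeGenericFuncDesc extends ExprNodeDesc {
        final ExprNodeDesc[] children;
        ExprNodeGenericFuncDesc(ExprNodeDesc... children) { this.children = children; }
    }

    // Instead of a blind (ExprNodeColumnDesc) cast, check the runtime type
    // and recurse into function children; return null when no column exists.
    static ExprNodeColumnDesc findColumn(ExprNodeDesc node) {
        if (node instanceof ExprNodeColumnDesc) {
            return (ExprNodeColumnDesc) node;
        }
        if (node instanceof ExprNodeGenericFuncDesc) {
            for (ExprNodeDesc child : ((ExprNodeGenericFuncDesc) node).children) {
                ExprNodeColumnDesc col = findColumn(child);
                if (col != null) {
                    return col;
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // A function wrapping a column no longer blows up the lookup.
        ExprNodeDesc expr = new ExprNodeGenericFuncDesc(new ExprNodeColumnDesc("key"));
        System.out.println(findColumn(expr).column);
        // prints: key
    }
}
```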
[jira] [Commented] (HIVE-16130) Remove jackson classes from hive-jdbc standalone jar
[ https://issues.apache.org/jira/browse/HIVE-16130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15924928#comment-15924928 ] Tao Li commented on HIVE-16130: --- Attached another patch to remove jackson classes shaded from parquet-hadoop-bundle.jar and log4j-core.jar, while keeping the other classes from these 2 artifacts. > Remove jackson classes from hive-jdbc standalone jar > > > Key: HIVE-16130 > URL: https://issues.apache.org/jira/browse/HIVE-16130 > Project: Hive > Issue Type: Bug >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-16130.1.patch, HIVE-16130.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
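The approach described in the comment — dropping only the Jackson classes that parquet-hadoop-bundle and log4j-core shade in, while keeping the rest of those two artifacts — corresponds to per-artifact excludes in the maven-shade-plugin. The fragment below is a sketch of that pattern, not the actual pom change in the patch; the artifact coordinates and package paths are illustrative assumptions:

```xml
<!-- Illustrative maven-shade-plugin filters: keep the bundled artifacts but
     exclude the Jackson classes they carry (coordinates/paths are examples). -->
<filters>
  <filter>
    <artifact>org.apache.parquet:parquet-hadoop-bundle</artifact>
    <excludes>
      <exclude>shaded/parquet/org/codehaus/jackson/**</exclude>
    </excludes>
  </filter>
  <filter>
    <artifact>org.apache.logging.log4j:log4j-core</artifact>
    <excludes>
      <exclude>com/fasterxml/jackson/**</exclude>
    </excludes>
  </filter>
</filters>
```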
[jira] [Updated] (HIVE-16211) MERGE statement failing with ClassCastException
[ https://issues.apache.org/jira/browse/HIVE-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-16211: -- Status: Patch Available (was: In Progress) > MERGE statement failing with ClassCastException > --- > > Key: HIVE-16211 > URL: https://issues.apache.org/jira/browse/HIVE-16211 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HIVE-16130) Remove jackson classes from hive-jdbc standalone jar
[ https://issues.apache.org/jira/browse/HIVE-16130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-16130: -- Attachment: HIVE-16130.2.patch > Remove jackson classes from hive-jdbc standalone jar > > > Key: HIVE-16130 > URL: https://issues.apache.org/jira/browse/HIVE-16130 > Project: Hive > Issue Type: Bug >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-16130.1.patch, HIVE-16130.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Work started] (HIVE-16211) MERGE statement failing with ClassCastException
[ https://issues.apache.org/jira/browse/HIVE-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-16211 started by Deepak Jaiswal. - > MERGE statement failing with ClassCastException > --- > > Key: HIVE-16211 > URL: https://issues.apache.org/jira/browse/HIVE-16211 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HIVE-16212) MM tables: suspicious ORC HDFS counter changes
[ https://issues.apache.org/jira/browse/HIVE-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-16212: --- > MM tables: suspicious ORC HDFS counter changes > -- > > Key: HIVE-16212 > URL: https://issues.apache.org/jira/browse/HIVE-16212 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters1] > (batchId=136) > org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters] > (batchId=139) > org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] > (batchId=136) > org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_schema_evol_3a] > (batchId=137) > HDFS counters for operation counts go up (which I can repro locally). -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16210) Use jvm temporary tmp dir by default
[ https://issues.apache.org/jira/browse/HIVE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15924924#comment-15924924 ] slim bouguerra commented on HIVE-16210: --- [~ashutoshc] can you please check this out. > Use jvm temporary tmp dir by default > > > Key: HIVE-16210 > URL: https://issues.apache.org/jira/browse/HIVE-16210 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: slim bouguerra >Assignee: slim bouguerra > Attachments: HIVE-16210.patch > > > Instead of using "/tmp" by default, it makes more sense to use the JVM's > default tmp dir. Hard-coding "/tmp" can have dramatic consequences if the indexed files are > huge. For instance, applications run in containers can be provisioned with > a dedicated tmp dir. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
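The suggestion above amounts to resolving scratch space under the `java.io.tmpdir` system property instead of a hard-coded "/tmp", so a container launcher can redirect it with `-Djava.io.tmpdir=...`. A minimal sketch of the idea (class and method names are illustrative, not the patch itself):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TmpDirDemo {
    // Create a scratch directory under the JVM's configured tmp dir rather
    // than a hard-coded "/tmp"; containers can point java.io.tmpdir at a
    // dedicated, appropriately sized volume.
    static Path createScratchDir(String prefix) throws IOException {
        Path base = Path.of(System.getProperty("java.io.tmpdir"));
        return Files.createTempDirectory(base, prefix);
    }

    public static void main(String[] args) throws IOException {
        Path scratch = createScratchDir("druid-index-");
        System.out.println(scratch);
        Files.delete(scratch); // clean up the demo directory
    }
}
```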
[jira] [Assigned] (HIVE-16211) MERGE statement failing with ClassCastException
[ https://issues.apache.org/jira/browse/HIVE-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal reassigned HIVE-16211: - > MERGE statement failing with ClassCastException > --- > > Key: HIVE-16211 > URL: https://issues.apache.org/jira/browse/HIVE-16211 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)