[jira] [Updated] (HIVE-18281) HiveServer2 HA for LLAP and Workload Manager
[ https://issues.apache.org/jira/browse/HIVE-18281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18281: - Attachment: HIVE-18281.5.patch > HiveServer2 HA for LLAP and Workload Manager > > > Key: HIVE-18281 > URL: https://issues.apache.org/jira/browse/HIVE-18281 > Project: Hive > Issue Type: New Feature >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-18281.1.patch, HIVE-18281.2.patch, > HIVE-18281.4.patch, HIVE-18281.5.patch, HIVE-18281.WIP.patch, HSI-HA.pdf > > > When running HS2 with LLAP and Workload Manager, HS2 becomes a single point of > failure, as some of the state for workload management and scheduling is > maintained in-memory. > The proposal is to support an Active/Passive mode of high availability in which > all HS2 instances and Tez AMs register with ZooKeeper, and a leader is chosen > which will maintain the stateful information. Clients using service discovery > will always connect to the leader for submitting queries. The leader will also have > some additional responsibilities: failover handling, Tez session > reconnect, etc. Will upload some more detailed information in a separate doc. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
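The Active/Passive scheme in the description can be illustrated with a toy in-process sketch. This is not the actual Hive implementation (which registers with ZooKeeper; all names here are illustrative): candidates register and receive increasing sequence numbers, the live candidate with the smallest number is the leader, and on leader failure the next candidate takes over — the same behaviour ZooKeeper's ephemeral sequential nodes give a real deployment.

```java
import java.util.TreeMap;

// Toy active/passive election: candidates get monotonically increasing
// sequence numbers on registration; the live candidate with the smallest
// number is the leader. ZooKeeper ephemeral sequential nodes behave
// similarly; this sketch deliberately keeps everything in memory.
class ToyElection {
    private final TreeMap<Integer, String> candidates = new TreeMap<>();
    private int nextSeq = 0;

    // Register a candidate (e.g. an HS2 instance); returns its sequence number.
    synchronized int register(String id) {
        int seq = nextSeq++;
        candidates.put(seq, id);
        return seq;
    }

    // The current leader is the live candidate with the smallest sequence number.
    synchronized String leader() {
        return candidates.isEmpty() ? null : candidates.firstEntry().getValue();
    }

    // Simulate a failure: remove the candidate; leadership moves to the next one.
    synchronized void fail(int seq) {
        candidates.remove(seq);
    }
}
```

With two registered instances, failing the first moves leadership to the second, which is the failover behaviour the proposal relies on clients observing through service discovery.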
[jira] [Commented] (HIVE-16882) Improvements For Avro SerDe Package
[ https://issues.apache.org/jira/browse/HIVE-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399942#comment-16399942 ] Hive QA commented on HIVE-16882: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s{color} | {color:red} serde: The patch generated 3 new + 38 unchanged - 4 fixed = 41 total (was 42) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 14s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 9m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9637/dev-support/hive-personality.sh | | git revision | master / d5cb7f6 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9637/yetus/diff-checkstyle-serde.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9637/yetus/patch-asflicense-problems.txt | | modules | C: serde U: serde | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9637/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Improvements For Avro SerDe Package > --- > > Key: HIVE-16882 > URL: https://issues.apache.org/jira/browse/HIVE-16882 > Project: Hive > Issue Type: Improvement > Components: Serializers/Deserializers >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HIVE-16882.1.patch, HIVE-16882.2.patch, > HIVE-16882.3.patch > > > # Use SLF4J parameter DEBUG logging > # Use re-usable libraries where appropriate > # Use enhanced for loops where appropriate > # Fix several minor check-style error > # Small performance enhancements in InstanceCache -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18863) trunc() calls itself trunk() in an error message
[ https://issues.apache.org/jira/browse/HIVE-18863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399932#comment-16399932 ] Hive QA commented on HIVE-18863: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12914579/HIVE-18863.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 13015 tests executed *Failed tests:* {noformat} TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=94)
[jira] [Commented] (HIVE-18712) Design HMS Api v2
[ https://issues.apache.org/jira/browse/HIVE-18712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399927#comment-16399927 ] Alexander Kolbasov commented on HIVE-18712: --- I think another useful thing to consider in the new API is the notion of a session ID that can be used to correlate multiple HMS operations with each other and with other Hive operations. > Design HMS Api v2 > - > > Key: HIVE-18712 > URL: https://issues.apache.org/jira/browse/HIVE-18712 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > > This is an umbrella Jira covering the design of Hive Metastore API v2. > It is supposed to be a placeholder for discussion and design documents. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
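The session-ID idea could look like the following client-side sketch. The current HMS API has no such field, so everything here is hypothetical (class and method names included): each client session generates one ID and tags every operation with it, so operations can later be correlated across logs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Hypothetical client-side wrapper: tag every metastore call with a
// per-session ID so multiple HMS operations can be correlated with each
// other (and with other Hive operations) after the fact.
class SessionTaggedClient {
    private final String sessionId = UUID.randomUUID().toString();
    private final List<String> auditLog = new ArrayList<>();

    // In a real API the session ID would travel inside the request; here we
    // just record "sessionId operation" entries to show the correlation.
    void call(String operation) {
        auditLog.add(sessionId + " " + operation);
    }

    String sessionId() { return sessionId; }
    List<String> auditLog() { return auditLog; }
}
```

Every entry in the audit log carries the same session ID, which is exactly the property that lets a log aggregator group one session's HMS operations together.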
[jira] [Updated] (HIVE-18718) Integer like types throws error when there is a mismatch
[ https://issues.apache.org/jira/browse/HIVE-18718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-18718: --- Attachment: HIVE-18718.4.patch > Integer like types throws error when there is a mismatch > > > Key: HIVE-18718 > URL: https://issues.apache.org/jira/browse/HIVE-18718 > Project: Hive > Issue Type: Improvement >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18718.1.patch, HIVE-18718.2.patch, > HIVE-18718.3.patch, HIVE-18718.4.patch > > > If a value is saved with long type and read as int type it results in > FAILED: Execution Error, return code 2 from > org.apache.hadoop.hive.ql.exec.mr.MapRedTask -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18940) Hive notifications serialize all write DDL operations
[ https://issues.apache.org/jira/browse/HIVE-18940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399924#comment-16399924 ] Alexander Kolbasov commented on HIVE-18940: --- [~vihangk1] The major problem with all auto-increment approaches is that clients have to deal with holes in the stream of IDs, and a hole can be temporary (there is a transaction in flight) or permanent (the transaction failed). I am not convinced that we must produce events in commit order, but it is important not to lose events (and to guarantee that within a session the order is preserved). Sessions usually have a nice property that they send an operation to HMS, wait for completion and only then send another one. Interleaved ordering between different sessions should be fine. Unfortunately, HMS APIs do not include any session ID (otherwise we could use it to partition locks). > Hive notifications serialize all write DDL operations > - > > Key: HIVE-18940 > URL: https://issues.apache.org/jira/browse/HIVE-18940 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alexander Kolbasov >Priority: Major > > The implementation of DbNotificationListener uses a single row to store > the current notification ID and uses {{SELECT FOR UPDATE}} to lock the row. This > serializes all write DDL operations, which isn't good. > We should consider using database auto-increment for the notification ID instead. > Especially on MySQL/InnoDB it is supported natively with relatively > light-weight locking. > This creates a potential issue for consumers though, because such IDs may have > holes. There are two types of holes - a transient hole for a transaction which > has not committed yet and will commit shortly, and a permanent hole for > a transaction that failed. Consumers need to deal with it. It may be useful to > add a DB-generated timestamp as well to assist in recovery from holes. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18940) Hive notifications serialize all write DDL operations
[ https://issues.apache.org/jira/browse/HIVE-18940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399904#comment-16399904 ] anishek commented on HIVE-18940: Having a separate global commit id might not solve the problem, since the association between the events (if we use auto-increment) and the global commit has to be done in the same txn. Also, since we can't let the ordering of the commit ids differ from the order of the commits, a lock on this table will have to be taken to make sure one txn commits before the other one can. It would help if there were some fine-grained way to make txns fire the DB COMMIT statement in order of their commit ids. We can trim down the time the lock is held by doing some direct SQL and putting in commit-id-based ordering, but that again, across multiple metastores, would add latency to the whole process. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18885) DbNotificationListener has a deadlock between Java and DB locks
[ https://issues.apache.org/jira/browse/HIVE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399903#comment-16399903 ] anishek commented on HIVE-18885: [~vihangk1] I was not able to find NOTIFICATION_TBL_LOCK in the Java code. Can you please point me to it? It will help me understand the Java lock + DB deadlock. I thought that was used earlier but was removed and is no longer present. > DbNotificationListener has a deadlock between Java and DB locks > --- > > Key: HIVE-18885 > URL: https://issues.apache.org/jira/browse/HIVE-18885 > Project: Hive > Issue Type: Bug > Components: Hive, Metastore >Affects Versions: 2.3.2 >Reporter: Alexander Kolbasov >Assignee: Vihang Karajgaonkar >Priority: Major > > You can see the problem from looking at the code, but it actually created > severe problems for a real-life Hive user. > When {{alter table}} has the {{cascade}} option it does the following: > {code:java} > msdb.openTransaction() > ... > List<Partition> parts = msdb.getPartitions(dbname, name, -1); > for (Partition part : parts) { > List<FieldSchema> oldCols = part.getSd().getCols(); > part.getSd().setCols(newt.getSd().getCols()); > String oldPartName = > Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues()); > updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, > part.getValues(), oldCols, part); > msdb.alterPartition(dbname, name, part.getValues(), part); > } > {code} > So it walks all partitions (and this may be a huge list) and does some > non-trivial operations in one single uber-transaction. > When DbNotificationListener is enabled, it adds an event for each partition, all while > holding a row lock on the NOTIFICATION_SEQUENCE table. As a result, while this is > happening no other write DDL can proceed. This can sometimes cause DB lock > timeouts, which cause HMS-level operation retries, which make things even worse. > In one particular case this pretty much made HMS unusable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18863) trunc() calls itself trunk() in an error message
[ https://issues.apache.org/jira/browse/HIVE-18863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399898#comment-16399898 ] Hive QA commented on HIVE-18863: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 58s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 29s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9636/dev-support/hive-personality.sh | | git revision | master / d5cb7f6 | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9636/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9636/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> trunc() calls itself trunk() in an error message > > > Key: HIVE-18863 > URL: https://issues.apache.org/jira/browse/HIVE-18863 > Project: Hive > Issue Type: Bug > Components: UDF >Reporter: Tim Armstrong >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Minor > Labels: newbie > Attachments: HIVE-18863.1.patch, HIVE-18863.2.patch > > > {noformat} > > select trunc('millennium', cast('2001-02-16 20:38:40' as timestamp)) > FAILED: SemanticException Line 0:-1 Argument type mismatch ''2001-02-16 > 20:38:40'': trunk() only takes STRING/CHAR/VARCHAR types as second argument, > got TIMESTAMP > {noformat} > I saw this on a derivative of Hive 1.1.0 (cdh5.15.0), but the string still > seems to be present on master: > https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java#L262 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399889#comment-16399889 ] Hive QA commented on HIVE-18959: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12914565/HIVE-18959.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 106 failed/errored test(s), 13412 tests executed *Failed tests:* {noformat} TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=94)
[jira] [Commented] (HIVE-18940) Hive notifications serialize all write DDL operations
[ https://issues.apache.org/jira/browse/HIVE-18940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399869#comment-16399869 ] Vihang Karajgaonkar commented on HIVE-18940: I think part of the issue is that event id generation is done using a different table, {{NOTIFICATION_SEQUENCE}}, which always has one row. Why can't we use auto-increment directly on the {{NOTIFICATION_LOG}} table for the event id? I know Derby doesn't support auto-increment, but I think most production clusters will not be using Derby. Designing a feature just to make it work for Derby does not seem to be a good idea. If Derby doesn't support auto-increment, we should treat it as an exception and handle it in the code separately. We should also separate the event ID and the commit ID for an event. Currently, the event id is strictly tied to the actual commit time, which is why we have to hold the lock from generation time until the transaction commits, which in theory could take a long time. Also, the timing of event id generation versus the actual commit is non-obvious in the code, so it is easy to miss while writing code. I think it would be great to use something like auto-increment just to uniquely identify a notification log message. The actual commit id should be generated from a globally monotonically increasing number at actual commit time. This number should apply to all the events pertaining to that transaction. A transaction which alters 1000 partitions should not have 1000 different ids, because the changes were not committed one by one in 1000 transactions. They were all committed as one transaction and hence should ideally generate only one commit id. This would greatly help with lock durations for long transactions, because the commit lock is then held for a constant time irrespective of how long the transaction ran. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
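The thread above notes that with auto-increment IDs, consumers must tolerate holes that are either transient (a transaction still in flight) or permanent (a failed transaction). A minimal sketch of one possible consumer policy (illustrative only, not Hive code): when the next expected ID is missing but a later ID already exists, retry a bounded number of polls, then declare the hole permanent and skip it.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NavigableMap;

// Sketch of a consumer that tolerates holes in an auto-increment ID stream.
// A missing ID may be transient (txn still in flight) or permanent (txn
// failed); after maxRetries unsuccessful looks the hole is treated as
// permanent and skipped. The policy and names are illustrative only.
class HoleTolerantConsumer {
    private final NavigableMap<Long, String> store; // committed events by ID
    private final int maxRetries;
    private final Map<Long, Integer> attempts = new HashMap<>();
    private long nextId = 1;

    HoleTolerantConsumer(NavigableMap<Long, String> store, int maxRetries) {
        this.store = store;
        this.maxRetries = maxRetries;
    }

    // Return the next available event, skipping IDs whose holes persisted
    // for maxRetries polls; null if nothing new is ready yet.
    String poll() {
        while (true) {
            String event = store.get(nextId);
            if (event != null) { nextId++; return event; }
            // It is only a hole once some later ID has already appeared.
            if (store.higherKey(nextId) == null) return null;
            int seen = attempts.merge(nextId, 1, Integer::sum);
            if (seen >= maxRetries) { nextId++; continue; } // permanent: skip
            return null; // transient: retry on a later poll
        }
    }
}
```

A DB-generated timestamp, as suggested in the description, could replace the fixed retry count with a time-based cutoff for declaring a hole permanent.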
[jira] [Commented] (HIVE-18963) JDBC: Provide an option to simplify beeline usage by supporting default and named URL for beeline
[ https://issues.apache.org/jira/browse/HIVE-18963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399860#comment-16399860 ] Vihang Karajgaonkar commented on HIVE-18963: beeline-site.xml sounds so much better than beeline-hs2-connection.xml. I wonder why I didn't think of that name in HIVE-14063 :( > JDBC: Provide an option to simplify beeline usage by supporting default and > named URL for beeline > - > > Key: HIVE-18963 > URL: https://issues.apache.org/jira/browse/HIVE-18963 > Project: Hive > Issue Type: Bug > Components: Beeline >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > > Currently, after opening the Beeline CLI, the user needs to supply a connection > string to use the HS2 instance and set up the jdbc driver. Since we plan to > replace Hive CLI with Beeline in the future (HIVE-10511), it will help > usability if the user can simply type {{beeline}} and start the hive > session. The jdbc url can be specified in a beeline-site.xml (which can > contain other named jdbc urls as well, and they can be accessed by something > like: {{beeline -c namedUrl}}). The use of beeline-site.xml can also be > potentially expanded later if needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18963) JDBC: Provide an option to simplify beeline usage by supporting default and named URL for beeline
[ https://issues.apache.org/jira/browse/HIVE-18963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399859#comment-16399859 ] Vihang Karajgaonkar commented on HIVE-18963: Thanks [~vgumashta] for creating this. I think it would be a very useful feature. How does this request relate to HIVE-14063? That patch used the hive-site.xml and beeline-hs2-connection.xml files to automatically connect to HS2. I am not aware of beeline-site.xml. Is this patch planning to introduce a new file called beeline-site.xml, or does it already exist? Can we reuse some of the work done for HIVE-14063? Thanks! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
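The default-plus-named-URL idea being discussed could look like the following hypothetical beeline-site.xml. The property names are illustrative guesses, not a final design from this JIRA: one entry serves as the default when the user just types {{beeline}}, and additional entries are aliases selectable with {{beeline -c <name>}}.

```xml
<configuration>
  <!-- Hypothetical default JDBC URL, used when the user just types `beeline` -->
  <property>
    <name>beeline.hs2.jdbc.url.default</name>
    <value>jdbc:hive2://hs2-host:10000/default</value>
  </property>
  <!-- Hypothetical named URL, selectable as `beeline -c etl` -->
  <property>
    <name>beeline.hs2.jdbc.url.etl</name>
    <value>jdbc:hive2://hs2-etl-host:10000/etl_db</value>
  </property>
</configuration>
```

The Hadoop-style <configuration>/<property> layout is assumed here only because the sibling files mentioned in the thread (hive-site.xml, beeline-hs2-connection.xml) use it.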
[jira] [Commented] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399827#comment-16399827 ] Hive QA commented on HIVE-18959: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 9s{color} | {color:red} druid-handler: The patch generated 1 new + 21 unchanged - 3 fixed = 22 total (was 24) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 9m 5s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9635/dev-support/hive-personality.sh | | git revision | master / d5cb7f6 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9635/yetus/diff-checkstyle-druid-handler.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9635/yetus/patch-asflicense-problems.txt | | modules | C: druid-handler U: druid-handler | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9635/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18959.patch > > > The current Druid-Kerberos-Http client is using an external single threaded > pool to handle retry auth calls (eg when a cookie expire or other transient > auth issues). > First, this is not buying us anything since all the Druid Task is executed as > one synchronous task. 
> Second, this can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread with an exception like: > {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread > Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code} > Thus, to fix this, we should avoid using an external thread pool and handle > retrying in a synchronous way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16858) Accumulo Utils Improvements
[ https://issues.apache.org/jira/browse/HIVE-16858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399816#comment-16399816 ] Hive QA commented on HIVE-16858: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12914568/HIVE-16858.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 85 failed/errored test(s), 13389 tests executed *Failed tests:* {noformat} TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=94)
[jira] [Updated] (HIVE-18933) disable ORC codec pool for now; remove clone
[ https://issues.apache.org/jira/browse/HIVE-18933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18933: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Tested on a cluster that the patch works; even after enabling the codec pool I still didn't get any errors. Ideally we should make sure we have ORC-310 before enabling it again just in case. > disable ORC codec pool for now; remove clone > > > Key: HIVE-18933 > URL: https://issues.apache.org/jira/browse/HIVE-18933 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18933.patch > > > See ORC-310. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HIVE-18933) disable ORC codec pool for now; remove clone
[ https://issues.apache.org/jira/browse/HIVE-18933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399773#comment-16399773 ] Sergey Shelukhin edited comment on HIVE-18933 at 3/15/18 1:53 AM: -- Tested on a cluster that the patch works; even after enabling the codec pool I still didn't get any errors. Ideally we should make sure we have ORC-310 before enabling it again just in case. was (Author: sershe): Tests that the patch works; even after enabling the codec pool I still didn't get any errors. Ideally we should make sure we have ORC-310 before enabling it again just in case. > disable ORC codec pool for now; remove clone > > > Key: HIVE-18933 > URL: https://issues.apache.org/jira/browse/HIVE-18933 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18933.patch > > > See ORC-310. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16858) Accumulo Utils Improvements
[ https://issues.apache.org/jira/browse/HIVE-16858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399749#comment-16399749 ] Hive QA commented on HIVE-16858: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} accumulo-handler: The patch generated 0 new + 15 unchanged - 2 fixed = 15 total (was 17) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 9m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9634/dev-support/hive-personality.sh | | git revision | master / 57a1ec2 | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9634/yetus/patch-asflicense-problems.txt | | modules | C: accumulo-handler U: accumulo-handler | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9634/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Acumulo Utils Improvements > -- > > Key: HIVE-16858 > URL: https://issues.apache.org/jira/browse/HIVE-16858 > Project: Hive > Issue Type: Improvement > Components: Accumulo Storage Handler >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16858.1.patch, HIVE-16858.2.patch > > > # Use Apache library for copy routine > # Use Apache Commons where advantageous > # Improve debug logging > # Fix some spellcheck validations -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18693) Snapshot Isolation does not work for Micromanaged table when an insert transaction is aborted
[ https://issues.apache.org/jira/browse/HIVE-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399730#comment-16399730 ] Hive QA commented on HIVE-18693: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12914559/HIVE-18693.06.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 26 failed/errored test(s), 13415 tests executed *Failed tests:* {noformat} TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=95)
[jira] [Updated] (HIVE-18962) add WM task state to Tez AM heartbeat
[ https://issues.apache.org/jira/browse/HIVE-18962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18962: Status: Patch Available (was: Open) > add WM task state to Tez AM heartbeat > - > > Key: HIVE-18962 > URL: https://issues.apache.org/jira/browse/HIVE-18962 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18962.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399717#comment-16399717 ] Sergey Shelukhin commented on HIVE-18959: - Hmm. I wonder if there's also a broader issue here: LLAP would die because some random thread has died. Do we really want to do that? Can be handled in a separate jira. > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18959.patch > > > The current Druid-Kerberos-Http client uses an external single-threaded > pool to handle retrying auth calls (e.g. when a cookie expires or another transient > auth issue occurs). > First, this buys us nothing, since the whole Druid task is executed as > one synchronous task. > Second, it can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread with an exception like: > {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread > Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code} > Thus, to fix this, we should avoid using an external thread pool and handle > retrying synchronously. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
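The synchronous-retry direction proposed above can be sketched as follows. This is a minimal hypothetical illustration, not Hive's actual KerberosHttpClient code; the helper name {{withAuthRetry}} and the retry policy are assumptions. The point is that a transient auth failure is retried on the calling thread, so an exception can never escape on a daemon thread and trigger the LLAP shutdown path.

```java
import java.util.concurrent.Callable;

// Hypothetical sketch: retry transient auth failures inline on the calling
// thread, instead of delegating the retry to a separate single-threaded pool
// whose uncaught exceptions could shut down the whole LLAP daemon.
public class SyncAuthRetry {
    static <T> T withAuthRetry(Callable<T> call, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                // e.g. an expired cookie or another transient auth issue
                last = new RuntimeException("attempt " + attempt + " failed", e);
            }
        }
        // The failure surfaces to the caller, not to a background thread.
        throw last;
    }
}
```

Since the Druid work runs as one synchronous task anyway (per the description), nothing is lost by retrying inline, and the extra pool and its failure mode disappear.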
[jira] [Commented] (HIVE-16879) Improve Cache Key
[ https://issues.apache.org/jira/browse/HIVE-16879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399668#comment-16399668 ] Misha Dmitriev commented on HIVE-16879: --- This looks like nice optimization work, assuming that the right things are optimized. Did you measure that the duplicate strings referenced by the fields of Key indeed waste a noticeable amount of memory? If yes, what tool did you use, and can you share your findings? Is it really the case that dbName and tblName cause enough duplication to benefit from interning, but colName does not? > Improve Cache Key > - > > Key: HIVE-16879 > URL: https://issues.apache.org/jira/browse/HIVE-16879 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16879.1.patch, HIVE-16879.2.patch > > > Improve the cache key for the cache implemented in > {{org.apache.hadoop.hive.metastore.AggregateStatsCache}}. > # Cache some of the key components themselves (db name, table name) using the > {{String}} intern method, to conserve memory for repeated keys and to improve the > {{equals}} method, as references can now be used for equality; hash codes > will also be cached, as per the {{String}} class hashCode method. > # Upgrade _debug_ logging to not generate text unless required > # Changed the _equals_ method to check first for the item most likely to be > different, the column name -- This message was sent by Atlassian JIRA (v7.6.3#76005)
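The interning and equals-ordering idea from the description can be sketched like this. The class name {{StatsCacheKey}} and field layout are assumptions for illustration, not the actual {{AggregateStatsCache}} key: the low-cardinality parts (db name, table name) are interned so repeated keys share storage and can be compared by reference, the hash code is computed once, and {{equals}} checks the column name first since it is the field most likely to differ.

```java
import java.util.Objects;

// Hypothetical sketch of the cache-key idea (names assumed, not Hive's code).
final class StatsCacheKey {
    private final String dbName;   // interned: few distinct values, many keys
    private final String tblName;  // interned for the same reason
    private final String colName;  // high cardinality, left un-interned
    private final int hash;        // computed once, cached

    StatsCacheKey(String dbName, String tblName, String colName) {
        this.dbName = dbName.intern();
        this.tblName = tblName.intern();
        this.colName = colName;
        this.hash = Objects.hash(this.dbName, this.tblName, colName);
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof StatsCacheKey)) return false;
        StatsCacheKey k = (StatsCacheKey) o;
        // colName first: within one db/table it is most likely to differ.
        // Reference comparison is safe for the interned fields.
        return colName.equals(k.colName) && tblName == k.tblName && dbName == k.dbName;
    }

    @Override
    public int hashCode() { return hash; }
}
```

Misha's question still applies to this sketch: interning only pays off if the same db/table names really recur across many keys, which is worth verifying with a heap profile.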
[jira] [Commented] (HIVE-18693) Snapshot Isolation does not work for Micromanaged table when an insert transaction is aborted
[ https://issues.apache.org/jira/browse/HIVE-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399664#comment-16399664 ] Hive QA commented on HIVE-18693: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 54s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 58s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s{color} | {color:red} streaming in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 41s{color} | {color:red} ql in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} The patch streaming passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} ql: The patch generated 0 new + 283 unchanged - 3 fixed = 283 total (was 286) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} standalone-metastore: The patch generated 0 new + 539 unchanged - 1 fixed = 539 total (was 540) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 13s{color} | {color:red} The patch generated 49 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 57s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9633/dev-support/hive-personality.sh | | git revision | master / 57a1ec2 | | Default Java | 1.8.0_111 | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-9633/yetus/patch-mvninstall-hcatalog_streaming.txt | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-9633/yetus/patch-mvninstall-ql.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-9633/yetus/whitespace-eol.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9633/yetus/patch-asflicense-problems.txt | | modules | C: hcatalog/streaming ql standalone-metastore U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9633/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Snapshot Isolation does not work for Micromanaged table when a insert > transaction is aborted > > > Key: HIVE-18693 > URL: https://issues.apache.org/jira/browse/HIVE-18693 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major
[jira] [Updated] (HIVE-18963) JDBC: Provide an option to simplify beeline usage by supporting default and named URL for beeline
[ https://issues.apache.org/jira/browse/HIVE-18963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-18963: Summary: JDBC: Provide an option to simplify beeline usage by supporting default and named URL for beeline (was: JDBC: Provide an option to simplify beeline usage) > JDBC: Provide an option to simplify beeline usage by supporting default and > named URL for beeline > - > > Key: HIVE-18963 > URL: https://issues.apache.org/jira/browse/HIVE-18963 > Project: Hive > Issue Type: Bug > Components: Beeline >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > > Currently, after opening the Beeline CLI, the user needs to supply a connection > string to use the HS2 instance and set up the jdbc driver. Since we plan to > replace Hive CLI with Beeline in the future (HIVE-10511), it will help > usability if the user can simply type {{beeline}} and start the hive > session. The jdbc url can be specified in a beeline-site.xml (which can > contain other named jdbc urls as well; they can be accessed by something > like {{beeline -c namedUrl}}). The use of beeline-site.xml can also be > potentially expanded later if needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
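A beeline-site.xml along the lines described might look like the sketch below. The property names are illustrative assumptions only; the ticket does not specify a format, just that the file carries a default jdbc url plus named urls reachable via {{beeline -c namedUrl}}.

```xml
<!-- Hypothetical beeline-site.xml sketch; property names are assumptions,
     not a finalized format from this ticket. -->
<configuration>
  <!-- Used when the user just types "beeline" -->
  <property>
    <name>beeline.hs2.jdbc.url.default</name>
    <value>jdbc:hive2://hs2-host:10001/default;transportMode=http</value>
  </property>
  <!-- Used via "beeline -c llapcluster" -->
  <property>
    <name>beeline.hs2.jdbc.url.llapcluster</name>
    <value>jdbc:hive2://zk1:2181,zk2:2181/;serviceDiscoveryMode=zooKeeper</value>
  </property>
</configuration>
```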
[jira] [Commented] (HIVE-16861) MapredParquetOutputFormat - Save Some Array Allocations
[ https://issues.apache.org/jira/browse/HIVE-16861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399637#comment-16399637 ] Hive QA commented on HIVE-16861: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12914557/HIVE-16861.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 161 failed/errored test(s), 13785 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver (batchId=89) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=53) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez_empty] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[groupby_groupingset_bug] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=159) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[update_access_time_non_current_db] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_dynamic_semijoin_reduction] (batchId=153) org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver (batchId=94) 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[mm_convert] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[mm_truncate_cols] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_bucketmapjoin] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_mapjoin_14] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[sortmerge_mapjoin_mismatch_1] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[special_character_in_tabnames_1] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[split_sample_out_of_range] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[split_sample_wrong_format] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_2] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_1] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[strict_join_2] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[strict_orderby] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[strict_orderby_2] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[strict_pruning_2] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_corr_grandparent] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_corr_in_agg] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_in_groupby] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_in_implicit_gby] (batchId=94) 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_multiple_cols_in_select] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_notin_implicit_gby] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_scalar_corr_multi_rows] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_scalar_multi_rows] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_select_aggregate] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_select_distinct] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_subquery_chain_exists] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[temp_table_rename] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[touch2] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_bucketed_column]
[jira] [Assigned] (HIVE-18963) JDBC: Provide an option to simplify beeline usage
[ https://issues.apache.org/jira/browse/HIVE-18963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta reassigned HIVE-18963: --- Assignee: Vaibhav Gumashta > JDBC: Provide an option to simplify beeline usage > - > > Key: HIVE-18963 > URL: https://issues.apache.org/jira/browse/HIVE-18963 > Project: Hive > Issue Type: Bug > Components: Beeline >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > > Currently, after opening the Beeline CLI, the user needs to supply a connection > string to use the HS2 instance and set up the jdbc driver. Since we plan to > replace Hive CLI with Beeline in the future (HIVE-10511), it will help > usability if the user can simply type {{beeline}} and start the hive > session. The jdbc url can be specified in a beeline-site.xml (which can > contain other named jdbc urls as well; they can be accessed by something > like {{beeline -c namedUrl}}). The use of beeline-site.xml can also be > potentially expanded later if needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18962) add WM task state to Tez AM heartbeat
[ https://issues.apache.org/jira/browse/HIVE-18962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399623#comment-16399623 ] Sergey Shelukhin commented on HIVE-18962: - [~jdere] can you take a look? thanks > add WM task state to Tez AM heartbeat > - > > Key: HIVE-18962 > URL: https://issues.apache.org/jira/browse/HIVE-18962 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18962.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18693) Snapshot Isolation does not work for Micromanaged table when an insert transaction is aborted
[ https://issues.apache.org/jira/browse/HIVE-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399614#comment-16399614 ] Steve Yeom commented on HIVE-18693: --- It looks like, two days ago, the patch did not run, probably because of a mismatch in the thrift file (which can be updated frequently). Added a new patch a few hours ago. > Snapshot Isolation does not work for Micromanaged table when an insert > transaction is aborted > > > Key: HIVE-18693 > URL: https://issues.apache.org/jira/browse/HIVE-18693 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Attachments: HIVE-18693.01.patch, HIVE-18693.02.patch, > HIVE-18693.03.patch, HIVE-18693.04.patch, HIVE-18693.05.patch, > HIVE-18693.06.patch > > > TestTxnCommands2#writeBetweenWorkerAndCleaner with minor > changes (changing the delete command to an insert command) fails on an MM table. > Specifically, the last SELECT command returns wrong results. > But this test works fine with a full ACID table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18962) add WM task state to Tez AM heartbeat
[ https://issues.apache.org/jira/browse/HIVE-18962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18962: Attachment: HIVE-18962.patch > add WM task state to Tez AM heartbeat > - > > Key: HIVE-18962 > URL: https://issues.apache.org/jira/browse/HIVE-18962 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18962.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18962) add WM task state to Tez AM heartbeat
[ https://issues.apache.org/jira/browse/HIVE-18962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-18962: --- > add WM task state to Tez AM heartbeat > - > > Key: HIVE-18962 > URL: https://issues.apache.org/jira/browse/HIVE-18962 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18344) Remove LinkedList from SharedWorkOptimizer.java
[ https://issues.apache.org/jira/browse/HIVE-18344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399603#comment-16399603 ] Sahil Takiar commented on HIVE-18344: - +1 > Remove LinkedList from SharedWorkOptimizer.java > --- > > Key: HIVE-18344 > URL: https://issues.apache.org/jira/browse/HIVE-18344 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-18344.1.patch, HIVE-18344.2.patch > > > Prefer {{ArrayList}} over {{LinkedList}} especially in this class because the > initial size of the collection is known. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
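The rationale in the description can be shown with a minimal illustration (not Hive code): when the final size is known, a pre-sized {{ArrayList}} avoids both {{LinkedList}}'s per-node object overhead and {{ArrayList}}'s internal grow-and-copy steps, while keeping O(1) indexed access.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of preferring a pre-sized ArrayList over LinkedList
// when the target collection size is known up front.
public class PreSized {
    static List<Integer> copyIds(int[] ids) {
        List<Integer> out = new ArrayList<>(ids.length); // exact capacity
        for (int id : ids) {
            out.add(id); // amortized O(1); never resizes here, capacity is exact
        }
        return out;
    }
}
```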
[jira] [Updated] (HIVE-18533) Add option to use InProcessLauncher to submit spark jobs
[ https://issues.apache.org/jira/browse/HIVE-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18533: Attachment: HIVE-18533.4.patch > Add option to use InProcessLauncher to submit spark jobs > > > Key: HIVE-18533 > URL: https://issues.apache.org/jira/browse/HIVE-18533 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18533.1.patch, HIVE-18533.2.patch, > HIVE-18533.3.patch, HIVE-18533.4.patch > > > See discussion in HIVE-16484 for details. > I think this will help with reducing the amount of time it takes to open a > HoS session + debuggability (no need launch a separate process to run a Spark > app). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18961) Error in results cache when query has identifiers with spaces
[ https://issues.apache.org/jira/browse/HIVE-18961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399597#comment-16399597 ] Gopal V commented on HIVE-18961: LGTM - +1 Will look into removing the parse in the cache-key generation in a later ticket. > Error in results cache when query has identifiers with spaces > - > > Key: HIVE-18961 > URL: https://issues.apache.org/jira/browse/HIVE-18961 > Project: Hive > Issue Type: Sub-task > Components: Query Planning >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-18961.1.patch > > > Found by [~gopalv]: > {noformat} > 2018-03-14T05:08:32,551 ERROR [0c4b7a6c-ed37-428e-ac04-ca38716f211e > HiveServer2-HttpHandler-Pool: Thread-8961]: parse.CalcitePlanner (:()) - > Unexpected ] > org.apache.hadoop.hive.ql.parse.ParseException: line 1:100 missing EOF at > 'Count' near 'IMSI' > at > org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:220) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:74) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getQueryStringForCache(SemanticAnalyzer.java:14067) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.createLookupInfoForQuery(SemanticAnalyzer.java:14080) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.ja] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11683) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-S] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:304) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHO] > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:273) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNA] > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:614) > 
~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18778) Needs to capture input/output entities in explain
[ https://issues.apache.org/jira/browse/HIVE-18778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399595#comment-16399595 ] Daniel Dai commented on HIVE-18778: --- Finally beginning to make progress. TestCliDriver first; now the Tez/LLAP/Druid section. > Needs to capture input/output entities in explain > - > > Key: HIVE-18778 > URL: https://issues.apache.org/jira/browse/HIVE-18778 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18778-SparkPositive.patch, HIVE-18778.1.patch, > HIVE-18778.2.patch, HIVE-18778.3.patch, HIVE-18778_TestCliDriver.patch, > HIVE-18788_SparkNegative.patch, HIVE-18788_SparkPerf.patch > > > With Sentry enabled, commands like explain drop table foo fail with {{explain > drop table foo;}} > {code} > Error: Error while compiling statement: FAILED: SemanticException No valid > privileges > Required privilege( Table) not available in input privileges > The required privileges: (state=42000,code=4) > {code} > Sentry fails to authorize because the ExplainSemanticAnalyzer uses an > instance of DDLSemanticAnalyzer to analyze the explain query. > {code} > BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(conf, input); > sem.analyze(input, ctx); > sem.validate() > {code} > The input/output entities for this query are set in the above code. > However, these are never set on the instance of ExplainSemanticAnalyzer > itself and thus are not propagated into the HookContext in the calling Driver > code. > {code} > sem.analyze(tree, ctx); --> this results in calling the above code that uses > DDLSA > hookCtx.update(sem); --> sem is an instance of ExplainSemanticAnalyzer, this > code attempts to update the HookContext with the input/output info from ESA > which is never set. > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18778) Needs to capture input/output entities in explain
[ https://issues.apache.org/jira/browse/HIVE-18778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-18778: -- Attachment: HIVE-18778_TestCliDriver.patch > Needs to capture input/output entities in explain > - > > Key: HIVE-18778 > URL: https://issues.apache.org/jira/browse/HIVE-18778 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18778-SparkPositive.patch, HIVE-18778.1.patch, > HIVE-18778.2.patch, HIVE-18778.3.patch, HIVE-18778_TestCliDriver.patch, > HIVE-18788_SparkNegative.patch, HIVE-18788_SparkPerf.patch > > > With Sentry enabled, commands like explain drop table foo fail with {{explain > drop table foo;}} > {code} > Error: Error while compiling statement: FAILED: SemanticException No valid > privileges > Required privilege( Table) not available in input privileges > The required privileges: (state=42000,code=4) > {code} > Sentry fails to authorize because the ExplainSemanticAnalyzer uses an > instance of DDLSemanticAnalyzer to analyze the explain query. > {code} > BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(conf, input); > sem.analyze(input, ctx); > sem.validate() > {code} > The inputs/outputs entities for this query are set in the above code. > However, these are never set on the instance of ExplainSemanticAnalyzer > itself and thus is not propagated into the HookContext in the calling Driver > code. > {code} > sem.analyze(tree, ctx); --> this results in calling the above code that uses > DDLSA > hookCtx.update(sem); --> sem is an instance of ExplainSemanticAnalyzer, this > code attempts to update the HookContext with the input/output info from ESA > which is never set. > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16861) MapredParquetOutputFormat - Save Some Array Allocations
[ https://issues.apache.org/jira/browse/HIVE-16861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399577#comment-16399577 ] Hive QA commented on HIVE-16861: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 29s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9632/dev-support/hive-personality.sh | | git revision | master / 57a1ec2 | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9632/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9632/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > MapredParquetOutputFormat - Save Some Array Allocations > --- > > Key: HIVE-16861 > URL: https://issues.apache.org/jira/browse/HIVE-16861 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16861.1.patch, HIVE-16861.2.patch > > > Remove superfluous array allocations from {{MapredParquetOutputFormat}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing
[ https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-18910: -- Attachment: HIVE-18910.7.patch > Migrate to Murmur hash for shuffle and bucketing > > > Key: HIVE-18910 > URL: https://issues.apache.org/jira/browse/HIVE-18910 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18910.1.patch, HIVE-18910.2.patch, > HIVE-18910.3.patch, HIVE-18910.4.patch, HIVE-18910.5.patch, > HIVE-18910.6.patch, HIVE-18910.7.patch > > > Hive uses Java's hash, which is not as good as Murmur hash for distribution > and efficiency when bucketing a table. > Migrate to Murmur hash, but keep backward compatibility for existing > users so that they don't have to reload their existing tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
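The motivation — Java's default hash distributes keys poorly for bucketing — can be sketched with the public-domain Murmur3 32-bit finalizer. This is a stand-in illustration only; Hive's actual Murmur3 implementation and its backward-compatibility switch live in the attached patches.

```java
// Sketch of why Murmur-style bit mixing buckets better than Java's default
// int hash. fmix32 is the Murmur3 32-bit finalizer, which avalanches every
// input bit into the output.
public class HashMixSketch {

    static int fmix32(int h) {
        h ^= h >>> 16;
        h *= 0x85ebca6b;
        h ^= h >>> 13;
        h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    public static void main(String[] args) {
        int buckets = 4;
        // Java's Integer.hashCode is the identity function, so keys that are
        // multiples of the bucket count all collapse into bucket 0.
        for (int i = 0; i < 8; i++) {
            int key = i * buckets;
            System.out.println("identity hash bucket for " + key + ": "
                + Math.floorMod(Integer.hashCode(key), buckets));
        }
        // The mixed hash spreads the same patterned keys across buckets.
        boolean[] hit = new boolean[buckets];
        for (int i = 0; i < 64; i++) {
            hit[Math.floorMod(fmix32(i * buckets), buckets)] = true;
        }
        int used = 0;
        for (boolean b : hit) {
            if (b) used++;
        }
        System.out.println("buckets used after mixing: " + used);
    }
}
```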
[jira] [Commented] (HIVE-16879) Improve Cache Key
[ https://issues.apache.org/jira/browse/HIVE-16879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399546#comment-16399546 ] Hive QA commented on HIVE-16879: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12914552/HIVE-16879.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 27 failed/errored test(s), 13015 tests executed *Failed tests:* {noformat} TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=94)
[jira] [Commented] (HIVE-18342) Remove LinkedList from HiveAlterHandler.java
[ https://issues.apache.org/jira/browse/HIVE-18342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399508#comment-16399508 ] Sahil Takiar commented on HIVE-18342: - +1 > Remove LinkedList from HiveAlterHandler.java > > > Key: HIVE-18342 > URL: https://issues.apache.org/jira/browse/HIVE-18342 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-18342.1.patch > > > Remove {{LinkedList}} in favor of {{ArrayList}} for class > {{org.apache.hadoop.hive.metastore.HiveAlterHandler}}. > {quote} > The size, isEmpty, get, set, iterator, and listIterator operations run in > constant time. The add operation runs in amortized constant time, that is, > adding n elements requires O(n) time. All of the other operations run in > linear time (roughly speaking). *The constant factor is low compared to that > for the LinkedList implementation.* > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
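The rationale quoted above can be sketched as follows — a minimal, hypothetical example of the proposed swap (not the actual HiveAlterHandler code): prefer ArrayList, pre-sized when the element count is known, so appends never pay node-allocation overhead and indexed access is constant time.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the LinkedList -> ArrayList swap the issue proposes.
public class ListChoiceSketch {

    static List<String> collectNames(String[] parts) {
        // Before: new LinkedList<>() — one node object allocated per element.
        // After: ArrayList pre-sized to the known element count, so the
        // backing array is allocated once and never resized.
        List<String> names = new ArrayList<>(parts.length);
        for (String p : parts) {
            names.add(p);
        }
        return names;
    }

    public static void main(String[] args) {
        List<String> names = collectNames(new String[] {"db1.t1", "db1.t2"});
        // get(i) is O(1) on ArrayList; on LinkedList it walks the nodes.
        System.out.println(names.get(0));
    }
}
```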
[jira] [Commented] (HIVE-16858) Acumulo Utils Improvements
[ https://issues.apache.org/jira/browse/HIVE-16858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399507#comment-16399507 ] Sahil Takiar commented on HIVE-16858: - +1 > Acumulo Utils Improvements > -- > > Key: HIVE-16858 > URL: https://issues.apache.org/jira/browse/HIVE-16858 > Project: Hive > Issue Type: Improvement > Components: Accumulo Storage Handler >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16858.1.patch, HIVE-16858.2.patch > > > # Use Apache library for copy routine > # Use Apache Commons where advantageous > # Improve debug logging > # Fix some spellcheck validations -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17990) Add Thrift and DB storage for Schema Registry objects
[ https://issues.apache.org/jira/browse/HIVE-17990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399478#comment-16399478 ] Vineet Garg commented on HIVE-17990: [~alangates] I think you missed adding {{APP.SERDES}} schema changes in {{metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql}}. I am getting following error while creating table: {code:sql} org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Insert of object "org.apache.hadoop.hive.metastore.model.MSerDeInfo@238ad211" using statement "INSERT INTO SERDES (SERDE_ID,DESCRIPTION,DESERIALIZER_CLASS,"NAME",SERDE_TYPE,SLIB,SERIALIZER_CLASS) VALUES (?,?,?,?,?,?,?)" failed : 'DESCRIPTION' is not a column in table or VTI 'APP.SERDES'.) FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Insert of object "org.apache.hadoop.hive.metastore.model.MSerDeInfo@238ad211" using statement "INSERT INTO SERDES (SERDE_ID,DESCRIPTION,DESERIALIZER_CLASS,"NAME",SERDE_TYPE,SLIB,SERIALIZER_CLASS) VALUES (?,?,?,?,?,?,?)" failed : 'DESCRIPTION' is not a column in table or VTI 'APP.SERDES'.) {code} > Add Thrift and DB storage for Schema Registry objects > - > > Key: HIVE-17990 > URL: https://issues.apache.org/jira/browse/HIVE-17990 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0 > > Attachments: Adding-Schema-Registry-to-Metastore.pdf, > HIVE-17990.2.patch, HIVE-17990.3.patch, HIVE-17990.patch > > > This JIRA tracks changes to Thrift, RawStore, and DB scripts to support > objects in the Schema Registry. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
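For reference, the missing Derby definition would need to cover every column named in the failing INSERT. The sketch below is assembled from the error message alone — the column types and constraints are assumptions, not the actual HIVE-17990 schema:

```sql
-- Hypothetical sketch only: the column list comes from the INSERT in the
-- error above; the types are guesses, not the real hive-schema-3.0.0.derby.sql.
CREATE TABLE "APP"."SERDES" (
  "SERDE_ID" BIGINT NOT NULL,
  "DESCRIPTION" VARCHAR(4000),
  "DESERIALIZER_CLASS" VARCHAR(4000),
  "NAME" VARCHAR(128),
  "SERDE_TYPE" INTEGER,
  "SLIB" VARCHAR(4000),
  "SERIALIZER_CLASS" VARCHAR(4000)
);
```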
[jira] [Commented] (HIVE-16879) Improve Cache Key
[ https://issues.apache.org/jira/browse/HIVE-16879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399460#comment-16399460 ] Hive QA commented on HIVE-16879: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 52s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} standalone-metastore: The patch generated 0 new + 15 unchanged - 3 fixed = 15 total (was 18) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 14s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9631/dev-support/hive-personality.sh | | git revision | master / 57a1ec2 | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9631/yetus/patch-asflicense-problems.txt | | modules | C: standalone-metastore U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9631/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Improve Cache Key > - > > Key: HIVE-16879 > URL: https://issues.apache.org/jira/browse/HIVE-16879 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16879.1.patch, HIVE-16879.2.patch > > > Improve cache key for cache implemented in > {{org.apache.hadoop.hive.metastore.AggregateStatsCache}}. > # Cache some of the key components themselves (db name, table name) using the > {{String}} intern method to conserve memory for repeated keys, to improve the > {{equals}} method as now references can be used for equality, and hashcodes > will be cached as well, per the {{String}} class's hashCode method. 
> # Upgrade _debug_ logging so it does not generate text unless required > # Changed _equals_ method to check first for the item most likely to be > different: the column name -- This message was sent by Atlassian JIRA (v7.6.3#76005)
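Item 1 above can be sketched as follows — a minimal, hypothetical key class (not the actual AggregateStatsCache code) showing why interned components allow a cheap reference-equality first check in equals:

```java
// Hypothetical sketch of interning repeated key components (db name, table
// name): duplicates share one String object, equals() can short-circuit on
// reference equality, and memory is not spent on copies.
public class InternedKeySketch {

    static final class Key {
        final String dbName;
        final String tableName;

        Key(String dbName, String tableName) {
            // intern() guarantees equal strings map to the same instance.
            this.dbName = dbName.intern();
            this.tableName = tableName.intern();
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Key)) return false;
            Key k = (Key) o;
            // Both sides are interned, so == is a valid (and cheap) check.
            return dbName == k.dbName && tableName == k.tableName;
        }

        @Override
        public int hashCode() {
            // String caches its hashCode internally after the first call.
            return 31 * dbName.hashCode() + tableName.hashCode();
        }
    }

    public static void main(String[] args) {
        Key a = new Key(new String("db1"), new String("t1"));
        Key b = new Key(new String("db1"), new String("t1"));
        // Distinct String objects going in, one interned instance coming out.
        System.out.println("same dbName instance: " + (a.dbName == b.dbName));
        System.out.println("keys equal: " + a.equals(b));
    }
}
```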
[jira] [Updated] (HIVE-18961) Error in results cache when query has identifiers with spaces
[ https://issues.apache.org/jira/browse/HIVE-18961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-18961: -- Status: Patch Available (was: Open) > Error in results cache when query has identifiers with spaces > - > > Key: HIVE-18961 > URL: https://issues.apache.org/jira/browse/HIVE-18961 > Project: Hive > Issue Type: Sub-task > Components: Query Planning >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-18961.1.patch > > > Found by [~gopalv]: > {noformat} > 2018-03-14T05:08:32,551 ERROR [0c4b7a6c-ed37-428e-ac04-ca38716f211e > HiveServer2-HttpHandler-Pool: Thread-8961]: parse.CalcitePlanner (:()) - > Unexpected ] > org.apache.hadoop.hive.ql.parse.ParseException: line 1:100 missing EOF at > 'Count' near 'IMSI' > at > org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:220) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:74) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getQueryStringForCache(SemanticAnalyzer.java:14067) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.createLookupInfoForQuery(SemanticAnalyzer.java:14080) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.ja] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11683) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-S] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:304) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHO] > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:273) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNA] > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:614) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > {noformat} -- This message was sent by Atlassian 
JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18961) Error in results cache when query has identifiers with spaces
[ https://issues.apache.org/jira/browse/HIVE-18961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399438#comment-16399438 ] Jason Dere commented on HIVE-18961: --- One solution is to quote all identifiers in the query string used as the cache key. Attaching patch. > Error in results cache when query has identifiers with spaces > - > > Key: HIVE-18961 > URL: https://issues.apache.org/jira/browse/HIVE-18961 > Project: Hive > Issue Type: Sub-task > Components: Query Planning >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-18961.1.patch > > > Found by [~gopalv]: > {noformat} > 2018-03-14T05:08:32,551 ERROR [0c4b7a6c-ed37-428e-ac04-ca38716f211e > HiveServer2-HttpHandler-Pool: Thread-8961]: parse.CalcitePlanner (:()) - > Unexpected ] > org.apache.hadoop.hive.ql.parse.ParseException: line 1:100 missing EOF at > 'Count' near 'IMSI' > at > org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:220) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:74) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getQueryStringForCache(SemanticAnalyzer.java:14067) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.createLookupInfoForQuery(SemanticAnalyzer.java:14080) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.ja] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11683) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-S] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:304) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHO] > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:273) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNA] > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:614) > 
~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
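The proposed fix can be sketched with a hypothetical helper (the real patch lives in SemanticAnalyzer; the helper name here is made up). Backtick-quoting every identifier when rebuilding the cache-key query string, with embedded backticks doubled, lets a name containing spaces survive re-parsing instead of failing as in the trace above:

```java
// Hypothetical sketch of quoting identifiers for the results-cache key
// (not the actual HIVE-18961 patch).
public class IdentifierQuoteSketch {

    // Hive-style backtick quoting: double any embedded backtick.
    static String quoteIdentifier(String name) {
        return "`" + name.replace("`", "``") + "`";
    }

    public static void main(String[] args) {
        // An identifier with a space parses only when quoted.
        System.out.println("SELECT " + quoteIdentifier("IMSI Count") + " FROM t");
    }
}
```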
[jira] [Updated] (HIVE-18961) Error in results cache when query has identifiers with spaces
[ https://issues.apache.org/jira/browse/HIVE-18961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-18961: -- Attachment: HIVE-18961.1.patch > Error in results cache when query has identifiers with spaces > - > > Key: HIVE-18961 > URL: https://issues.apache.org/jira/browse/HIVE-18961 > Project: Hive > Issue Type: Sub-task > Components: Query Planning >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-18961.1.patch > > > Found by [~gopalv]: > {noformat} > 2018-03-14T05:08:32,551 ERROR [0c4b7a6c-ed37-428e-ac04-ca38716f211e > HiveServer2-HttpHandler-Pool: Thread-8961]: parse.CalcitePlanner (:()) - > Unexpected ] > org.apache.hadoop.hive.ql.parse.ParseException: line 1:100 missing EOF at > 'Count' near 'IMSI' > at > org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:220) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:74) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getQueryStringForCache(SemanticAnalyzer.java:14067) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.createLookupInfoForQuery(SemanticAnalyzer.java:14080) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.ja] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11683) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-S] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:304) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHO] > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:273) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNA] > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:614) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > {noformat} -- This message was sent by Atlassian JIRA 
(v7.6.3#76005)
[jira] [Updated] (HIVE-18961) Error in results cache when query has identifiers with spaces
[ https://issues.apache.org/jira/browse/HIVE-18961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-18961: -- Summary: Error in results cache when query has identifiers with spaces (was: Error in results cache when query has space in the name) > Error in results cache when query has identifiers with spaces > - > > Key: HIVE-18961 > URL: https://issues.apache.org/jira/browse/HIVE-18961 > Project: Hive > Issue Type: Sub-task > Components: Query Planning >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-18961.1.patch > > > Found by [~gopalv]: > {noformat} > 2018-03-14T05:08:32,551 ERROR [0c4b7a6c-ed37-428e-ac04-ca38716f211e > HiveServer2-HttpHandler-Pool: Thread-8961]: parse.CalcitePlanner (:()) - > Unexpected ] > org.apache.hadoop.hive.ql.parse.ParseException: line 1:100 missing EOF at > 'Count' near 'IMSI' > at > org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:220) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:74) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getQueryStringForCache(SemanticAnalyzer.java:14067) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.createLookupInfoForQuery(SemanticAnalyzer.java:14080) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.ja] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11683) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-S] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:304) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHO] > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:273) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNA] > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:614) > 
~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18718) Integer like types throws error when there is a mismatch
[ https://issues.apache.org/jira/browse/HIVE-18718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399402#comment-16399402 ] Vihang Karajgaonkar commented on HIVE-18718: Thanks for the changes [~janulatha] I left some minor suggestions on RB. Rest looks good to me. > Integer like types throws error when there is a mismatch > > > Key: HIVE-18718 > URL: https://issues.apache.org/jira/browse/HIVE-18718 > Project: Hive > Issue Type: Improvement >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18718.1.patch, HIVE-18718.2.patch, > HIVE-18718.3.patch > > > If a value is saved with long type and read as int type it results in > FAILED: Execution Error, return code 2 from > org.apache.hadoop.hive.ql.exec.mr.MapRedTask -- This message was sent by Atlassian JIRA (v7.6.3#76005)
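The failure mode — a value stored as a long read back through an int schema — can be illustrated with a small sketch. This is not Hive's reader code; it only shows the narrowing a tolerant reader has to handle explicitly:

```java
// Sketch of the long -> int mismatch HIVE-18718 describes. Math.toIntExact
// makes the overflow an explicit error instead of a silent truncation.
public class NarrowingSketch {

    static int readAsInt(long stored) {
        return Math.toIntExact(stored); // throws ArithmeticException on overflow
    }

    public static void main(String[] args) {
        System.out.println(readAsInt(42L)); // fits in an int: fine
        try {
            readAsInt(1L + Integer.MAX_VALUE); // does not fit
        } catch (ArithmeticException expected) {
            System.out.println("overflow detected: " + expected.getMessage());
        }
    }
}
```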
[jira] [Assigned] (HIVE-18961) Error in results cache when query has space in the name
[ https://issues.apache.org/jira/browse/HIVE-18961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere reassigned HIVE-18961: - > Error in results cache when query has space in the name > --- > > Key: HIVE-18961 > URL: https://issues.apache.org/jira/browse/HIVE-18961 > Project: Hive > Issue Type: Sub-task > Components: Query Planning >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > > Found by [~gopalv]: > {noformat} > 2018-03-14T05:08:32,551 ERROR [0c4b7a6c-ed37-428e-ac04-ca38716f211e > HiveServer2-HttpHandler-Pool: Thread-8961]: parse.CalcitePlanner (:()) - > Unexpected ] > org.apache.hadoop.hive.ql.parse.ParseException: line 1:100 missing EOF at > 'Count' near 'IMSI' > at > org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:220) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:74) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getQueryStringForCache(SemanticAnalyzer.java:14067) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.createLookupInfoForQuery(SemanticAnalyzer.java:14080) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.ja] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11683) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-S] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:304) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHO] > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:273) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNA] > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:614) > ~[hive-exec-3.0.0.3.0.0.2-132-jdere.jar:3.0.0-SNAPSHOT] > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18863) trunc() calls itself trunk() in an error message
[ https://issues.apache.org/jira/browse/HIVE-18863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-18863: Attachment: HIVE-18863.2.patch > trunc() calls itself trunk() in an error message > > > Key: HIVE-18863 > URL: https://issues.apache.org/jira/browse/HIVE-18863 > Project: Hive > Issue Type: Bug > Components: UDF >Reporter: Tim Armstrong >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Minor > Labels: newbie > Attachments: HIVE-18863.1.patch, HIVE-18863.2.patch > > > {noformat} > > select trunc('millennium', cast('2001-02-16 20:38:40' as timestamp)) > FAILED: SemanticException Line 0:-1 Argument type mismatch ''2001-02-16 > 20:38:40'': trunk() only takes STRING/CHAR/VARCHAR types as second argument, > got TIMESTAMP > {noformat} > I saw this on a derivative of Hive 1.1.0 (cdh5.15.0), but the string still > seems to be present on master: > https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java#L262 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18863) trunc() calls itself trunk() in an error message
[ https://issues.apache.org/jira/browse/HIVE-18863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399350#comment-16399350 ] Bharathkrishna Guruvayoor Murali commented on HIVE-18863: - Added new patch updating the commit message > trunc() calls itself trunk() in an error message > > > Key: HIVE-18863 > URL: https://issues.apache.org/jira/browse/HIVE-18863 > Project: Hive > Issue Type: Bug > Components: UDF >Reporter: Tim Armstrong >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Minor > Labels: newbie > Attachments: HIVE-18863.1.patch, HIVE-18863.2.patch > > > {noformat} > > select trunc('millennium', cast('2001-02-16 20:38:40' as timestamp)) > FAILED: SemanticException Line 0:-1 Argument type mismatch ''2001-02-16 > 20:38:40'': trunk() only takes STRING/CHAR/VARCHAR types as second argument, > got TIMESTAMP > {noformat} > I saw this on a derivative of Hive 1.1.0 (cdh5.15.0), but the string still > seems to be present on master: > https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java#L262 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18952) Tez session disconnect and reconnect on HS2 HA failover
[ https://issues.apache.org/jira/browse/HIVE-18952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-18952: --- Assignee: Sergey Shelukhin > Tez session disconnect and reconnect on HS2 HA failover > --- > > Key: HIVE-18952 > URL: https://issues.apache.org/jira/browse/HIVE-18952 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Sergey Shelukhin >Priority: Major > > Now that TEZ-3892 is committed, HIVE-18281 can make use of tez session > disconnect and reconnect on HA failover. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18863) trunc() calls itself trunk() in an error message
[ https://issues.apache.org/jira/browse/HIVE-18863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-18863: Attachment: (was: HIVE-18863.1.patch) > trunc() calls itself trunk() in an error message > > > Key: HIVE-18863 > URL: https://issues.apache.org/jira/browse/HIVE-18863 > Project: Hive > Issue Type: Bug > Components: UDF >Reporter: Tim Armstrong >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Minor > Labels: newbie > Attachments: HIVE-18863.1.patch > > > {noformat} > > select trunc('millennium', cast('2001-02-16 20:38:40' as timestamp)) > FAILED: SemanticException Line 0:-1 Argument type mismatch ''2001-02-16 > 20:38:40'': trunk() only takes STRING/CHAR/VARCHAR types as second argument, > got TIMESTAMP > {noformat} > I saw this on a derivative of Hive 1.1.0 (cdh5.15.0), but the string still > seems to be present on master: > https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java#L262 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18863) trunc() calls itself trunk() in an error message
[ https://issues.apache.org/jira/browse/HIVE-18863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-18863: Attachment: HIVE-18863.1.patch > trunc() calls itself trunk() in an error message > > > Key: HIVE-18863 > URL: https://issues.apache.org/jira/browse/HIVE-18863 > Project: Hive > Issue Type: Bug > Components: UDF >Reporter: Tim Armstrong >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Minor > Labels: newbie > Attachments: HIVE-18863.1.patch > > > {noformat} > > select trunc('millennium', cast('2001-02-16 20:38:40' as timestamp)) > FAILED: SemanticException Line 0:-1 Argument type mismatch ''2001-02-16 > 20:38:40'': trunk() only takes STRING/CHAR/VARCHAR types as second argument, > got TIMESTAMP > {noformat} > I saw this on a derivative of Hive 1.1.0 (cdh5.15.0), but the string still > seems to be present on master: > https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java#L262 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18344) Remove LinkedList from SharedWorkOptimizer.java
[ https://issues.apache.org/jira/browse/HIVE-18344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399221#comment-16399221 ] BELUGA BEHR commented on HIVE-18344: [~stakiar] Patch updated, these test failures cannot be related to my changes :) > Remove LinkedList from SharedWorkOptimizer.java > --- > > Key: HIVE-18344 > URL: https://issues.apache.org/jira/browse/HIVE-18344 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-18344.1.patch, HIVE-18344.2.patch > > > Prefer {{ArrayList}} over {{LinkedList}} especially in this class because the > initial size of the collection is known. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16882) Improvements For Avro SerDe Package
[ https://issues.apache.org/jira/browse/HIVE-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16882: --- Status: Patch Available (was: Open) Made changes based on feedback from review > Improvements For Avro SerDe Package > --- > > Key: HIVE-16882 > URL: https://issues.apache.org/jira/browse/HIVE-16882 > Project: Hive > Issue Type: Improvement > Components: Serializers/Deserializers >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HIVE-16882.1.patch, HIVE-16882.2.patch, > HIVE-16882.3.patch > > > # Use SLF4J parameter DEBUG logging > # Use re-usable libraries where appropriate > # Use enhanced for loops where appropriate > # Fix several minor check-style error > # Small performance enhancements in InstanceCache -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16882) Improvements For Avro SerDe Package
[ https://issues.apache.org/jira/browse/HIVE-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16882: --- Attachment: HIVE-16882.3.patch > Improvements For Avro SerDe Package > --- > > Key: HIVE-16882 > URL: https://issues.apache.org/jira/browse/HIVE-16882 > Project: Hive > Issue Type: Improvement > Components: Serializers/Deserializers >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HIVE-16882.1.patch, HIVE-16882.2.patch, > HIVE-16882.3.patch > > > # Use SLF4J parameter DEBUG logging > # Use re-usable libraries where appropriate > # Use enhanced for loops where appropriate > # Fix several minor check-style error > # Small performance enhancements in InstanceCache -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16882) Improvements For Avro SerDe Package
[ https://issues.apache.org/jira/browse/HIVE-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16882: --- Status: Open (was: Patch Available) > Improvements For Avro SerDe Package > --- > > Key: HIVE-16882 > URL: https://issues.apache.org/jira/browse/HIVE-16882 > Project: Hive > Issue Type: Improvement > Components: Serializers/Deserializers >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HIVE-16882.1.patch, HIVE-16882.2.patch, > HIVE-16882.3.patch > > > # Use SLF4J parameter DEBUG logging > # Use re-usable libraries where appropriate > # Use enhanced for loops where appropriate > # Fix several minor check-style error > # Small performance enhancements in InstanceCache -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18960) Make Materialized view invalidation cache work with catalogs
[ https://issues.apache.org/jira/browse/HIVE-18960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates reassigned HIVE-18960: - > Make Materialized view invalidation cache work with catalogs > > > Key: HIVE-18960 > URL: https://issues.apache.org/jira/browse/HIVE-18960 > Project: Hive > Issue Type: Sub-task > Components: Materialized views, Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > > MaterializationsInvalidationCache needs to be made catalog aware. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18951) Fix the llapdump usage error in llapdump.sh
[ https://issues.apache.org/jira/browse/HIVE-18951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399200#comment-16399200 ] Sergey Shelukhin commented on HIVE-18951: - +1 > Fix the llapdump usage error in llapdump.sh > --- > > Key: HIVE-18951 > URL: https://issues.apache.org/jira/browse/HIVE-18951 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Saijin Huang >Assignee: Saijin Huang >Priority: Minor > Attachments: HIVE-18951.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16858) Acumulo Utils Improvements
[ https://issues.apache.org/jira/browse/HIVE-16858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16858: --- Description: # Use Apache library for copy routine # Use Apache Commons where advantageous # Improve debug logging # Fix some spellcheck validations was: # Use Apache library for copy routine # Use Apache Commons where advantageous # Improve debug logging > Acumulo Utils Improvements > -- > > Key: HIVE-16858 > URL: https://issues.apache.org/jira/browse/HIVE-16858 > Project: Hive > Issue Type: Improvement > Components: Accumulo Storage Handler >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16858.1.patch, HIVE-16858.2.patch > > > # Use Apache library for copy routine > # Use Apache Commons where advantageous > # Improve debug logging > # Fix some spellcheck validations -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16858) Acumulo Utils Improvements
[ https://issues.apache.org/jira/browse/HIVE-16858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16858: --- Status: Patch Available (was: Open) > Acumulo Utils Improvements > -- > > Key: HIVE-16858 > URL: https://issues.apache.org/jira/browse/HIVE-16858 > Project: Hive > Issue Type: Improvement > Components: Accumulo Storage Handler >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16858.1.patch, HIVE-16858.2.patch > > > # Use Apache library for copy routine > # Use Apache Commons where advantageous > # Improve debug logging > # Fix some spellcheck validations -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16858) Acumulo Utils Improvements
[ https://issues.apache.org/jira/browse/HIVE-16858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16858: --- Description: # Use Apache library for copy routine # Use Apache Commons where advantageous # Improve debug logging was: # Use Apache library for copy routine # Improve debug logging > Acumulo Utils Improvements > -- > > Key: HIVE-16858 > URL: https://issues.apache.org/jira/browse/HIVE-16858 > Project: Hive > Issue Type: Improvement > Components: Accumulo Storage Handler >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16858.1.patch, HIVE-16858.2.patch > > > # Use Apache library for copy routine > # Use Apache Commons where advantageous > # Improve debug logging -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16858) Acumulo Utils Improvements
[ https://issues.apache.org/jira/browse/HIVE-16858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16858: --- Attachment: HIVE-16858.2.patch > Acumulo Utils Improvements > -- > > Key: HIVE-16858 > URL: https://issues.apache.org/jira/browse/HIVE-16858 > Project: Hive > Issue Type: Improvement > Components: Accumulo Storage Handler >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16858.1.patch, HIVE-16858.2.patch > > > # Use Apache library for copy routine > # Improve debug logging -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16858) Acumulo Utils Improvements
[ https://issues.apache.org/jira/browse/HIVE-16858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16858: --- Affects Version/s: (was: 2.1.1) Status: Open (was: Patch Available) > Acumulo Utils Improvements > -- > > Key: HIVE-16858 > URL: https://issues.apache.org/jira/browse/HIVE-16858 > Project: Hive > Issue Type: Improvement > Components: Accumulo Storage Handler >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16858.1.patch > > > # Use Apache library for copy routine > # Improve debug logging -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16858) Acumulo Utils Improvements
[ https://issues.apache.org/jira/browse/HIVE-16858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16858: --- Component/s: Accumulo Storage Handler > Acumulo Utils Improvements > -- > > Key: HIVE-16858 > URL: https://issues.apache.org/jira/browse/HIVE-16858 > Project: Hive > Issue Type: Improvement > Components: Accumulo Storage Handler >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16858.1.patch > > > # Use Apache library for copy routine > # Improve debug logging -- This message was sent by Atlassian JIRA (v7.6.3#76005)
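The HIVE-16858 patches are not reproduced in this thread, but the first item, "Use Apache library for copy routine", describes a common cleanup: replacing a hand-rolled stream-copy loop with a well-tested library call. A minimal sketch of that shape is below; the class and method names are illustrative, and the JDK's `InputStream.transferTo` stands in for the Apache Commons utility (e.g. `IOUtils.copyLarge`) so the example has no external dependency.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;

/** Illustrative sketch of the "use a library for the copy routine" cleanup;
 *  not the actual HIVE-16858 code. */
public class CopyUtil {
    /** Before: a manual loop where buffer sizing and EOF handling are easy to get wrong. */
    static long copyByHand(InputStream in, OutputStream out) {
        try {
            byte[] buf = new byte[8192];
            long total = 0;
            for (int n; (n = in.read(buf)) != -1; ) {
                out.write(buf, 0, n);
                total += n;
            }
            return total;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** After: one library call (commons-io would use IOUtils.copyLarge(in, out)). */
    static long copyWithLibrary(InputStream in, OutputStream out) {
        try {
            return in.transferTo(out);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Both forms produce identical output; the library call is simply shorter and already debugged, which is the point of the cleanup.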
[jira] [Commented] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399153#comment-16399153 ] Prasanth Jayachandran commented on HIVE-18959: -- lgtm, +1 > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18959.patch > > > The current Druid-Kerberos-Http client is using an external single threaded > pool to handle retry auth calls (eg when a cookie expire or other transient > auth issues). > First, this is not buying us anything since all the Druid Task is executed as > one synchronous task. > Second, this can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread with an exception like: > {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread > Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code} > Thus to fix this we should avoid using an external thread pool and handle > retrying in a synchronous way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
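The fix direction described above — drop the external single-thread pool and "handle retrying in a synchronous way" — can be sketched as a plain retry loop on the calling thread, so an auth failure surfaces to the caller instead of escaping as an uncaught exception on a pool thread and tearing down the LLAP daemon. The names below are illustrative, not Hive's actual `KerberosHttpClient` API.

```java
import java.util.concurrent.Callable;

/** Sketch of synchronous retry replacing a callback-on-thread-pool retry.
 *  Illustrative only; not the HIVE-18959 patch itself. */
public class SyncRetry {
    /**
     * Runs the call on the current thread, retrying up to maxRetries extra
     * times on failure (e.g. an expired auth cookie). Any final failure is
     * thrown to the caller rather than killing a background thread.
     */
    public static <T> T callWithRetry(Callable<T> call, int maxRetries) {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;  // re-authenticate / refresh credentials here, then loop
            }
        }
        throw new RuntimeException("retries exhausted", last);
    }
}
```

Because the retry happens on the submitting thread, the single synchronous Druid task loses nothing, and there is no second thread whose uncaught exception handler can shut the daemon down.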
[jira] [Commented] (HIVE-18344) Remove LinkedList from SharedWorkOptimizer.java
[ https://issues.apache.org/jira/browse/HIVE-18344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399141#comment-16399141 ] Hive QA commented on HIVE-18344: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12914508/HIVE-18344.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 110 failed/errored test(s), 13402 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=92) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=94)
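The motivation behind cleanups like HIVE-18344 is that `LinkedList` makes indexed access O(n), so loops that call `get(i)` degrade to O(n²), while `ArrayList` keeps them O(n). A self-contained micro-illustration (not `SharedWorkOptimizer` code):

```java
import java.util.List;

/** Why ArrayList is preferred over LinkedList for index-based iteration:
 *  get(i) is O(1) on ArrayList but O(i) on LinkedList. Illustrative only. */
public class ListChoice {
    static long sumByIndex(List<Integer> xs) {
        long s = 0;
        for (int i = 0; i < xs.size(); i++) {
            s += xs.get(i);  // constant time for ArrayList, a traversal for LinkedList
        }
        return s;
    }
}
```

Both list types return the same result; the difference is purely asymptotic cost, which is why such patches are behavior-preserving.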
[jira] [Commented] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399137#comment-16399137 ] slim bouguerra commented on HIVE-18959: --- [~ashutoshc] / [~prasanth_j] can you please check this out Thanks > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18959.patch > > > The current Druid-Kerberos-Http client is using an external single threaded > pool to handle retry auth calls (eg when a cookie expire or other transient > auth issues). > First, this is not buying us anything since all the Druid Task is executed as > one synchronous task. > Second, this can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread with an exception like: > {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread > Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code} > Thus to fix this we should avoid using an external thread pool and handle > retrying in a synchronous way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-18959 started by slim bouguerra. - > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18959.patch > > > The current Druid-Kerberos-Http client is using an external single threaded > pool to handle retry auth calls (eg when a cookie expire or other transient > auth issues). > First, this is not buying us anything since all the Druid Task is executed as > one synchronous task. > Second, this can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread with an exception like: > {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread > Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code} > Thus to fix this we should avoid using an external thread pool and handle > retrying in a synchronous way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra updated HIVE-18959: -- Attachment: HIVE-18959.patch > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18959.patch > > > The current Druid-Kerberos-Http client is using an external single threaded > pool to handle retry auth calls (eg when a cookie expire or other transient > auth issues). > First, this is not buying us anything since all the Druid Task is executed as > one synchronous task. > Second, this can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread with an exception like: > {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread > Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code} > Thus to fix this we should avoid using an external thread pool and handle > retrying in a synchronous way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work stopped] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-18959 stopped by slim bouguerra. - > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18959.patch > > > The current Druid-Kerberos-Http client is using an external single threaded > pool to handle retry auth calls (eg when a cookie expire or other transient > auth issues). > First, this is not buying us anything since all the Druid Task is executed as > one synchronous task. > Second, this can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread with an exception like: > {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread > Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code} > Thus to fix this we should avoid using an external thread pool and handle > retrying in a synchronous way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra updated HIVE-18959: -- Status: Patch Available (was: In Progress) > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18959.patch > > > The current Druid-Kerberos-Http client is using an external single threaded > pool to handle retry auth calls (eg when a cookie expire or other transient > auth issues). > First, this is not buying us anything since all the Druid Task is executed as > one synchronous task. > Second, this can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread with an exception like: > {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread > Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code} > Thus to fix this we should avoid using an external thread pool and handle > retrying in a synchronous way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-18959 started by slim bouguerra. - > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > > The current Druid-Kerberos-Http client is using an external single threaded > pool to handle retry auth calls (eg when a cookie expire or other transient > auth issues). > First, this is not buying us anything since all the Druid Task is executed as > one synchronous task. > Second, this can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread with an exception like: > {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread > Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code} > Thus to fix this we should avoid using an external thread pool and handle > retrying in a synchronous way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16861) MapredParquetOutputFormat - Save Some Array Allocations
[ https://issues.apache.org/jira/browse/HIVE-16861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399118#comment-16399118 ] Aihua Xu commented on HIVE-16861: - +1. > MapredParquetOutputFormat - Save Some Array Allocations > --- > > Key: HIVE-16861 > URL: https://issues.apache.org/jira/browse/HIVE-16861 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16861.1.patch, HIVE-16861.2.patch > > > Remove superfluous array allocations from {{MapredParquetOutputFormat}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
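The HIVE-16861 patch itself is not shown in this thread, but "save some array allocations" typically means hoisting per-call allocations out of a hot path, for example sharing one immutable empty array instead of allocating a fresh `new String[0]` on every call. A hypothetical sketch of that pattern (names are illustrative, not `MapredParquetOutputFormat`'s):

```java
/** Illustrative "save an allocation" pattern; not the actual HIVE-16861 change. */
public class ColumnNames {
    // Allocated once; an empty array is immutable, so it is safe to share.
    private static final String[] EMPTY = new String[0];

    /** Parses a comma-separated column list, avoiding a per-call allocation
     *  for the common empty case. */
    public static String[] parse(String commaSeparated) {
        if (commaSeparated == null || commaSeparated.isEmpty()) {
            return EMPTY;  // shared instance: zero-allocation fast path
        }
        return commaSeparated.split(",");
    }
}
```

In a record-at-a-time output format, even a small per-record allocation like this adds measurable GC pressure, which is why such trivial-priority patches are still worthwhile.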
[jira] [Updated] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra updated HIVE-18959: -- Description: The current Druid-Kerberos-Http client is using an external single threaded pool to handle retry auth calls (eg when a cookie expire or other transient auth issues). First, this is not buying us anything since all the Druid Task is executed as one synchronous task. Second, this can cause a major issue if an exception occurs that leads to shutting down the LLAP main thread with an exception like: {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code} Thus to fix this we should avoid using an external thread pool and handle retrying in a synchronous way. was: The current Druid-Kerberos-Http client is using an external single threaded pool to handle retry auth calls (eg when a cookie expire or other transient auth issues). First, this is not buying us anything since all the Druid Task is executed as one synchronous task. Second, this can cause a major issue if an exception occurs that leads to shutting down the LLAP main thread with excpetion like {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code} Thus to fix this we should avoid using an external thread pool and handle retrying in a synchronous way. > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > > The current Druid-Kerberos-Http client is using an external single threaded > pool to handle retry auth calls (eg when a cookie expire or other transient > auth issues). 
> First, this is not buying us anything since all the Druid Task is executed as > one synchronous task. > Second, this can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread with an exception like: > {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread > Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code} > Thus to fix this we should avoid using an external thread pool and handle > retrying in a synchronous way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra updated HIVE-18959: -- Description: The current Druid-Kerberos-Http client is using an external single threaded pool to handle retry auth calls (eg when a cookie expire or other transient auth issues). First, this is not buying us anything since all the Druid Task is executed as one synchronous task. Second, this can cause a major issue if an exception occurs that leads to shutting down the LLAP main thread with excpetion like {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code} Thus to fix this we should avoid using an external thread pool and handle retrying in a synchronous way. was: The current Druid-Kerberos-Http client is using an external single threaded pool to handle retry auth calls (eg when a cookie expire or other transient auth issues). First, this is not buying us anything since all the Druid Task is executed as one synchronous task. Second, this can cause a major issue if an exception occurs that leads to shutting down the LLAP main thread with excpetion like {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code}. Thus to fix this we should avoid using an external thread pool and handle retrying in a synchronous way. > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > > The current Druid-Kerberos-Http client is using an external single threaded > pool to handle retry auth calls (eg when a cookie expire or other transient > auth issues). 
> First, this is not buying us anything since all the Druid Task is executed as > one synchronous task. > Second, this can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread with excpetion like {code} > org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread > Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code} > Thus to fix this we should avoid using an external thread pool and handle > retrying in a synchronous way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra updated HIVE-18959: -- Description: The current Druid-Kerberos-Http client is using an external single threaded pool to handle retry auth calls (eg when a cookie expire or other transient auth issues). First, this is not buying us anything since all the Druid Task is executed as one synchronous task. Second, this can cause a major issue if an exception occurs that leads to shutting down the LLAP main thread with excpetion like {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code}. Thus to fix this we should avoid using an external thread pool and handle retrying in a synchronous way. was: The current Druid-Kerberos-Http client is using an external single threaded pool to handle retry auth calls (eg when a cookie expire or other transient auth issues). First, this is not buying us anything since all the Druid Task is executed as one synchronous task. Second, this can cause a major issue if an exception occurs that leads to shutting down the LLAP main thread with excpetion like {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[KerberosHttpClient{code}. Thus to fix this we should avoid using an external thread pool and handle retrying in a synchronous way. > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > > The current Druid-Kerberos-Http client is using an external single threaded > pool to handle retry auth calls (eg when a cookie expire or other transient > auth issues). 
> First, this is not buying us anything since all the Druid Task is executed as > one synchronous task. > Second, this can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread with excpetion like {code} > org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread > Thread[KerberosHttpClient... threw an Exception. Shutting down now...{code}. > Thus to fix this we should avoid using an external thread pool and handle > retrying in a synchronous way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra updated HIVE-18959: -- Description: The current Druid-Kerberos-Http client is using an external single threaded pool to handle retry auth calls (eg when a cookie expire or other transient auth issues). First, this is not buying us anything since all the Druid Task is executed as one synchronous task. Second, this can cause a major issue if an exception occurs that leads to shutting down the LLAP main thread with excpetion like {code} org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[KerberosHttpClient{code}. Thus to fix this we should avoid using an external thread pool and handle retrying in a synchronous way. was: The current Druid-Kerberos-Http client is using an external single threaded pool to handle retry auth calls (eg when a cookie expire or other transient auth issues). First, this is not buying us anything since all the Druid Task is executed as one synchronous task. Second, this can cause a major issue if an exception occurs that leads to shutting down the LLAP main thread with excpetion like . Thus to fix this we should avoid using an external thread pool and handle retrying in a synchronous way. > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > > The current Druid-Kerberos-Http client is using an external single threaded > pool to handle retry auth calls (eg when a cookie expire or other transient > auth issues). > First, this is not buying us anything since all the Druid Task is executed as > one synchronous task. 
> Second, this can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread with excpetion like {code} > org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread > Thread[KerberosHttpClient{code}. > Thus to fix this we should avoid using an external thread pool and handle > retrying in a synchronous way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra updated HIVE-18959: -- Description: The current Druid-Kerberos-Http client is using an external single threaded pool to handle retry auth calls (eg when a cookie expire or other transient auth issues). First, this is not buying us anything since all the Druid Task is executed as one synchronous task. Second, this can cause a major issue if an exception occurs that leads to shutting down the LLAP main thread with excpetion like . Thus to fix this we should avoid using an external thread pool and handle retrying in a synchronous way. was: The current Druid-Kerberos-Http client is using an external single threaded pool to handle retry auth calls (eg when a cookie expire or other transient auth issues). First, this is not buying us anything since all the Druid Task is executed as one synchronous task. Second, this can cause a major issue if an exception occurs that leads to shutting down the LLAP main thread. Thus to fix this we should avoid using an external thread pool and handle retrying in a synchronous way. > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > > The current Druid-Kerberos-Http client is using an external single threaded > pool to handle retry auth calls (eg when a cookie expire or other transient > auth issues). > First, this is not buying us anything since all the Druid Task is executed as > one synchronous task. > Second, this can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread with excpetion like . > Thus to fix this we should avoid using an external thread pool and handle > retrying in a synchronous way. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399090#comment-16399090 ] slim bouguerra commented on HIVE-18959: --- possible exception stack {code} ERROR [KerberosHttpClient-0 ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[KerberosHttpClient-0,5,main] threw an Exception. Shutting down now... java.lang.reflect.UndeclaredThrowableException: nullat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1887) ~[hadoop-common-2.7.3.2.6.5.0-88.jar:?] at org.apache.hadoop.hive.druid.security.KerberosHttpClient.inner_go(KerberosHttpClient.java:105) ~[hive-druid-handler-2.1.0.2.6.5.0-88.jar:2.1.0.2.6.5.0-88] at org.apache.hadoop.hive.druid.security.KerberosHttpClient.access$100(KerberosHttpClient.java:50) ~[hive-druid-handler-2.1.0.2.6.5.0-88.jar:2.1.0.2.6.5.0-88] at org.apache.hadoop.hive.druid.security.KerberosHttpClient$2.onSuccess(KerberosHttpClient.java:144) ~[hive-druid-handler-2.1.0.2.6.5.0-88.jar:2.1.0.2.6.5.0-88] at org.apache.hadoop.hive.druid.security.KerberosHttpClient$2.onSuccess(KerberosHttpClient.java:134) ~[hive-druid-handler-2.1.0.2.6.5.0-88.jar:2.1.0.2.6.5.0-88] at org.apache.hive.druid.com.google.common.util.concurrent.Futures$4.run(Futures.java:1181) ~[hive-druid-handler-2.1.0.2.6.5.0-88.jar:2.1.0.2.6.5.0-88] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_151] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_151] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151] Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt) at org.apache.hadoop.hive.druid.security.DruidKerberosUtil.kerberosChallenge(DruidKerberosUtil.java:82) ~[hive-druid-handler-2.1.0.2.6.5.0-88.jar:2.1.0.2.6.5.0-88] at 
org.apache.hadoop.hive.druid.security.KerberosHttpClient$1.run(KerberosHttpClient.java:110) ~[hive-druid-handler-2.1.0.2.6.5.0-88.jar:2.1.0.2.6.5.0-88] at org.apache.hadoop.hive.druid.security.KerberosHttpClient$1.run(KerberosHttpClient.java:106) ~[hive-druid-handler-2.1.0.2.6.5.0-88.jar:2.1.0.2.6.5.0-88] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_151] at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_151] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869) ~[hadoop-common-2.7.3.2.6.5.0-88.jar:?] ... 8 more Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt) at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147) ~[?:1.8.0_151] at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122) ~[?:1.8.0_151] at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187) ~[?:1.8.0_151] at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224) ~[?:1.8.0_151] at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212) ~[?:1.8.0_151] at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179) ~[?:1.8.0_151] at org.apache.hadoop.hive.druid.security.DruidKerberosUtil.kerberosChallenge(DruidKerberosUtil.java:75) ~[hive-druid-handler-2.1.0.2.6.5.0-88.jar:2.1.0.2.6.5.0-88] at org.apache.hadoop.hive.druid.security.KerberosHttpClient$1.run(KerberosHttpClient.java:110) ~[hive-druid-handler-2.1.0.2.6.5.0-88.jar:2.1.0.2.6.5.0-88] at org.apache.hadoop.hive.druid.security.KerberosHttpClient$1.run(KerberosHttpClient.java:106) ~[hive-druid-handler-2.1.0.2.6.5.0-88.jar:2.1.0.2.6.5.0-88] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_151] at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_151] at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869) ~[hadoop-common-2.7.3.2.6.5.0-88.jar:?] ... 8 more {code} > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > > The current Druid-Kerberos-Http client is using an external single threaded > pool to handle retry auth calls (eg when a cookie expire or other transient >
[jira] [Updated] (HIVE-18693) Snapshot Isolation does not work for Micromanaged table when an insert transaction is aborted
[ https://issues.apache.org/jira/browse/HIVE-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Yeom updated HIVE-18693: -- Attachment: HIVE-18693.06.patch > Snapshot Isolation does not work for Micromanaged table when a insert > transaction is aborted > > > Key: HIVE-18693 > URL: https://issues.apache.org/jira/browse/HIVE-18693 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Attachments: HIVE-18693.01.patch, HIVE-18693.02.patch, > HIVE-18693.03.patch, HIVE-18693.04.patch, HIVE-18693.05.patch, > HIVE-18693.06.patch > > > TestTxnCommands2#writeBetweenWorkerAndCleaner with minor > changes (changing delete command to insert command) fails on MM table. > Specifically the last SELECT commands returns wrong results. > But this test works fine with full ACID table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18959) Avoid creating extra pool of threads within LLAP
[ https://issues.apache.org/jira/browse/HIVE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra reassigned HIVE-18959: - > Avoid creating extra pool of threads within LLAP > > > Key: HIVE-18959 > URL: https://issues.apache.org/jira/browse/HIVE-18959 > Project: Hive > Issue Type: Task > Components: Druid integration > Environment: Kerberos Cluster >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > > The current Druid-Kerberos-Http client is using an external single-threaded > pool to handle auth retry calls (e.g. when a cookie expires or other transient > auth issues occur). > First, this is not buying us anything since the whole Druid task is executed as > one synchronous task. > Second, this can cause a major issue if an exception occurs that leads to > shutting down the LLAP main thread. > Thus, to fix this, we should avoid using an external thread pool and handle > retrying in a synchronous way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
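The synchronous retry the ticket proposes can be sketched as follows. This is a minimal, hypothetical illustration, not Hive's Druid client: `doRequest` and the simulated transient failure are made up for the example.

```java
// Hypothetical sketch: retry a transient auth failure on the caller's own
// thread instead of handing the retry off to a separate single-threaded pool.
public final class SyncRetry {
    // Simulated request: fails with a transient auth error on the first two attempts.
    static String doRequest(int attempt) {
        if (attempt < 2) throw new IllegalStateException("cookie expired");
        return "200 OK";
    }

    public static String requestWithRetry(int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return doRequest(attempt);   // runs synchronously -- no extra pool
            } catch (IllegalStateException e) {
                last = e;                    // transient auth issue: retry in place
            }
        }
        throw last;                          // exhausted retries: surface to the caller
    }

    public static void main(String[] args) {
        System.out.println(requestWithRetry(3));
    }
}
```

The key point is that the retry loop runs on the caller's thread, so a failure surfaces as an ordinary exception to the submitting task instead of escaping on a pool thread and potentially taking down the LLAP daemon.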
[jira] [Updated] (HIVE-16861) MapredParquetOutputFormat - Save Some Array Allocations
[ https://issues.apache.org/jira/browse/HIVE-16861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16861: --- Status: Patch Available (was: Open) Updated patch to re-base version > MapredParquetOutputFormat - Save Some Array Allocations > --- > > Key: HIVE-16861 > URL: https://issues.apache.org/jira/browse/HIVE-16861 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16861.1.patch, HIVE-16861.2.patch > > > Remove superfluous array allocations from {{MapredParquetOutputFormat}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16861) MapredParquetOutputFormat - Save Some Array Allocations
[ https://issues.apache.org/jira/browse/HIVE-16861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16861: --- Status: Open (was: Patch Available) > MapredParquetOutputFormat - Save Some Array Allocations > --- > > Key: HIVE-16861 > URL: https://issues.apache.org/jira/browse/HIVE-16861 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16861.1.patch, HIVE-16861.2.patch > > > Remove superfluous array allocations from {{MapredParquetOutputFormat}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16861) MapredParquetOutputFormat - Save Some Array Allocations
[ https://issues.apache.org/jira/browse/HIVE-16861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16861: --- Affects Version/s: (was: 2.1.1) > MapredParquetOutputFormat - Save Some Array Allocations > --- > > Key: HIVE-16861 > URL: https://issues.apache.org/jira/browse/HIVE-16861 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16861.1.patch, HIVE-16861.2.patch > > > Remove superfluous array allocations from {{MapredParquetOutputFormat}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16861) MapredParquetOutputFormat - Save Some Array Allocations
[ https://issues.apache.org/jira/browse/HIVE-16861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16861: --- Attachment: HIVE-16861.2.patch > MapredParquetOutputFormat - Save Some Array Allocations > --- > > Key: HIVE-16861 > URL: https://issues.apache.org/jira/browse/HIVE-16861 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16861.1.patch, HIVE-16861.2.patch > > > Remove superfluous array allocations from {{MapredParquetOutputFormat}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16882) Improvements For Avro SerDe Package
[ https://issues.apache.org/jira/browse/HIVE-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399073#comment-16399073 ] Aihua Xu commented on HIVE-16882: - [~belugabehr] Sorry I missed the previous ping. Thanks for making these changes. +1. > Improvements For Avro SerDe Package > --- > > Key: HIVE-16882 > URL: https://issues.apache.org/jira/browse/HIVE-16882 > Project: Hive > Issue Type: Improvement > Components: Serializers/Deserializers >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HIVE-16882.1.patch, HIVE-16882.2.patch > > > # Use SLF4J parameterized DEBUG logging > # Use re-usable libraries where appropriate > # Use enhanced for loops where appropriate > # Fix several minor check-style errors > # Small performance enhancements in InstanceCache -- This message was sent by Atlassian JIRA (v7.6.3#76005)
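Item 1 in the list above (SLF4J parameterized DEBUG logging) is about deferring message construction. The toy logger below is a stand-in, not SLF4J itself; it only illustrates why `LOG.debug("... {}", arg)` is cheaper than eager string concatenation when DEBUG is disabled.

```java
// Illustrative stand-in for SLF4J's {} placeholders: the message string is
// only assembled if the level is actually enabled, so a disabled DEBUG call
// costs almost nothing.
public class LazyLog {
    static boolean debugEnabled = false;
    static int messagesBuilt = 0;

    static void debug(String template, Object... args) {
        if (!debugEnabled) return;           // cheap early exit; args never formatted
        messagesBuilt++;
        String msg = template;
        for (Object a : args) {
            msg = msg.replaceFirst("\\{\\}", String.valueOf(a));
        }
        System.out.println(msg);
    }

    public static void main(String[] args) {
        debug("Processed {} records for {}", 1000, "avro_table"); // DEBUG off: no work done
        debugEnabled = true;
        debug("Processed {} records for {}", 1000, "avro_table"); // DEBUG on: message built and printed
    }
}
```

With real SLF4J, the equivalent is replacing `LOG.debug("Processed " + n + " records")` with `LOG.debug("Processed {} records", n)`.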
[jira] [Updated] (HIVE-16879) Improve Cache Key
[ https://issues.apache.org/jira/browse/HIVE-16879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16879: --- Status: Patch Available (was: Open) Changed patch for stand-alone metastore. Created intern strings on cache INSERT so that we don't maintain many copies of the same strings within the cache. > Improve Cache Key > - > > Key: HIVE-16879 > URL: https://issues.apache.org/jira/browse/HIVE-16879 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16879.1.patch, HIVE-16879.2.patch > > > Improve cache key for cache implemented in > {{org.apache.hadoop.hive.metastore.AggregateStatsCache}}. > # Cache some of the key components themselves (db name, table name) using > the {{String}} intern method to conserve memory for repeated keys, to improve > the {{equals}} method as now references can be used for equality, and hashcodes > will be cached as well, as per the {{String}} class hashCode method. > # Upgrade _debug_ logging to not generate text unless required > # Change _equals_ method to check first for the item most likely to be > different, column name -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16879) Improve Cache Key
[ https://issues.apache.org/jira/browse/HIVE-16879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16879: --- Attachment: HIVE-16879.2.patch > Improve Cache Key > - > > Key: HIVE-16879 > URL: https://issues.apache.org/jira/browse/HIVE-16879 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16879.1.patch, HIVE-16879.2.patch > > > Improve cache key for cache implemented in > {{org.apache.hadoop.hive.metastore.AggregateStatsCache}}. > # Cache some of the key components themselves (db name, table name) using > the {{String}} intern method to conserve memory for repeated keys, to improve > the {{equals}} method as now references can be used for equality, and hashcodes > will be cached as well, as per the {{String}} class hashCode method. > # Upgrade _debug_ logging to not generate text unless required > # Change _equals_ method to check first for the item most likely to be > different, column name -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16879) Improve Cache Key
[ https://issues.apache.org/jira/browse/HIVE-16879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16879: --- Attachment: (was: HIVE-16879.1.patch) > Improve Cache Key > - > > Key: HIVE-16879 > URL: https://issues.apache.org/jira/browse/HIVE-16879 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16879.1.patch, HIVE-16879.2.patch > > > Improve cache key for cache implemented in > {{org.apache.hadoop.hive.metastore.AggregateStatsCache}}. > # Cache some of the key components themselves (db name, table name) using > the {{String}} intern method to conserve memory for repeated keys, to improve > the {{equals}} method as now references can be used for equality, and hashcodes > will be cached as well, as per the {{String}} class hashCode method. > # Upgrade _debug_ logging to not generate text unless required > # Change _equals_ method to check first for the item most likely to be > different, column name -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16879) Improve Cache Key
[ https://issues.apache.org/jira/browse/HIVE-16879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16879: --- Attachment: HIVE-16879.1.patch > Improve Cache Key > - > > Key: HIVE-16879 > URL: https://issues.apache.org/jira/browse/HIVE-16879 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16879.1.patch, HIVE-16879.2.patch > > > Improve cache key for cache implemented in > {{org.apache.hadoop.hive.metastore.AggregateStatsCache}}. > # Cache some of the key components themselves (db name, table name) using > the {{String}} intern method to conserve memory for repeated keys, to improve > the {{equals}} method as now references can be used for equality, and hashcodes > will be cached as well, as per the {{String}} class hashCode method. > # Upgrade _debug_ logging to not generate text unless required > # Change _equals_ method to check first for the item most likely to be > different, column name -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16879) Improve Cache Key
[ https://issues.apache.org/jira/browse/HIVE-16879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16879: --- Affects Version/s: (was: 2.1.1) > Improve Cache Key > - > > Key: HIVE-16879 > URL: https://issues.apache.org/jira/browse/HIVE-16879 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16879.1.patch > > > Improve cache key for cache implemented in > {{org.apache.hadoop.hive.metastore.AggregateStatsCache}}. > # Cache some of the key components themselves (db name, table name) using > the {{String}} intern method to conserve memory for repeated keys, to improve > the {{equals}} method as now references can be used for equality, and hashcodes > will be cached as well, as per the {{String}} class hashCode method. > # Upgrade _debug_ logging to not generate text unless required > # Change _equals_ method to check first for the item most likely to be > different, column name -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16879) Improve Cache Key
[ https://issues.apache.org/jira/browse/HIVE-16879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-16879: --- Status: Open (was: Patch Available) > Improve Cache Key > - > > Key: HIVE-16879 > URL: https://issues.apache.org/jira/browse/HIVE-16879 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 2.1.1, 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16879.1.patch > > > Improve cache key for cache implemented in > {{org.apache.hadoop.hive.metastore.AggregateStatsCache}}. > # Cache some of the key components themselves (db name, table name) using > the {{String}} intern method to conserve memory for repeated keys, to improve > the {{equals}} method as now references can be used for equality, and hashcodes > will be cached as well, as per the {{String}} class hashCode method. > # Upgrade _debug_ logging to not generate text unless required > # Change _equals_ method to check first for the item most likely to be > different, column name -- This message was sent by Atlassian JIRA (v7.6.3#76005)
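The interning idea behind HIVE-16879 can be shown in a few lines. This is a generic illustration of `String.intern()`, not the actual AggregateStatsCache key class: interned strings share one canonical instance per value, so equals() can short-circuit on reference equality and String's cached hashCode is reused.

```java
// Demonstrates why interning cache-key components (db name, table name)
// helps: equal contents map to one pooled String instance.
public class InternDemo {
    static boolean sameCanonical(String a, String b) {
        // intern() returns the single pooled instance for equal contents,
        // so == (reference equality) succeeds for equal interned strings.
        return a.intern() == b.intern();
    }

    public static void main(String[] args) {
        // new String(...) forces two distinct heap objects with equal contents.
        System.out.println(sameCanonical(new String("default"), new String("default"))); // true
    }
}
```

In a cache key's equals(), comparing interned fields with == is a cheap first check before falling back to full content comparison.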
[jira] [Commented] (HIVE-18344) Remove LinkedList from SharedWorkOptimizer.java
[ https://issues.apache.org/jira/browse/HIVE-18344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399030#comment-16399030 ] Hive QA commented on HIVE-18344: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 18s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 7s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9630/dev-support/hive-personality.sh | | git revision | master / 57a1ec2 | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9630/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9630/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Remove LinkedList from SharedWorkOptimizer.java > --- > > Key: HIVE-18344 > URL: https://issues.apache.org/jira/browse/HIVE-18344 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-18344.1.patch, HIVE-18344.2.patch > > > Prefer {{ArrayList}} over {{LinkedList}} especially in this class because the > initial size of the collection is known. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
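The change HIVE-18344 describes can be sketched as follows (a generic illustration, not the SharedWorkOptimizer code itself): when the element count is known up front, a pre-sized ArrayList avoids both LinkedList's per-node object overhead and ArrayList's incremental grow-and-copy re-allocations.

```java
import java.util.ArrayList;
import java.util.List;

// Pre-sizing an ArrayList when the final size is known: one backing-array
// allocation instead of repeated resizes or n linked-list nodes.
public class PreSized {
    static List<Integer> build(int knownSize) {
        List<Integer> ops = new ArrayList<>(knownSize); // single allocation of the backing array
        for (int i = 0; i < knownSize; i++) {
            ops.add(i);                                 // never triggers a grow-and-copy
        }
        return ops;
    }

    public static void main(String[] args) {
        System.out.println(build(10_000).size()); // 10000
    }
}
```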
[jira] [Commented] (HIVE-18941) HMS non-transactional listener may be called in transactional context
[ https://issues.apache.org/jira/browse/HIVE-18941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398984#comment-16398984 ] Sergio Peña commented on HIVE-18941: [~akolb] and for the person who will work on this issue: there is logic in the MetaStoreListenerNotifier.notifyEvent() methods that adds a flag to the listener response parameters specifying whether the listener is executed inside an active transaction or not. This was very useful for Sentry to detect whether some non-listener calls were indeed made inside a current transaction, and to ignore them if they were. The logic is like this: {noformat} if (ms != null) { event.putParameter(HIVE_METASTORE_TRANSACTION_ACTIVE, Boolean.toString(ms.isActiveTransaction())); } {noformat} See the code: [https://github.com/apache/hive/blob/master/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreListenerNotifier.java#L291] > HMS non-transactional listener may be called in transactional context > - > > Key: HIVE-18941 > URL: https://issues.apache.org/jira/browse/HIVE-18941 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.0.2, 3.0.0 >Reporter: Alexander Kolbasov >Priority: Major > > When HMS code calls listeners it assumes that they are *not* called as part > of the transaction. This isn't quite true because of nested transactions - > it is quite possible that these listeners are called as part of a bigger > nested transaction. This causes several potential issues: > 1) It changes the assumptions about the context in which these listeners run > 2) It creates possibilities for deadlocks > 3) Some of these listeners may do relatively long operations which may delay > transaction commits. > [~spena] FYI. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
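The listener-side use of that flag can be sketched as below. This is a hedged, illustrative example, not Hive or Sentry code: only the parameter-name constant mirrors the quoted notifier snippet, and its actual string value in Hive may differ.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative non-transactional listener check: read the flag the notifier
// set and skip side effects when the notification fired inside an open
// transaction.
public class TxnFlagCheck {
    static final String HIVE_METASTORE_TRANSACTION_ACTIVE = "HIVE_METASTORE_TRANSACTION_ACTIVE";

    static boolean shouldSkip(Map<String, String> eventParams) {
        // The notifier only sets the flag when it has a RawStore handle
        // (ms != null), so treat a missing flag as "not in a transaction".
        return Boolean.parseBoolean(
                eventParams.getOrDefault(HIVE_METASTORE_TRANSACTION_ACTIVE, "false"));
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        System.out.println(shouldSkip(params));                 // false: flag absent
        params.put(HIVE_METASTORE_TRANSACTION_ACTIVE, "true");
        System.out.println(shouldSkip(params));                 // true: inside a transaction
    }
}
```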
[jira] [Commented] (HIVE-18908) Add support for FULL OUTER JOIN to MapJoin
[ https://issues.apache.org/jira/browse/HIVE-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398969#comment-16398969 ] Hive QA commented on HIVE-18908: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12914506/HIVE-18908.07.patch {color:green}SUCCESS:{color} +1 due to 37 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 55 failed/errored test(s), 13030 tests executed *Failed tests:* {noformat} TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=94)
[jira] [Commented] (HIVE-18908) Add support for FULL OUTER JOIN to MapJoin
[ https://issues.apache.org/jira/browse/HIVE-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398929#comment-16398929 ] Hive QA commented on HIVE-18908: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 30s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 29s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 26s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | 
{color:green} 0m 10s{color} | {color:green} The patch storage-api passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} The patch common passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} The patch serde passed checkstyle {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 31s{color} | {color:red} root: The patch generated 375 new + 4249 unchanged - 176 fixed = 4624 total (was 4425) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} itests/hive-jmh: The patch generated 0 new + 11 unchanged - 6 fixed = 11 total (was 17) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 19s{color} | {color:red} ql: The patch generated 375 new + 3171 unchanged - 170 fixed = 3546 total (was 3341) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 53m 41s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9629/dev-support/hive-personality.sh | | git revision | master / db4fe38 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9629/yetus/diff-checkstyle-root.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9629/yetus/diff-checkstyle-ql.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-9629/yetus/whitespace-eol.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9629/yetus/patch-asflicense-problems.txt | | modules | C: storage-api common serde . itests itests/hive-jmh ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9629/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Add support for FULL OUTER JOIN to MapJoin > -- > > Key: HIVE-18908 >
[jira] [Updated] (HIVE-18034) Improving logging when HoS executors spend lots of time in GC
[ https://issues.apache.org/jira/browse/HIVE-18034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18034: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master, thanks Rui for all the reviews! > Improving logging when HoS executors spend lots of time in GC > - > > Key: HIVE-18034 > URL: https://issues.apache.org/jira/browse/HIVE-18034 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18034.1.patch, HIVE-18034.2.patch, > HIVE-18034.3.patch, HIVE-18034.4.patch, HIVE-18034.6.patch, HIVE-18034.7.patch > > > There are times when Spark will spend lots of time doing GC. The Spark > History UI shows a bunch of red flags when too much time is spent in GC. It > would be nice if those warnings were propagated to Hive. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18957) Upgrade Calcite version to 1.16.0
[ https://issues.apache.org/jira/browse/HIVE-18957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18957: --- Attachment: HIVE-18957.01.patch > Upgrade Calcite version to 1.16.0 > - > > Key: HIVE-18957 > URL: https://issues.apache.org/jira/browse/HIVE-18957 > Project: Hive > Issue Type: Task > Components: CBO >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18957.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18957) Upgrade Calcite version to 1.16.0
[ https://issues.apache.org/jira/browse/HIVE-18957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18957: --- Attachment: (was: HIVE-18957.patch) > Upgrade Calcite version to 1.16.0 > - > > Key: HIVE-18957 > URL: https://issues.apache.org/jira/browse/HIVE-18957 > Project: Hive > Issue Type: Task > Components: CBO >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18957.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)