Does anyone know where I can upload spark-2.2.0-bin-hadoop2-without-hive.tgz

2017-12-04 Thread Zhang, Liyun
Hi all:

Now I am working on 
HIVE-18150 (Upgrade Spark 
Version to 2.2.0). I found that I need to upload 
spark-2.2.0-bin-hadoop2-without-hive.tgz to a public location so that Hive QA can 
download it and test all the qfiles related to Spark. Does anyone know where I can 
upload this spark-2.2.0-bin-hadoop2-without-hive.tgz (nearly 143 MB)? I tried 
Google Drive, but curl failed.


I need to change the URL 
(http://d3jw87u4immizc.cloudfront.net/spark-tarball/spark-${spark.version}-bin-hadoop2-without-hive.tgz)
in $HIVE_SOURCE/itests/pom.xml:

  

  set -x
  /bin/pwd
  BASE_DIR=./target
  HIVE_ROOT=$BASE_DIR/../../../
  DOWNLOAD_DIR=./../thirdparty
  download() {
url=$1;
finalName=$2
tarName=$(basename $url)
rm -rf $BASE_DIR/$finalName
if [[ ! -f $DOWNLOAD_DIR/$tarName ]]
then
 curl -Sso $DOWNLOAD_DIR/$tarName $url
else
  local md5File="$tarName".md5sum
  curl -Sso $DOWNLOAD_DIR/$md5File "$url".md5sum
  cd $DOWNLOAD_DIR
  if type md5sum >/dev/null && ! md5sum -c $md5File; then
curl -Sso $DOWNLOAD_DIR/$tarName $url || return 1
  fi

  cd -
fi
tar -zxf $DOWNLOAD_DIR/$tarName -C $BASE_DIR
mv $BASE_DIR/spark-${spark.version}-bin-hadoop2-without-hive $BASE_DIR/$finalName
  }
  mkdir -p $DOWNLOAD_DIR
  download "http://d3jw87u4immizc.cloudfront.net/spark-tarball/spark-${spark.version}-bin-hadoop2-without-hive.tgz" "spark"
  cp -f $HIVE_ROOT/data/conf/spark/log4j2.properties $BASE_DIR/spark/conf/




Best Regards
Kelly Zhang/Zhang,Liyun



Review Request 64326: HIVE-18208

2017-12-04 Thread Deepak Jaiswal

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/64326/
---

Review request for hive and Jason Dere.


Repository: hive-git


Description
---

SMB Join : Fix the unit tests to run SMB Joins.
Updated tests and result files.


Diffs
-

  ql/src/test/queries/clientpositive/auto_sortmerge_join_1.q a1d5249448 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_10.q e65344dd6d 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_11.q 11499f8eab 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_12.q b512cc5c74 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_13.q 1c868dcd15 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_14.q dd59c74fc0 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_15.q 1480b15488 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_16.q 12ab1fa1d1 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_2.q e77d937991 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_3.q 183f03335a 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_4.q 21f273a17b 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_7.q cf12331e13 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_8.q 5ec4e26d4b 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_9.q f95631f429 
  ql/src/test/queries/clientpositive/bucketsortoptimize_insert_2.q 4a14587857 
  ql/src/test/queries/clientpositive/bucketsortoptimize_insert_6.q ec0c2dc254 
  ql/src/test/queries/clientpositive/bucketsortoptimize_insert_7.q 45635c1209 
  ql/src/test/queries/clientpositive/quotedid_smb.q 25d1f0eee7 
  ql/src/test/queries/clientpositive/smb_cache.q e415e51053 
  ql/src/test/results/clientpositive/auto_sortmerge_join_10.q.out 22ac2a201a 
  ql/src/test/results/clientpositive/auto_sortmerge_join_11.q.out 243a49b45f 
  ql/src/test/results/clientpositive/auto_sortmerge_join_12.q.out 3d0559a47c 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_1.q.out 
36bfac3f4c 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_10.q.out 
b8f10fec67 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_11.q.out 
37d97d2252 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_12.q.out 
655573650b 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_13.q.out 
a6d73097e0 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_14.q.out 
2d03e8cb72 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_15.q.out 
ce41569f49 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_16.q.out 
cb8564fd78 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_2.q.out 
90d362e981 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_3.q.out 
365f63c0ad 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_4.q.out 
8ee44b3493 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_7.q.out 
83d5a968b7 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_8.q.out 
0e0428481b 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_9.q.out 
8bd3d126c1 
  ql/src/test/results/clientpositive/llap/bucketsortoptimize_insert_2.q.out 
b907c2dbd8 
  ql/src/test/results/clientpositive/llap/bucketsortoptimize_insert_6.q.out 
f5f5f91e82 
  ql/src/test/results/clientpositive/llap/bucketsortoptimize_insert_7.q.out 
7b380562ac 
  ql/src/test/results/clientpositive/llap/quotedid_smb.q.out 8e850f50ce 
  ql/src/test/results/clientpositive/llap/smb_cache.q.out 60d4ff0ba0 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_1.q.out 
e6038b857d 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_12.q.out 
ff9a0f4fa4 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_14.q.out 
8c0d506b26 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_15.q.out 
b005bda331 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_16.q.out 
cb8564fd78 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_2.q.out 
025d0d29c5 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_3.q.out 
3ad950a107 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_4.q.out 
60437ec56d 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_7.q.out 
16ecabe05d 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_8.q.out 
e180471dcb 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_9.q.out 
4d0476f9ee 
  ql/src/test/results/clientpositive/spark/bucketsortoptimize_insert_2.q.out 
814553d81a 
  ql/src/test/results/clientpositive/spark/quotedid_smb.q.out 7b8777f9d6 


Diff: https://reviews.apache.org/r/64326/diff/1/


Testing
---


Thanks,

Deepak Jaiswal



Re: Review Request 64222: HIVE-18088: Add WM event traces at query level for debugging

2017-12-04 Thread j . prasanth . j

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/64222/
---

(Updated Dec. 5, 2017, 4:38 a.m.)


Review request for hive and Sergey Shelukhin.


Changes
---

Separate future for return event instead of WM test event.


Bugs: HIVE-18088
https://issues.apache.org/jira/browse/HIVE-18088


Repository: hive-git


Description
---

HIVE-18088: Add WM event traces at query level for debugging


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 3be5a8d 
  itests/hive-unit/pom.xml ea5b7b9 
  
itests/hive-unit/src/test/java/org/apache/hive/jdbc/AbstractJdbcTriggersTest.java
 235e6c3 
  
itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestTriggersMoveWorkloadManager.java
 a983855 
  ql/src/java/org/apache/hadoop/hive/ql/Context.java 57e1803 
  ql/src/java/org/apache/hadoop/hive/ql/Driver.java 389a1a6 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/AmPluginNode.java 0509cbc 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/tez/KillMoveTriggerActionHandler.java
 94b189b 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/KillTriggerActionHandler.java 
8c60b6f 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionState.java 6fa3724 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java af77f30 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TriggerValidatorRunnable.java 
5821659 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmEvent.java PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmTezSession.java d61c531 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java ecdcf12 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManagerFederation.java 
0a9fa72 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/PrintSummary.java 
5bb6bf1 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java 
3dd4b31 
  
ql/src/java/org/apache/hadoop/hive/ql/hooks/PostExecWMEventsSummaryPrinter.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/wm/Trigger.java e41b460 
  ql/src/java/org/apache/hadoop/hive/ql/wm/TriggerActionHandler.java 8b142da 
  ql/src/java/org/apache/hadoop/hive/ql/wm/TriggerContext.java 16072c3 
  ql/src/java/org/apache/hadoop/hive/ql/wm/WmContext.java PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java 
78df962 


Diff: https://reviews.apache.org/r/64222/diff/5/

Changes: https://reviews.apache.org/r/64222/diff/4-5/


Testing
---


Thanks,

Prasanth_J



Review Request 64324: HIVE-18153 refactor reopen and file management in TezTask

2017-12-04 Thread Sergey Shelukhin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/64324/
---

Review request for hive, Prasanth_J and Siddharth Seth.


Repository: hive-git


Description
---

see jira


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java 88a75edd35 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/DagUtils.java 5c338b89c9 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPool.java 3bcf657ac4 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java 
8417ebb7d5 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolSession.java 
b3ccd24fd6 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionState.java 
6fa37244a5 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java af77f300c2 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java 
ecdcf12510 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java 
3dd4b31186 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFGetSplits.java 
4148a8aa3a 
  ql/src/test/org/apache/hadoop/hive/ql/exec/tez/SampleTezSessionState.java 
52484540ff 
  ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestTezSessionPool.java 
829ea8cecc 
  ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestTezTask.java 47aa936845 
  ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java 
78df962a3a 


Diff: https://reviews.apache.org/r/64324/diff/1/


Testing
---


Thanks,

Sergey Shelukhin



[jira] [Created] (HIVE-18221) test acid default

2017-12-04 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-18221:
-

 Summary: test acid default
 Key: HIVE-18221
 URL: https://issues.apache.org/jira/browse/HIVE-18221
 Project: Hive
  Issue Type: Test
  Components: Transactions
Affects Versions: 3.0.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HIVE-18220) Workload Management tables have broken constraints defined on postgres schema

2017-12-04 Thread Deepesh Khandelwal (JIRA)
Deepesh Khandelwal created HIVE-18220:
-

 Summary: Workload Management tables have broken constraints 
defined on postgres schema
 Key: HIVE-18220
 URL: https://issues.apache.org/jira/browse/HIVE-18220
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
Priority: Blocker


Schema initialization on Postgres fails with the following error:
{noformat}
0: jdbc:postgresql://localhost.localdomain:54> ALTER TABLE ONLY "WM_POOL" ADD CONSTRAINT "UNIQUE_WM_RESOURCEPLAN" UNIQUE ("NAME")
Error: ERROR: column "NAME" named in key does not exist (state=42703,code=0)
Closing: 0: jdbc:postgresql://localhost.localdomain:5432/hive
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization 
FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization 
FAILED! Metastore state would be inconsistent !!
  at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:586)
  at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:559)
  at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1183)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
  at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
Caused by: java.io.IOException: Schema script failed, errorcode 2
  at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:957)
  at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:935)
  at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:582)
  ... 8 more
{noformat}
It is due to a couple of incorrect constraint definitions in the schema.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HIVE-18219) When InputStream is corrupted, the skip() returns -1, causing infinite loop

2017-12-04 Thread John Doe (JIRA)
John Doe created HIVE-18219:
---

 Summary: When InputStream is corrupted, the skip() returns -1, 
causing infinite loop
 Key: HIVE-18219
 URL: https://issues.apache.org/jira/browse/HIVE-18219
 Project: Hive
  Issue Type: Bug
Affects Versions: 2.3.2
Reporter: John Doe


Similar to 
[CASSANDRA-7330|https://issues.apache.org/jira/browse/CASSANDRA-7330]: when the 
InputStream is corrupted, skip() returns -1, causing the following loop to 
become infinite.

{code:java}
  public final int skipBytes(int count) throws IOException {
int skipped = 0;
long skip;
while (skipped < count && (skip = in.skip(count - skipped)) != 0) {
  skipped += skip;
}
if (skipped < 0) {
  throw new EOFException();
}
return skipped;
  }
{code}
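
A minimal fix sketch (an assumption about one possible approach, not the committed patch): treat a non-positive return from skip() as no forward progress and stop looping instead of retrying forever.

{code:java}
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch: report partial progress instead of spinning when skip()
// returns 0 or -1 on a corrupted stream.
final class SafeSkip {
  static int skipBytes(InputStream in, int count) throws IOException {
    int skipped = 0;
    while (skipped < count) {
      long skip = in.skip(count - skipped);
      if (skip <= 0) {
        break; // no forward progress: corrupted or exhausted stream
      }
      skipped += skip;
    }
    return skipped;
  }
}
{code}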




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HIVE-18218) SMB Join : auto_sortmerge_join_16 fails with wrong results

2017-12-04 Thread Deepak Jaiswal (JIRA)
Deepak Jaiswal created HIVE-18218:
-

 Summary: SMB Join : auto_sortmerge_join_16 fails with wrong results
 Key: HIVE-18218
 URL: https://issues.apache.org/jira/browse/HIVE-18218
 Project: Hive
  Issue Type: Bug
Reporter: Deepak Jaiswal
Assignee: Deepak Jaiswal


While working on HIVE-18208, it was found that with SMB, the results are 
incorrect. This most likely is a product issue.

cc [~hagleitn]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HIVE-18217) When Text is corrupted, populateMappings() hangs indefinitely

2017-12-04 Thread John Doe (JIRA)
John Doe created HIVE-18217:
---

 Summary: When Text is corrupted, populateMappings() hangs 
indefinitely
 Key: HIVE-18217
 URL: https://issues.apache.org/jira/browse/HIVE-18217
 Project: Hive
  Issue Type: Bug
Affects Versions: 2.3.2
Reporter: John Doe


Similar to [HIVE-18216|https://issues.apache.org/jira/browse/HIVE-18216]:
when the Text is corrupted, the following loop becomes infinite.

{code:java}
  private void populateMappings(Text from, Text to) {
    replacementMap.clear();
    deletionSet.clear();

    ByteBuffer fromBytes = ByteBuffer.wrap(from.getBytes(), 0, from.getLength());
    ByteBuffer toBytes = ByteBuffer.wrap(to.getBytes(), 0, to.getLength());

    // Traverse through the from string, one code point at a time
    while (fromBytes.hasRemaining()) {
      // This will also move the iterator ahead by one code point
      int fromCodePoint = Text.bytesToCodePoint(fromBytes);
      // If the to string has more code points, make sure to traverse it too
      if (toBytes.hasRemaining()) {
        int toCodePoint = Text.bytesToCodePoint(toBytes);
        // If the code point from from string already has a replacement or is to be deleted, we
        // don't need to do anything, just move on to the next code point
        if (replacementMap.containsKey(fromCodePoint) || deletionSet.contains(fromCodePoint)) {
          continue;
        }
        replacementMap.put(fromCodePoint, toCodePoint);
      } else {
        // If the code point from from string already has a replacement or is to be deleted, we
        // don't need to do anything, just move on to the next code point
        if (replacementMap.containsKey(fromCodePoint) || deletionSet.contains(fromCodePoint)) {
          continue;
        }
        deletionSet.add(fromCodePoint);
      }
    }
  }
{code}
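
To make the failure mode concrete, a small reproduction sketch (hypothetical, based on the bytesToCodePoint() behaviour quoted in HIVE-18216): a lone UTF-8 continuation byte makes the call return -1 without advancing the buffer, so a hasRemaining() loop like the one above never exits.

{code:java}
import java.nio.ByteBuffer;
import org.apache.hadoop.io.Text;

// Hypothetical repro: 0x80 is a trailing byte in UTF-8, so bytesToCodePoint()
// returns -1 and the buffer position stays where it was.
public final class CorruptTextRepro {
  public static void main(String[] args) {
    ByteBuffer corrupt = ByteBuffer.wrap(new byte[] { (byte) 0x80 });
    int before = corrupt.position();
    int codePoint = Text.bytesToCodePoint(corrupt);
    System.out.println("codePoint=" + codePoint
        + ", positionMoved=" + (corrupt.position() != before));
    // Expected output: codePoint=-1, positionMoved=false
  }
}
{code}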




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HIVE-18216) When Text is corrupted, processInput() hangs indefinitely

2017-12-04 Thread John Doe (JIRA)
John Doe created HIVE-18216:
---

 Summary: When Text is corrupted, processInput() hangs indefinitely
 Key: HIVE-18216
 URL: https://issues.apache.org/jira/browse/HIVE-18216
 Project: Hive
  Issue Type: Bug
Affects Versions: 2.3.2
Reporter: John Doe


When the Text is corrupted, the following loop becomes infinite.
This is because in hadoop.io.Text.bytesToCodePoint(), when extraBytesToRead == -1, 
the position in the ByteBuffer is not advanced, and thus ByteBuffer.remaining() 
stays greater than 0. If deletionSet.contains(-1), the loop then never terminates.

{code:java}
  private String processInput(Text input) {
    StringBuilder resultBuilder = new StringBuilder();
    // Obtain the byte buffer from the input string so we can traverse it code point by code point
    ByteBuffer inputBytes = ByteBuffer.wrap(input.getBytes(), 0, input.getLength());
    // Traverse the byte buffer containing the input string one code point at a time
    while (inputBytes.hasRemaining()) {
      int inputCodePoint = Text.bytesToCodePoint(inputBytes);
      // If the code point exists in deletion set, no need to emit out anything for this code point.
      // Continue on to the next code point
      if (deletionSet.contains(inputCodePoint)) {
        continue;
      }

      Integer replacementCodePoint = replacementMap.get(inputCodePoint);
      // If a replacement exists for this code point, emit out the replacement and append it to the
      // output string. If no such replacement exists, emit out the original input code point
      char[] charArray = Character.toChars((replacementCodePoint != null) ? replacementCodePoint
          : inputCodePoint);
      resultBuilder.append(charArray);
    }
    String resultString = resultBuilder.toString();
    return resultString;
  }
{code}

Here is the hadoop.io.Text.bytesToCodePoint() function.

{code:java}
  public static int bytesToCodePoint(ByteBuffer bytes) {
bytes.mark();
byte b = bytes.get();
bytes.reset();
int extraBytesToRead = bytesFromUTF8[(b & 0xFF)];
if (extraBytesToRead < 0) return -1; // trailing byte!
int ch = 0;

switch (extraBytesToRead) {
case 5: ch += (bytes.get() & 0xFF); ch <<= 6; /* remember, illegal UTF-8 */
case 4: ch += (bytes.get() & 0xFF); ch <<= 6; /* remember, illegal UTF-8 */
case 3: ch += (bytes.get() & 0xFF); ch <<= 6;
case 2: ch += (bytes.get() & 0xFF); ch <<= 6;
case 1: ch += (bytes.get() & 0xFF); ch <<= 6;
case 0: ch += (bytes.get() & 0xFF);
}
ch -= offsetsFromUTF8[extraBytesToRead];

return ch;
  }
{code}
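
A minimal guard sketch (an assumption about one way to fix it, not the actual patch): surface the -1 sentinel to callers so decode loops fail fast instead of hanging.

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.io.Text;

// Hypothetical guard: bytesToCodePoint() returns -1 without advancing the buffer,
// so callers that ignore the sentinel never make progress.
final class SafeCodePoints {
  static int nextCodePoint(ByteBuffer bytes) throws IOException {
    int codePoint = Text.bytesToCodePoint(bytes);
    if (codePoint == -1) {
      throw new IOException("Malformed UTF-8 byte at position " + bytes.position());
    }
    return codePoint;
  }
}
{code}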





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Review Request 64193: HIVE-18054: Make Lineage work with concurrent queries on a Session

2017-12-04 Thread Andrew Sherman via Review Board


> On Dec. 2, 2017, 12:22 a.m., Sahil Takiar wrote:
> > Since we touch the `LoadSemanticAnalyzer` could we add a q-test (could be 
> > added to one of the existing `lineage*.q` files) for `LOAD` statements. 
> > Same for import / export statements (as far as I can tell there are no 
> > existing ones, correct me if I am wrong).
> > 
> > If you have time, it would be great to run some of the lineage tests for 
> > HoS too, but since thats a bit orthogonal to this JIRA, it can be done in a 
> > follow up JIRA.

I will add some more tests...


> On Dec. 2, 2017, 12:22 a.m., Sahil Takiar wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/Driver.java
> > Lines 365 (patched)
> > 
> >
> > Sounds good. Just curious, is there any way to know for sure where code 
> > run by a `Driver`, creates another `Driver`? How did you determine when 
> > this is necessary?

I reviewed all code that creates a Driver.


> On Dec. 2, 2017, 12:22 a.m., Sahil Takiar wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java
> > Line 401 (original), 401 (patched)
> > 
> >
> > Ok, but do we need to do `if (queryState.getLineageState() != null)` to 
> > ensure an NPE isn't thrown? That seems to be what the old code is doing.

I don't think we need to do that. There is always an initial lineageState 
inside queryState, so for it to be null someone would have had to call 
setLineageState(null).


> On Dec. 2, 2017, 12:22 a.m., Sahil Takiar wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java
> > Lines 111 (patched)
> > 
> >
> > Doesn't a `TaskCompiler` already have a `QueryState` object? Why do we 
> > need to explicitly pass in a `LineageState`?

Good catch, I will fix


- Andrew


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/64193/#review192601
---


On Nov. 30, 2017, 1:22 a.m., Andrew Sherman wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/64193/
> ---
> 
> (Updated Nov. 30, 2017, 1:22 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> A Hive Session can contain multiple concurrent sql Operations.
> Lineage is currently tracked in SessionState and is cleared when a query
> completes. This results in Lineage for other running queries being lost.
> 
> To fix this, move LineageState from SessionState to QueryState.
> In MoveTask/MoveWork use the LineageState from the MoveTask's QueryState
> rather than trying to use it from MoveWork.
> Add a test which runs multiple jdbc queries in a thread pool
> against the same connection and show that Vertices are not lost from Lineage.
> As part of this test, add ReadableHook, an ExecuteWithHookContext that stores
> HookContexts in memory and makes them available for reading.
> Make LineageLogger methods static so they can be used elsewhere.
> 
> Sometimes a running query (originating in a Driver) will instantiate
> another Driver to run or compile another query. Because these Drivers
> shared a Session, the child Driver would accumulate Lineage information
> along with that of the parent Driver. For consistency a LineageState is
> passed to these child Drivers and stored in the new Driver's QueryState.
> 
> 
> Diffs
> -
> 
>   
> itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniHS2.java 
> f5ed735c1ec14dfee338e56020fa2629b168389d 
>   ql/src/java/org/apache/hadoop/hive/ql/Driver.java 
> af9f193dc94e2e05caa88d965a34f4483c9d7069 
>   ql/src/java/org/apache/hadoop/hive/ql/QueryState.java 
> 7d5aa8b179e536e25c41a8946e667f8dd5669e0f 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 
> e7af5e004fb560b574b82f6d1b60517511802f37 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java 
> e2f8c1f8012ad25114e279747e821b291c7f4ca6 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/Task.java 
> 1f0487f4f72ab18bcf876f45ad5758d83a7f001b 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/load/table/LoadPartitions.java
>  262225fc202d4627652acfd77350e44b0284b3da 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/load/table/LoadTable.java
>  bb1f4e50509e57a9d0b9e6793c1fc08baa4d2981 
>   ql/src/java/org/apache/hadoop/hive/ql/hooks/HookContext.java 
> 7b617309f6b0d8a7ce0dea80ab1f790c2651b147 
>   ql/src/java/org/apache/hadoop/hive/ql/hooks/LineageLogger.java 
> 2f764f8a29a9d41a7db013a949ffe3a8a9417d32 
>   ql/src/java/org/apache/hadoop/hive/ql/hooks/ReadableHook.java PRE-CREATION 
>   

Re: Review Request 64222: HIVE-18088: Add WM event traces at query level for debugging

2017-12-04 Thread j . prasanth . j

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/64222/
---

(Updated Dec. 4, 2017, 11:08 p.m.)


Review request for hive and Sergey Shelukhin.


Changes
---

Addressed review comments.


Bugs: HIVE-18088
https://issues.apache.org/jira/browse/HIVE-18088


Repository: hive-git


Description
---

HIVE-18088: Add WM event traces at query level for debugging


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 3be5a8d 
  itests/hive-unit/pom.xml ea5b7b9 
  
itests/hive-unit/src/test/java/org/apache/hive/jdbc/AbstractJdbcTriggersTest.java
 235e6c3 
  
itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestTriggersMoveWorkloadManager.java
 a983855 
  ql/src/java/org/apache/hadoop/hive/ql/Context.java 57e1803 
  ql/src/java/org/apache/hadoop/hive/ql/Driver.java 389a1a6 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/AmPluginNode.java 0509cbc 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/tez/KillMoveTriggerActionHandler.java
 94b189b 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/KillTriggerActionHandler.java 
8c60b6f 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionState.java 6fa3724 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java af77f30 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TriggerValidatorRunnable.java 
5821659 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmEvent.java PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmTezSession.java d61c531 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java ecdcf12 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManagerFederation.java 
0a9fa72 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/PrintSummary.java 
5bb6bf1 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java 
3dd4b31 
  
ql/src/java/org/apache/hadoop/hive/ql/hooks/PostExecWMEventsSummaryPrinter.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/wm/Trigger.java e41b460 
  ql/src/java/org/apache/hadoop/hive/ql/wm/TriggerActionHandler.java 8b142da 
  ql/src/java/org/apache/hadoop/hive/ql/wm/TriggerContext.java 16072c3 
  ql/src/java/org/apache/hadoop/hive/ql/wm/WmContext.java PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java 
78df962 


Diff: https://reviews.apache.org/r/64222/diff/4/

Changes: https://reviews.apache.org/r/64222/diff/3-4/


Testing
---


Thanks,

Prasanth_J



Re: Review Request 64222: HIVE-18088: Add WM event traces at query level for debugging

2017-12-04 Thread j . prasanth . j


> On Dec. 4, 2017, 7:50 p.m., Sergey Shelukhin wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
> > Lines 1353 (patched)
> > 
> >
> > hmm.. several returns will overwrite each others events. Perhaps 
> > addTerminal... should be changed to return the current event if already 
> > set, similar to the one that dumps state.
> > 
> > Why is this needed anyway?

Fixed. This is required for printing the last RETURN event. After query 
completion, the events summary is printed immediately after the session is 
returned to the pool. That RETURN event will not be captured unless we wait for 
one iteration of event processing in WM. If you look at the test case changes, 
they now capture the RETURN event as well (earlier they didn't).


- Prasanth_J


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/64222/#review192749
---


On Dec. 3, 2017, 10:40 p.m., Prasanth_J wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/64222/
> ---
> 
> (Updated Dec. 3, 2017, 10:40 p.m.)
> 
> 
> Review request for hive and Sergey Shelukhin.
> 
> 
> Bugs: HIVE-18088
> https://issues.apache.org/jira/browse/HIVE-18088
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-18088: Add WM event traces at query level for debugging
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 3be5a8d 
>   itests/hive-unit/pom.xml ea5b7b9 
>   
> itests/hive-unit/src/test/java/org/apache/hive/jdbc/AbstractJdbcTriggersTest.java
>  235e6c3 
>   
> itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestTriggersMoveWorkloadManager.java
>  a983855 
>   ql/src/java/org/apache/hadoop/hive/ql/Context.java 97b52b0 
>   ql/src/java/org/apache/hadoop/hive/ql/Driver.java 389a1a6 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/AmPluginNode.java 0509cbc 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/KillMoveTriggerActionHandler.java
>  94b189b 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/KillTriggerActionHandler.java 
> 8c60b6f 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionState.java 6fa3724 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java af77f30 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TriggerValidatorRunnable.java 
> 5821659 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmEvent.java PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmTezSession.java d61c531 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java ecdcf12 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManagerFederation.java 
> 0a9fa72 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/PrintSummary.java 
> 5bb6bf1 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java 
> 3dd4b31 
>   
> ql/src/java/org/apache/hadoop/hive/ql/hooks/PostExecWMEventsSummaryPrinter.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/wm/Trigger.java e41b460 
>   ql/src/java/org/apache/hadoop/hive/ql/wm/TriggerActionHandler.java 8b142da 
>   ql/src/java/org/apache/hadoop/hive/ql/wm/TriggerContext.java 16072c3 
>   ql/src/java/org/apache/hadoop/hive/ql/wm/WmContext.java PRE-CREATION 
>   ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java 
> 78df962 
> 
> 
> Diff: https://reviews.apache.org/r/64222/diff/3/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Prasanth_J
> 
>



Re: Review Request 64222: HIVE-18088: Add WM event traces at query level for debugging

2017-12-04 Thread j . prasanth . j


- Prasanth_J


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/64222/#review192749
---


On Dec. 3, 2017, 10:40 p.m., Prasanth_J wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/64222/
> ---
> 
> (Updated Dec. 3, 2017, 10:40 p.m.)
> 
> 
> Review request for hive and Sergey Shelukhin.
> 
> 
> Bugs: HIVE-18088
> https://issues.apache.org/jira/browse/HIVE-18088
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-18088: Add WM event traces at query level for debugging
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 3be5a8d 
>   itests/hive-unit/pom.xml ea5b7b9 
>   
> itests/hive-unit/src/test/java/org/apache/hive/jdbc/AbstractJdbcTriggersTest.java
>  235e6c3 
>   
> itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestTriggersMoveWorkloadManager.java
>  a983855 
>   ql/src/java/org/apache/hadoop/hive/ql/Context.java 97b52b0 
>   ql/src/java/org/apache/hadoop/hive/ql/Driver.java 389a1a6 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/AmPluginNode.java 0509cbc 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/KillMoveTriggerActionHandler.java
>  94b189b 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/KillTriggerActionHandler.java 
> 8c60b6f 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionState.java 6fa3724 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java af77f30 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TriggerValidatorRunnable.java 
> 5821659 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmEvent.java PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmTezSession.java d61c531 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java ecdcf12 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManagerFederation.java 
> 0a9fa72 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/PrintSummary.java 
> 5bb6bf1 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java 
> 3dd4b31 
>   
> ql/src/java/org/apache/hadoop/hive/ql/hooks/PostExecWMEventsSummaryPrinter.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/wm/Trigger.java e41b460 
>   ql/src/java/org/apache/hadoop/hive/ql/wm/TriggerActionHandler.java 8b142da 
>   ql/src/java/org/apache/hadoop/hive/ql/wm/TriggerContext.java 16072c3 
>   ql/src/java/org/apache/hadoop/hive/ql/wm/WmContext.java PRE-CREATION 
>   ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java 
> 78df962 
> 
> 
> Diff: https://reviews.apache.org/r/64222/diff/3/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Prasanth_J
> 
>



Re: Review Request 64282: HIVE-18173: Improve plans for correlated subqueries with non-equi predicate

2017-12-04 Thread Vineet Garg

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/64282/
---

(Updated Dec. 4, 2017, 10:42 p.m.)


Review request for hive and Ashutosh Chauhan.


Bugs: HIVE-18173
https://issues.apache.org/jira/browse/HIVE-18173


Repository: hive-git


Description
---

Improve plans for correlated subqueries with non-equi predicate


Diffs (updated)
-

  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRelDecorrelator.java
 d1fe49c875 
  ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java 76c82e2606 
  ql/src/test/queries/clientpositive/subquery_in.q 7d4ece9dca 
  ql/src/test/results/clientpositive/llap/explainuser_1.q.out 5adf401b25 
  ql/src/test/results/clientpositive/llap/subquery_exists.q.out dfe424046e 
  ql/src/test/results/clientpositive/llap/subquery_in.q.out 5dcdfdd15f 
  ql/src/test/results/clientpositive/llap/subquery_in_having.q.out 0ffbaaea34 
  ql/src/test/results/clientpositive/llap/subquery_multi.q.out d0a78a2bb4 
  ql/src/test/results/clientpositive/llap/subquery_notin.q.out 5da12584f0 
  ql/src/test/results/clientpositive/llap/subquery_scalar.q.out ab67a7dc59 
  ql/src/test/results/clientpositive/llap/subquery_select.q.out d41704661d 
  ql/src/test/results/clientpositive/llap/subquery_views.q.out af695691a7 
  ql/src/test/results/clientpositive/spark/spark_explainuser_1.q.out 6a4bea1bd4 
  ql/src/test/results/clientpositive/spark/subquery_exists.q.out fb13fb73e9 
  ql/src/test/results/clientpositive/spark/subquery_in.q.out e19240b7ca 
  ql/src/test/results/clientpositive/spark/subquery_multi.q.out a4282df08a 
  ql/src/test/results/clientpositive/spark/subquery_notin.q.out 0d12d0db60 
  ql/src/test/results/clientpositive/spark/subquery_scalar.q.out d8b1c92526 
  ql/src/test/results/clientpositive/spark/subquery_select.q.out 6feb852965 
  ql/src/test/results/clientpositive/spark/subquery_views.q.out 9a1c25fffd 
  ql/src/test/results/clientpositive/subquery_exists.q.out b6b31aaf47 
  ql/src/test/results/clientpositive/subquery_notexists.q.out a6175f8fec 
  ql/src/test/results/clientpositive/subquery_notin_having.q.out 433609d016 
  ql/src/test/results/clientpositive/subquery_unqualcolumnrefs.q.out bfb5d2b0a6 


Diff: https://reviews.apache.org/r/64282/diff/3/

Changes: https://reviews.apache.org/r/64282/diff/2-3/


Testing
---


Thanks,

Vineet Garg



[jira] [Created] (HIVE-18215) Possible code optimization exists for "INSERT OVERWRITE on MM table. SELECT FROM (SELECT .. UNION ALL SELECT ..)

2017-12-04 Thread Steve Yeom (JIRA)
Steve Yeom created HIVE-18215:
-

 Summary: Possible code optimization exists for "INSERT OVERWRITE on 
MM table. SELECT FROM (SELECT .. UNION ALL SELECT ..)
 Key: HIVE-18215
 URL: https://issues.apache.org/jira/browse/HIVE-18215
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 3.0.0
Reporter: Steve Yeom
Priority: Minor
 Fix For: 3.0.0


removeTempOrDuplicateFiles(.) has an opportunity for a performance optimization in the 
test case of "INSERT OVERWRITE on MM table. SELECT FROM (SELECT .. UNION ALL 
SELECT ..)" from dp_counter_mm.q.

This is MM-table specific: we can avoid calling fs.exists() by building a 
specific mmDirectories
list for the current SELECT statement (one of the two SELECTs in our test case from 
dp_counter_mm.q) of the IOW union-all query.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HIVE-18214) Flaky test: TestSparkClient

2017-12-04 Thread Sahil Takiar (JIRA)
Sahil Takiar created HIVE-18214:
---

 Summary: Flaky test: TestSparkClient
 Key: HIVE-18214
 URL: https://issues.apache.org/jira/browse/HIVE-18214
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Sahil Takiar
Assignee: Sahil Takiar


Looks like there is a race condition in {{TestSparkClient#runTest}}. The test 
creates a {{RemoteDriver}} in memory, which creates a {{JavaSparkContext}}. A 
new {{JavaSparkContext}} is created for each test that is run. There is a race 
condition where the {{RemoteDriver}} isn't given enough time to shut down, so 
when the next test starts running it creates another {{JavaSparkContext}} which 
causes an exception like {{org.apache.spark.SparkException: Only one 
SparkContext may be running in this JVM (see SPARK-2243)}}.
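
A minimal sketch of one way to close the race (an assumption, not the actual fix; the helper name and the polling approach are illustrative): have the harness wait for the previous driver's SparkContext to stop before the next test creates a new one.

{code:java}
import org.apache.spark.api.java.JavaSparkContext;

// Hypothetical helper: poll until the previous JavaSparkContext has fully stopped
// so the next test can safely create a new context in the same JVM.
final class SparkContextTestUtil {
  static void awaitShutdown(JavaSparkContext jsc, long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!jsc.sc().isStopped() && System.currentTimeMillis() < deadline) {
      Thread.sleep(100);
    }
    if (!jsc.sc().isStopped()) {
      throw new IllegalStateException("SparkContext did not stop within " + timeoutMs + " ms");
    }
  }
}
{code}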



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HIVE-18213) Tests: YARN Minicluster times out if the disks are >90% full

2017-12-04 Thread Gopal V (JIRA)
Gopal V created HIVE-18213:
--

 Summary: Tests: YARN Minicluster times out if the disks are >90% 
full
 Key: HIVE-18213
 URL: https://issues.apache.org/jira/browse/HIVE-18213
 Project: Hive
  Issue Type: Bug
Reporter: Gopal V


Increase the YARN minicluster disk threshold so it times out only at 99% full instead of 90%.
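
For reference, a sketch of the kind of change implied (an assumption, not the actual patch), using the standard NodeManager disk-health property that miniclusters inherit:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: raise the disk-utilization threshold for test miniclusters so
// local dirs on a nearly-full dev machine are not marked unhealthy (default is 90.0).
final class MiniClusterDiskConfig {
  static Configuration withRelaxedDiskCheck(Configuration conf) {
    conf.setFloat(
        "yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage",
        99.0f);
    return conf;
  }
}
{code}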



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Review Request 64222: HIVE-18088: Add WM event traces at query level for debugging

2017-12-04 Thread Sergey Shelukhin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/64222/#review192749
---




ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
Lines 1353 (patched)


hmm.. several returns will overwrite each others events. Perhaps 
addTerminal... should be changed to return the current event if already set, 
similar to the one that dumps state.

Why is this needed anyway?


- Sergey Shelukhin


On Dec. 3, 2017, 10:40 p.m., Prasanth_J wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/64222/
> ---
> 
> (Updated Dec. 3, 2017, 10:40 p.m.)
> 
> 
> Review request for hive and Sergey Shelukhin.
> 
> 
> Bugs: HIVE-18088
> https://issues.apache.org/jira/browse/HIVE-18088
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-18088: Add WM event traces at query level for debugging
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 3be5a8d 
>   itests/hive-unit/pom.xml ea5b7b9 
>   
> itests/hive-unit/src/test/java/org/apache/hive/jdbc/AbstractJdbcTriggersTest.java
>  235e6c3 
>   
> itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestTriggersMoveWorkloadManager.java
>  a983855 
>   ql/src/java/org/apache/hadoop/hive/ql/Context.java 97b52b0 
>   ql/src/java/org/apache/hadoop/hive/ql/Driver.java 389a1a6 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/AmPluginNode.java 0509cbc 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/KillMoveTriggerActionHandler.java
>  94b189b 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/KillTriggerActionHandler.java 
> 8c60b6f 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionState.java 6fa3724 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java af77f30 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TriggerValidatorRunnable.java 
> 5821659 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmEvent.java PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmTezSession.java d61c531 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java ecdcf12 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManagerFederation.java 
> 0a9fa72 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/PrintSummary.java 
> 5bb6bf1 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java 
> 3dd4b31 
>   
> ql/src/java/org/apache/hadoop/hive/ql/hooks/PostExecWMEventsSummaryPrinter.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/wm/Trigger.java e41b460 
>   ql/src/java/org/apache/hadoop/hive/ql/wm/TriggerActionHandler.java 8b142da 
>   ql/src/java/org/apache/hadoop/hive/ql/wm/TriggerContext.java 16072c3 
>   ql/src/java/org/apache/hadoop/hive/ql/wm/WmContext.java PRE-CREATION 
>   ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java 
> 78df962 
> 
> 
> Diff: https://reviews.apache.org/r/64222/diff/3/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Prasanth_J
> 
>



Re: [DISCUSS] Do-it-yourself docs

2017-12-04 Thread Eugene Koifman
Perhaps this should be a 2 stage process.  One to approve the code and one to 
approve the doc.
It seems odd to update the Wiki (which isn’t tracked using the same Git repo as 
the code) before
the code changes have been agreed to.  Both approvals would be required to 
commit.

Eugene
 

On 12/3/17, 2:49 PM, "Prasanth Jayachandran"  wrote:

+1 for Yetus integration to -1 patches without docs.


Thanks and Regards,
Prasanth Jayachandran


On Sat, Dec 2, 2017 at 3:04 AM, Klára Barna Zsombor 
wrote:

> Could this be somehow integrated into the Yetus checks? I'm thinking that
> if the Jira being tested does not have one of the "Doc-Performed",
> "To-Doc", "Doc-Not-Needed" labels then it would get a -1 from Yetus.
> Peter what do you think? Is Yetus extendable in this way?
>
> On Thu, Nov 30, 2017 at 2:58 AM, Lefty Leverenz 
> wrote:
>
> > Hive contributors are responsible for documenting their own commits,
> > although many seem to be unaware of this or too busy with other tasks.
> How
> > can we boost the number of jiras that get documented?
> >
> >
> > Our current process is to put a TODOC** label on each committed
> > issue that needs wiki documentation, then remove it when the doc is 
done.
> > But nobody tallies the TODOC labels at release time or pressures
> > contributors to do their documentation, so we have a large backlog of
> > unfinished doc tasks.
> >
> >
> > For several years I've monitored the dev@hive mailing list for issues
> that
> > should be documented in the wiki.  Whenever a committed patch needs doc
> and
> > the contributor hasn't taken care of it, I add a TODOC label and write a
> > doc note naming new configuration parameters, reserved words, or HiveQL
> > syntax.  (This is convenient for searches.)  I also give links to places
> in
> > the wiki where the docs belong.
> >
> >
> > Soon, I'll stop monitoring the Hive mailing lists and writing doc notes.
> > My time can be better spent doing documentation, instead of just 
pointing
> > out that it needs to be done.  But I can't tackle the whole backlog, and
> > many future commits won't even get a TODOC label.
> >
> >
> > What can we do to improve the Hive doc process?
> >
> > -- Lefty
> >
>




[jira] [Created] (HIVE-18212) Make sure Yetus check always has a full log

2017-12-04 Thread Adam Szita (JIRA)
Adam Szita created HIVE-18212:
-

 Summary: Make sure Yetus check always has a full log
 Key: HIVE-18212
 URL: https://issues.apache.org/jira/browse/HIVE-18212
 Project: Hive
  Issue Type: Sub-task
Reporter: Adam Szita
Assignee: Adam Szita






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)