> On June 3, 2016, 5:50 p.m., Vaibhav Gumashta wrote:
> > itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniHA.java, line 157
> > <https://reviews.apache.org/r/40867/diff/9/?file=1386567#file1386567line157>
> >
> > Can you add a test with a join query as well? The join query should write the results in a new intermediate file on HDFS, and it will be good to test that.
I added it in the latest patch.


> On June 3, 2016, 5:50 p.m., Vaibhav Gumashta wrote:
> > jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java, line 552
> > <https://reviews.apache.org/r/40867/diff/9/?file=1386570#file1386570line552>
> >
> > Nit: Usually an iterator implements https://docs.oracle.com/javase/7/docs/api/java/util/Iterator.html.

This class wraps SequenceFile.Reader, and I think implementing Iterator is not suitable for this class, since implementing hasNext() is not easy. I just changed the class name to avoid confusion.


> On June 3, 2016, 5:50 p.m., Vaibhav Gumashta wrote:
> > jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java, line 573
> > <https://reviews.apache.org/r/40867/diff/9/?file=1386570#file1386570line573>
> >
> > Nit: getXXX methods are usually used to return something other than void.

I agree the name was not appropriate. I changed the name in the latest patch.


> On June 3, 2016, 5:50 p.m., Vaibhav Gumashta wrote:
> > jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java, line 590
> > <https://reviews.apache.org/r/40867/diff/9/?file=1386570#file1386570line590>
> >
> > This should be a private method.

I fixed it.


> On June 3, 2016, 5:50 p.m., Vaibhav Gumashta wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/Driver.java, line 1978
> > <https://reviews.apache.org/r/40867/diff/9/?file=1386572#file1386572line1978>
> >
> > Shouldn't the read happen independent of the file format? For example, the default format used to be TextFile until very recently, and the user can as well choose to configure it that way.

I thought handling other formats would make the code complex. If we need to handle TextFile and other formats, I would like to create a follow-up jira.


> On June 3, 2016, 5:50 p.m., Vaibhav Gumashta wrote:
> > jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java, line 536
> > <https://reviews.apache.org/r/40867/diff/9/?file=1386570#file1386570line536>
> >
> > This hardcodes the serde to LazySimpleSerde.
> > There is work in https://issues.apache.org/jira/browse/HIVE-12049, where we write using a different serde in the final tasks. However, I'll create a follow-up jira for making this more generic.

I got it. Thank you.


> On June 3, 2016, 5:50 p.m., Vaibhav Gumashta wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/Driver.java, line 1976
> > <https://reviews.apache.org/r/40867/diff/9/?file=1386572#file1386572line1976>
> >
> > We can create a follow-up jira to handle this.

Shall I create the jira?


- Takanobu


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/40867/#review136073
-----------------------------------------------------------


On June 15, 2016, 6:50 a.m., Takanobu Asanuma wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/40867/
> -----------------------------------------------------------
> 
> (Updated June 15, 2016, 6:50 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> This is a WIP patch for HIVE-11527.
> 
> * I added a new configuration whose name is hive.server2.webhdfs.bypass.enabled. The default is false. When this value is true, clients use the bypass.
> 
> * I have not yet considered security features such as Kerberos and SSL.
> 
> * I have not implemented Statement#setFetchSize for the bypass yet.
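As a concrete illustration of the description above, a client session might enable the proposed bypass like this. This is only a sketch: the property name `hive.server2.webhdfs.bypass.enabled` comes from the patch description, but the host, database, and query are placeholders, and whether this particular patch honors the property when set per-session via `--hiveconf` (rather than only in hive-site.xml) is an assumption.

```shell
# Hypothetical usage: open a Beeline session with the proposed
# bypass enabled, then run a join query (the case the review asked
# to cover), so the client would read the intermediate result file
# from HDFS directly over WebHDFS instead of through Thrift fetches.
beeline -u "jdbc:hive2://hs2-host:10000/default" \
        --hiveconf hive.server2.webhdfs.bypass.enabled=true \
        -e "SELECT a.key, b.value FROM t1 a JOIN t2 b ON a.key = b.key"
```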
> 
> 
> Diffs
> -----
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 761dbb2
>   itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniHA.java 84644d1
>   itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniHS2.java 0c313a2
>   itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniMr.java 637e51a
>   jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java 92fdbca
>   jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java a242501
>   ql/src/java/org/apache/hadoop/hive/ql/Driver.java 2263192
>   ql/src/java/org/apache/hadoop/hive/ql/exec/FetchTask.java dff1815
>   service-rpc/if/TCLIService.thrift 5a9a785
>   service-rpc/src/gen/thrift/gen-cpp/TCLIService_types.h d23b3cd
>   service-rpc/src/gen/thrift/gen-cpp/TCLIService_types.cpp 0f53cb2
>   service-rpc/src/gen/thrift/gen-javabean/org/apache/hive/service/rpc/thrift/TColumnDesc.java 31472c8
>   service-rpc/src/gen/thrift/gen-javabean/org/apache/hive/service/rpc/thrift/TExecuteStatementResp.java 7101fa5
>   service-rpc/src/gen/thrift/gen-javabean/org/apache/hive/service/rpc/thrift/TGetTablesReq.java 1aa3f94
>   service-rpc/src/gen/thrift/gen-javabean/org/apache/hive/service/rpc/thrift/TProtocolVersion.java 14d50ed
>   service-rpc/src/gen/thrift/gen-php/Types.php a6a257f
>   service-rpc/src/gen/thrift/gen-py/TCLIService/ttypes.py fcd330f
>   service-rpc/src/gen/thrift/gen-rb/t_c_l_i_service_types.rb 71148a0
>   service/src/java/org/apache/hive/service/cli/CLIService.java ed52b4a
>   service/src/java/org/apache/hive/service/cli/ColumnDescriptor.java bfd7135
>   service/src/java/org/apache/hive/service/cli/operation/Operation.java d48b92c
>   service/src/java/org/apache/hive/service/cli/operation/OperationManager.java 2f18231
>   service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java 3bf40eb
>   service/src/java/org/apache/hive/service/cli/session/HiveSession.java 78ff388
>   service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java 7341635
>   service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java 8bc3d94
> 
> Diff: https://reviews.apache.org/r/40867/diff/
> 
> 
> Testing
> -------
> 
> I have tested a few simple queries and they worked well, but I think there are still problems with some queries. I'm going to test more queries and fix the bugs. I'm also going to add unit tests.
> 
> 
> Thanks,
> 
> Takanobu Asanuma
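A note on the Iterator discussion earlier in the thread: SequenceFile.Reader advances with a `next(...)` call that returns false at end of file, so adapting it to `java.util.Iterator` forces a read-ahead trick to implement `hasNext()`. Below is a minimal, self-contained sketch of that pattern. It deliberately uses `java.io.BufferedReader` (whose `readLine()` also signals end-of-stream in-band, by returning null) in place of the Hadoop reader, so the class names and the line-based records are illustrative stand-ins, not the patch's actual code.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Sketch of the read-ahead buffering needed to adapt an
// "advance-and-report" style reader (like SequenceFile.Reader.next())
// to the java.util.Iterator hasNext()/next() contract.
public class ReadAheadLineIterator implements Iterator<String> {
    private final BufferedReader reader;
    private String nextLine;   // the buffered record; null once exhausted
    private boolean buffered;  // true when nextLine holds a fresh read

    public ReadAheadLineIterator(BufferedReader reader) {
        this.reader = reader;
    }

    @Override
    public boolean hasNext() {
        if (!buffered) {
            try {
                nextLine = reader.readLine(); // read one record ahead
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            buffered = true;
        }
        return nextLine != null;
    }

    @Override
    public String next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        buffered = false; // consume the buffered record
        return nextLine;
    }

    public static void main(String[] args) {
        ReadAheadLineIterator it = new ReadAheadLineIterator(
                new BufferedReader(new StringReader("row1\nrow2")));
        while (it.hasNext()) {
            System.out.println(it.next());
        }
    }
}
```

The awkward part is exactly what the reply points out: `hasNext()` must mutate the underlying reader to answer truthfully, so the wrapper has to carry buffering state. Renaming the class instead of implementing `Iterator`, as was done in the patch, avoids promising that contract.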