[ 
https://issues.apache.org/jira/browse/HIVE-943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880095#action_12880095
 ] 

Vu Hoang commented on HIVE-943:
-------------------------------

I used the method like this:
{code:java|title=ClientCodeTest|borderStyle=solid}
select("default", "select * from keyword_frequency", 
"/home/vhoang/hadoop/hive", true);
{code}

Method definition: select(String schema, String query, String path, boolean header)
{code:java|title=ClientCode.select(String schema, String query, String path, boolean header)|borderStyle=solid}
try
{
        // build the JDBC URL from the configured Hive server host and the schema name
        String url = METADATA_URL.replace(VAR_META_HOST, getConfig().get(HIVE_JDBC_SERVER))
                        .replace(VAR_META_NAME, schema);
        Class.forName(METADATA_DRIVER);

        Log.debug(getLogString() + "url='" + url + "'");
        Connection conn = DriverManager.getConnection(url, "", "");
        Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
                        ResultSet.CONCUR_UPDATABLE);

        ResultSet recs = stmt.executeQuery(query);
        ResultSetMetaData meta = recs.getMetaData();

        // no path given: dump to stdout; otherwise write to the given file
        if (path == null) print(recs, meta);
        else write(recs, meta, new File(path), header);
}
catch (ClassNotFoundException ex) { error(ex); }
catch (SQLException ex) { error(ex); }
{code}
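
A side note on the snippet above: conn, stmt, and recs are never closed, so each call leaks a connection. A minimal cleanup sketch, assuming the same url and query variables and the JDBC handles hoisted out of the try block (this is not part of the original snippet):
{code:java|title=Hypothetical cleanup sketch|borderStyle=solid}
Connection conn = null;
Statement stmt = null;
ResultSet recs = null;
try
{
        conn = DriverManager.getConnection(url, "", "");
        stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
                        ResultSet.CONCUR_UPDATABLE);
        recs = stmt.executeQuery(query);
        // ... consume the ResultSet here, e.g. print(recs, recs.getMetaData()) ...
}
catch (SQLException ex) { error(ex); }
finally
{
        // close in reverse order of creation; close() can itself throw SQLException
        try { if (recs != null) recs.close(); } catch (SQLException ignore) { }
        try { if (stmt != null) stmt.close(); } catch (SQLException ignore) { }
        try { if (conn != null) conn.close(); } catch (SQLException ignore) { }
}
{code}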

Method definition: print(ResultSet recs, ResultSetMetaData meta)
{code:java|title=ClientCode.print(ResultSet recs, ResultSetMetaData meta)|borderStyle=solid}
// tag log messages from this method if no prefix has been set yet
if (getLogString().equals(BLANK))
        setLogString("HiveQuery.print()|");

init();
try
{
        int columnSize = meta.getColumnCount();
        Log.debug(getLogString() + "columnSize=" + columnSize);

        // header row: comma-separated column names
        for (int index = 0; index < columnSize; index++)
        {
                String columnName = meta.getColumnName(index + 1);
                Log.debug(getLogString() + "column='" + columnName + "'");
                if (index == columnSize - 1) System.out.print(columnName);
                else System.out.print(columnName + StringConst.COMMA);
        }
        System.out.println();

        // data rows: one comma-separated line per record
        while (recs.next())
        {
                for (int index = 0; index < columnSize; index++)
                {
                        String record = recs.getString(index + 1);
                        Log.debug(getLogString() + "record='" + record + "'");
                        if (index == columnSize - 1) System.out.print(record);
                        else System.out.print(record + StringConst.COMMA);
                }
                System.out.println();
        }
}
catch (SQLException ex) { error(ex); }
{code}
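
The write(...) method called from select(...) is not shown above. A minimal sketch of what it presumably does (mirroring print() but targeting a file; the body here is an assumption, not the actual ClientCode implementation):
{code:java|title=Hypothetical ClientCode.write(ResultSet recs, ResultSetMetaData meta, File file, boolean header)|borderStyle=solid}
PrintWriter out = null;
try
{
        out = new PrintWriter(new FileWriter(file));
        int columnSize = meta.getColumnCount();

        // optional header row of column names
        if (header)
        {
                for (int index = 0; index < columnSize; index++)
                {
                        out.print(meta.getColumnName(index + 1));
                        if (index < columnSize - 1) out.print(StringConst.COMMA);
                }
                out.println();
        }

        // one comma-separated line per record
        while (recs.next())
        {
                for (int index = 0; index < columnSize; index++)
                {
                        out.print(recs.getString(index + 1));
                        if (index < columnSize - 1) out.print(StringConst.COMMA);
                }
                out.println();
        }
}
catch (IOException ex) { error(ex); }
catch (SQLException ex) { error(ex); }
finally
{
        if (out != null) out.close();
}
{code}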

Sorry about the delayed feedback.

> Hive JDBC client - result is NULL when I run a query that selects a large 
> amount of data (with MapReduce starting)
> --------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-943
>                 URL: https://issues.apache.org/jira/browse/HIVE-943
>             Project: Hadoop Hive
>          Issue Type: Bug
>          Components: Clients
>    Affects Versions: 0.4.0
>            Reporter: Vu Hoang
>             Fix For: 0.4.2
>
>
> - some of the main output messages I got from the console:
> Total MapReduce jobs = 1
> 09/11/18 15:56:03 INFO ql.Driver: Total MapReduce jobs = 1
> 09/11/18 15:56:03 INFO exec.ExecDriver: BytesPerReducer=1000000000 
> maxReducers=999 totalInputFileSize=1289288953
> Number of reduce tasks not specified. Estimated from input data size: 2
> 09/11/18 15:56:03 INFO exec.ExecDriver: Number of reduce tasks not specified. 
> Estimated from input data size: 2
> In order to change the average load for a reducer (in bytes):
> 09/11/18 15:56:03 INFO exec.ExecDriver: In order to change the average load 
> for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> 09/11/18 15:56:03 INFO exec.ExecDriver:   set 
> hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
> 09/11/18 15:56:03 INFO exec.ExecDriver: In order to limit the maximum number 
> of reducers:
>   set hive.exec.reducers.max=<number>
> 09/11/18 15:56:03 INFO exec.ExecDriver:   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
> 09/11/18 15:56:03 INFO exec.ExecDriver: In order to set a constant number of 
> reducers:
>   set mapred.reduce.tasks=<number>
> 09/11/18 15:56:03 INFO exec.ExecDriver:   set mapred.reduce.tasks=<number>
> 09/11/18 15:56:03 INFO exec.ExecDriver: Using 
> org.apache.hadoop.hive.ql.io.HiveInputFormat
> Starting Job = job_200911122011_0639, Tracking URL = 
> http://**********/jobdetails.jsp?jobid=job_200911122011_0639
> 09/11/18 15:56:04 INFO exec.ExecDriver: Starting Job = job_200911122011_0639, 
> Tracking URL = http://**********/jobdetails.jsp?jobid=job_200911122011_0639
> Kill Command = /data/hadoop-hive/bin/../bin/hadoop job  
> -Dmapred.job.tracker=********** -kill job_200911122011_0639
> 09/11/18 15:56:04 INFO exec.ExecDriver: Kill Command = 
> /data/hadoop-hive/bin/../bin/hadoop job  -Dmapred.job.tracker=********** 
> -kill job_200911122011_0639
> 2009-11-18 03:56:05,701 map = 0%,  reduce = 0%
> 09/11/18 15:56:05 INFO exec.ExecDriver: 2009-11-18 03:56:05,701 map = 0%,  
> reduce = 0%
> 2009-11-18 03:56:21,798 map = 4%,  reduce = 0%
> 09/11/18 15:56:21 INFO exec.ExecDriver: 2009-11-18 03:56:21,798 map = 4%,  
> reduce = 0%
> 2009-11-18 03:56:22,818 map = 8%,  reduce = 0%
> 09/11/18 15:56:22 INFO exec.ExecDriver: 2009-11-18 03:56:22,818 map = 8%,  
> reduce = 0%
> 2009-11-18 03:56:23,832 map = 13%,  reduce = 0%
> 09/11/18 15:56:23 INFO exec.ExecDriver: 2009-11-18 03:56:23,832 map = 13%,  
> reduce = 0%
> 2009-11-18 03:56:24,854 map = 17%,  reduce = 0%
> 09/11/18 15:56:24 INFO exec.ExecDriver: 2009-11-18 03:56:24,854 map = 17%,  
> reduce = 0%
> 2009-11-18 03:56:25,864 map = 21%,  reduce = 0%
> 09/11/18 15:56:25 INFO exec.ExecDriver: 2009-11-18 03:56:25,864 map = 21%,  
> reduce = 0%
> 2009-11-18 03:56:29,890 map = 25%,  reduce = 0%
> 09/11/18 15:56:29 INFO exec.ExecDriver: 2009-11-18 03:56:29,890 map = 25%,  
> reduce = 0%
> 2009-11-18 03:56:30,900 map = 29%,  reduce = 0%
> 09/11/18 15:56:30 INFO exec.ExecDriver: 2009-11-18 03:56:30,900 map = 29%,  
> reduce = 0%
> 2009-11-18 03:56:31,909 map = 33%,  reduce = 0%
> 09/11/18 15:56:31 INFO exec.ExecDriver: 2009-11-18 03:56:31,909 map = 33%,  
> reduce = 0%
> 2009-11-18 03:56:33,933 map = 37%,  reduce = 0%
> 09/11/18 15:56:33 INFO exec.ExecDriver: 2009-11-18 03:56:33,933 map = 37%,  
> reduce = 0%
> 2009-11-18 03:56:35,946 map = 50%,  reduce = 0%
> 09/11/18 15:56:35 INFO exec.ExecDriver: 2009-11-18 03:56:35,946 map = 50%,  
> reduce = 0%
> 2009-11-18 03:56:36,956 map = 54%,  reduce = 0%
> 09/11/18 15:56:36 INFO exec.ExecDriver: 2009-11-18 03:56:36,956 map = 54%,  
> reduce = 0%
> 2009-11-18 03:56:37,965 map = 58%,  reduce = 0%
> 09/11/18 15:56:37 INFO exec.ExecDriver: 2009-11-18 03:56:37,965 map = 58%,  
> reduce = 0%
> 2009-11-18 03:56:38,978 map = 79%,  reduce = 0%
> 09/11/18 15:56:38 INFO exec.ExecDriver: 2009-11-18 03:56:38,978 map = 79%,  
> reduce = 0%
> 2009-11-18 03:56:39,988 map = 83%,  reduce = 0%
> 09/11/18 15:56:39 INFO exec.ExecDriver: 2009-11-18 03:56:39,988 map = 83%,  
> reduce = 0%
> 2009-11-18 03:56:40,998 map = 96%,  reduce = 0%
> 09/11/18 15:56:41 INFO exec.ExecDriver: 2009-11-18 03:56:40,998 map = 96%,  
> reduce = 0%
> 2009-11-18 03:56:42,006 map = 100%,  reduce = 0%
> 09/11/18 15:56:42 INFO exec.ExecDriver: 2009-11-18 03:56:42,006 map = 100%,  
> reduce = 0%
> 2009-11-18 03:56:46,031 map = 100%,  reduce = 13%
> 09/11/18 15:56:46 INFO exec.ExecDriver: 2009-11-18 03:56:46,031 map = 100%,  
> reduce = 13%
> 2009-11-18 03:56:51,060 map = 100%,  reduce = 25%
> 09/11/18 15:56:51 INFO exec.ExecDriver: 2009-11-18 03:56:51,060 map = 100%,  
> reduce = 25%
> 2009-11-18 03:56:56,091 map = 100%,  reduce = 67%
> 09/11/18 15:56:56 INFO exec.ExecDriver: 2009-11-18 03:56:56,091 map = 100%,  
> reduce = 67%
> 2009-11-18 03:56:57,102 map = 100%,  reduce = 100%
> 09/11/18 15:56:57 INFO exec.ExecDriver: 2009-11-18 03:56:57,102 map = 100%,  
> reduce = 100%
> Ended Job = job_200911122011_0639
> 09/11/18 15:56:59 INFO exec.ExecDriver: Ended Job = job_200911122011_0639
> 09/11/18 15:56:59 INFO exec.FileSinkOperator: Moving tmp dir: 
> hdfs://**********/tmp/hive-asadm/1751400872/_tmp.10001 to: 
> hdfs://**********/tmp/hive-asadm/1751400872/_tmp.10001.intermediate
> 09/11/18 15:56:59 INFO exec.FileSinkOperator: Moving tmp dir: 
> hdfs://**********/tmp/hive-asadm/1751400872/_tmp.10001.intermediate to: 
> hdfs://**********/tmp/hive-asadm/1751400872/10001
> OK
> 09/11/18 15:56:59 INFO ql.Driver: OK
> 09/11/18 15:56:59 INFO ql.Driver: Returning Hive schema: 
> Schema(fieldSchemas:[FieldSchema(name:_col0, type:string, comment:from 
> deserializer)], properties:null)
> 09/11/18 15:56:59 INFO ql.Driver: Returning Thrift schema: 
> Schema(fieldSchemas:[FieldSchema(name:_col0, type:string, comment:from 
> deserializer)], properties:null)
> 09/11/18 15:56:59 INFO service.HiveServer: Returning schema: 
> Schema(fieldSchemas:[FieldSchema(name:_col0, type:string, comment:from 
> deserializer)], properties:null)
> 09/11/18 15:56:59 INFO mapred.FileInputFormat: Total input paths to process : 
> 2
> ||_col0||
> |NULL|
> - this problem does NOT appear when no MapReduce job is started; why is that?
> - did I get something wrong in the configuration of the Hive JDBC API?
> - the temporary data is moved twice; is that the reason?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
