[
https://issues.apache.org/jira/browse/HDDS-670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652460#comment-16652460
]
Namit Maheshwari commented on HDDS-670:
---------------------------------------
{code:java}
-bash-4.2$ beeline
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/hdp/3.0.3.0-63/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/hdp/3.0.3.0-63/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to
jdbc:hive2://ctr-e138-1518143905142-510793-01-000011.hwx.site:2181,ctr-e138-1518143905142-510793-01-000006.hwx.site:2181,ctr-e138-1518143905142-510793-01-000008.hwx.site:2181,ctr-e138-1518143905142-510793-01-000010.hwx.site:2181,ctr-e138-1518143905142-510793-01-000007.hwx.site:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for
jdbc:hive2://ctr-e138-1518143905142-510793-01-000011.hwx.site:2181,ctr-e138-1518143905142-510793-01-000006.hwx.site:2181,ctr-e138-1518143905142-510793-01-000008.hwx.site:2181,ctr-e138-1518143905142-510793-01-000010.hwx.site:2181,ctr-e138-1518143905142-510793-01-000007.hwx.site:2181/default:
Enter password for
jdbc:hive2://ctr-e138-1518143905142-510793-01-000011.hwx.site:2181,ctr-e138-1518143905142-510793-01-000006.hwx.site:2181,ctr-e138-1518143905142-510793-01-000008.hwx.site:2181,ctr-e138-1518143905142-510793-01-000010.hwx.site:2181,ctr-e138-1518143905142-510793-01-000007.hwx.site:2181/default:
18/10/16 21:09:32 [main]: INFO jdbc.HiveConnection: Connected to
ctr-e138-1518143905142-510793-01-000004.hwx.site:10000
Connected to: Apache Hive (version 3.1.0.3.0.3.0-63)
Driver: Hive JDBC (version 3.1.0.3.0.3.0-63)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.0.3.0.3.0-63 by Apache Hive
0: jdbc:hive2://ctr-e138-1518143905142-510793> describe formatted testo3;
INFO : Compiling
command(queryId=hive_20181016210256_3400c0cb-a1d3-4384-8af4-7b95678030e4):
describe formatted testo3
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:col_name,
type:string, comment:from deserializer), FieldSchema(name:data_type,
type:string, comment:from deserializer), FieldSchema(name:comment, type:string,
comment:from deserializer)], properties:null)
INFO : Completed compiling
command(queryId=hive_20181016210256_3400c0cb-a1d3-4384-8af4-7b95678030e4); Time
taken: 1.616 seconds
INFO : Executing
command(queryId=hive_20181016210256_3400c0cb-a1d3-4384-8af4-7b95678030e4):
describe formatted testo3
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing
command(queryId=hive_20181016210256_3400c0cb-a1d3-4384-8af4-7b95678030e4); Time
taken: 0.294 seconds
INFO : OK
+-------------------------------+----------------------------------------------------+-----------------------+
| col_name | data_type | comment |
+-------------------------------+----------------------------------------------------+-----------------------+
| # col_name | data_type | comment |
| i | int | |
| s | string | |
| d | float | |
| | NULL | NULL |
| # Detailed Table Information | NULL | NULL |
| Database: | default | NULL |
| OwnerType: | USER | NULL |
| Owner: | anonymous | NULL |
| CreateTime: | Mon Oct 15 22:25:33 UTC 2018 | NULL |
| LastAccessTime: | UNKNOWN | NULL |
| Retention: | 0 | NULL |
| Location: | o3://bucket2.volume2/testo3 | NULL |
| Table Type: | EXTERNAL_TABLE | NULL |
| Table Parameters: | NULL | NULL |
| | EXTERNAL | TRUE |
| | bucketing_version | 2 |
| | transient_lastDdlTime | 1539642333 |
| | NULL | NULL |
| # Storage Information | NULL | NULL |
| SerDe Library: | org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe | NULL |
| InputFormat: | org.apache.hadoop.mapred.TextInputFormat | NULL |
| OutputFormat: | org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat | NULL |
| Compressed: | No | NULL |
| Num Buckets: | -1 | NULL |
| Bucket Columns: | [] | NULL |
| Sort Columns: | [] | NULL |
| Storage Desc Params: | NULL | NULL |
| | serialization.format | 1 |
+-------------------------------+----------------------------------------------------+-----------------------+
29 rows selected (2.65 seconds)
0: jdbc:hive2://ctr-e138-1518143905142-510793> insert into testo3 values(1,
"aa", 3.0);
INFO : Compiling
command(queryId=hive_20181016210935_cbe26097-44f0-4444-b70d-8a6555f461a0):
insert into testo3 values(1, "aa", 3.0)
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_col0,
type:int, comment:null), FieldSchema(name:_col1, type:string, comment:null),
FieldSchema(name:_col2, type:float, comment:null)], properties:null)
INFO : Completed compiling
command(queryId=hive_20181016210935_cbe26097-44f0-4444-b70d-8a6555f461a0); Time
taken: 4.499 seconds
INFO : Executing
command(queryId=hive_20181016210935_cbe26097-44f0-4444-b70d-8a6555f461a0):
insert into testo3 values(1, "aa", 3.0)
INFO : Query ID = hive_20181016210935_cbe26097-44f0-4444-b70d-8a6555f461a0
INFO : Total jobs = 1
INFO : Launching Job 1 out of 1
INFO : Starting task [Stage-1:MAPRED] in serial mode
INFO : Subscribed to counters: [] for queryId:
hive_20181016210935_cbe26097-44f0-4444-b70d-8a6555f461a0
INFO : Tez session hasn't been created yet. Opening session
INFO : Dag name: insert into testo3 values(1, "aa", 3.0) (Stage-1)
INFO : Status: Running (Executing on YARN cluster with App id
application_1539383731490_0036)
----------------------------------------------------------------------------------------------
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
----------------------------------------------------------------------------------------------
Map 1 .......... container SUCCEEDED 1 1 0 0 0 0
Reducer 2 ...... container SUCCEEDED 1 1 0 0 0 0
----------------------------------------------------------------------------------------------
VERTICES: 02/02 [==========================>>] 100% ELAPSED TIME: 11.06 s
----------------------------------------------------------------------------------------------
INFO : Status: DAG finished successfully in 10.94 seconds
INFO :
INFO : Query Execution Summary
INFO :
----------------------------------------------------------------------------------------------
INFO : OPERATION DURATION
INFO :
----------------------------------------------------------------------------------------------
INFO : Compile Query 4.50s
INFO : Prepare Plan 6.69s
INFO : Get Query Coordinator (AM) 0.03s
INFO : Submit Plan 0.61s
INFO : Start DAG 0.62s
INFO : Run DAG 10.94s
INFO :
----------------------------------------------------------------------------------------------
INFO :
INFO : Task Execution Summary
INFO :
----------------------------------------------------------------------------------------------
INFO : VERTICES DURATION(ms) CPU_TIME(ms) GC_TIME(ms) INPUT_RECORDS OUTPUT_RECORDS
INFO :
----------------------------------------------------------------------------------------------
INFO : Map 1 7129.00 14,430 154 3 1
INFO : Reducer 2 224.00 2,240 0 1 0
INFO :
----------------------------------------------------------------------------------------------
INFO :
INFO : org.apache.tez.common.counters.DAGCounter:
INFO : NUM_SUCCEEDED_TASKS: 2
INFO : TOTAL_LAUNCHED_TASKS: 2
INFO : AM_CPU_MILLISECONDS: 3900
INFO : AM_GC_TIME_MILLIS: 0
INFO : File System Counters:
INFO : FILE_BYTES_READ: 135
INFO : FILE_BYTES_WRITTEN: 135
INFO : HDFS_BYTES_WRITTEN: 199
INFO : HDFS_READ_OPS: 3
INFO : HDFS_WRITE_OPS: 2
INFO : HDFS_OP_CREATE: 1
INFO : HDFS_OP_GET_FILE_STATUS: 3
INFO : HDFS_OP_RENAME: 1
INFO : org.apache.tez.common.counters.TaskCounter:
INFO : SPILLED_RECORDS: 0
INFO : NUM_SHUFFLED_INPUTS: 1
INFO : NUM_FAILED_SHUFFLE_INPUTS: 0
INFO : GC_TIME_MILLIS: 154
INFO : TASK_DURATION_MILLIS: 7434
INFO : CPU_MILLISECONDS: 16670
INFO : PHYSICAL_MEMORY_BYTES: 4294967296
INFO : VIRTUAL_MEMORY_BYTES: 11069292544
INFO : COMMITTED_HEAP_BYTES: 4294967296
INFO : INPUT_RECORDS_PROCESSED: 5
INFO : INPUT_SPLIT_LENGTH_BYTES: 1
INFO : OUTPUT_RECORDS: 1
INFO : OUTPUT_LARGE_RECORDS: 0
INFO : OUTPUT_BYTES: 94
INFO : OUTPUT_BYTES_WITH_OVERHEAD: 102
INFO : OUTPUT_BYTES_PHYSICAL: 127
INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
INFO : ADDITIONAL_SPILLS_BYTES_READ: 0
INFO : ADDITIONAL_SPILL_COUNT: 0
INFO : SHUFFLE_BYTES: 103
INFO : SHUFFLE_BYTES_DECOMPRESSED: 102
INFO : SHUFFLE_BYTES_TO_MEM: 0
INFO : SHUFFLE_BYTES_TO_DISK: 0
INFO : SHUFFLE_BYTES_DISK_DIRECT: 103
INFO : SHUFFLE_PHASE_TIME: 71
INFO : FIRST_EVENT_RECEIVED: 44
INFO : LAST_EVENT_RECEIVED: 44
INFO : HIVE:
INFO : CREATED_FILES: 2
INFO : DESERIALIZE_ERRORS: 0
INFO : RECORDS_IN_Map_1: 3
INFO : RECORDS_OUT_0: 1
INFO : RECORDS_OUT_1_default.testo3: 1
INFO : RECORDS_OUT_INTERMEDIATE_Map_1: 1
INFO : RECORDS_OUT_INTERMEDIATE_Reducer_2: 0
INFO : RECORDS_OUT_OPERATOR_FS_12: 1
INFO : RECORDS_OUT_OPERATOR_FS_5: 1
INFO : RECORDS_OUT_OPERATOR_GBY_10: 1
INFO : RECORDS_OUT_OPERATOR_GBY_8: 1
INFO : RECORDS_OUT_OPERATOR_MAP_0: 0
INFO : RECORDS_OUT_OPERATOR_RS_9: 1
INFO : RECORDS_OUT_OPERATOR_SEL_1: 1
INFO : RECORDS_OUT_OPERATOR_SEL_3: 1
INFO : RECORDS_OUT_OPERATOR_SEL_7: 1
INFO : RECORDS_OUT_OPERATOR_TS_0: 1
INFO : RECORDS_OUT_OPERATOR_UDTF_2: 1
INFO : TaskCounter_Map_1_INPUT__dummy_table:
INFO : INPUT_RECORDS_PROCESSED: 4
INFO : INPUT_SPLIT_LENGTH_BYTES: 1
INFO : TaskCounter_Map_1_OUTPUT_Reducer_2:
INFO : ADDITIONAL_SPILLS_BYTES_READ: 0
INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
INFO : ADDITIONAL_SPILL_COUNT: 0
INFO : OUTPUT_BYTES: 94
INFO : OUTPUT_BYTES_PHYSICAL: 127
INFO : OUTPUT_BYTES_WITH_OVERHEAD: 102
INFO : OUTPUT_LARGE_RECORDS: 0
INFO : OUTPUT_RECORDS: 1
INFO : SPILLED_RECORDS: 0
INFO : TaskCounter_Reducer_2_INPUT_Map_1:
INFO : FIRST_EVENT_RECEIVED: 44
INFO : INPUT_RECORDS_PROCESSED: 1
INFO : LAST_EVENT_RECEIVED: 44
INFO : NUM_FAILED_SHUFFLE_INPUTS: 0
INFO : NUM_SHUFFLED_INPUTS: 1
INFO : SHUFFLE_BYTES: 103
INFO : SHUFFLE_BYTES_DECOMPRESSED: 102
INFO : SHUFFLE_BYTES_DISK_DIRECT: 103
INFO : SHUFFLE_BYTES_TO_DISK: 0
INFO : SHUFFLE_BYTES_TO_MEM: 0
INFO : SHUFFLE_PHASE_TIME: 71
INFO : TaskCounter_Reducer_2_OUTPUT_out_Reducer_2:
INFO : OUTPUT_RECORDS: 0
ERROR : Job Commit failed with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Unable to move: o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-10000 to: o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-10000.moved)'
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move: o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-10000 to: o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-10000.moved
at org.apache.hadoop.hive.ql.exec.Utilities.rename(Utilities.java:1171)
at org.apache.hadoop.hive.ql.exec.Utilities.mvFileToFinalPath(Utilities.java:1502)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1384)
at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:798)
at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:583)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:320)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:210)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2707)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2378)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2054)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1752)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1746)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226)
at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:318)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:338)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
ERROR : FAILED: Execution Error, return code 3 from
org.apache.hadoop.hive.ql.exec.tez.TezTask
INFO : Completed executing
command(queryId=hive_20181016210935_cbe26097-44f0-4444-b70d-8a6555f461a0); Time
taken: 18.787 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 3
from org.apache.hadoop.hive.ql.exec.tez.TezTask (state=08S01,code=3)
0: jdbc:hive2://ctr-e138-1518143905142-510793> Closing: 0:
jdbc:hive2://ctr-e138-1518143905142-510793-01-000011.hwx.site:2181,ctr-e138-1518143905142-510793-01-000006.hwx.site:2181,ctr-e138-1518143905142-510793-01-000008.hwx.site:2181,ctr-e138-1518143905142-510793-01-000010.hwx.site:2181,ctr-e138-1518143905142-510793-01-000007.hwx.site:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Unexpected end of file when reading from HS2 server. The root cause might be
too many concurrent connections. Please ask the administrator to check the
number of active connections, and adjust hive.server2.thrift.max.worker.threads
if applicable.
Error: Error while cleaning up the server resources (state=,code=0)
{code}
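For context on the failure above: the "Unable to move" error comes from the frame at Utilities.java:1171, where Hive's job commit renames the insert's staging directory to "<dir>.moved"; that rename evidently does not succeed on the o3:// filesystem, so the whole insert is failed. Below is a rough, self-contained sketch of that commit step only, not Hive's actual code: the real path goes through Hadoop's FileSystem.rename (which returns false on failure rather than throwing), mimicked here with java.nio on the local filesystem, and the class and path names are made up for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RenameCheck {

    // Sketch of the contract in Hive's Utilities.rename: a rename that does not
    // succeed is surfaced as a hard "Unable to move: <src> to: <dst>" error.
    // (Hive checks FileSystem.rename's boolean result; java.nio throws instead,
    // so we translate the exception here.)
    static void rename(Path src, Path dst) throws IOException {
        try {
            Files.move(src, dst);
        } catch (IOException e) {
            throw new IOException("Unable to move: " + src + " to: " + dst, e);
        }
    }

    public static void main(String[] args) throws IOException {
        // Recreate the shape of the commit step: a non-empty staging directory
        // renamed to "<name>.moved" inside the table directory.
        Path table = Files.createTempDirectory("testo3");
        Path staging = Files.createDirectory(table.resolve("_tmp.-ext-10000"));
        Files.createFile(staging.resolve("000000_0"));
        rename(staging, table.resolve("_tmp.-ext-10000.moved"));
        System.out.println(Files.exists(table.resolve("_tmp.-ext-10000.moved")));
    }
}
```

On a local filesystem (and on HDFS) this directory rename succeeds; the log above shows the equivalent rename coming back as a failure on the Ozone o3:// filesystem, which Hive then wraps in the HiveException and reports as return code 3 from TezTask.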
> Hive insert fails against Ozone external table
> ----------------------------------------------
>
> Key: HDDS-670
> URL: https://issues.apache.org/jira/browse/HDDS-670
> Project: Hadoop Distributed Data Store
> Issue Type: Task
> Reporter: Namit Maheshwari
> Priority: Blocker
>
> It fails with
> {code:java}
> ERROR : Job Commit failed with exception
> 'org.apache.hadoop.hive.ql.metadata.HiveException(Unable to move:
> o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-10000
> to:
> o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-10000.moved)'
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move:
> o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-10000
> to:
> o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-10000.moved
> {code}
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)