[
https://issues.apache.org/jira/browse/HIVE-10746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553061#comment-14553061
]
Greg Senia commented on HIVE-10746:
-----------------------------------
Debug logs from the DAG show that with compressed input it generates only 1
split. So how do we fix this issue?
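The single split itself is expected behavior: Hadoop's TextInputFormat will not split a file whose compression codec is not splittable, and raw Snappy is not, so the whole 000000_0.snappy file becomes one split no matter how much parallelism is desired. A minimal sketch of that decision (illustrative only, not the actual Hadoop source; the enum and method names here are made up for the example):

```java
// Sketch: a file is split only when uncompressed or compressed with a
// splittable codec (in Hadoop, one implementing SplittableCompressionCodec,
// e.g. bzip2). Raw Snappy is not splittable, so the file stays one split.
public class SplittabilitySketch {
    enum Codec { NONE, SNAPPY, BZIP2 } // hypothetical stand-ins for real codecs

    static boolean isSplitable(Codec c) {
        // bzip2 is splittable in Hadoop; raw Snappy is not
        return c == Codec.NONE || c == Codec.BZIP2;
    }

    static int numSplits(long fileLength, long targetSplitSize, Codec c) {
        if (!isSplitable(c)) {
            return 1; // non-splittable codec: one split per file
        }
        return (int) Math.max(1, fileLength / targetSplitSize);
    }

    public static void main(String[] args) {
        // 807489 bytes of Snappy-compressed data, as in the log below
        System.out.println(numSplits(807489L, 200000L, Codec.SNAPPY)); // prints 1
    }
}
```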
2015-05-20 16:15:12,041 DEBUG [InputInitializer [Map 1] #0] exec.Utilities:
Found plan in cache for name: map.xml
2015-05-20 16:15:12,055 INFO [InputInitializer [Map 1] #0] exec.Utilities:
Processing alias gss_rsn2
2015-05-20 16:15:12,055 INFO [InputInitializer [Map 1] #0] exec.Utilities:
Adding input file
hdfs://xhadnnm1p.example.com:8020/apps/hive/warehouse/hue_debug.db/gss_rsn2
2015-05-20 16:15:12,057 INFO [InputInitializer [Map 1] #0] io.HiveInputFormat:
hive.io.file.readcolumn.ids=
2015-05-20 16:15:12,058 INFO [InputInitializer [Map 1] #0] io.HiveInputFormat:
hive.io.file.readcolumn.names=,arsn_cd,appl_user_id
2015-05-20 16:15:12,058 INFO [InputInitializer [Map 1] #0] io.HiveInputFormat:
Generating splits
2015-05-20 16:15:12,087 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
2015-05-20 16:15:12,087 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = true
2015-05-20 16:15:12,087 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
2015-05-20 16:15:12,087 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.domain.socket.path = /var/lib/hadoop-hdfs/dn_socket
2015-05-20 16:15:12,088 DEBUG [InputInitializer [Map 1] #0] retry.RetryUtils:
multipleLinearRandomRetry = null
2015-05-20 16:15:12,088 DEBUG [InputInitializer [Map 1] #0] ipc.Client: getting
client out of cache: org.apache.hadoop.ipc.Client@6c93595a
2015-05-20 16:15:12,091 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
2015-05-20 16:15:12,091 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = true
2015-05-20 16:15:12,091 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
2015-05-20 16:15:12,091 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.domain.socket.path = /var/lib/hadoop-hdfs/dn_socket
2015-05-20 16:15:12,091 DEBUG [InputInitializer [Map 1] #0] retry.RetryUtils:
multipleLinearRandomRetry = null
2015-05-20 16:15:12,092 DEBUG [InputInitializer [Map 1] #0] ipc.Client: getting
client out of cache: org.apache.hadoop.ipc.Client@6c93595a
2015-05-20 16:15:12,216 DEBUG [InputInitializer [Map 1] #0]
mapred.FileInputFormat: Time taken to get FileStatuses: 112
2015-05-20 16:15:12,216 INFO [InputInitializer [Map 1] #0]
mapred.FileInputFormat: Total input paths to process : 1
2015-05-20 16:15:12,219 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
2015-05-20 16:15:12,219 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = true
2015-05-20 16:15:12,219 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
2015-05-20 16:15:12,219 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.domain.socket.path = /var/lib/hadoop-hdfs/dn_socket
2015-05-20 16:15:12,220 DEBUG [InputInitializer [Map 1] #0] retry.RetryUtils:
multipleLinearRandomRetry = null
2015-05-20 16:15:12,220 DEBUG [InputInitializer [Map 1] #0] ipc.Client: getting
client out of cache: org.apache.hadoop.ipc.Client@6c93595a
2015-05-20 16:15:12,222 DEBUG [InputInitializer [Map 1] #0]
mapred.FileInputFormat: Total # of splits generated by getSplits: 1, TimeTaken:
132
2015-05-20 16:15:12,222 INFO [InputInitializer [Map 1] #0] io.HiveInputFormat:
number of splits 1
2015-05-20 16:15:12,222 INFO [InputInitializer [Map 1] #0] log.PerfLogger:
</PERFLOG method=getSplits start=1432152912040 end=1432152912222 duration=182
from=org.apache.hadoop.hive.ql.io.HiveInputFormat>
2015-05-20 16:15:12,222 INFO [InputInitializer [Map 1] #0]
tez.HiveSplitGenerator: Number of input splits: 1. 23542 available slots, 1.7
waves. Input format is: org.apache.hadoop.hive.ql.io.HiveInputFormat
2015-05-20 16:15:12,223 INFO [InputInitializer [Map 1] #0] exec.Utilities: PLAN
PATH =
hdfs://xhadnnm1p.example.com:8020/tmp/hive/gss2002/646469af-0a87-4080-9d2b-e40af4a34c0e/hive_2015-05-20_16-15-06_565_5281905327000741927-1/gss2002/_tez_scratch_dir/049d6a0d-aea4-4805-90a5-84b8c38fe1f4/map.xml
2015-05-20 16:15:12,223 INFO [InputInitializer [Map 1] #0] exec.Utilities:
***************non-local mode***************
2015-05-20 16:15:12,223 INFO [InputInitializer [Map 1] #0] exec.Utilities:
local path =
hdfs://xhadnnm1p.example.com:8020/tmp/hive/gss2002/646469af-0a87-4080-9d2b-e40af4a34c0e/hive_2015-05-20_16-15-06_565_5281905327000741927-1/gss2002/_tez_scratch_dir/049d6a0d-aea4-4805-90a5-84b8c38fe1f4/map.xml
2015-05-20 16:15:12,223 DEBUG [InputInitializer [Map 1] #0] exec.Utilities:
Loading plan from string:
/tmp/hive/gss2002/646469af-0a87-4080-9d2b-e40af4a34c0e/hive_2015-05-20_16-15-06_565_5281905327000741927-1/gss2002/_tez_scratch_dir/049d6a0d-aea4-4805-90a5-84b8c38fe1f4/map.xml
2015-05-20 16:15:12,223 INFO [InputInitializer [Map 1] #0] log.PerfLogger:
<PERFLOG method=deserializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2015-05-20 16:15:12,223 INFO [InputInitializer [Map 1] #0] exec.Utilities:
Deserializing MapWork via kryo
2015-05-20 16:15:12,239 INFO [InputInitializer [Map 1] #0] log.PerfLogger:
</PERFLOG method=deserializePlan start=1432152912223 end=1432152912239
duration=16 from=org.apache.hadoop.hive.ql.exec.Utilities>
2015-05-20 16:15:12,240 DEBUG [InputInitializer [Map 1] #0] tez.SplitGrouper:
Adding split
hdfs://xhadnnm1p.example.com:8020/apps/hive/warehouse/hue_debug.db/gss_rsn2/000000_0.snappy
to src new group? true
2015-05-20 16:15:12,240 INFO [InputInitializer [Map 1] #0] tez.SplitGrouper: #
Src groups for split generation: 2
2015-05-20 16:15:12,241 INFO [InputInitializer [Map 1] #0] tez.SplitGrouper:
Estimated number of tasks: 40021 for bucket 1
2015-05-20 16:15:12,241 INFO [InputInitializer [Map 1] #0]
split.TezMapredSplitsGrouper: Grouping splits in Tez
2015-05-20 16:15:12,241 INFO [InputInitializer [Map 1] #0]
split.TezMapredSplitsGrouper: Desired splits: 40021 too large. Desired
splitLength: 20 Min splitLength: 16777216 New desired splits: 1 Total length:
807489 Original splits: 1
2015-05-20 16:15:12,242 INFO [InputInitializer [Map 1] #0]
split.TezMapredSplitsGrouper: Using original number of splits: 1 desired
splits: 1
2015-05-20 16:15:12,242 INFO [InputInitializer [Map 1] #0] tez.SplitGrouper:
Original split size is 1 grouped split size is 1, for bucket: 1
2015-05-20 16:15:12,244 INFO [InputInitializer [Map 1] #0]
tez.HiveSplitGenerator: Number of grouped splits: 1
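The split-grouping arithmetic in the TezMapredSplitsGrouper lines above can be reproduced directly from the logged numbers. A worked sketch (illustrative only; the real logic lives in Tez's grouping code and is driven by configuration such as tez.grouping.min-size, and the method names here are made up for the example):

```java
// Worked example of the numbers in the log: 23542 slots at 1.7 waves gives
// 40021 desired tasks, but 807489 total bytes / 40021 tasks = 20-byte splits,
// far below the 16 MB minimum, so the count collapses back to 1.
public class GroupingSketch {
    static final long MIN_SPLIT_LENGTH = 16 * 1024 * 1024; // 16777216, the "Min splitLength" in the log

    // "23542 available slots, 1.7 waves" -> 40021 estimated tasks
    static int desiredTasks(int availableSlots, double waves) {
        return (int) (availableSlots * waves);
    }

    static long groupedSplits(long totalLength, int desiredTaskCount) {
        long desiredLength = totalLength / desiredTaskCount; // 807489 / 40021 = 20
        if (desiredLength < MIN_SPLIT_LENGTH) {
            // "Desired splits: 40021 too large" -> recompute from min length
            return Math.max(1, totalLength / MIN_SPLIT_LENGTH);
        }
        return desiredTaskCount;
    }

    public static void main(String[] args) {
        int tasks = desiredTasks(23542, 1.7);              // 40021
        System.out.println(groupedSplits(807489L, tasks)); // prints 1
    }
}
```

Note the mismatch this arithmetic exposes: the cluster could run tens of thousands of tasks, but the tiny compressed input forces everything through a single split and therefore a single mapper.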
2015-05-20 16:15:12,251 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
2015-05-20 16:15:12,251 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = true
2015-05-20 16:15:12,251 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
2015-05-20 16:15:12,251 DEBUG [InputInitializer [Map 1] #0]
hdfs.BlockReaderLocal: dfs.domain.socket.path = /var/lib/hadoop-hdfs/dn_socket
2015-05-20 16:15:12,251 DEBUG [InputInitializer [Map 1] #0] retry.RetryUtils:
multipleLinearRandomRetry = null
2015-05-20 16:15:12,251 DEBUG [InputInitializer [Map 1] #0] ipc.Client: getting
client out of cache: org.apache.hadoop.ipc.Client@6c93595a
2015-05-20 16:15:12,252 TRACE [InputInitializer [Map 1] #0]
ipc.ProtobufRpcEngine: 85: Call -> xhadnnm1p.example.com/167.69.200.200:8020:
getFileInfo {src:
"/tmp/hive/gss2002/646469af-0a87-4080-9d2b-e40af4a34c0e/hive_2015-05-20_16-15-06_565_5281905327000741927-1/gss2002/_tez_scratch_dir/049d6a0d-aea4-4805-90a5-84b8c38fe1f4/map.xml"}
2015-05-20 16:15:12,253 DEBUG [InputInitializer [Map 1] #0]
ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-05-20 16:15:12,253 TRACE [InputInitializer [Map 1] #0]
ipc.ProtobufRpcEngine: 85: Response <-
xhadnnm1p.example.com/167.69.200.200:8020: getFileInfo {}
2015-05-20 16:15:12,254 TRACE [InputInitializer [Map 1] #0]
ipc.ProtobufRpcEngine: 85: Call -> xhadnnm1p.example.com/167.69.200.200:8020:
getFileInfo {src:
"/tmp/hive/gss2002/646469af-0a87-4080-9d2b-e40af4a34c0e/hive_2015-05-20_16-15-06_565_5281905327000741927-1/gss2002/_tez_scratch_dir/049d6a0d-aea4-4805-90a5-84b8c38fe1f4/reduce.xml"}
2015-05-20 16:15:12,255 DEBUG [InputInitializer [Map 1] #0]
ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-05-20 16:15:12,255 TRACE [InputInitializer [Map 1] #0]
ipc.ProtobufRpcEngine: 85: Response <-
xhadnnm1p.example.com/167.69.200.200:8020: getFileInfo {}
2015-05-20 16:15:12,255 INFO [InputInitializer [Map 1] #0]
dag.RootInputInitializerManager: Succeeded InputInitializer for Input: gss_rsn2
on vertex vertex_1426958683478_173564_1_00 [Map 1]
> Hive 0.14.x and Hive 1.2.0 w/ Tez 0.5.3/Tez 0.6.0 Slow group by/order by
> ------------------------------------------------------------------------
>
> Key: HIVE-10746
> URL: https://issues.apache.org/jira/browse/HIVE-10746
> Project: Hive
> Issue Type: Bug
> Components: Hive, Tez
> Affects Versions: 0.14.0, 0.14.1, 1.2.0, 1.1.0, 1.1.1
> Reporter: Greg Senia
> Priority: Critical
> Attachments: slow_query_output.zip
>
>
> The following query, "SELECT appl_user_id, arsn_cd, COUNT(*) as RecordCount
> FROM adw.crc_arsn GROUP BY appl_user_id,arsn_cd ORDER BY appl_user_id;", runs
> consistently fast in Spark and MapReduce on Hive 1.2.0. When the same query is
> run with Tez as the execution engine, it consistently takes 300-500 seconds,
> which seems extremely long. The table is a basic tab-delimited external table
> backed by a single file in a folder. In Hive 0.13 this query runs fast with
> Tez; I tested Hive 0.14, 0.14.1/1.0.0, and now Hive 1.2.0, and something is
> clearly going awry with Hive on Tez as an execution engine for single-file or
> small-file tables. I can attach further logs if someone needs them for deeper
> analysis.
> HDFS Output:
> hadoop fs -ls /example_dw/crc/arsn
> Found 2 items
> -rwxr-x--- 6 loaduser hadoopusers 0 2015-05-17 20:03
> /example_dw/crc/arsn/_SUCCESS
> -rwxr-x--- 6 loaduser hadoopusers 3883880 2015-05-17 20:03
> /example_dw/crc/arsn/part-m-00000
> Hive Table Describe:
> hive> describe formatted crc_arsn;
> OK
> # col_name data_type comment
>
> arsn_cd string
> clmlvl_cd string
> arclss_cd string
> arclssg_cd string
> arsn_prcsr_rmk_ind string
> arsn_mbr_rspns_ind string
> savtyp_cd string
> arsn_eff_dt string
> arsn_exp_dt string
> arsn_pstd_dts string
> arsn_lstupd_dts string
> arsn_updrsn_txt string
> appl_user_id string
> arsntyp_cd string
> pre_d_indicator string
> arsn_display_txt string
> arstat_cd string
> arsn_tracking_no string
> arsn_cstspcfc_ind string
> arsn_mstr_rcrd_ind string
> state_specific_ind string
> region_specific_in string
> arsn_dpndnt_cd string
> unit_adjustment_in string
> arsn_mbr_only_ind string
> arsn_qrmb_ind string
>
> # Detailed Table Information
> Database: adw
> Owner: [email protected]
> CreateTime: Mon Apr 28 13:28:05 EDT 2014
> LastAccessTime: UNKNOWN
> Protect Mode: None
> Retention: 0
> Location: hdfs://xhadnnm1p.example.com:8020/example_dw/crc/arsn
>
> Table Type: EXTERNAL_TABLE
> Table Parameters:
> EXTERNAL TRUE
> transient_lastDdlTime 1398706085
>
> # Storage Information
> SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>
> InputFormat: org.apache.hadoop.mapred.TextInputFormat
> OutputFormat:
> org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
> Compressed: No
> Num Buckets: -1
> Bucket Columns: []
> Sort Columns: []
> Storage Desc Params:
> field.delim \t
> line.delim \n
> serialization.format \t
> Time taken: 1.245 seconds, Fetched: 54 row(s)
> Explain Hive 1.2.0 w/Tez:
> STAGE DEPENDENCIES:
> Stage-1 is a root stage
> Stage-0 depends on stages: Stage-1
> STAGE PLANS:
> Stage: Stage-1
> Tez
> Edges:
> Reducer 2 <- Map 1 (SIMPLE_EDGE)
> Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
> Explain Hive 0.13 w/Tez:
> STAGE DEPENDENCIES:
> Stage-1 is a root stage
> Stage-0 is a root stage
> STAGE PLANS:
> Stage: Stage-1
> Tez
> Edges:
> Reducer 2 <- Map 1 (SIMPLE_EDGE)
> Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
> Results:
> Hive 1.2.0 w/Spark 1.3.1:
> Finished successfully in 7.09 seconds
> Hive 1.2.0 w/Mapreduce:
> Stage 1: 32 Seconds
> Stage 2: 35 Seconds
> Hive 1.2.0 w/Tez 0.5.3:
> Time taken: 565.025 seconds, Fetched: 11516 row(s)
>
> Hive 0.13 w/Tez 0.4.0:
> Time taken: 13.552 seconds, Fetched: 11516 row(s)
> And finally, looking at the DAG attempt that is stuck for 500 seconds or so
> in Tez, it appears to be running the same method over and over again:
> 8 duration=2561 from=org.apache.hadoop.hive.ql.exec.tez.RecordProcessor>
> 2015-05-18 19:58:41,719 INFO [TezChild] exec.Utilities: PLAN PATH =
> hdfs://xhadnnm1p.example.com:8020/tmp/hive/gss2002/dbc4b0b5-7859-4487-a56d-969440bc5e90/hive_2015-05-18_19-58-25_951_5497535752804149087-1/gss2002/_tez_scratch_dir/4e635121-c4cd-4e3f-b96b-9f08a6a7bf5d/map.xml
> 2015-05-18 19:58:41,822 INFO [TezChild] exec.MapOperator: MAP[4]: records
> read - 1
> 2015-05-18 19:58:41,835 INFO [TezChild] io.HiveContextAwareRecordReader:
> Processing file
> hdfs://xhadnnm1p.example.com:8020/example_dw/crc/arsn/part-m-00000
> 2015-05-18 19:58:41,848 INFO [TezChild] io.HiveContextAwareRecordReader:
> Processing file
> hdfs://xhadnnm1p.example.com:8020/example_dw/crc/arsn/part-m-00000
> ......
> 2015-05-18 20:07:46,560 INFO [TezChild] io.HiveContextAwareRecordReader:
> Processing file
> hdfs://xhadnnm1p.example.com:8020/example_dw/crc/arsn/part-m-00000
> 2015-05-18 20:07:46,574 INFO [TezChild] io.HiveContextAwareRecordReader:
> Processing file
> hdfs://xhadnnm1p.example.com:8020/example_dw/crc/arsn/part-m-00000
> 2015-05-18 20:07:46,587 INFO [TezChild] io.HiveContextAwareRecordReader:
> Processing file
> hdfs://xhadnnm1p.example.com:8020/example_dw/crc/arsn/part-m-00000
> 2015-05-18 20:07:46,603 INFO [TezChild] io.HiveContextAwareRecordReader:
> Processing file
> hdfs://xhadnnm1p.example.com:8020/example_dw/crc/arsn/part-m-00000
> 2015-05-18 20:07:46,603 INFO [TezChild] log.PerfLogger: </PERFLOG
> method=TezRunProcessor start=1431993518764 end=1431994066603 duration=547839
> from=org.apache.hadoop.hive.ql.exec.tez.TezProcessor>
> 2015-05-18 20:07:46,603 INFO [TezChild] exec.MapOperator: 4 finished.
> closing...
> 2015-05-18 20:07:46,603 INFO [TezChild] exec.MapOperator:
> RECORDS_IN_Map_1:13440
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)