[ https://issues.apache.org/jira/browse/YARN-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936506#comment-16936506 ]

Wang, Xinglong commented on YARN-9847:
--------------------------------------

[~tangzhankun], the instance of ApplicationAttemptStateData is fully 
serialized into the app attempt znode, and besides the diagnostics info it 
contains several other fields, including startTime, finalTrackingUrl, 
exitStatus, etc., as shown below. So if diagnostics alone is 100KB, the 
serialized ApplicationAttemptStateData will be bigger than 100KB.

{code:java}
public abstract class ApplicationAttemptStateData {

  public static ApplicationAttemptStateData newInstance(
      ApplicationAttemptId attemptId, Container container,
      Credentials attemptTokens, long startTime, RMAppAttemptState finalState,
      String finalTrackingUrl, String diagnostics,
      FinalApplicationStatus amUnregisteredFinalStatus, int exitStatus,
      long finishTime, Map<String, Long> resourceSecondsMap,
      Map<String, Long> preemptedResourceSecondsMap) {
    ApplicationAttemptStateData attemptStateData =
        Records.newRecord(ApplicationAttemptStateData.class);
    attemptStateData.setAttemptId(attemptId);
    attemptStateData.setMasterContainer(container);
    attemptStateData.setAppAttemptTokens(attemptTokens);
    attemptStateData.setState(finalState);
    attemptStateData.setFinalTrackingUrl(finalTrackingUrl);
    attemptStateData.setDiagnostics(diagnostics == null ? "" : diagnostics);
    attemptStateData.setStartTime(startTime);
    attemptStateData.setFinalApplicationStatus(amUnregisteredFinalStatus);
    attemptStateData.setAMContainerExitStatus(exitStatus);
    attemptStateData.setFinishTime(finishTime);
    attemptStateData.setMemorySeconds(RMServerUtils
        .getOrDefault(resourceSecondsMap,
            ResourceInformation.MEMORY_MB.getName(), 0L));
    attemptStateData.setVcoreSeconds(RMServerUtils
        .getOrDefault(resourceSecondsMap, ResourceInformation.VCORES.getName(),
            0L));
    attemptStateData.setPreemptedMemorySeconds(RMServerUtils
        .getOrDefault(preemptedResourceSecondsMap,
            ResourceInformation.MEMORY_MB.getName(), 0L));
    attemptStateData.setPreemptedVcoreSeconds(RMServerUtils
        .getOrDefault(preemptedResourceSecondsMap,
            ResourceInformation.VCORES.getName(), 0L));
    attemptStateData.setResourceSecondsMap(resourceSecondsMap);
    attemptStateData
        .setPreemptedResourceSecondsMap(preemptedResourceSecondsMap);
    return attemptStateData;
  }
{code}
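
For reference, the whole record is written into the znode as one serialized 
blob, so its size can be checked directly. A minimal sketch of doing that 
(assuming the usual PBImpl serialization path that ZKRMStateStore uses when 
storing the attempt; this helper is illustrative, not part of the patch):

{code:java}
import org.apache.hadoop.yarn.server.resourcemanager.recovery.records.ApplicationAttemptStateData;
import org.apache.hadoop.yarn.server.resourcemanager.recovery.records.impl.pb.ApplicationAttemptStateDataPBImpl;

public final class AttemptStateSize {
  /**
   * Size of the fully serialized record, i.e. what actually lands in the
   * app attempt znode. It is always larger than the diagnostics field alone,
   * because the proto also carries attemptId, container, tokens, startTime,
   * finalTrackingUrl, exitStatus, resource-seconds maps, etc.
   */
  public static int serializedSize(ApplicationAttemptStateData stateData) {
    return ((ApplicationAttemptStateDataPBImpl) stateData)
        .getProto().toByteArray().length;
  }
}
{code}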

In the test, I limited the znode size to 100KB, which means the serialized 
ApplicationAttemptStateData must be at most 100KB. Then I generated 100KB of 
diagnostics data inside ApplicationAttemptStateData to make sure the fully 
serialized ApplicationAttemptStateData is bigger than 100KB, which triggers 
the truncation logic.
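
A minimal sketch of that setup (the method name is illustrative, not the one 
used in the patch's test):

{code:java}
/** Builds a diagnostics string of exactly sizeInBytes bytes (ASCII). */
static String hugeDiagnostics(int sizeInBytes) {
  StringBuilder sb = new StringBuilder(sizeInBytes);
  for (int i = 0; i < sizeInBytes; i++) {
    sb.append('a'); // 1 byte per character in UTF-8
  }
  return sb.toString();
}

// hugeDiagnostics(100 * 1024) gives 100KB of diagnostics, so the fully
// serialized ApplicationAttemptStateData is guaranteed to exceed the
// 100KB znode limit and the truncation logic kicks in.
{code}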

*Original*
ApplicationAttemptStateData serialized size > 100KB
ApplicationAttemptStateData.diagnostics serialized size = 100KB

*Truncated*
ApplicationAttemptStateData serialized size = 100KB
ApplicationAttemptStateData.diagnostics serialized size < 100KB, because the 
truncation happened on this field only.

This is why this assert holds: the stored diagnostics no longer equal the 
attempt's original diagnostics (the leading "" is just the JUnit assertion 
message).

{code:java}
assertNotEquals("", attempt1.getDiagnostics(),
   attemptStateData1.getDiagnostics()); 
{code}


> ZKRMStateStore will cause zk connection loss when writing huge data into znode
> ------------------------------------------------------------------------------
>
>                 Key: YARN-9847
>                 URL: https://issues.apache.org/jira/browse/YARN-9847
>             Project: Hadoop YARN
>          Issue Type: Improvement
>            Reporter: Wang, Xinglong
>            Assignee: Wang, Xinglong
>            Priority: Minor
>         Attachments: YARN-9847.001.patch, YARN-9847.002.patch
>
>
> Recently, we encountered an RM ZK connection issue because RM was trying to 
> write huge data into a znode. This makes ZK report a Len error and then 
> causes a ZK session connection loss. Eventually RM crashes due to the ZK 
> connection issue.
> *The fix*
> In order to protect the ResourceManager from crashing because of this, the 
> fix limits the size of the data stored per attempt by truncating the 
> diagnostic info when writing ApplicationAttemptStateData into the znode. The 
> size limit is regulated by -Djute.maxbuffer set in yarn-env.sh; the same 
> value is also used by the ZooKeeper server.
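> A rough sketch of the truncation idea (illustrative only; the method name, 
> the helper parameter and the head-vs-tail choice are not necessarily what 
> the patch does):
> {code:java}
> /**
>  * Caps the diagnostics so the serialized attempt record stays within the
>  * ZK server's jute.maxbuffer limit (ZooKeeper's default is 0xfffff, ~1MB).
>  *
>  * @param diagnostics      raw diagnostics from the AM
>  * @param otherFieldsSize  serialized size of the record without diagnostics
>  */
> static String truncateDiagnostics(String diagnostics, int otherFieldsSize) {
>   // The same -Djute.maxbuffer value is assumed to be exported in yarn-env.sh
>   // so that RM and the ZooKeeper server agree on the limit.
>   int maxBuffer = Integer.getInteger("jute.maxbuffer", 0xfffff);
>   // Treat one char as one byte for simplicity in this sketch.
>   int budget = Math.max(0, maxBuffer - otherFieldsSize);
>   if (diagnostics == null || diagnostics.length() <= budget) {
>     return diagnostics;
>   }
>   // Keep the tail of the message, which usually carries the final failure
>   // reason; the head could be kept instead, this choice is illustrative.
>   return diagnostics.substring(diagnostics.length() - budget);
> }
> {code}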
> *The story*
> ResourceManager Log
> {code:java}
> 2019-07-29 02:14:59,638 WARN org.apache.zookeeper.ClientCnxn: Session 
> 0x36ab902369100a0 for serverabc-zk-5.vip.ebay.com/10.210.82.29:2181, 
> unexpected error, closing socket connection and attempting reconnect
> java.io.IOException: Broken pipe
> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
> at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
> at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
> at sun.nio.ch.IOUtil.write(IOUtil.java:65)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
> at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:117)
> at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
> 2019-07-29 04:27:35,459 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: 
> Exception while executing a ZK operation.
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:935)
> at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:915)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$5.run(ZKRMStateStore.java:998)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$5.run(ZKRMStateStore.java:995)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1174)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1207)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doStoreMultiWithRetries(ZKRMStateStore.java:1001)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doStoreMultiWithRetries(ZKRMStateStore.java:1009)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.setDataWithRetries(ZKRMStateStore.java:1050)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.updateApplicationAttemptStateInternal(ZKRMStateStore.java:699)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppAttemptTransition.transition(RMStateStore.java:317)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppAttemptTransition.transition(RMStateStore.java:299)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:955)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:1036)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:1031)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> ResourceManager retries the connection to ZooKeeper until it exhausts the 
> retry count and then gives up.
> {code:java}
> 2019-07-29 02:25:06,404 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: 
> Retrying operation on ZK. Retry no. 999
> 2019-07-29 02:25:06,718 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: 
> Client will use GSSAPI as SASL mechanism.
> 2019-07-29 02:25:06,718 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
> connection to server serverabc-lvs-zk-5.vip.ebay.com/10.210.82.29:2181. Will 
> attempt to SASL-authenticate using Login Context section 'Client'
> 2019-07-29 02:25:06,718 INFO org.apache.zookeeper.ClientCnxn: Socket 
> connection established to serverabc-lvs-zk-5.vip.ebay.com/10.210.82.29:2181, 
> initiating session
> 2019-07-29 02:25:06,720 INFO org.apache.zookeeper.ClientCnxn: Session 
> establishment complete on server 
> serverabc-lvs-zk-5.vip.ebay.com/10.210.82.29:2181, sessionid = 
> 0x36ab902369100a0, negotiated timeout = 40000
> 2019-07-29 02:25:06,749 WARN org.apache.zookeeper.ClientCnxn: Session 
> 0x36ab902369100a0 for server 
> serverabc-lvs-zk-5.vip.ebay.com/10.210.82.29:2181, unexpected error, closing 
> socket connection and attempting reconnect
> java.io.IOException: Broken pipe
>         at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>         at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>         at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>         at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>         at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
>         at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:117)
>         at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
>         at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
> 2019-07-29 02:25:06,850 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Maxed 
> out ZK retries. Giving up!
> {code}
> The retry behavior is controlled by the following config:
> {code:java}
> <property>
> <name>yarn.resourcemanager.zk-state-store.parent-path</name>
> <value>/rmstore</value>
> <source>yarn-default.xml</source>
> </property>
> <property>
> <name>yarn.resourcemanager.zk-acl</name>
> <value>sasl:yarn:rwcda</value>
> <source>yarn-site.xml</source>
> </property>
> <property>
> <name>yarn.resourcemanager.zk-num-retries</name>
> <value>1000</value>
> <source>yarn-default.xml</source>
> </property>
> <property>
> <name>yarn.resourcemanager.zk-retry-interval-ms</name>
> <value>1000</value>
> <source>yarn-default.xml</source>
> </property>
> <property>
> <name>yarn.resourcemanager.store.class</name>
> <value>
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
> </value>
> <source>yarn-site.xml</source>
> </property>
> {code}
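> With the values above, the RM keeps retrying for roughly 1000 retries * 
> 1000 ms, i.e. about 16.7 minutes, before "Maxed out ZK retries. Giving up!". 
> A small sketch of reading those knobs (keys and defaults taken from the dump 
> above, written here as plain string keys):
> {code:java}
> import org.apache.hadoop.yarn.conf.YarnConfiguration;
>
> public final class ZkRetryBudget {
>   public static void main(String[] args) {
>     YarnConfiguration conf = new YarnConfiguration();
>     int numRetries = conf.getInt("yarn.resourcemanager.zk-num-retries", 1000);
>     long retryIntervalMs =
>         conf.getLong("yarn.resourcemanager.zk-retry-interval-ms", 1000);
>     // Worst-case time spent retrying before the RM gives up.
>     System.out.println("worst case ms = " + numRetries * retryIntervalMs);
>   }
> }
> {code}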
> Meanwhile, on the ZK side, a Len error was logged because the 9156576-byte 
> request exceeds the server's jute.maxbuffer limit:
> {code:java}
> 2019-07-29 02:14:45,809 [myid:5] - WARN 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception 
> causing close of session 0x36ab902369100a0 due to java.io.IOException: Len 
> error 9156576
> {code}
> After going through all the znodes, the following large znode was found. 
> Surprisingly, the reason for the huge size of this znode is that it carried 
> very large (18MB in our case) diagnostic info in ApplicationAttemptStateData.
> {code:java}
> /rmstore/ZKRMStateRoot/RMAppRoot/application_1568721711618_198456/appattempt_1568721711618_198456_000002
>  dataLength:18913283
> {code}
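> A small sketch of how such oversized znodes can be spotted programmatically 
> (the path and threshold are illustrative; the same dataLength is also shown 
> by zkCli's stat command):
> {code:java}
> import org.apache.zookeeper.ZooKeeper;
> import org.apache.zookeeper.data.Stat;
>
> public final class ZnodeSizeCheck {
>   /** Prints the znode path if its stored data exceeds limitBytes. */
>   static void reportLargeZnode(ZooKeeper zk, String path, int limitBytes)
>       throws Exception {
>     Stat stat = zk.exists(path, false);
>     if (stat != null && stat.getDataLength() > limitBytes) {
>       System.out.println(path + " dataLength:" + stat.getDataLength());
>     }
>   }
> }
> {code}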
> Below is just a piece of the diagnostic info for this attempt. I believe 
> that in the original design no one expected such a huge error message; 
> however, it has happened in our cluster twice recently.
> {code:java}
> User class threw exception: org.apache.spark.sql.AnalysisException: resolved 
> attribute(s) BMID#74532 missing from 
> BMID#2643,LEAF_CATEG_ID#19326,letterkey#35970,item_site_id#30429,id_in_leaf#2640,ITEM_BRAND#8208
>  in operator !Project [item_site_id#30429, LEAF_CATEG_ID#19326, 
> letterkey#35970, ITEM_BRAND#8208, id_in_leaf#2640, BMID#74532];;
> Sort [item_id#160 ASC NULLS FIRST], false
> +- RepartitionByExpression [item_id#160]
> +- Project [item_id#160, item_vrtn_id#36412, item_site_id#531, 
> BSNS_VRTCL_NAME#823, CATEG_LVL3_ID#803, LEAF_CATEG_ID#161, epid#36398, 
> BMID#2643, ITEM_TITL#36413, ITEM_BRAND#36414, TITL#36394 AS PROD_TITLE#74530, 
> BRAND#36395 AS PROD_BRAND#74531, SampleItem#36396]
> +- Filter isnotnull(bmid#2643)
> +- Join LeftOuter, (bmid#2643 = bmid#74532)
> :- SubqueryAlias a
> : +- SubqueryAlias cbt_bdtool_prodid_w
> : +- Project [item_id#160, CASE WHEN isnotnull(item_vrtn_id#1634) THEN 
> item_vrtn_id#1634 ELSE cast(0 as decimal(18,0)) END AS item_vrtn_id#36412, 
> item_site_id#531, BSNS_VRTCL_NAME#823, CATEG_LVL3_ID#803, LEAF_CATEG_ID#161, 
> regexp_replace(regexp_replace(lower(auct_titl#36740), 
> [^$₤£€#×^*-~@!?./&%()+=":_<>,0-9A-Za-zäöüßÄÖÜ ]+, ), s+, ) AS 
> ITEM_TITL#36413, regexp_replace(regexp_replace(item_brand#1563, 
> [^$₤£€#×^*-~@!?./&%()+=":_<>,0-9A-Za-zäöüßÄÖÜ ]+, ), s+, ) AS 
> ITEM_BRAND#36414, epid#36398, BMID#2643]
> : +- Join LeftOuter, ((item_id#160 = item_id#36781) && (item_site_id#531 = 
> item_site_id#53076))
> : :- Join LeftOuter, (item_id#160 = item_id#36778)
> : : :- Join LeftOuter, (item_id#160 = item_id#36674)
> : : : :- SubqueryAlias a
> : : : : +- SubqueryAlias cbt_bdtool_drv_epid_w
> : : : : +- Project [item_id#160, item_vrtn_id#1634, ITEM_SITE_ID#531, 
> BSNS_VRTCL_NAME#823, CATEG_LVL3_ID#803, LEAF_CATEG_ID#161, 
> coalesce(epid#1083, epid#1566) AS ePID#36398]
> : : : : +- Join LeftOuter, (item_id#160 = item_id#36399)
> : : : : :- SubqueryAlias drv
> : : : : : +- SubqueryAlias cbt_bdtool_drv_w
> : : : : : +- Project [item_id#160, ITEM_SITE_ID#531, BSNS_VRTCL_NAME#823, 
> CATEG_LVL3_ID#803, LEAF_CATEG_ID#161, coalesce(AT_PROD_REF_ID#1096, 
> PROD_REF_ID#1097) AS ePID#1083]
> : : : : : +- Join LeftOuter, ((item_id#160 = item_id#1094) && 
> (ITEM_SITE_ID#531 = ITEM_SITE_ID#1264))
> : : : : : :- SubqueryAlias drv
> : : : : : : +- SubqueryAlias cbt_bdtool_drv3_w
> : : : : : : +- Distinct
> : : : : : : +- Project [ITEM_ID#160, ITEM_SITE_ID#531, BSNS_VRTCL_NAME#823, 
> CATEG_LVL3_ID#803, LEAF_CATEG_ID#161]
> : : : : : : +- SubqueryAlias cbt_bdtool_drv2_w
> : : : : : : +- Project [ITEM_SITE_ID#531, BSNS_VRTCL_NAME#823, 
> CATEG_LVL3_ID#803, LEAF_CATEG_ID#161, ITEM_ID#160, cum_gmv#1072, 
> sec_gmv#1073, cum_pct#1074]
> : : : : : : +- Filter (cast(cum_pct#1074 as decimal(38,21)) <= cast(0.95 as 
> decimal(38,21)))
> : : : : : : +- SubqueryAlias rk
> : : : : : : +- Project [ITEM_SITE_ID#531, BSNS_VRTCL_NAME#823, 
> CATEG_LVL3_ID#803, LEAF_CATEG_ID#161, ITEM_ID#160, cum_gmv#1072, 
> sec_gmv#1073, cum_pct#1074]
> : : : : : : +- Project [ITEM_SITE_ID#531, BSNS_VRTCL_NAME#823, 
> CATEG_LVL3_ID#803, LEAF_CATEG_ID#161, ITEM_ID#160, GMV_PLAN_USD#162, 
> cum_gmv#1072, sec_gmv#1073, _we2#1079, _we3#1080, cum_gmv#1072, sec_gmv#1073, 
> CheckOverflow((promote_precision(cast(cast(_we2#1079 as decimal(18,4)) as 
> decimal(18,4))) / promote_precision(cast(cast(_we3#1080 as decimal(18,4)) as 
> decimal(18,4)))), DecimalType(38,21)) AS cum_pct#1074]
> : : : : : : +- Window [sum(GMV_PLAN_USD#162) 
> windowspecdefinition(ITEM_SITE_ID#531, BSNS_VRTCL_NAME#823, GMV_PLAN_USD#162 
> DESC NULLS LAST, ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS 
> cum_gmv#1072, sum(GMV_PLAN_USD#162) windowspecdefinition(ITEM_SITE_ID#531, 
> BSNS_VRTCL_NAME#823, GMV_PLAN_USD#162 DESC NULLS LAST, ROWS BETWEEN UNBOUNDED 
> PRECEDING AND UNBOUNDED FOLLOWING) AS sec_gmv#1073, sum(GMV_PLAN_USD#162) 
> windowspecdefinition(ITEM_SITE_ID#531, BSNS_VRTCL_NAME#823, GMV_PLAN_USD#162 
> DESC NULLS LAST, ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS 
> _we2#1079, sum(GMV_PLAN_USD#162) windowspecdefinition(ITEM_SITE_ID#531, 
> BSNS_VRTCL_NAME#823, GMV_PLAN_USD#162 DESC NULLS LAST, ROWS BETWEEN UNBOUNDED 
> PRECEDING AND UNBOUNDED FOLLOWING) AS _we3#1080], [ITEM_SITE_ID#531, 
> BSNS_VRTCL_NAME#823], [GMV_PLAN_USD#162 DESC NULLS LAST]
> : : : : : : +- Project [ITEM_SITE_ID#531, BSNS_VRTCL_NAME#823, 
> CATEG_LVL3_ID#803, LEAF_CATEG_ID#161, ITEM_ID#160, GMV_PLAN_USD#162]
> : : : : : : +- SubqueryAlias cbt_bdtool_drv1_w
> : : : : : : +- Aggregate [LSTG_ID#310, ITEM_SITE_ID#531, BSNS_VRTCL_NAME#823, 
> CATEG_LVL3_ID#803, MOVE_TO#748], [LSTG_ID#310 AS ITEM_ID#160, 
> ITEM_SITE_ID#531, BSNS_VRTCL_NAME#823, CATEG_LVL3_ID#803, MOVE_TO#748 AS 
> LEAF_CATEG_ID#161, 
> sum(cast(CheckOverflow((promote_precision(cast(CheckOverflow((promote_precision(cast(QTY#337
>  as decimal(20,2))) * promote_precision(cast(ITEM_PRICE#336 as 
> decimal(20,2)))), DecimalType(37,2)) as decimal(38,6))) * 
> promote_precision(cast(CURNCY_PLAN_RATE#364 as decimal(38,6)))), 
> DecimalType(38,8)) as decimal(18,2))) AS GMV_PLAN_USD#162]
> : : : : : : +- Filter ((((cast(rprtd_gmv_dt#348 as string) >= 2019-08-01) && 
> (cast(rprtd_gmv_dt#348 as string) <= 2019-08-31)) && (rprtd_wacko_yn#347 = 
> N)) && (cast(LSTG_SITE_ID#321 as decimal(10,0)) IN (cast(0 as 
> decimal(10,0)),cast(3 as decimal(10,0)),cast(15 as decimal(10,0)),cast(77 as 
> decimal(10,0)),cast(100 as decimal(10,0))) && BSNS_VRTCL_NAME#823 IN (Parts & 
> Accessories,Business & Industrial,Electronics,Lifestyle,Home & Garden)))
> : : : : : : +- Join LeftOuter, ((move_to#748 = LEAF_CATEG_ID#788) && 
> (SITE_ID#718 = SITE_ID#790))
> : : : : : : :- Join LeftOuter, ((leaf_categ_id#540 = LEAF_CATEG_ID#716) && 
> (cast(ITEM_SITE_ID#531 as decimal(9,0)) = cast(SITE_ID#718 as decimal(9,0))))
> : : : : : : : :- Join LeftOuter, ((lstg_id#310 = item_id#526) && 
> (LSTG_SITE_ID#321 = item_site_id#531))
> : : : : : : : : :- Join LeftOuter, (cast(LSTG_CURNCY_ID#323 as decimal(9,0)) 
> = cast(CURNCY_ID#363 as decimal(9,0)))
> : : : : : : : : : :- SubqueryAlias ck, `batch_views`.`dw_gem2_cmn_ck_i`
> : : : : : : : : : : +- Project [gen_attr_0#263 AS lstg_id#310, gen_attr_2#264 
> AS ck_trans_id#311, gen_attr_4#265 AS ck_date#312, gen_attr_6#266 AS 
> ck_ts#313, gen_attr_8#267 AS seller_type_cd#314, gen_attr_10#268 AS 
> glbl_rprt_bsns_ctgry_group_cd#315, gen_attr_12#269 AS user_dsgntn_id#316, 
> gen_attr_14#270 AS lstg_end_dt#317, gen_attr_16#271 AS byr_id#318, 
> gen_attr_18#272 AS byr_cntry_id#319, gen_attr_20#273 AS leaf_categ_id#320, 
> gen_attr_22#274 AS lstg_site_id#321, gen_attr_24#275 AS lstg_type_code#322, 
> gen_attr_26#276 AS lstg_curncy_id#323, gen_attr_28#277 AS 
> offrd_slng_chnl_grp_id#324, gen_attr_30#278 AS sold_slng_chnl_grp_id#325, 
> gen_attr_32#279 AS slr_id#326, gen_attr_34#280 AS slr_cntry_id#327, 
> gen_attr_36#281 AS lc_exchng_rate#328, gen_attr_38#282 AS 
> slr_lc_exchng_rate#329, gen_attr_40#283 AS byr_lc_exchng_rate#330, 
> gen_attr_42#284 AS lc_mnthly_exchng_rate#331, gen_attr_44#285 AS 
> slr_lc_mnthly_exchng_rate#332, gen_attr_46#286 AS 
> byr_lc_mnthly_exchng_rate#333, ... 23 more fields]
> : : : : : : : : : : +- SubqueryAlias gen_subquery_1
> : : : : : : : : : : +- Project [gen_attr_1#212 AS gen_attr_0#263, 
> gen_attr_3#213 AS gen_attr_2#264, gen_attr_5#260 AS gen_attr_4#265, 
> gen_attr_7#214 AS gen_attr_6#266, gen_attr_9#215 AS gen_attr_8#267, 
> gen_attr_11#216 AS gen_attr_10#268, gen_attr_13#217 AS gen_attr_12#269, 
> gen_attr_15#218 AS gen_attr_14#270, gen_attr_17#219 AS gen_attr_16#271, 
> gen_attr_19#220 AS gen_attr_18#272, gen_attr_21#221 AS gen_attr_20#273, 
> gen_attr_23#222 AS gen_attr_22#274, gen_attr_25#223 AS gen_attr_24#275, 
> gen_attr_27#251 AS gen_attr_26#276, gen_attr_29#224 AS gen_attr_28#277, 
> gen_attr_31#225 AS gen_attr_30#278, gen_attr_33#226 AS gen_attr_32#279, 
> gen_attr_35#227 AS gen_attr_34#280, gen_attr_37#228 AS gen_attr_36#281, 
> gen_attr_39#229 AS gen_attr_38#282, gen_attr_41#230 AS gen_attr_40#283, 
> gen_attr_43#231 AS gen_attr_42#284, gen_attr_45#232 AS gen_attr_44#285, 
> gen_attr_47#233 AS gen_attr_46#286, ... 23 more fields]
> : : : : : : : : : : +- SubqueryAlias dw_gem2_cmn_ck_i
> : : : : : : : : : : +- Project [gen_attr_1#212, gen_attr_3#213, 
> gen_attr_5#260, gen_attr_7#214, gen_attr_9#215, gen_attr_11#216, 
> gen_attr_13#217, gen_attr_15#218, gen_attr_17#219, gen_attr_19#220, 
> gen_attr_21#221, gen_attr_23#222, gen_attr_25#223, gen_attr_27#251, 
> gen_attr_29#224, gen_attr_31#225, gen_attr_33#226, gen_attr_35#227, 
> gen_attr_37#228, gen_attr_39#229, gen_attr_41#230, gen_attr_43#231, 
> gen_attr_45#232, gen_attr_47#233, ... 23 more fields]
> : : : : : : : : : : +- SubqueryAlias gen_subquery_0
> : : : : : : : : : : +- Project [gen_attr_96#163 AS gen_attr_1#212, 
> gen_attr_97#164 AS gen_attr_3#213, gen_attr_98#166 AS gen_attr_7#214, 
> gen_attr_99#167 AS gen_attr_9#215, gen_attr_100#168 AS gen_attr_11#216, 
> gen_attr_101#169 AS gen_attr_13#217, gen_attr_102#170 AS gen_attr_15#218, 
> gen_attr_103#171 AS gen_attr_17#219, gen_attr_104#172 AS gen_attr_19#220, 
> gen_attr_105#173 AS gen_attr_21#221, gen_attr_106#174 AS gen_attr_23#222, 
> gen_attr_107#175 AS gen_attr_25#223, gen_attr_108#176 AS gen_attr_29#224, 
> gen_attr_109#177 AS gen_attr_31#225, gen_attr_110#178 AS gen_attr_33#226, 
> gen_attr_111#179 AS gen_attr_35#227, gen_attr_112#180 AS gen_attr_37#228, 
> gen_attr_113#181 AS gen_attr_39#229, gen_attr_114#182 AS gen_attr_41#230, 
> gen_attr_115#183 AS gen_attr_43#231, gen_attr_116#184 AS gen_attr_45#232, 
> gen_attr_117#185 AS gen_attr_47#233, gen_attr_118#186 AS gen_attr_49#234, 
> gen_attr_119#187 AS gen_attr_51#235, ... 25 more fields]
> : : : : : : : : : : +- SubqueryAlias gen_subquery_0
> : : : : : : : : : : +- Project [lstg_id#824 AS gen_attr_96#163, 
> ck_trans_id#825 AS gen_attr_97#164, ck_date#826 AS gen_attr_146#165, 
> ck_ts#827 AS gen_attr_98#166, seller_type_cd#828 AS gen_attr_99#167, 
> glbl_rprt_bsns_ctgry_group_cd#829 AS gen_attr_100#168, user_dsgntn_id#830 AS 
> gen_attr_101#169, lstg_end_dt#831 AS gen_attr_102#170, byr_id#832 AS 
> gen_attr_103#171, byr_cntry_id#833 AS gen_attr_104#172, leaf_categ_id#834 AS 
> gen_attr_105#173, lstg_site_id#835 AS gen_attr_106#174, lstg_type_code#836 AS 
> gen_attr_107#175, offrd_slng_chnl_grp_id#837 AS gen_attr_108#176, 
> sold_slng_chnl_grp_id#838 AS gen_attr_109#177, slr_id#839 AS 
> gen_attr_110#178, slr_cntry_id#840 AS gen_attr_111#179, lc_exchng_rate#841 AS 
> gen_attr_112#180, slr_lc_exchng_rate#842 AS gen_attr_113#181, 
> byr_lc_exchng_rate#843 AS gen_attr_114#182, lc_mnthly_exchng_rate#844 AS 
> gen_attr_115#183, slr_lc_mnthly_exchng_rate#845 AS gen_attr_116#184, 
> byr_lc_mnthly_exchng_rate#846 AS gen_attr_117#185, bin_lstg_yn_id#847 AS 
> gen_attr_118#186, ... 25 more fields]
> : : : : : : : : : : +- SubqueryAlias dw_gem2_cmn_ck_i_tdcopy
> : : : : : : : : : : +- 
> Relation[lstg_id#824,ck_trans_id#825,ck_date#826,ck_ts#827,seller_type_cd#828,glbl_rprt_bsns_ctgry_group_cd#829,user_dsgntn_id#830,lstg_end_dt#831,byr_id#832,byr_cntry_id#833,leaf_categ_id#834,lstg_site_id#835,lstg_type_code#836,offrd_slng_chnl_grp_id#837,sold_slng_chnl_grp_id#838,slr_id#839,slr_cntry_id#840,lc_exchng_rate#841,slr_lc_exchng_rate#842,byr_lc_exchng_rate#843,lc_mnthly_exchng_rate#844,slr_lc_mnthly_exchng_rate#845,byr_lc_mnthly_exchng_rate#846,bin_lstg_yn_id#847,...
>  25 more fields] parquet
> : : : : : : : : : +- SubqueryAlias fx, 
> `batch_views`.`ssa_curncy_plan_rate_dim`
> : : : : : : : : : +- Project [gen_attr_0#357 AS curncy_id#363, gen_attr_1#358 
> AS curncy_plan_rate#364, gen_attr_2#359 AS cre_date#365, gen_attr_3#360 AS 
> cre_user#366, gen_attr_4#361 AS upd_date#367, gen_attr_5#362 AS upd_user#368]
> : : : : : : : : : +- SubqueryAlias ssa_curncy_plan_rate_dim
> : : : : : : : : : +- Project [gen_attr_0#357, gen_attr_1#358, gen_attr_2#359, 
> gen_attr_3#360, gen_attr_4#361, gen_attr_5#362]
> : : : : : : : : : +- SubqueryAlias gen_subquery_0
> : : : : : : : : : +- Project [curncy_id#873 AS gen_attr_0#357, 
> curncy_plan_rate#874 AS gen_attr_1#358, cre_date#875 AS gen_attr_2#359, 
> cre_user#876 AS gen_attr_3#360, upd_date#877 AS gen_attr_4#361, upd_user#878 
> AS gen_attr_5#362]
> : : : : : : : : : +- SubqueryAlias ssa_curncy_plan_rate_dim
> : : : : : : : : : +- 
> Relation[CURNCY_ID#873,CURNCY_PLAN_RATE#874,CRE_DATE#875,CRE_USER#876,UPD_DATE#877,UPD_USER#878]
>  parquet
> : : : : : : : : +- SubqueryAlias hot, `batch_views`.`dw_lstg_item`
> {code}


