Re: Graduation resolution proposal

2017-11-08 Thread Jim Apple
We are now on step 3, in which the IPMC votes on the proposed graduation
resolution:

https://lists.apache.org/thread.html/4abfbf40b7d822cdc19421ea55de21f19ce70c4fd73c6f4c8cc98ce8@%3Cgeneral.incubator.apache.org%3E

If it passes, the next step is a board resolution:

http://incubator.apache.org/guides/graduation.html#submission_of_the_resolution_to_the_board

On Tue, Oct 31, 2017 at 10:36 PM, Todd Lipcon  wrote:

> Thanks Jim!
>
> -Todd
>
> On Tue, Oct 31, 2017 at 10:35 PM, Jim Apple  wrote:
>
> > I have sent this to general@ for discussion:
> >
> > https://lists.apache.org/thread.html/6b8598408f76a472532923c5a7fc51
> > 0470b21671677ba3486568c57e@%3Cgeneral.incubator.apache.org%3E
> >
> > On Sat, Oct 28, 2017 at 8:12 AM, Jim Apple  wrote:
> > > Below is a graduation resolution I would like to send to
> > > general@incubator for discussion. It includes the PMC volunteers as
> > > well as the result of the first PMC chair election, which was me.
> > >
> > > Unless there is objection, I'll send this to general@incubator for
> > > discussion in a couple of days. If you want to participate in that
> > > discussion at general@incubator, you can subscribe by emailing
> > > general-subscr...@incubator.apache.org.
> > >
> > > As a reminder, the next steps I will take are:
> > >
> > > 1. Prepare a charter (i.e. this email)
> > >
> > > 2. Start a discussion on general@incubator.
> > >
> > > Should the discussion look mostly positive:
> > >
> > > 3. Call a vote on general@incubator.
> > >
> > > Should that vote succeed:
> > >
> > > 4. Submit the resolution to the ASF Board. See more here:
> > > http://incubator.apache.org/guides/graduation.html
> > >
> > > 
> > ---
> > >
> > > Establish the Apache Impala Project
> > >
> > > WHEREAS, the Board of Directors deems it to be in the best interests of
> > > the Foundation and consistent with the Foundation's purpose to
> establish
> > > a Project Management Committee charged with the creation and
> maintenance
> > > of open-source software, for distribution at no charge to the public,
> > > related to a high-performance distributed SQL engine.
> > >
> > > NOW, THEREFORE, BE IT RESOLVED, that a Project Management Committee
> > > (PMC), to be known as the "Apache Impala Project", be and hereby is
> > > established pursuant to Bylaws of the Foundation; and be it further
> > >
> > > RESOLVED, that the Apache Impala Project be and hereby is responsible
> > > for the creation and maintenance of software related to a
> > > high-performance distributed SQL engine; and be it further
> > >
> > > RESOLVED, that the office of "Vice President, Apache Impala" be and
> > > hereby is created, the person holding such office to serve at the
> > > direction of the Board of Directors as the chair of the Apache Impala
> > > Project, and to have primary responsibility for management of the
> > > projects within the scope of responsibility of the Apache Impala
> > > Project; and be it further
> > >
> > > RESOLVED, that the persons listed immediately below be and hereby are
> > > appointed to serve as the initial members of the Apache Impala Project:
> > >
> > >  * Alex Behm 
> > >  * Bharath Vissapragada  
> > >  * Brock Noland  
> > >  * Carl Steinbach
> > >  * Casey Ching   
> > >  * Daniel Hecht  
> > >  * Dimitris Tsirogiannis 
> > >  * Henry Robinson
> > >  * Ishaan Joshi  
> > >  * Jim Apple 
> > >  * John Russell  
> > >  * Juan Yu   
> > >  * Lars Volker   
> > >  * Lenni Kuff
> > >  * Marcel Kornacker  
> > >  * Martin Grund  
> > >  * Matthew Jacobs
> > >  * Michael Brown 
> > >  * Michael Ho
> > >  * Sailesh Mukil 
> > >  * Skye Wanderman-Milne  
> > >  * Taras Bobrovytsky 
> > >  * Tim Armstrong 
> > >  * Todd Lipcon   
> > >
> > > NOW, THEREFORE, BE IT FURTHER RESOLVED, that Jim Apple be appointed to
> > > the office of Vice President, Apache Impala, to serve in accordance
> with
> > > and subject to the direction of the Board of Directors and the Bylaws
> of
> > > the Foundation until death, resignation, retirement, removal or
> > > disqualification, or until a successor is appointed; and be it further
> > >
> > > RESOLVED, that the initial Apache Impala PMC be and hereby is tasked
> > > with the creation of a set of 

Re: jenkins.impala.io maintenance

2017-11-08 Thread Alexander Behm
Awesome! Thanks, Thomas.

On Wed, Nov 8, 2017 at 4:42 PM, Thomas Tauber-Marshall <
tmarsh...@cloudera.com> wrote:

> Jenkins has been updated.
>
> On Wed, Nov 8, 2017 at 3:13 PM Thomas Tauber-Marshall <
> tmarsh...@cloudera.com> wrote:
>
> > jenkins.impala.io needs updates for some plugins to address a new security
> > advisory.
> >
> > It will be put into maintenance mode at 3:00pm PST so no new jobs can be
> > submitted. The upgrade will happen after all pending jobs complete. Once
> > the upgrade completes, another email will be sent out.
> >
> > Please speak up if you have any objection to the above.
> >
> > Thanks,
> > Thomas
> >
>


Re: jenkins.impala.io maintenance

2017-11-08 Thread Thomas Tauber-Marshall
Jenkins has been updated.

On Wed, Nov 8, 2017 at 3:13 PM Thomas Tauber-Marshall <
tmarsh...@cloudera.com> wrote:

> jenkins.impala.io needs updates for some plugins to address a new security
> advisory.
>
> It will be put into maintenance mode at 3:00pm PST so no new jobs can be
> submitted. The upgrade will happen after all pending jobs complete. Once
> the upgrade completes, another email will be sent out.
>
> Please speak up if you have any objection to the above.
>
> Thanks,
> Thomas
>


jenkins.impala.io maintenance

2017-11-08 Thread Thomas Tauber-Marshall
jenkins.impala.io needs updates for some plugins to address a new security
advisory.

It will be put into maintenance mode at 3:00pm PST so no new jobs can be
submitted. The upgrade will happen after all pending jobs complete. Once
the upgrade completes, another email will be sent out.

Please speak up if you have any objection to the above.

Thanks,
Thomas


Re: long codegen time while codegen disabled

2017-11-08 Thread Tim Armstrong
This was cross-posted to a Cloudera forum:
http://community.cloudera.com/t5/Interactive-Short-cycle-SQL/long-codegen-time-while-codegen-disabled/m-p/61635#M3812?eid=1=1,
where I gave the same answer that Mostafa gave.

If you cross-post to multiple related mailing lists and forums, can you
please at least link to the previous post so that people don't waste time
answering the same question twice? We're happy to help but don't appreciate
having our time wasted.

On Wed, Nov 8, 2017 at 8:29 AM, Mostafa Mokhtar 
wrote:

> From the profile, codegen is disabled for HDFS_SCAN_NODE (id=8), not for the
> entire query.
> If you wish to disable codegen, run "set disable_codegen=1;" before
> executing the query from impala-shell, or add it to the connection string if
> using JDBC.
>
>HDFS_SCAN_NODE (id=8):(Total: 2.314ms, non-child: 2.314ms, %
> non-child: 100.00%)
>   Hdfs split stats (:<# splits>/):
> 2:1/14.28 KB
>   Hdfs Read Thread Concurrency Bucket: 0:0% 1:0% 2:0% 3:0%
>   File Formats: PARQUET/SNAPPY:3
>   ExecOption: Codegen enabled: 0 out of 1
>
>
> On a side note, I recommend trying out a more recent version of Impala,
> as a lot has improved since then.
>
>
> On Wed, Nov 8, 2017 at 12:13 AM, chen  wrote:
>
> > I have a query that took a long time on codegen:
> >
> >   CodeGen:(Total: 32m22s, non-child: 32m22s, % non-child: 100.00%)
> >  - CodegenTime: 0ns
> >  - CompileTime: 53.143ms
> >  - LoadTime: 58.680us
> >  - ModuleFileSize: 1.96 MB (2054956)
> >  - OptimizationTime: 32m22s
> >  - PrepareTime: 157.700ms
> >
> > but from the profile, we can see that codegen is disabled for this query:
> >
> > ExecOption: Codegen enabled: 0 out of 1
> >
> > attached is the complete profile.
> >
> >
> > Can anyone help figure out a way to bypass this?
> >
> >
> > Chen
> >
> >
>


Re: S3 connections

2017-11-08 Thread Mostafa Mokhtar
It should be safe to apply this setting to all machine sizes.
This setting is mostly to work around S3 connector timeout failures that
look like the one below.

The default value is too low to reliably run single user queries.

I1227 19:29:41.471863  1490 AmazonHttpClient.java:496] Unable to execute HTTP request: Timeout waiting for connection from pool
Java exception follows:
com.cloudera.org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
at com.cloudera.org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:232)
at com.cloudera.org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(PoolingClientConnectionManager.java:199)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.cloudera.com.amazonaws.http.conn.ClientConnectionRequestFactory$Handler.invoke(ClientConnectionRequestFactory.java:70)
at com.cloudera.com.amazonaws.http.conn.$Proxy21.getConnection(Unknown Source)
at com.cloudera.org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:456)
at com.cloudera.org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at com.cloudera.org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at com.cloudera.com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:728)
at com.cloudera.com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
at com.cloudera.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
at com.cloudera.com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
at com.cloudera.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1050)
at com.cloudera.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1027)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:913)
at org.apache.hadoop.fs.s3a.S3AFileSystem.open(S3AFileSystem.java:394)
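
For reference, here is a minimal sketch of how that setting might look as a core-site.xml property (the property name and the 1500 value come from the docs recommendation quoted below; how you actually apply it, for example via a Cloudera Manager safety valve, depends on your deployment):

  <property>
    <name>fs.s3a.connection.maximum</name>
    <value>1500</value>
  </property>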



On Wed, Nov 8, 2017 at 9:12 AM, Jim Apple  wrote:

> http://impala.apache.org/docs/build/html/topics/impala_s3.html
> recommends "Set the safety valve fs.s3a.connection.maximum to 1500 for
> impalad." For best performance, should this be increased for nodes
> with very high CPU, RAM, or bandwidth? Or decreased for less-beefy
> nodes?
>


S3 connections

2017-11-08 Thread Jim Apple
http://impala.apache.org/docs/build/html/topics/impala_s3.html
recommends "Set the safety valve fs.s3a.connection.maximum to 1500 for
impalad." For best performance, should this be increased for nodes
with very high CPU, RAM, or bandwidth? Or decreased for less-beefy
nodes?


Re: A question about loading data by functional-query workload

2017-11-08 Thread Jim Apple
The recommended way to set up a development environment is to run
https://github.com/apache/incubator-impala/blob/master/bin/bootstrap_development.sh
which calls ./buildall.sh -noclean -format -testdata -skiptests. That is
also the recommended way to load the test data.
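
In case it helps, a minimal sketch of those steps (assuming $IMPALA_HOME points at your incubator-impala checkout and the usual prerequisites are installed):

  cd "$IMPALA_HOME"
  ./bin/bootstrap_development.sh
  # or, if the environment is already bootstrapped, just (re)load the test data:
  ./buildall.sh -noclean -format -testdata -skiptests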

On Tue, Nov 7, 2017 at 10:17 PM, Jin Chul Kim  wrote:
> Hi,
>
> I would like to run the E2E tests using the document:
> https://cwiki.apache.org/confluence/display/IMPALA/How+to+load+and+run+Impala+tests
> .
>
> Here is my command: ./tests/run-tests.py query_test/test_queries.py -k
> TestQueriesTextTables
>
> It failed because the functional database was not found. Should I load the data manually?
> Anyway, I found ./bin/load-data.sh by chance and ran it with the
> command: ./bin/load-data.py -w functional-query
> By the way, it failed because no file matched the path
> ${IMPALA_HOME}/testdata/target/AllTypes/090101.txt. I can't find the
> directory ${IMPALA_HOME}/testdata/target/AllTypes. I guess it is supposed to be
> generated internally. Would you please guide me?
>
> 0: jdbc:hive2://localhost:11050/default> LOAD DATA LOCAL INPATH
> '/home/jinchulkim/workspace/Impala/testdata/target/AllTypes/090101.txt'
> OVERWRITE INTO TABLE functional.alltypes PARTITION(year=2009, month=1);
> going to print operations logs
> printed operations logs
> Getting log thread is interrupted, since query is done!
> Error: Error while compiling statement: FAILED: SemanticException Line 1:23
> Invalid path
> ''/home/jinchulkim/workspace/Impala/testdata/target/AllTypes/090101.txt'':
> No files matching path
> file:/home/jinchulkim/workspace/Impala/testdata/target/AllTypes/090101.txt
> (state=42000,code=4)
> org.apache.hive.service.cli.HiveSQLException: Error while compiling
> statement: FAILED: SemanticException Line 1:23 Invalid path
> ''/home/jinchulkim/workspace/Impala/testdata/target/AllTypes/090101.txt'':
> No files matching path
> file:/home/jinchulkim/workspace/Impala/testdata/target/AllTypes/090101.txt
>
> Best regards,
> Jinchul


Re: long codegen time while codegen disabled

2017-11-08 Thread Mostafa Mokhtar
From the profile, codegen is disabled for HDFS_SCAN_NODE (id=8), not for the
entire query.
If you wish to disable codegen, run "set disable_codegen=1;" before
executing the query from impala-shell, or add it to the connection string if
using JDBC.

   HDFS_SCAN_NODE (id=8):(Total: 2.314ms, non-child: 2.314ms, %
non-child: 100.00%)
  Hdfs split stats (:<# splits>/):
2:1/14.28 KB
  Hdfs Read Thread Concurrency Bucket: 0:0% 1:0% 2:0% 3:0%
  File Formats: PARQUET/SNAPPY:3
  ExecOption: Codegen enabled: 0 out of 1
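
For illustration, a minimal sketch of setting that option from impala-shell (the host and query here are just placeholders):

  $ impala-shell -i my-impalad-host:21000
  [my-impalad-host:21000] > set disable_codegen=1;
  [my-impalad-host:21000] > select count(*) from my_table;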


On a side note, I recommend trying out a more recent version of Impala,
as a lot has improved since then.


On Wed, Nov 8, 2017 at 12:13 AM, chen  wrote:

> I have a query that took a long time on codegen:
>
>   CodeGen:(Total: 32m22s, non-child: 32m22s, % non-child: 100.00%)
>  - CodegenTime: 0ns
>  - CompileTime: 53.143ms
>  - LoadTime: 58.680us
>  - ModuleFileSize: 1.96 MB (2054956)
>  - OptimizationTime: 32m22s
>  - PrepareTime: 157.700ms
>
> but from the profile, we can see that codegen is disabled for this query:
>
> ExecOption: Codegen enabled: 0 out of 1
>
> attached is the complete profile.
>
>
> Can anyone help figure out a way to bypass this?
>
>
> Chen
>
>


Error Invalid TimestampValue - Timestamp conversion error Impala Kudu

2017-11-08 Thread chaitra shivakumar
Hi,

We recently upgraded to Kudu 1.5 and Impala 2.10 on Cloudera CDH 5.13.

I am trying to merge data from one table into another using a left outer
join.

Halfway through the merge I get an error which does not really point me to
the problematic row.
The error seen is:

   - Query Status: Invalid TimestampValue: -16532:15:03.6

I can see the error could be because of some conversion issue, but I was
trying to dig a little deeper into the logs to pinpoint what exactly
causes this problem, and all I could find was this:

cc:55

Invalid TimestampValue: -16532:15:03.6
@   0x83d85a  impala::Status::Status()
@   0x8415e2  impala::WriteKuduTimestampValue()
@   0x842437  impala::WriteKuduValue()
@   0xce2900  impala::KuduTableSink::Send()
@   0xa50fc4  impala::FragmentInstanceState::ExecInternal()
@   0xa543b9  impala::FragmentInstanceState::Exec()
@   0xa30b38  impala::QueryState::ExecFInstance()
@   0xbd4722  impala::Thread::SuperviseThread()
@   0xbd4e84  boost::detail::thread_data<>::run()
@   0xe6113a  (unknown)
@ 0x7f56b4210dc5  start_thread
@ 0x7f56b3f3dced  __clone

The data set has a few hundred thousand rows, so it is hard to pinpoint manually.

Is there a way to get a more concrete error to pinpoint the problem,
or some resources on how to resolve it?









-- 
regards
Chaitra


RUNTIME ERROR, Error Invalid TimestampValue

2017-11-08 Thread chaitra shivakumar
Hi,

We recently upgraded to Kudu 1.5 and Impala 2.10 on Cloudera CDH 5.13.

I am trying to merge data from one table into another table using a left
outer join.

Please note both tables have the same field types. The problem happens when we
try to insert the timestamp field of table1 into the timestamp field of table2.
Both table1 and table2 are Kudu tables with external Impala tables.
I am running the query from Impala.

Halfway through the merge I get an error which does not really point me to
the problematic row.
The error seen is:

   - Query Status: RUNTIME ERROR, Invalid TimestampValue: -16532:15:03.6

I can see the error could be because of some conversion issue, but I was
trying to dig a little deeper into the logs to pinpoint what exactly
causes this problem, and all I could find was this:

cc:55

Invalid TimestampValue: -16532:15:03.6
@   0x83d85a  impala::Status::Status()
@   0x8415e2  impala::WriteKuduTimestampValue()
@   0x842437  impala::WriteKuduValue()
@   0xce2900  impala::KuduTableSink::Send()
@   0xa50fc4  impala::FragmentInstanceState::ExecInternal()
@   0xa543b9  impala::FragmentInstanceState::Exec()
@   0xa30b38  impala::QueryState::ExecFInstance()
@   0xbd4722  impala::Thread::SuperviseThread()
@   0xbd4e84  boost::detail::thread_data<>::run()
@   0xe6113a  (unknown)
@ 0x7f56b4210dc5  start_thread
@ 0x7f56b3f3dced  __clone



The data set has a few hundred thousand rows, so it is hard to pinpoint manually.

Has anyone encountered this problem before? Is there a way to get a
more concrete error to pinpoint the problem, or some resources on how
to resolve it?


On Wed, Nov 8, 2017 at 10:48 AM, chaitra shivakumar <
chaitra.shivaku...@gmail.com> wrote:

>
> Hi,
>
> We recently upgraded to Kudu 1.5 and Impala 2.10 on Cloudera CDH 5.13.
>
> I am trying to merge data from one table into another using a left outer
> join.
>
> Halfway through the merge I get an error which does not really point me to
> the problematic row.
> The error seen is:
>
>- Query Status: Invalid TimestampValue: -16532:15:03.6
>
> I can see the error could be because of some conversion issue, but I was
> trying to dig a little deeper into the logs to pinpoint what exactly
> causes this problem, and all I could find was this:
>
> cc:55
>
> Invalid TimestampValue: -16532:15:03.6
> @   0x83d85a  impala::Status::Status()
> @   0x8415e2  impala::WriteKuduTimestampValue()
> @   0x842437  impala::WriteKuduValue()
> @   0xce2900  impala::KuduTableSink::Send()
> @   0xa50fc4  impala::FragmentInstanceState::ExecInternal()
> @   0xa543b9  impala::FragmentInstanceState::Exec()
> @   0xa30b38  impala::QueryState::ExecFInstance()
> @   0xbd4722  impala::Thread::SuperviseThread()
> @   0xbd4e84  boost::detail::thread_data<>::run()
> @   0xe6113a  (unknown)
> @ 0x7f56b4210dc5  start_thread
> @ 0x7f56b3f3dced  __clone
>
> The data set has a few hundred thousand rows, so it is hard to pinpoint manually.
>
> Is there a way to get a more concrete error to pinpoint the problem, or some
> resources on how to resolve it?
>
>
>
>
>
>
>
>
>
> --
> regards
> Chaitra
>



-- 
regards
Chaitra


long codegen time while codegen disabled

2017-11-08 Thread chen
I have a query that took a long time on codegen:

  CodeGen:(Total: 32m22s, non-child: 32m22s, % non-child: 100.00%)
 - CodegenTime: 0ns
 - CompileTime: 53.143ms
 - LoadTime: 58.680us
 - ModuleFileSize: 1.96 MB (2054956)
 - OptimizationTime: 32m22s
 - PrepareTime: 157.700ms

but from the profile, we can see that codegen is disabled for this query:

ExecOption: Codegen enabled: 0 out of 1

Attached is the complete profile.

Can anyone help figure out a way to bypass this?


Chen
Query (id=a84436eacc99694c:4870859f6ca7d094):
  Summary:
Session ID: 9245a5d4802eafa6:2d08911d4ea694b2
Session Type: HIVESERVER2
HiveServer2 Protocol Version: V6
Start Time: 2017-11-07 16:29:13.181284000
End Time: 2017-11-07 17:01:36.30062
Query Type: DML
Query State: FINISHED
Query Status: OK
Impala Version: impalad version 2.1.0-cdh5 RELEASE (build e48c2b48c53ea9601b8f47a39373aa8)
User: 
Connected User: 
Delegated User: 
Network Address: :::141.1.9.32:45361
Default Db: db_np
Sql Statement: INSERT INTO `db_np`.`t_ccbq_all_change` SELECT 
`v_msccbq`.`c_bh`, `v_msccbq`.`c_ajbh`, `v_msccbq`.`c_lx`, 
`v_msccbq`.`c_bqfdf`, `v_msccbq`.`c_sqdlr`, `v_msccbq`.`c_dlrlxfs`, 
`v_msccbq`.`c_sqrq`, `v_msccbq`.`n_sqje`, `v_msccbq`.`c_sqcc`, 
`v_msccbq`.`c_cdrq`, `v_msccbq`.`c_cdjg`, `v_msccbq`.`c_cdssd`, 
`v_msccbq`.`c_zs`, `v_msccbq`.`c_tzpsrq`, `v_msccbq`.`c_tzps`, 
`v_msccbq`.`c_yzpsrq`, `v_msccbq`.`c_yzps`, `v_msccbq`.`c_xwnr`, 
`v_msccbq`.`c_zxrq`, `v_msccbq`.`n_sjzxje`, `v_msccbq`.`c_jcbqrq`, 
`v_msccbq`.`c_zjbh`, `v_msccbq`.`n_xh`, `v_msccbq`.`n_bqqx`, 
`v_msccbq`.`c_sqxbrq`, `v_msccbq`.`n_sqbqf`, `v_msccbq`.`c_pmggrq`, 
`v_msccbq`.`c_pmggqj`, `v_msccbq`.`c_yszt`, `v_msccbq`.`c_ssbqcdws`, 
`v_msccbq`.`c_sfcszsfpt`, `v_msccbq`.`c_jfzt`, `v_msccbq`.`c_ssbqsqly`, 
`v_msccbq`.`c_jcbqsqly`, `v_msccbq`.`c_sqjcsj`, `v_msccbq`.`c_fyid`, 
`v_msccbq`.`c_fy`, `v_msccbq`.`n_fydm`, `v_msccbq`.`n_nply` FROM 
`db_np`.`v_msccbq`
Coordinator: master:22000
Plan: 

Estimated Per-Host Requirements: Memory=320.01MB VCores=9
WARNING: The following tables are missing relevant table and/or column 
statistics.
db_np.t_msccbq

F02:PLAN FRAGMENT [HASH(n_dm)]
  WRITE TO HDFS [db_np.t_ccbq_all_change, OVERWRITE=false]
  |  partitions=1
  |  hosts=3 per-host-mem=6.96KB
  |
  16:HASH JOIN [LEFT OUTER JOIN, BROADCAST]
  |  hash predicates: ccbq.n_jbfy = fy.n_fybs
  |  other predicates: fy.c_fyid IS NOT NULL
  |  hosts=3 per-host-mem=1.02KB
  |  tuple-ids=1N,0,3N,5N,7N,9N,11N,13N,14N row-size=1019B cardinality=21
  |
  |--25:EXCHANGE [BROADCAST]
  | hosts=3 per-host-mem=0B
  | tuple-ids=14 row-size=73B cardinality=13
  |
  15:HASH JOIN [LEFT OUTER JOIN, BROADCAST]
  |  hash predicates: ccbq.n_jfzt = jfzt.n_dm
  |  hosts=3 per-host-mem=128B
  |  tuple-ids=1N,0,3N,5N,7N,9N,11N,13N row-size=945B cardinality=21
  |
  |--24:EXCHANGE [BROADCAST]
  | hosts=3 per-host-mem=0B
  | tuple-ids=13 row-size=29B cardinality=4
  |
  14:HASH JOIN [LEFT OUTER JOIN, BROADCAST]
  |  hash predicates: ccbq.n_jcbqsqly = n_dm
  |  hosts=3 per-host-mem=1.82KB
  |  tuple-ids=1N,0,3N,5N,7N,9N,11N row-size=916B cardinality=21
  |
  |--23:EXCHANGE [BROADCAST]
  | hosts=3 per-host-mem=0B
  | tuple-ids=11 row-size=81B cardinality=21
  |
  13:HASH JOIN [LEFT OUTER JOIN, BROADCAST]
  |  hash predicates: ccbq.n_ssbqsqly = n_dm
  |  hosts=3 per-host-mem=1.82KB
  |  tuple-ids=1N,0,3N,5N,7N,9N row-size=836B cardinality=21
  |
  |--22:EXCHANGE [BROADCAST]
  | hosts=3 per-host-mem=0B
  | tuple-ids=9 row-size=81B cardinality=21
  |
  12:HASH JOIN [LEFT OUTER JOIN, BROADCAST]
  |  hash predicates: ccbq.n_yszt = n_dm
  |  hosts=3 per-host-mem=1.82KB
  |  tuple-ids=1N,0,3N,5N,7N row-size=755B cardinality=21
  |
  |--21:EXCHANGE [BROADCAST]
  | hosts=3 per-host-mem=0B
  | tuple-ids=7 row-size=81B cardinality=21
  |
  11:HASH JOIN [LEFT OUTER JOIN, BROADCAST]
  |  hash predicates: ccbq.n_cdjg = n_dm
  |  hosts=3 per-host-mem=1.82KB
  |  tuple-ids=1N,0,3N,5N row-size=674B cardinality=21
  |
  |--20:EXCHANGE [BROADCAST]
  | hosts=3 per-host-mem=0B
  | tuple-ids=5 row-size=81B cardinality=21
  |
  10:HASH JOIN [LEFT OUTER JOIN, BROADCAST]
  |  hash predicates: ccbq.n_lx = n_dm
  |  hosts=3 per-host-mem=1.82KB
  |  tuple-ids=1N,0,3N row-size=593B cardinality=21
  |
  |--19:EXCHANGE [BROADCAST]
  | hosts=3 per-host-mem=0B
  | tuple-ids=3 row-size=81B cardinality=21
  |
  09:HASH JOIN [RIGHT OUTER JOIN, PARTITIONED]
  |  hash predicates: n_dm = ccbq.n_bqfdf
  |  hosts=3 per-host-mem=0B
  |  tuple-ids=1N,0 row-size=513B cardinality=21
  |
  |--18:EXCHANGE [HASH(ccbq.n_bqfdf)]
  | hosts=1 per-host-mem=0B
  | tuple-ids=0 row-size=432B cardinality=0
  |
  17:EXCHANGE [HASH(n_dm)]
 hosts=3 per-host-mem=0B
 tuple-ids=1 row-size=81B cardinality=21

F09:PLAN FRAGMENT [RANDOM]
  DATASTREAM SINK [FRAGMENT=F02, EXCHANGE=25,