[jira] [Created] (HIVE-21309) Support for Schema Tool to use HMS TLS to the Database properties

2019-02-21 Thread Morio Ramdenbourg (JIRA)
Morio Ramdenbourg created HIVE-21309:


 Summary: Support for Schema Tool to use HMS TLS to the Database 
properties
 Key: HIVE-21309
 URL: https://issues.apache.org/jira/browse/HIVE-21309
 Project: Hive
  Issue Type: New Feature
  Components: Metastore, Standalone Metastore
Reporter: Morio Ramdenbourg


[HIVE-20992|https://issues.apache.org/jira/browse/HIVE-20992] added properties 
to configure TLS to the HMS backend database (_hive.metastore.dbaccess.ssl.*_). 
The changes were made in the ObjectStore implementation in 
[ObjectStore#configureSSL|https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java#L349-L380],
 and those TLS properties are consumed before establishing a connection to the 
database. However, since the Schema Tool doesn't follow this same code path, it 
doesn't use any of these TLS properties.

We should add support for the schema tool to read in and use these properties, 
so that it can also use TLS encryption to communicate with the HMS database.
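For context, a sketch of what the HIVE-20992 properties look like in a metastore configuration file. The property names below are quoted from memory and should be verified against MetastoreConf before relying on them:

```xml
<!-- Sketch only: verify property names against MetastoreConf / HIVE-20992 -->
<property>
  <name>metastore.dbaccess.ssl.use.SSL</name>
  <value>true</value>
</property>
<property>
  <name>metastore.dbaccess.ssl.truststore.path</name>
  <value>/path/to/truststore.jks</value>
</property>
<property>
  <name>metastore.dbaccess.ssl.truststore.password</name>
  <value>changeit</value>
</property>
```

The Schema Tool would need to read the same properties before opening its JDBC connection.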



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] pudidic opened a new pull request #547: HIVE-21294: Vectorization: 1-reducer Shuffle can skip the object hash…

2019-02-21 Thread GitBox
pudidic opened a new pull request #547: HIVE-21294: Vectorization: 1-reducer 
Shuffle can skip the object hash…
URL: https://github.com/apache/hive/pull/547
 
 
   … functions (Teddy Choi)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


Re: Review Request 70031: HIVE-21167

2019-02-21 Thread Deepak Jaiswal

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70031/
---

(Updated Feb. 22, 2019, 7:19 a.m.)


Review request for hive, Jason Dere and Vaibhav Gumashta.


Changes
---

Added the union test which identified an issue which is fixed.
The followup JIRA to show bucketing version in explain extended is created.
https://issues.apache.org/jira/browse/HIVE-21304


Bugs: HIVE-21167
https://issues.apache.org/jira/browse/HIVE-21167


Repository: hive-git


Description
---

Bucketing: Bucketing version 1 is incorrectly partitioning data


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java 4b10e8974e 
  ql/src/test/queries/clientpositive/murmur_hash_migration.q 2b8da9f683 
  ql/src/test/results/clientpositive/llap/dynpart_sort_opt_vectorization.q.out 
5a2cd47381 
  ql/src/test/results/clientpositive/llap/murmur_hash_migration.q.out 
5343628252 


Diff: https://reviews.apache.org/r/70031/diff/2/

Changes: https://reviews.apache.org/r/70031/diff/1-2/


Testing
---


Thanks,

Deepak Jaiswal



[GitHub] sankarh opened a new pull request #546: HIVE-21307: Need to set GzipJSONMessageEncoder as default config for EVENT_MESSAGE_FACTORY.

2019-02-21 Thread GitBox
sankarh opened a new pull request #546: HIVE-21307: Need to set 
GzipJSONMessageEncoder as default config for EVENT_MESSAGE_FACTORY.
URL: https://github.com/apache/hive/pull/546
 
 
   




[GitHub] ashutosh-bapat opened a new pull request #545: HIVE-21306 : Upgrade HttpComponents to the latest versions similar to what Hadoop has done.

2019-02-21 Thread GitBox
ashutosh-bapat opened a new pull request #545: HIVE-21306 : Upgrade 
HttpComponents to the latest versions similar to what Hadoop has done.
URL: https://github.com/apache/hive/pull/545
 
 
   




[jira] [Created] (HIVE-21308) Negative forms of variables are not supported in HPL/SQL

2019-02-21 Thread Baoning He (JIRA)
Baoning He created HIVE-21308:
-

 Summary: Negative forms of variables are not supported in HPL/SQL
 Key: HIVE-21308
 URL: https://issues.apache.org/jira/browse/HIVE-21308
 Project: Hive
  Issue Type: Bug
  Components: hpl/sql
Reporter: Baoning He
Assignee: Baoning He


The following HPL/SQL program:
declare num = 1; print -num;
should print '-1', but it prints '1'.





[jira] [Created] (HIVE-21307) Need to set GzipJSONMessageEncoder as default config for EVENT_MESSAGE_FACTORY.

2019-02-21 Thread Sankar Hariappan (JIRA)
Sankar Hariappan created HIVE-21307:
---

 Summary: Need to set GzipJSONMessageEncoder as default config for 
EVENT_MESSAGE_FACTORY.
 Key: HIVE-21307
 URL: https://issues.apache.org/jira/browse/HIVE-21307
 Project: Hive
  Issue Type: Bug
  Components: Configuration, repl
Affects Versions: 4.0.0
Reporter: Sankar Hariappan
Assignee: Sankar Hariappan


Currently, we use JsonMessageEncoder as the default message factory for 
notification events. Some of these events are very large and cause OOM issues 
in the RDBMS, so GzipJSONMessageEncoder should be made the default message 
factory to optimise memory usage.
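A sketch of the corresponding configuration change, assuming the property name and the fully qualified encoder class are as referenced in this JIRA's title (verify against MetastoreConf and the actual patch):

```xml
<property>
  <name>hive.metastore.event.message.factory</name>
  <!-- assumed fully qualified class name; check the actual patch -->
  <value>org.apache.hadoop.hive.metastore.messaging.json.gzip.GzipJSONMessageEncoder</value>
</property>
```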





[jira] [Created] (HIVE-21306) Upgrade HttpComponents to the latest versions similar to what Hadoop has done.

2019-02-21 Thread Ashutosh Bapat (JIRA)
Ashutosh Bapat created HIVE-21306:
-

 Summary: Upgrade HttpComponents to the latest versions similar to 
what Hadoop has done.
 Key: HIVE-21306
 URL: https://issues.apache.org/jira/browse/HIVE-21306
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 4.0.0
Reporter: Ashutosh Bapat
Assignee: Ashutosh Bapat
 Fix For: 4.0.0


The use of HttpClient 4.5.2 breaks the use of SPNEGO over TLS: it mistakenly 
adds HTTPS instead of HTTP to the principal when running over SSL, thus 
breaking authentication.

This was upgraded recently in Hadoop and needs to be done for Hive as well.

See HADOOP-16076, where httpclient was upgraded from 4.5.2 to 4.5.6 and 
httpcore from 4.4.4 to 4.4.10.





[jira] [Created] (HIVE-21305) LLAP: Option to skip cache for ETL queries

2019-02-21 Thread Prasanth Jayachandran (JIRA)
Prasanth Jayachandran created HIVE-21305:


 Summary: LLAP: Option to skip cache for ETL queries
 Key: HIVE-21305
 URL: https://issues.apache.org/jira/browse/HIVE-21305
 Project: Hive
  Issue Type: Improvement
  Components: llap
Affects Versions: 4.0.0
Reporter: Prasanth Jayachandran


To prevent ETL queries from polluting the cache, it would be good to detect 
such queries at compile time and optionally skip LLAP IO for them.





[jira] [Created] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan

2019-02-21 Thread Deepak Jaiswal (JIRA)
Deepak Jaiswal created HIVE-21304:
-

 Summary: Show Bucketing version for ReduceSinkOp in explain 
extended plan
 Key: HIVE-21304
 URL: https://issues.apache.org/jira/browse/HIVE-21304
 Project: Hive
  Issue Type: Bug
Reporter: Deepak Jaiswal
Assignee: Deepak Jaiswal


Show Bucketing version for ReduceSinkOp in explain extended plan.

This helps identify which hashing algorithm is being used by ReduceSinkOp.

 

cc [~vgarg]





Re: Review Request 70031: HIVE-21167

2019-02-21 Thread Deepak Jaiswal


> On Feb. 21, 2019, 6:29 p.m., Vineet Garg wrote:
> > ql/src/test/results/clientpositive/llap/dynpart_sort_opt_vectorization.q.out
> > Line 1332 (original), 1332 (patched)
> > 
> >
> > Do you know the reason this size changed? This seems strange.

The size of one file went down by 2 and another went up by 2. It looks like 
this bug was hitting the test case.


- Deepak


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70031/#review213034
---


On Feb. 21, 2019, 8:59 a.m., Deepak Jaiswal wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70031/
> ---
> 
> (Updated Feb. 21, 2019, 8:59 a.m.)
> 
> 
> Review request for hive, Jason Dere and Vaibhav Gumashta.
> 
> 
> Bugs: HIVE-21167
> https://issues.apache.org/jira/browse/HIVE-21167
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Bucketing: Bucketing version 1 is incorrectly partitioning data
> 
> 
> Diffs
> -
> 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java 4b10e8974e 
>   ql/src/test/queries/clientpositive/murmur_hash_migration.q 2b8da9f683 
>   
> ql/src/test/results/clientpositive/llap/dynpart_sort_opt_vectorization.q.out 
> 5a2cd47381 
>   ql/src/test/results/clientpositive/llap/murmur_hash_migration.q.out 
> 5343628252 
> 
> 
> Diff: https://reviews.apache.org/r/70031/diff/1/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Deepak Jaiswal
> 
>



[jira] [Created] (HIVE-21303) Update TextRecordReader

2019-02-21 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HIVE-21303:
--

 Summary: Update TextRecordReader
 Key: HIVE-21303
 URL: https://issues.apache.org/jira/browse/HIVE-21303
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 4.0.0, 3.2.0
Reporter: BELUGA BEHR
Assignee: BELUGA BEHR
 Attachments: HIVE-21303.1.patch

Remove use of Deprecated 
{{org.apache.hadoop.mapred.LineRecordReader.LineReader}}

For every call to {{next}}, the code dives into the configuration map to see if 
this feature is enabled.  Just look it up once and cache the value.

{code:java}
public int next(Writable row) throws IOException {
...
if (HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVESCRIPTESCAPE)) {
  return HiveUtils.unescapeText((Text) row);
}
return bytesConsumed;
}
{code}
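A minimal, self-contained sketch of the suggested fix: hoist the configuration lookup into the constructor so it happens once instead of on every {{next}} call. Class and property names here are illustrative stand-ins, not the actual Hive implementation:

```java
import java.util.Properties;

// Illustrative sketch: cache the escape flag once at construction time
// instead of consulting the configuration on every next() call.
class EscapeFlagCache {
    private final boolean escapeEnabled;

    EscapeFlagCache(Properties conf) {
        // One-time lookup replacing the per-record HiveConf.getBoolVar(...)
        this.escapeEnabled = Boolean.parseBoolean(
                conf.getProperty("hive.transform.escape.input", "false"));
    }

    boolean isEscapeEnabled() {
        return escapeEnabled;
    }
}
```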

Other clean up.





Re: Review Request 70031: HIVE-21167

2019-02-21 Thread Vineet Garg

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70031/#review213034
---




ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java
Lines 230 (patched)


Can you also add comment explaining why this should be the last 
transformation?



ql/src/test/queries/clientpositive/murmur_hash_migration.q
Lines 71 (patched)


There doesn't seem to be any way currently to see the bucketing version 
used by the reduce sink op. It would be really useful to print this information 
in explain extended. It will help uncover bugs like this.



ql/src/test/queries/clientpositive/murmur_hash_migration.q
Lines 77 (patched)


Can you also add a test with insert select with union? something like 

insert into table acid_ptn_bucket1  select key, count(value), key from 
(select key, value from src where value > 2 group by key, value union all 
select key, '45' from src s2 where key > 1 group by key) sub group by key;



ql/src/test/results/clientpositive/llap/dynpart_sort_opt_vectorization.q.out
Line 1332 (original), 1332 (patched)


Do you know the reason this size changed? This seems strange.


- Vineet Garg


On Feb. 21, 2019, 8:59 a.m., Deepak Jaiswal wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70031/
> ---
> 
> (Updated Feb. 21, 2019, 8:59 a.m.)
> 
> 
> Review request for hive, Jason Dere and Vaibhav Gumashta.
> 
> 
> Bugs: HIVE-21167
> https://issues.apache.org/jira/browse/HIVE-21167
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Bucketing: Bucketing version 1 is incorrectly partitioning data
> 
> 
> Diffs
> -
> 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java 4b10e8974e 
>   ql/src/test/queries/clientpositive/murmur_hash_migration.q 2b8da9f683 
>   
> ql/src/test/results/clientpositive/llap/dynpart_sort_opt_vectorization.q.out 
> 5a2cd47381 
>   ql/src/test/results/clientpositive/llap/murmur_hash_migration.q.out 
> 5343628252 
> 
> 
> Diff: https://reviews.apache.org/r/70031/diff/1/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Deepak Jaiswal
> 
>



[GitHub] miklosgergely opened a new pull request #544: HIVE-16924 Support distinct in presence of Group By

2019-02-21 Thread GitBox
miklosgergely opened a new pull request #544: HIVE-16924 Support distinct in 
presence of Group By
URL: https://github.com/apache/hive/pull/544
 
 
   create table e011_01 (c1 int, c2 smallint);
   insert into e011_01 values (1, 1), (2, 2);
   
   These queries should work:
   
   select distinct c1, count(*) from e011_01 group by c1;
   select distinct c1, avg(c2) from e011_01 group by c1;
   
   Currently, you get : 
   FAILED: SemanticException 1:52 SELECT DISTINCT and GROUP BY can not be in 
the same query. Error encountered near token 'c1'




[GitHub] miklosgergely opened a new pull request #543: HIVE-21292: Break up DDLTask 1 - extract Database related operations

2019-02-21 Thread GitBox
miklosgergely opened a new pull request #543: HIVE-21292: Break up DDLTask 1 - 
extract Database related operations
URL: https://github.com/apache/hive/pull/543
 
 
   DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
also a huge class, which has a field for each DDL operation it supports. The 
goal is to refactor these in order to have everything cut into more manageable 
classes under the package org.apache.hadoop.hive.ql.exec.ddl:
   
   - have a separate class for each operation
   - have a package for each operation group (database ddl, table ddl, etc.), 
so the number of classes under a package is more manageable
   - make all the requests (DDLDesc subclasses) immutable
   - DDLTask should be agnostic to the actual operations
   - for now, ignore the issue of having some operations handled by DDLTask 
which are not actual DDL operations (lock, unlock, desc...)
   
   While there are two DDLTask and DDLWork classes in the code base, the new 
ones in the new package are called DDLTask2 and DDLWork2, thus avoiding the 
use of fully qualified class names where both the old and the new classes are 
in use.
   
   Step #1: extract all the database related operations from the old DDLTask, 
and move them under the new package. Also create the new internal framework.
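The structure described above can be sketched as follows; all names here are hypothetical placeholders, not the actual DDLTask2/DDLWork2 classes from the patch:

```java
// Hypothetical sketch of the refactoring: one small, immutable class per DDL
// operation, and an operation-agnostic dispatcher.
interface DdlOperation {
    String execute();
}

// Immutable "desc"-style request: all state is fixed at construction.
class CreateDatabaseOperation implements DdlOperation {
    private final String name;

    CreateDatabaseOperation(String name) {
        this.name = name;
    }

    @Override
    public String execute() {
        return "CREATE DATABASE " + name;
    }
}

// The task knows nothing about individual operations; it only dispatches.
class DdlDispatcher {
    static String run(DdlOperation op) {
        return op.execute();
    }
}
```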




[GitHub] jcamachor opened a new pull request #542: HIVE-21301

2019-02-21 Thread GitBox
jcamachor opened a new pull request #542: HIVE-21301
URL: https://github.com/apache/hive/pull/542
 
 
   




Disjoin alias

2019-02-21 Thread Daniel Takacs
I have been trying for a while to disjoin this alias, but nothing has worked. 
Who is the admin who can help?

Sent from my iPhone

Re: Moving forward with the timestamp proposal

2019-02-21 Thread Zoltan Ivanfi
Hi,

We can add these new SQL types by adding support to the file formats first,
but the most important and immediate goal is reserving these types for
their desired meaning, and that can already be done without such support.

Of course, eventually the new types need to be implemented as well, and for
that we will need support from the file format components. I have already
contacted the Avro, ORC, Parquet, Arrow, Kudu, Iceberg and CarbonData
communities to let them know of this new requirement. Parquet, Arrow and
Iceberg already have semantics metadata that supports LocalDateTime and
Instant semantics; we plan to actively drive their addition to Avro and
would also be happy to contribute to ORC. Regarding the OffsetDateTime
semantics, I don't know of any file format that already supports it natively.
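For reference, the three proposed semantics correspond directly to the java.time types of the same names; a small illustration:

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

// Illustration of the three proposed timestamp semantics via java.time.
class TimestampSemantics {
    // TIMESTAMP WITHOUT TIME ZONE: a wall-clock reading with no zone attached.
    static LocalDateTime local() {
        return LocalDateTime.of(2019, 2, 21, 12, 0);
    }

    // TIMESTAMP WITH LOCAL TIME ZONE: a fixed point on the time line,
    // rendered in the reader's local zone on display.
    static Instant instant() {
        return local().atZone(ZoneOffset.UTC).toInstant();
    }

    // TIMESTAMP WITH TIME ZONE: a wall-clock reading plus an explicit offset.
    static OffsetDateTime offset() {
        return local().atOffset(ZoneOffset.ofHours(-8));
    }
}
```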

Alternatively, we could also add the new types without such support, in
which case the semantics metadata could not be deduced from the files
themselves but would have to come directly from the user (at least
initially). This will be the case for text files, for example, where no
metadata can be stored in the files. I think we should reserve this approach
for file formats where having proper metadata in the files is impossible
(text files) or where the developers of a file format component prefer not
to add new types for this purpose (unlikely but possible).

Br,

Zoltan

On Thu, Feb 21, 2019 at 8:32 AM Wenchen Fan  wrote:

> I think this is the right direction to go, but I'm wondering how can Spark
> support these new types if the underlying data sources(like parquet files)
> do not support them yet.
>
> I took a quick look at the new doc for file formats, but not sure what's
> the proposal. Are we going to implement these new types in Parquet/Orc
> first? Or are we going to use low-level physical types directly and add
> Spark-specific metadata to Parquet/Orc files?
>
> On Wed, Feb 20, 2019 at 10:57 PM Zoltan Ivanfi 
> wrote:
>
> > Hi,
> >
> > Last december we shared a timestamp harmonization proposal
> >  with the Hive, Spark and Impala communities.
> This
> > was followed by an extensive discussion in January that lead to various
> > updates and improvements to the proposal, as well as the creation of a
> new
> > document for file format components. February has been quiet regarding
> this
> > topic and the latest revision of the proposal has been steady in the
> recent
> > weeks.
> >
> > In short, the following is being proposed (please see the document for
> > details):
> >
> >- The TIMESTAMP WITHOUT TIME ZONE type should have LocalDateTime
> >semantics.
> >- The TIMESTAMP WITH LOCAL TIME ZONE type should have Instant
> >semantics.
> >- The TIMESTAMP WITH TIME ZONE type should have OffsetDateTime
> >semantics.
> >
> > This proposal is in accordance with the SQL standard and many major DB
> > engines.
> >
> > Based on the feedback we got I believe that the latest revision of the
> > proposal addresses the needs of all affected components, therefore I
> would
> > like to move forward and create JIRA-s and/or roadmap documentation pages
> > for the desired semantics of the different SQL types according to the
> > proposal.
> >
> > Please let me know if you have any remaining concerns about the proposal
> or
> > about the course of action outlined above.
> >
> > Thanks,
> >
> > Zoltan
> >
>


Review Request 70031: HIVE-21167

2019-02-21 Thread Deepak Jaiswal

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70031/
---

Review request for hive, Jason Dere and Vaibhav Gumashta.


Bugs: HIVE-21167
https://issues.apache.org/jira/browse/HIVE-21167


Repository: hive-git


Description
---

Bucketing: Bucketing version 1 is incorrectly partitioning data


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java 4b10e8974e 
  ql/src/test/queries/clientpositive/murmur_hash_migration.q 2b8da9f683 
  ql/src/test/results/clientpositive/llap/dynpart_sort_opt_vectorization.q.out 
5a2cd47381 
  ql/src/test/results/clientpositive/llap/murmur_hash_migration.q.out 
5343628252 


Diff: https://reviews.apache.org/r/70031/diff/1/


Testing
---


Thanks,

Deepak Jaiswal



[jira] [Created] (HIVE-21302) datanucleus.schema.autoCreateAll=true will not work properly as schematool

2019-02-21 Thread Oleksandr Polishchuk (JIRA)
Oleksandr Polishchuk created HIVE-21302:
---

 Summary: datanucleus.schema.autoCreateAll=true will not work 
properly as schematool
 Key: HIVE-21302
 URL: https://issues.apache.org/jira/browse/HIVE-21302
 Project: Hive
  Issue Type: Bug
Affects Versions: 2.3.4, 2.1.0
Reporter: Oleksandr Polishchuk


The bug was found while working with a configured environment: an {{Apache 
Hadoop cluster}} and {{Apache Hive}} (the same issue occurs on both versions 
2.1 and 2.3).

The working environment was configured in the following steps:
 # Installed an Apache Hadoop cluster
 # Installed the Hive component over the cluster
 # Configured properties in {{hive-site.xml}}
{code:xml}
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=test_db;create=true</value>
  <description>the URL of the Derby database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/usr/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
  <description>creates necessary schema on a startup if one doesn't exist</description>
</property>
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>
{code}
 # Installed the {{[ij|http://db.apache.org/derby/papers/DerbyTut/index.html]}} 
utility to work with the embedded DB, following the steps in the guide.
 # Launched the services {{hiveserver2}} and {{metastore}}:
 ** 
{code:java}
hive --service hiveserver2
{code}
 ** {code:java}
 hive --service metastore
{code}
 # Started the {{ij}} utility in the following steps:
 ** 
{code:java}
$ cd $DERBY_INSTALL/bin
{code}
 ** 
{code:java}
 $ . setEmbeddedCP
{code}
 ** 
{code:java}
$ echo $CLASSPATH 
/opt/Apache/db-derby-10.14.2.0-bin/lib/derby.jar:/opt/Apache/db-derby-10.14.2.0-bin/lib/derbytools.jar:
{code}
 ** Start up {{ij}} with this command: {{./ij}}
{code:java}
 connect 'jdbc:derby:/opt/hadoop/hive/metastore_db';
{code}
  But performing that command produces the following error:
{code:java}
ij> connect 'jdbc:derby:/opt/hadoop/hive/metastore_db';
ERROR XJ040: Failed to start database '/opt/hadoop/hive/metastore_db' with 
class loader sun.misc.Launcher$AppClassLoader@5e2de80c, see the next exception 
for details.
ERROR XSDB6: Another instance of Derby may have already booted the database 
/opt/hadoop/hive/metastore_db.
{code}
To resolve this error, all Hive services (hiveserver2 and metastore) need to 
be stopped. After that, entering the connect command 
{{connect 'jdbc:derby:/opt/hadoop/hive/metastore_db';}} again succeeds.
 # Once the metastore service was launched, it generated all the tables listed 
in the 
[package.jdo|https://github.com/apache/hive/blob/branch-2.3/metastore/src/model/package.jdo]
 file, which can be seen by running the following command in the {{ij}} 
utility: {{ij(CONNECTION1)> show tables;}}
{code:java}
ij> connect 'jdbc:derby:/opt/hadoop/hive/metastore_db';
ij(CONNECTION1)> show tables;
TABLE_SCHEM |TABLE_NAME    |REMARKS     

SYS |SYSALIASES    |    
SYS |SYSCHECKS |    
SYS |SYSCOLPERMS   |    
SYS |SYSCOLUMNS    |    
SYS |SYSCONGLOMERATES  |    
SYS |SYSCONSTRAINTS    |    
SYS |SYSDEPENDS    |    
SYS |SYSFILES  |    
SYS |SYSFOREIGNKEYS    |    
SYS |SYSKEYS   |    
SYS |SYSPERMS  |    
SYS |SYSROLES  |    
SYS |SYSROUTINEPERMS   |    
SYS |SYSSCHEMAS    |    
SYS |SYSSEQUENCES  |    
SYS |SYSSTATEMENTS |    
SYS |SYSSTATISTICS |    
SYS |SYSTABLEPERMS |    
SYS |SYSTABLES |    
SYS |SYSTRIGGERS   |    
SYS |SYSUSERS  |    
SYS |SYSVIEWS  |    
SYSIBM  |SYSDUMMY1 |    
APP |BUCKETING_COLS    |