[jira] [Updated] (TRAFODION-2980) Add instr built-in function instr as Oracle

2018-03-05 Thread Yuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Liu updated TRAFODION-2980:

Summary: Add instr built-in function instr as Oracle  (was: Add instr 
built-in function as Oracle)

> Add instr built-in function instr as Oracle
> ---
>
> Key: TRAFODION-2980
> URL: https://issues.apache.org/jira/browse/TRAFODION-2980
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: sql-general
>Reporter: Yuan Liu
>Priority: Major
>
> Add a built-in function like Oracle's instr.
>  
> For example,
> select instr('helloworld', 'l', 2, 2) from dual; -- returns 4
> select instr('helloworld', 'l', 4, 2) from dual; -- returns 9
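The requested Oracle-style INSTR(string, substring, position, occurrence) semantics can be sketched in Python. This is a hedged emulation of the behavior the examples above imply, not Trafodion's implementation; negative (backward-search) positions are omitted:

```python
def instr(s, sub, pos=1, occ=1):
    """Emulate Oracle INSTR for positive positions: return the 1-based
    index of the occ-th occurrence of sub in s at or after pos, else 0."""
    idx = pos - 1                  # convert the 1-based start to 0-based
    for _ in range(occ):
        idx = s.find(sub, idx)
        if idx < 0:
            return 0               # fewer than occ occurrences found
        idx += 1                   # next search starts one char later
    return idx                     # idx is now the 1-based match position

print(instr('helloworld', 'l', 2, 2))  # 4
print(instr('helloworld', 'l', 4, 2))  # 9
```

With 'l' occurring at 1-based positions 3, 4, and 9, searching from position 2 makes the 2nd occurrence position 4, and searching from position 4 makes the 2nd occurrence position 9, matching the examples.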



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2980) Add built-in function instr as Oracle

2018-03-05 Thread Yuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Liu updated TRAFODION-2980:

Summary: Add built-in function instr as Oracle  (was: Add instr built-in 
function instr as Oracle)

> Add built-in function instr as Oracle
> -
>
> Key: TRAFODION-2980
> URL: https://issues.apache.org/jira/browse/TRAFODION-2980
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: sql-general
>Reporter: Yuan Liu
>Priority: Major
>
> Add a built-in function like Oracle's instr.
>  
> For example,
> select instr('helloworld', 'l', 2, 2) from dual; -- returns 4
> select instr('helloworld', 'l', 4, 2) from dual; -- returns 9





[jira] [Created] (TRAFODION-2980) Add instr built-in function as Oracle

2018-03-05 Thread Yuan Liu (JIRA)
Yuan Liu created TRAFODION-2980:
---

 Summary: Add instr built-in function as Oracle
 Key: TRAFODION-2980
 URL: https://issues.apache.org/jira/browse/TRAFODION-2980
 Project: Apache Trafodion
  Issue Type: New Feature
  Components: sql-general
Reporter: Yuan Liu


Add a built-in function like Oracle's instr.

 

For example,

select instr('helloworld', 'l', 2, 2) from dual; -- returns 4

select instr('helloworld', 'l', 4, 2) from dual; -- returns 9





[jira] [Commented] (TRAFODION-2978) update statistics would hang for 100 billion rows table

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387156#comment-16387156
 ] 

ASF GitHub Bot commented on TRAFODION-2978:
---

GitHub user kakaxi3019 opened a pull request:

https://github.com/apache/trafodion/pull/1462

fix TRAFODION-2978 update statistics hang for 100 billion rows table



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kakaxi3019/trafodion trafodion-2978

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1462.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1462


commit 8ab331923ae8cc2ef0ca4f1f100bf1bcd61056a7
Author: kakaxi3019 
Date:   2018-03-06T02:04:06Z

fix TRAFODION-2978 update statistics hang




> update statistics would hang for 100 billion rows table
> ---
>
> Key: TRAFODION-2978
> URL: https://issues.apache.org/jira/browse/TRAFODION-2978
> Project: Apache Trafodion
>  Issue Type: Bug
> Environment: CentOS
>Reporter: chenyunren
>Assignee: chenyunren
>Priority: Major
>
> CREATE TABLE T1
>    ( 
>  C1 INT DEFAULT NULL NOT SERIALIZED
>    , C2 INT DEFAULT NULL NOT SERIALIZED
>    , C3 VARCHAR(100 CHARS) CHARACTER SET UTF8
>    COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
>    , C4 VARCHAR(100 CHARS) CHARACTER SET UTF8
>    COLLATE DEFAULT DEFAULT NULL NOT SERIALIZED
>    , C5 LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
>    NOT SERIALIZED
>    , C6 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C7 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C8 DOUBLE PRECISION DEFAULT NULL NOT
>    SERIALIZED
>    , C9 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C10 DOUBLE PRECISION DEFAULT NULL NOT
>    SERIALIZED
>    , C11 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C12 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C13 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C14 DOUBLE PRECISION DEFAULT NULL NOT
>    SERIALIZED
>    , C15 DOUBLE PRECISION DEFAULT NULL NOT
>    SERIALIZED
>    , C16 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C17 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C18 DOUBLE PRECISION DEFAULT NULL NOT
>    SERIALIZED
>    , C19 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C20 DOUBLE PRECISION DEFAULT NULL NOT
>    SERIALIZED
>    , C21 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C22 DOUBLE PRECISION DEFAULT NULL NOT
>    SERIALIZED
>    , C23 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C24 DOUBLE PRECISION DEFAULT NULL NOT
>    SERIALIZED
>    )
>    STORE BY (C5 ASC, C3 ASC)
>    SALT USING 10 PARTITIONS
>    HBASE_OPTIONS 
>    ( 
>  DATA_BLOCK_ENCODING = 'FAST_DIFF',
>  MEMSTORE_FLUSH_SIZE = '1073741824' 
>    ) 
>  ;
>  
> Just run the SQL below:
>  update statistics for table T1 on every column, (C2, C3), (C1, C2, C3), (C1, 
> C2, C3, C4) sample;





[jira] [Updated] (TRAFODION-2978) update statistics would hang for 100 billion rows table

2018-03-05 Thread chenyunren (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenyunren updated TRAFODION-2978:
--
Description: 
CREATE TABLE T1
   ( 
 C1 INT DEFAULT NULL NOT SERIALIZED
   , C2 INT DEFAULT NULL NOT SERIALIZED
   , C3 VARCHAR(100 CHARS) CHARACTER SET UTF8
   COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
   , C4 VARCHAR(100 CHARS) CHARACTER SET UTF8
   COLLATE DEFAULT DEFAULT NULL NOT SERIALIZED
   , C5 LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
   NOT SERIALIZED
   , C6 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C7 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C8 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C9 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C10 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C11 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C12 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C13 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C14 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C15 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C16 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C17 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C18 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C19 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C20 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C21 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C22 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C23 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C24 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   )
   STORE BY (C5 ASC, C3 ASC)
   SALT USING 10 PARTITIONS
   HBASE_OPTIONS 
   ( 
 DATA_BLOCK_ENCODING = 'FAST_DIFF',
 MEMSTORE_FLUSH_SIZE = '1073741824' 
   ) 
 ;

 

Just run the SQL below:
 update statistics for table T1 on every column, (C2, C3), (C1, C2, C3), (C1, 
C2, C3, C4) sample;

  was:
CREATE TABLE T1
   ( 
 C1 INT DEFAULT NULL NOT SERIALIZED
   , C2 INT DEFAULT NULL NOT SERIALIZED
   , C3 VARCHAR(100 CHARS) CHARACTER SET UTF8
   COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
   , C4 VARCHAR(100 CHARS) CHARACTER SET UTF8
   COLLATE DEFAULT DEFAULT NULL NOT SERIALIZED
   , C5 LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
   NOT SERIALIZED
   , C6 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C7 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C8 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C9 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C10 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C11 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C12 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C13 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C14 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C15 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C16 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C17 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C18 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C19 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C20 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C21 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C22 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C23 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C24 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   )
   STORE BY (C5 ASC, C3 ASC)
   SALT USING 10 PARTITIONS
   HBASE_OPTIONS 
   ( 
 DATA_BLOCK_ENCODING = 'FAST_DIFF',
 MEMSTORE_FLUSH_SIZE = '1073741824' 
   ) 
 ;

 

Just run the SQL below:
 update statistics for table CELL_INDICATOR_HIVE on every column, (C2, C3), 
(C1, C2, C3), (C1, C2, C3, C4) sample;


> update statistics would hang for 100 billion rows table
> ---
>
> Key: TRAFODION-2978
> URL: https://issues.apache.org/jira/browse/TRAFODION-2978
> Project: Apache Trafodion
>  Issue Type: Bug
> Environment: CentOS
>Reporter: chenyunren
>Assignee: chenyunren
>Priority: Major
>
> CREATE TABLE T1
>    ( 
>  C1 INT DEFAULT NULL NOT SERIALIZED
>    , C2 INT DEFAULT NULL NOT SERIALIZED
>    , C3 VARCHAR(100 CHARS) CHARACTER SET UTF8
>    COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
>    , C4 VARCHAR(100 CHARS) CHARACTER SET UTF8
>    COLLATE DEFAULT DEFAULT NULL NOT SERIALIZED
>    , C5 LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
>    NOT SERIALIZED
>    , C6 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C7 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C8 DOUBLE PRECISION DEFAULT NULL NOT
>    SERIALIZED
>    , C9 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C10 DOUBLE PRECISION DEFAULT NULL NOT
>    SERIALIZED
>    , C11 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C12 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C13 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C14 DOUBLE PRECISION DEFAULT NULL NOT
>    SERIALIZED
>    , C15 DOUBLE PRECISION DEFAULT NULL NOT
>    SERIALIZED
>    , 

[jira] [Updated] (TRAFODION-2978) update statistics would hang for 100 billion rows table

2018-03-05 Thread chenyunren (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenyunren updated TRAFODION-2978:
--
Description: 
CREATE TABLE T1
   ( 
 C1 INT DEFAULT NULL NOT SERIALIZED
   , C2 INT DEFAULT NULL NOT SERIALIZED
   , C3 VARCHAR(100 CHARS) CHARACTER SET UTF8
   COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
   , C4 VARCHAR(100 CHARS) CHARACTER SET UTF8
   COLLATE DEFAULT DEFAULT NULL NOT SERIALIZED
   , C5 LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
   NOT SERIALIZED
   , C6 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C7 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C8 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C9 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C10 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C11 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C12 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C13 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C14 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C15 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C16 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C17 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C18 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C19 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C20 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C21 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C22 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C23 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C24 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   )
   STORE BY (C5 ASC, C3 ASC)
   SALT USING 10 PARTITIONS
   HBASE_OPTIONS 
   ( 
 DATA_BLOCK_ENCODING = 'FAST_DIFF',
 MEMSTORE_FLUSH_SIZE = '1073741824' 
   ) 
 ;

 

Just run the SQL below:
 update statistics for table CELL_INDICATOR_HIVE on every column, (C2, C3), 
(C1, C2, C3), (C1, C2, C3, C4) sample;

  was:
CREATE TABLE T1
   ( 
 C1 INT DEFAULT NULL NOT SERIALIZED
   , C2 INT DEFAULT NULL NOT SERIALIZED
   , C3 VARCHAR(100 CHARS) CHARACTER SET UTF8
   COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
   , C4 VARCHAR(100 CHARS) CHARACTER SET UTF8
   COLLATE DEFAULT DEFAULT NULL NOT SERIALIZED
   , C5 LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
   NOT SERIALIZED
   , C6 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C7 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C8 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C9 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C10 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C11 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C12 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C13 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C14 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C15 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C16 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C17 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C18 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C19 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C20 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C21 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C22 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C23 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C24 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   )
   STORE BY (C5 ASC, C3 ASC)
   SALT USING 10 PARTITIONS
   HBASE_OPTIONS 
   ( 
 DATA_BLOCK_ENCODING = 'FAST_DIFF',
 MEMSTORE_FLUSH_SIZE = '1073741824' 
   ) 
 ;

 

Just run the SQL below:
 update statistics for table CELL_INDICATOR_HIVE on every column, (PROVINCE_ID, 
CELL_ID), (CITY_ID, PROVINCE_ID, CELL_ID), (CITY_ID, PROVINCE_ID, CELL_ID, 
CELL_PROPERTY) sample;


> update statistics would hang for 100 billion rows table
> ---
>
> Key: TRAFODION-2978
> URL: https://issues.apache.org/jira/browse/TRAFODION-2978
> Project: Apache Trafodion
>  Issue Type: Bug
> Environment: CentOS
>Reporter: chenyunren
>Assignee: chenyunren
>Priority: Major
>
> CREATE TABLE T1
>    ( 
>  C1 INT DEFAULT NULL NOT SERIALIZED
>    , C2 INT DEFAULT NULL NOT SERIALIZED
>    , C3 VARCHAR(100 CHARS) CHARACTER SET UTF8
>    COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
>    , C4 VARCHAR(100 CHARS) CHARACTER SET UTF8
>    COLLATE DEFAULT DEFAULT NULL NOT SERIALIZED
>    , C5 LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
>    NOT SERIALIZED
>    , C6 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C7 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C8 DOUBLE PRECISION DEFAULT NULL NOT
>    SERIALIZED
>    , C9 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C10 DOUBLE PRECISION DEFAULT NULL NOT
>    SERIALIZED
>    , C11 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C12 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C13 LARGEINT DEFAULT NULL NOT SERIALIZED
>    , C14 DOUBLE PRECISION DEFAULT NULL NOT
>    

[jira] [Updated] (TRAFODION-2979) Enhance to_date and support transferring string to timestamp with milliseconds

2018-03-05 Thread Yuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Liu updated TRAFODION-2979:

Description: 
I know we can convert a string to a timestamp using to_date, to_timestamp or 
cast(.. as timestamp)

Now I have a string 20160912100706259067 and want to convert it to a timestamp 
with milliseconds. The only way I can think of is to first change the value to 
2016-09-12 10:07:06.259067 and then use to_timestamp('2016-09-12 
10:07:06.259067'). Is there any better way?

In Oracle, TO_TIMESTAMP('20160912100706259067', 'YYYYMMDDHH24MISSFF6') is 
supported; it seems we cannot support it.

Currently to_date in Trafodion only supports the formats below:
 * 'YYYY-MM-DD'
 * 'MM/DD/YYYY'
 * 'DD.MM.YYYY'
 * 'YYYY-MM'
 * 'MM/DD/YYYY'
 * 'YYYY/MM/DD'
 * 'YYYYMMDD'
 * 'YY/MM/DD'
 * 'MM/DD/YY'
 * 'MM-DD-YYYY'
 * 'YYYYMM'
 * 'DD-MM-YYYY'
 * 'DD-MON-YYYY'
 * 'DDMONYYYY'
 * 'YYYYMMDDHH24MISS'
 * 'DD.MM.YYYY:HH24.MI.SS'
 * 'YYYY-MM-DD HH24:MI:SS'
 * 'YYYYMMDD:HH24:MI:SS'
 * 'YYYYMMDD HH24:MI:SS'
 * 'MM/DD/YYYY HH24:MI:SS'
 * 'DD-MON-YYYY HH:MI:SS'
 * 'MONTH DD, YYYY, HH:MI'
 * 'DD.MM.YYYY HH24.MI.SS'

We need to add more formats that include milliseconds, such as 
'YYYYMMDDHH24MISSFF6'.

The milliseconds format could be FF[1..9].
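For comparison, Python's strptime already parses exactly this kind of compact string; its %f directive accepts 1 to 6 fractional-second digits, much like the requested FF[1..9]. This is a sketch of the desired parsing behavior, not Trafodion code:

```python
from datetime import datetime

# Parse the compact string from the report; %f consumes the trailing
# fractional-second digits (6 here, i.e. microsecond precision).
ts = datetime.strptime('20160912100706259067', '%Y%m%d%H%M%S%f')
print(ts)  # 2016-09-12 10:07:06.259067
```

This round-trips the example in the description: the 20-character string splits as YYYY=2016, MM=09, DD=12, HH24=10, MI=07, SS=06, FF6=259067.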

 

  was:
I know we can convert a string to a timestamp using to_date, to_timestamp or 
cast(.. as timestamp)

Now I have a string 20160912100706259067 and want to convert it to a timestamp 
with milliseconds. The only way I can think of is to first change the value to 
2016-09-12 10:07:06.259067 and then use to_timestamp('2016-09-12 
10:07:06.259067'). Is there any better way?

In Oracle, TO_TIMESTAMP('20160912100706259067', 'YYYYMMDDHH24MISSFF6') is 
supported; it seems we cannot support it.

Currently to_date in Trafodion only supports the formats below:
 * 'YYYY-MM-DD'
 * 'MM/DD/YYYY'
 * 'DD.MM.YYYY'
 * 'YYYY-MM'
 * 'MM/DD/YYYY'
 * 'YYYY/MM/DD'
 * 'YYYYMMDD'
 * 'YY/MM/DD'
 * 'MM/DD/YY'
 * 'MM-DD-YYYY'
 * 'YYYYMM'
 * 'DD-MM-YYYY'
 * 'DD-MON-YYYY'
 * 'DDMONYYYY'
 * 'YYYYMMDDHH24MISS'
 * 'DD.MM.YYYY:HH24.MI.SS'
 * 'YYYY-MM-DD HH24:MI:SS'
 * 'YYYYMMDD:HH24:MI:SS'
 * 'YYYYMMDD HH24:MI:SS'
 * 'MM/DD/YYYY HH24:MI:SS'
 * 'DD-MON-YYYY HH:MI:SS'
 * 'MONTH DD, YYYY, HH:MI'
 * 'DD.MM.YYYY HH24.MI.SS'

We need to add more formats that include milliseconds, such as 
'YYYYMMDDHH24MISSFF6'.

The milliseconds format could be FF[1..9].

 


> Enhance to_date and support transferring string to timestamp with milliseconds
> 
>
> Key: TRAFODION-2979
> URL: https://issues.apache.org/jira/browse/TRAFODION-2979
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-general
>Affects Versions: 2.1-incubating
>Reporter: Yuan Liu
>Priority: Major
>
> I know we can convert a string to a timestamp using to_date, to_timestamp or 
> cast(.. as timestamp)
>  
> Now I have a string 20160912100706259067 and want to convert it to a timestamp 
> with milliseconds. The only way I can think of is to first change the value to 
> 2016-09-12 10:07:06.259067 and then use to_timestamp('2016-09-12 
> 10:07:06.259067'). Is there any better way?
>  
> In Oracle, TO_TIMESTAMP('20160912100706259067', 'YYYYMMDDHH24MISSFF6') is 
> supported; it seems we cannot support it.
>  
> Currently to_date in Trafodion only supports the formats below:
>  * 'YYYY-MM-DD'
>  * 'MM/DD/YYYY'
>  * 'DD.MM.YYYY'
>  * 'YYYY-MM'
>  * 'MM/DD/YYYY'
>  * 'YYYY/MM/DD'
>  * 'YYYYMMDD'
>  * 'YY/MM/DD'
>  * 'MM/DD/YY'
>  * 'MM-DD-YYYY'
>  * 'YYYYMM'
>  * 'DD-MM-YYYY'
>  * 'DD-MON-YYYY'
>  * 'DDMONYYYY'
>  * 'YYYYMMDDHH24MISS'
>  * 'DD.MM.YYYY:HH24.MI.SS'
>  * 'YYYY-MM-DD HH24:MI:SS'
>  * 'YYYYMMDD:HH24:MI:SS'
>  * 'YYYYMMDD HH24:MI:SS'
>  * 'MM/DD/YYYY HH24:MI:SS'
>  * 'DD-MON-YYYY HH:MI:SS'
>  * 'MONTH DD, YYYY, HH:MI'
>  * 'DD.MM.YYYY HH24.MI.SS'
>  
> We need to add more formats that include milliseconds, such as 
> 'YYYYMMDDHH24MISSFF6'.
> The milliseconds format could be FF[1..9].
>  





[jira] [Updated] (TRAFODION-2979) Enhance to_date and support transferring string to timestamp with milliseconds

2018-03-05 Thread Yuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Liu updated TRAFODION-2979:

Description: 
I know we can convert a string to a timestamp using to_date, to_timestamp or 
cast(.. as timestamp)

Now I have a string 20160912100706259067 and want to convert it to a timestamp 
with milliseconds. The only way I can think of is to first change the value to 
2016-09-12 10:07:06.259067 and then use to_timestamp('2016-09-12 
10:07:06.259067'). Is there any better way?

In Oracle, TO_TIMESTAMP('20160912100706259067', 'YYYYMMDDHH24MISSFF6') is 
supported; it seems we cannot support it.

Currently to_date in Trafodion only supports the formats below:
 * 'YYYY-MM-DD'
 * 'MM/DD/YYYY'
 * 'DD.MM.YYYY'
 * 'YYYY-MM'
 * 'MM/DD/YYYY'
 * 'YYYY/MM/DD'
 * 'YYYYMMDD'
 * 'YY/MM/DD'
 * 'MM/DD/YY'
 * 'MM-DD-YYYY'
 * 'YYYYMM'
 * 'DD-MM-YYYY'
 * 'DD-MON-YYYY'
 * 'DDMONYYYY'
 * 'YYYYMMDDHH24MISS'
 * 'DD.MM.YYYY:HH24.MI.SS'
 * 'YYYY-MM-DD HH24:MI:SS'
 * 'YYYYMMDD:HH24:MI:SS'
 * 'YYYYMMDD HH24:MI:SS'
 * 'MM/DD/YYYY HH24:MI:SS'
 * 'DD-MON-YYYY HH:MI:SS'
 * 'MONTH DD, YYYY, HH:MI'
 * 'DD.MM.YYYY HH24.MI.SS'

We need to add more formats that include milliseconds, such as 
'YYYYMMDDHH24MISSFF6'.

The milliseconds format could be FF[1..9].

 

  was:
I know we can convert a string to a timestamp using to_date, to_timestamp or 
cast(.. as timestamp)

Now I have a string 20160912100706259067 and want to convert it to a timestamp 
with milliseconds. The only way I can think of is to first change the value to 
2016-09-12 10:07:06.259067 and then use to_timestamp('2016-09-12 
10:07:06.259067'). Is there any better way?

In Oracle, TO_TIMESTAMP('20160912100706259067', 'YYYYMMDDHH24MISSFF6') is 
supported; it seems we cannot support it.

Currently to_date in Trafodion only supports the formats below:
 * 'YYYY-MM-DD'
 * 'MM/DD/YYYY'
 * 'DD.MM.YYYY'
 * 'YYYY-MM'
 * 'MM/DD/YYYY'
 * 'YYYY/MM/DD'
 * 'YYYYMMDD'
 * 'YY/MM/DD'
 * 'MM/DD/YY'
 * 'MM-DD-YYYY'
 * 'YYYYMM'
 * 'DD-MM-YYYY'
 * 'DD-MON-YYYY'
 * 'DDMONYYYY'
 * 'YYYYMMDDHH24MISS'
 * 'DD.MM.YYYY:HH24.MI.SS'
 * 'YYYY-MM-DD HH24:MI:SS'
 * 'YYYYMMDD:HH24:MI:SS'
 * 'YYYYMMDD HH24:MI:SS'
 * 'MM/DD/YYYY HH24:MI:SS'
 * 'DD-MON-YYYY HH:MI:SS'
 * 'MONTH DD, YYYY, HH:MI'
 * 'DD.MM.YYYY HH24.MI.SS'

We need to add more formats that include milliseconds, such as 
'YYYYMMDDHH24MISSFF6'.


> Enhance to_date and support transferring string to timestamp with milliseconds
> 
>
> Key: TRAFODION-2979
> URL: https://issues.apache.org/jira/browse/TRAFODION-2979
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-general
>Affects Versions: 2.1-incubating
>Reporter: Yuan Liu
>Priority: Major
>
> I know we can convert a string to a timestamp using to_date, to_timestamp or 
> cast(.. as timestamp)
>  
> Now I have a string 20160912100706259067 and want to convert it to a timestamp 
> with milliseconds. The only way I can think of is to first change the value to 
> 2016-09-12 10:07:06.259067 and then use to_timestamp('2016-09-12 
> 10:07:06.259067'). Is there any better way?
>  
> In Oracle, TO_TIMESTAMP('20160912100706259067', 'YYYYMMDDHH24MISSFF6') is 
> supported; it seems we cannot support it.
>  
> Currently to_date in Trafodion only supports the formats below:
>  * 'YYYY-MM-DD'
>  * 'MM/DD/YYYY'
>  * 'DD.MM.YYYY'
>  * 'YYYY-MM'
>  * 'MM/DD/YYYY'
>  * 'YYYY/MM/DD'
>  * 'YYYYMMDD'
>  * 'YY/MM/DD'
>  * 'MM/DD/YY'
>  * 'MM-DD-YYYY'
>  * 'YYYYMM'
>  * 'DD-MM-YYYY'
>  * 'DD-MON-YYYY'
>  * 'DDMONYYYY'
>  * 'YYYYMMDDHH24MISS'
>  * 'DD.MM.YYYY:HH24.MI.SS'
>  * 'YYYY-MM-DD HH24:MI:SS'
>  * 'YYYYMMDD:HH24:MI:SS'
>  * 'YYYYMMDD HH24:MI:SS'
>  * 'MM/DD/YYYY HH24:MI:SS'
>  * 'DD-MON-YYYY HH:MI:SS'
>  * 'MONTH DD, YYYY, HH:MI'
>  * 'DD.MM.YYYY HH24.MI.SS'
>  
> We need to add more formats that include milliseconds, such as 
> 'YYYYMMDDHH24MISSFF6'.
> The milliseconds format could be FF[1..9].
>  





[jira] [Updated] (TRAFODION-2979) Enhance to_date and support transferring string to timestamp with milliseconds

2018-03-05 Thread Yuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Liu updated TRAFODION-2979:

Description: 
I know we can convert a string to a timestamp using to_date, to_timestamp or 
cast(.. as timestamp)

Now I have a string 20160912100706259067 and want to convert it to a timestamp 
with milliseconds. The only way I can think of is to first change the value to 
2016-09-12 10:07:06.259067 and then use to_timestamp('2016-09-12 
10:07:06.259067'). Is there any better way?

In Oracle, TO_TIMESTAMP('20160912100706259067', 'YYYYMMDDHH24MISSFF6') is 
supported; it seems we cannot support it.

Currently to_date in Trafodion only supports the formats below:
 * 'YYYY-MM-DD'
 * 'MM/DD/YYYY'
 * 'DD.MM.YYYY'
 * 'YYYY-MM'
 * 'MM/DD/YYYY'
 * 'YYYY/MM/DD'
 * 'YYYYMMDD'
 * 'YY/MM/DD'
 * 'MM/DD/YY'
 * 'MM-DD-YYYY'
 * 'YYYYMM'
 * 'DD-MM-YYYY'
 * 'DD-MON-YYYY'
 * 'DDMONYYYY'
 * 'YYYYMMDDHH24MISS'
 * 'DD.MM.YYYY:HH24.MI.SS'
 * 'YYYY-MM-DD HH24:MI:SS'
 * 'YYYYMMDD:HH24:MI:SS'
 * 'YYYYMMDD HH24:MI:SS'
 * 'MM/DD/YYYY HH24:MI:SS'
 * 'DD-MON-YYYY HH:MI:SS'
 * 'MONTH DD, YYYY, HH:MI'
 * 'DD.MM.YYYY HH24.MI.SS'

We need to add more formats that include milliseconds, such as 
'YYYYMMDDHH24MISSFF6'.

  was:
I know we can convert a string to a timestamp using to_date, to_timestamp or 
cast(.. as timestamp)

Now I have a string 20160912100706259067 and want to convert it to a timestamp 
with milliseconds. The only way I can think of is to first change the value to 
2016-09-12 10:07:06.259067 and then use to_timestamp('2016-09-12 
10:07:06.259067'). Is there any better way?

In Oracle, TO_TIMESTAMP('20160912100706259067', 'YYYYMMDDHH24MISSFF6') is 
supported; it seems we cannot support it.


> Enhance to_date and support transferring string to timestamp with milliseconds
> 
>
> Key: TRAFODION-2979
> URL: https://issues.apache.org/jira/browse/TRAFODION-2979
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-general
>Affects Versions: 2.1-incubating
>Reporter: Yuan Liu
>Priority: Major
>
> I know we can convert a string to a timestamp using to_date, to_timestamp or 
> cast(.. as timestamp)
>  
> Now I have a string 20160912100706259067 and want to convert it to a timestamp 
> with milliseconds. The only way I can think of is to first change the value to 
> 2016-09-12 10:07:06.259067 and then use to_timestamp('2016-09-12 
> 10:07:06.259067'). Is there any better way?
>  
> In Oracle, TO_TIMESTAMP('20160912100706259067', 'YYYYMMDDHH24MISSFF6') is 
> supported; it seems we cannot support it.
>  
> Currently to_date in Trafodion only supports the formats below:
>  * 'YYYY-MM-DD'
>  * 'MM/DD/YYYY'
>  * 'DD.MM.YYYY'
>  * 'YYYY-MM'
>  * 'MM/DD/YYYY'
>  * 'YYYY/MM/DD'
>  * 'YYYYMMDD'
>  * 'YY/MM/DD'
>  * 'MM/DD/YY'
>  * 'MM-DD-YYYY'
>  * 'YYYYMM'
>  * 'DD-MM-YYYY'
>  * 'DD-MON-YYYY'
>  * 'DDMONYYYY'
>  * 'YYYYMMDDHH24MISS'
>  * 'DD.MM.YYYY:HH24.MI.SS'
>  * 'YYYY-MM-DD HH24:MI:SS'
>  * 'YYYYMMDD:HH24:MI:SS'
>  * 'YYYYMMDD HH24:MI:SS'
>  * 'MM/DD/YYYY HH24:MI:SS'
>  * 'DD-MON-YYYY HH:MI:SS'
>  * 'MONTH DD, YYYY, HH:MI'
>  * 'DD.MM.YYYY HH24.MI.SS'
>  
> We need to add more formats that include milliseconds, such as 
> 'YYYYMMDDHH24MISSFF6'.





[jira] [Updated] (TRAFODION-2978) update statistics would hang for 100 billion rows table

2018-03-05 Thread chenyunren (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenyunren updated TRAFODION-2978:
--
Description: 
CREATE TABLE T1
   ( 
 C1 INT DEFAULT NULL NOT SERIALIZED
   , C2 INT DEFAULT NULL NOT SERIALIZED
   , C3 VARCHAR(100 CHARS) CHARACTER SET UTF8
   COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
   , C4 VARCHAR(100 CHARS) CHARACTER SET UTF8
   COLLATE DEFAULT DEFAULT NULL NOT SERIALIZED
   , C5 LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
   NOT SERIALIZED
   , C6 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C7 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C8 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C9 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C10 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C11 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C12 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C13 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C14 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C15 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C16 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C17 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C18 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C19 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C20 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C21 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C22 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , C23 LARGEINT DEFAULT NULL NOT SERIALIZED
   , C24 DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   )
   STORE BY (C5 ASC, C3 ASC)
   SALT USING 10 PARTITIONS
   HBASE_OPTIONS 
   ( 
 DATA_BLOCK_ENCODING = 'FAST_DIFF',
 MEMSTORE_FLUSH_SIZE = '1073741824' 
   ) 
 ;

 

Just run the SQL below:
 update statistics for table CELL_INDICATOR_HIVE on every column, (PROVINCE_ID, 
CELL_ID), (CITY_ID, PROVINCE_ID, CELL_ID), (CITY_ID, PROVINCE_ID, CELL_ID, 
CELL_PROPERTY) sample;

  was:
CREATE TABLE TRAFODION.SY.CELL_INDICATOR_HIVE
   ( 
 CITY_ID INT DEFAULT NULL NOT SERIALIZED
   , PROVINCE_ID INT DEFAULT NULL NOT SERIALIZED
   , CELL_ID VARCHAR(100 CHARS) CHARACTER SET UTF8
   COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
   , CELL_PROPERTY VARCHAR(100 CHARS) CHARACTER SET UTF8
   COLLATE DEFAULT DEFAULT NULL NOT SERIALIZED
   , STARTTIME LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
   NOT SERIALIZED
   , HTTPSUCCNBR LARGEINT DEFAULT NULL NOT SERIALIZED
   , HTTPATTNBR LARGEINT DEFAULT NULL NOT SERIALIZED
   , HTTPSUCCRATE DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , HTTPTOTALRESPTIME LARGEINT DEFAULT NULL NOT SERIALIZED
   , HTTPAVGRESPTIME DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , ULTRAFFIC LARGEINT DEFAULT NULL NOT SERIALIZED
   , BIGDATADLTRAFFIC LARGEINT DEFAULT NULL NOT SERIALIZED
   , DLTRAFFIC LARGEINT DEFAULT NULL NOT SERIALIZED
   , HTTPAVGDLRATE DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , BIGDATADLRATE DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , TCPATTNBR LARGEINT DEFAULT NULL NOT SERIALIZED
   , TCPSUCCNBR LARGEINT DEFAULT NULL NOT SERIALIZED
   , TCPSUCCRATE DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , TCPTOTALTIME LARGEINT DEFAULT NULL NOT SERIALIZED
   , TCPAVGTIME DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , TCPDOWNTOTALTIME LARGEINT DEFAULT NULL NOT SERIALIZED
   , TCPDOWNAVGTIME DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , TCPUPTOTALTIME LARGEINT DEFAULT NULL NOT SERIALIZED
   , TCPUPAVGTIME DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   )
   STORE BY (STARTTIME ASC, CELL_ID ASC)
   SALT USING 104 PARTITIONS
  ATTRIBUTES ALIGNED FORMAT NAMESPACE 'TRAF_150' 
   HBASE_OPTIONS 
   ( 
 DATA_BLOCK_ENCODING = 'FAST_DIFF',
 COMPRESSION = 'SNAPPY',
 MEMSTORE_FLUSH_SIZE = '1073741824' 
   ) 
 ;
  
 -- GRANT SELECT, INSERT, DELETE, UPDATE, REFERENCES ON 
TRAFODION.SY.CELL_INDICATOR_HIVE TO DB__ROOT WITH GRANT OPTION;
 
 --- SQL operation complete.

 

Just run the SQL below:
 update statistics for table CELL_INDICATOR_HIVE on every column, (PROVINCE_ID, 
CELL_ID), (CITY_ID, PROVINCE_ID, CELL_ID), (CITY_ID, PROVINCE_ID, CELL_ID, 
CELL_PROPERTY) sample;


> update statistics would hang for 100 billion rows table
> ---
>
> Key: TRAFODION-2978
> URL: https://issues.apache.org/jira/browse/TRAFODION-2978
> Project: Apache Trafodion
>  Issue Type: Bug
> Environment: CentOS
>Reporter: chenyunren
>Assignee: chenyunren
>Priority: Major
>
> CREATE TABLE T1
>    ( 
>  C1 INT DEFAULT NULL NOT SERIALIZED
>    , C2 INT DEFAULT NULL NOT SERIALIZED
>    , C3 VARCHAR(100 CHARS) CHARACTER SET UTF8
>    COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
>    , C4 VARCHAR(100 CHARS) CHARACTER SET UTF8
>    COLLATE DEFAULT DEFAULT NULL 

[jira] [Updated] (TRAFODION-2979) Enhance to_date and support transferring string to timestamp with milliseconds

2018-03-05 Thread Yuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Liu updated TRAFODION-2979:

Summary: Enhance to_date and support transferring string to timestamp with 
milliseconds  (was:  to_date cannot support transferring string to timestamp 
with milliseconds)

> Enhance to_date and support transferring string to timestamp with milliseconds
> 
>
> Key: TRAFODION-2979
> URL: https://issues.apache.org/jira/browse/TRAFODION-2979
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-general
>Affects Versions: 2.1-incubating
>Reporter: Yuan Liu
>Priority: Major
>
> I know we can convert a string to a timestamp using to_date, to_timestamp, or 
> cast(.. as timestamp).
>  
> Now I have the string 20160912100706259067 and want to convert it to a timestamp 
> with milliseconds. The only way I can think of is to first change the value to 
> 2016-09-12 10:07:06.259067 and then use to_timestamp('2016-09-12 
> 10:07:06.259067'). Is there a better way?
>  
> In Oracle, TO_TIMESTAMP('20160912100706259067', 
> 'YYYYMMDDHH24MISSFF6') is supported; it seems we cannot support it yet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (TRAFODION-2979) to_date cannot support transferring string to timestamp with milliseconds

2018-03-05 Thread Yuan Liu (JIRA)
Yuan Liu created TRAFODION-2979:
---

 Summary:  to_date cannot support transferring string to timestamp 
with milliseconds
 Key: TRAFODION-2979
 URL: https://issues.apache.org/jira/browse/TRAFODION-2979
 Project: Apache Trafodion
  Issue Type: Improvement
  Components: sql-general
Affects Versions: 2.1-incubating
Reporter: Yuan Liu


I know we can convert a string to a timestamp using to_date, to_timestamp, or 
cast(.. as timestamp).

 

Now I have the string 20160912100706259067 and want to convert it to a timestamp 
with milliseconds. The only way I can think of is to first change the value to 
2016-09-12 10:07:06.259067 and then use to_timestamp('2016-09-12 
10:07:06.259067'). Is there a better way?

 

In Oracle, TO_TIMESTAMP('20160912100706259067', 
'YYYYMMDDHH24MISSFF6') is supported; it seems we cannot support it yet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
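The manual workaround described in TRAFODION-2979 above — reformatting the compact digit string into an ANSI timestamp literal before calling to_timestamp — can be sketched as a small Python helper. This is hypothetical illustration code, not a Trafodion built-in, and it assumes the fixed 20-digit YYYYMMDDHH24MISSFF6 layout:

```python
def compact_to_timestamp_literal(s: str) -> str:
    """Reformat YYYYMMDDHH24MISSFF6 digits into an ANSI timestamp string.

    Hypothetical helper: performs the manual reformatting step the
    report describes, assuming a fixed 20-digit layout
    (4 year + 2 month + 2 day + 2 hour + 2 min + 2 sec + 6 fraction).
    """
    assert len(s) == 20 and s.isdigit()
    return (f"{s[0:4]}-{s[4:6]}-{s[6:8]} "
            f"{s[8:10]}:{s[10:12]}:{s[12:14]}.{s[14:20]}")
```

The resulting literal is what the reporter then passes to to_timestamp.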


[jira] [Created] (TRAFODION-2978) update statistics would hang for 100 billion rows table

2018-03-05 Thread chenyunren (JIRA)
chenyunren created TRAFODION-2978:
-

 Summary: update statistics would hang for 100 billion rows table
 Key: TRAFODION-2978
 URL: https://issues.apache.org/jira/browse/TRAFODION-2978
 Project: Apache Trafodion
  Issue Type: Bug
 Environment: CentOS
Reporter: chenyunren
Assignee: chenyunren


CREATE TABLE TRAFODION.SY.CELL_INDICATOR_HIVE
   ( 
 CITY_ID INT DEFAULT NULL NOT SERIALIZED
   , PROVINCE_ID INT DEFAULT NULL NOT SERIALIZED
   , CELL_ID VARCHAR(100 CHARS) CHARACTER SET UTF8
   COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
   , CELL_PROPERTY VARCHAR(100 CHARS) CHARACTER SET UTF8
   COLLATE DEFAULT DEFAULT NULL NOT SERIALIZED
   , STARTTIME LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
   NOT SERIALIZED
   , HTTPSUCCNBR LARGEINT DEFAULT NULL NOT SERIALIZED
   , HTTPATTNBR LARGEINT DEFAULT NULL NOT SERIALIZED
   , HTTPSUCCRATE DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , HTTPTOTALRESPTIME LARGEINT DEFAULT NULL NOT SERIALIZED
   , HTTPAVGRESPTIME DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , ULTRAFFIC LARGEINT DEFAULT NULL NOT SERIALIZED
   , BIGDATADLTRAFFIC LARGEINT DEFAULT NULL NOT SERIALIZED
   , DLTRAFFIC LARGEINT DEFAULT NULL NOT SERIALIZED
   , HTTPAVGDLRATE DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , BIGDATADLRATE DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , TCPATTNBR LARGEINT DEFAULT NULL NOT SERIALIZED
   , TCPSUCCNBR LARGEINT DEFAULT NULL NOT SERIALIZED
   , TCPSUCCRATE DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , TCPTOTALTIME LARGEINT DEFAULT NULL NOT SERIALIZED
   , TCPAVGTIME DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , TCPDOWNTOTALTIME LARGEINT DEFAULT NULL NOT SERIALIZED
   , TCPDOWNAVGTIME DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   , TCPUPTOTALTIME LARGEINT DEFAULT NULL NOT SERIALIZED
   , TCPUPAVGTIME DOUBLE PRECISION DEFAULT NULL NOT
   SERIALIZED
   )
   STORE BY (STARTTIME ASC, CELL_ID ASC)
   SALT USING 104 PARTITIONS
  ATTRIBUTES ALIGNED FORMAT NAMESPACE 'TRAF_150' 
   HBASE_OPTIONS 
   ( 
 DATA_BLOCK_ENCODING = 'FAST_DIFF',
 COMPRESSION = 'SNAPPY',
 MEMSTORE_FLUSH_SIZE = '1073741824' 
   ) 
 ;
  
 -- GRANT SELECT, INSERT, DELETE, UPDATE, REFERENCES ON 
TRAFODION.SY.CELL_INDICATOR_HIVE TO DB__ROOT WITH GRANT OPTION;
 
 --- SQL operation complete.

 

Just run the SQL below:
 update statistics for table CELL_INDICATOR_HIVE on every column, (PROVINCE_ID, 
CELL_ID),(CITY_ID, PROVINCE_ID, CELL_ID), (CITY_ID, PROVINCE_ID, CELL_ID, 
CELL_PROPERTY) sample;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
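The "sample" clause in the update statistics command above scans only a random subset of rows rather than all 100 billion. A minimal Python sketch of the underlying idea — Bernoulli row sampling to estimate a column's unique entry count — follows; this is a toy stand-in, not Trafodion's actual estimator:

```python
import random

def sample_uec(values, fraction, seed=42):
    """Bernoulli-sample the rows and report distinct values seen.

    Toy illustration of what a sampled statistics scan does: look at
    only a random fraction of the rows and derive column statistics
    (here, a crude unique-entry-count figure) from that sample.
    """
    rng = random.Random(seed)
    sample = [v for v in values if rng.random() < fraction]
    return len(set(sample)), len(sample)

# 100 distinct values repeated 50x; scan roughly 10% of the rows
distinct_seen, rows_scanned = sample_uec(list(range(100)) * 50, 0.1)
```

A real implementation scales the sampled counts up to the full table; the point is that the scan cost drops with the sampling fraction.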


[jira] [Resolved] (TRAFODION-2974) Some predefined UDFs should be regular UDFs so we can revoke rights

2018-03-05 Thread Hans Zeller (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hans Zeller resolved TRAFODION-2974.

Resolution: Fixed

> Some predefined UDFs should be regular UDFs so we can revoke rights
> ---
>
> Key: TRAFODION-2974
> URL: https://issues.apache.org/jira/browse/TRAFODION-2974
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmu
>Affects Versions: 2.2.0
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>Priority: Major
> Fix For: 2.3
>
>
> Roberta pointed out that we have two predefined UDFs, EVENT_LOG_READER and 
> JDBC, where the system administrator should have the ability to control who 
> can execute these functions.
> To do this, these two UDFs cannot be "predefined" UDFs anymore, since those 
> don't have the metadata that's required for doing grant and revoke.
> Roberta also pointed out that the JDBC UDF should refuse to connect to the T2 
> driver, for security reasons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (TRAFODION-2974) Some predefined UDFs should be regular UDFs so we can revoke rights

2018-03-05 Thread Hans Zeller (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hans Zeller closed TRAFODION-2974.
--

> Some predefined UDFs should be regular UDFs so we can revoke rights
> ---
>
> Key: TRAFODION-2974
> URL: https://issues.apache.org/jira/browse/TRAFODION-2974
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmu
>Affects Versions: 2.2.0
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>Priority: Major
> Fix For: 2.3
>
>
> Roberta pointed out that we have two predefined UDFs, EVENT_LOG_READER and 
> JDBC, where the system administrator should have the ability to control who 
> can execute these functions.
> To do this, these two UDFs cannot be "predefined" UDFs anymore, since those 
> don't have the metadata that's required for doing grant and revoke.
> Roberta also pointed out that the JDBC UDF should refuse to connect to the T2 
> driver, for security reasons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2974) Some predefined UDFs should be regular UDFs so we can revoke rights

2018-03-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386862#comment-16386862
 ] 

ASF GitHub Bot commented on TRAFODION-2974:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1460


> Some predefined UDFs should be regular UDFs so we can revoke rights
> ---
>
> Key: TRAFODION-2974
> URL: https://issues.apache.org/jira/browse/TRAFODION-2974
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmu
>Affects Versions: 2.2.0
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>Priority: Major
> Fix For: 2.3
>
>
> Roberta pointed out that we have two predefined UDFs, EVENT_LOG_READER and 
> JDBC, where the system administrator should have the ability to control who 
> can execute these functions.
> To do this, these two UDFs cannot be "predefined" UDFs anymore, since those 
> don't have the metadata that's required for doing grant and revoke.
> Roberta also pointed out that the JDBC UDF should refuse to connect to the T2 
> driver, for security reasons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2127) enhance Trafodion implementation of WITH clause

2018-03-05 Thread Hans Zeller (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hans Zeller updated TRAFODION-2127:
---
Fix Version/s: (was: 2.2.0)
   2.3

> enhance Trafodion implementation of WITH clause
> ---
>
> Key: TRAFODION-2127
> URL: https://issues.apache.org/jira/browse/TRAFODION-2127
> Project: Apache Trafodion
>  Issue Type: Improvement
>Reporter: liu ming
>Assignee: Hans Zeller
>Priority: Major
>
> TRAFODION-1673 described some details about how to support WITH clause in 
> Trafodion.
> As an initial implementation, we use a simple pure-parser method.
> That way, Trafodion supports the WITH clause functionally, but it is not good 
> from a performance point of view, and the parser also needs to be stricter 
> about syntax.
> This is a follow-up JIRA to track the remaining effort to support the WITH 
> clause in Trafodion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
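The "simple pure-parser method" mentioned in TRAFODION-2127 above expands each reference to the CTE inline as text, which is why it works functionally but repeats the subquery's work at every reference. A minimal Python sketch of that textual expansion (deliberately naive by assumption: a single non-nested CTE, no parenthesis balancing — not Trafodion's actual parser):

```python
import re

def inline_with(query: str) -> str:
    """Expand a single, non-nested WITH clause by textual substitution.

    Each reference to the CTE name in the main query is replaced by an
    inline derived table. Naive on purpose: assumes one CTE whose body
    contains no nested parentheses.
    """
    m = re.match(r"\s*WITH\s+(\w+)\s+AS\s+\((.*)\)\s*(SELECT\b.*)",
                 query, re.IGNORECASE | re.DOTALL)
    if not m:
        return query  # no WITH clause: pass through unchanged
    name, body, main = m.groups()
    return re.sub(rf"\b{name}\b", f"({body}) {name}", main)
```

Because every reference gets its own copy of the body, a CTE referenced twice is evaluated twice — the performance drawback the JIRA describes.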


[jira] [Work stopped] (TRAFODION-1584) Install Apache Kafka as an optional add-on in install_local_hadoop

2018-03-05 Thread Hans Zeller (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on TRAFODION-1584 stopped by Hans Zeller.
--
> Install Apache Kafka as an optional add-on in install_local_hadoop
> --
>
> Key: TRAFODION-1584
> URL: https://issues.apache.org/jira/browse/TRAFODION-1584
> Project: Apache Trafodion
>  Issue Type: Sub-task
>  Components: sql-general
>Affects Versions: 1.3-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>Priority: Major
>
> Optionally install Apache Kafka so that we can test integration between Kafka 
> and Trafodion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2127) enhance Trafodion implementation of WITH clause

2018-03-05 Thread Hans Zeller (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hans Zeller updated TRAFODION-2127:
---
Fix Version/s: (was: 2.3)

> enhance Trafodion implementation of WITH clause
> ---
>
> Key: TRAFODION-2127
> URL: https://issues.apache.org/jira/browse/TRAFODION-2127
> Project: Apache Trafodion
>  Issue Type: Improvement
>Reporter: liu ming
>Assignee: Hans Zeller
>Priority: Major
>
> TRAFODION-1673 described some details about how to support WITH clause in 
> Trafodion.
> As an initial implementation, we use a simple pure-parser method.
> That way, Trafodion supports the WITH clause functionally, but it is not good 
> from a performance point of view, and the parser also needs to be stricter 
> about syntax.
> This is a follow-up JIRA to track the remaining effort to support the WITH 
> clause in Trafodion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (TRAFODION-1173) LP Bug: 1444088 - Hybrid Query Cache: sqlci may err with JRE SIGSEGV.

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah reassigned TRAFODION-1173:
-

Assignee: Suresh Subbiah  (was: Howard Qin)

> LP Bug: 1444088 - Hybrid Query Cache: sqlci may err with JRE SIGSEGV.
> -
>
> Key: TRAFODION-1173
> URL: https://issues.apache.org/jira/browse/TRAFODION-1173
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Julie Thai
>Assignee: Suresh Subbiah
>Priority: Major
> Fix For: 2.3
>
>
> In sqlci, with HQC on and HQC_LOG specified, a prepared statement was 
> followed with:
> >>--interval 47, same selectivity as interval 51
> >>--interval 47 [jvFN3&789 - jyBT!]789)
> >>--expect = nothing in hqc log; SQC hit
> >>prepare XX from select * from f00 where colchar = 'jyBT!]789';
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x75d80595, pid=2708, tid=140737353866272
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_75-b13) (build 
> 1.7.0_75-b13)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libstdc++.so.6+0x91595]  
> std::ostream::sentry::sentry(std::ostream&)+0x25
> #
> # Core dump written. Default location: 
> /opt/home/trafodion/thaiju/HQC/equal_char/core or core.2708
> #
> # An error report file with more information is saved as:
> # /opt/home/trafodion/thaiju/HQC/equal_char/hs_err_pid2708.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.sun.com/bugreport/crash.jsp
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #
> Aborted
> No core file found under /opt/home/trafodion/thaiju/HQC/equal_char. But a 
> hs_err_pid2708.log file was generated (included in attached, to_repro.tar). 
> Problem does not reproduce if I explicitly turn off HQC.
> To reproduce:
> 1. download and untar attachment, to_repro.tar
> 2. in a sqlci session, obey setup_char.sql (from tar file)
> 3. in a new sqlci session, obey equal_char.sql (from tar file)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-157) LP Bug: 1252809 - DCS-ODBC-Getting 'Invalid server handle' after bound hstmt is used for a while.

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-157:
-
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1252809 - DCS-ODBC-Getting 'Invalid server handle' after bound hstmt 
> is used for a while.
> -
>
> Key: TRAFODION-157
> URL: https://issues.apache.org/jira/browse/TRAFODION-157
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-linux
>Reporter: Aruna Sadashiva
>Assignee: RuoYu Zuo
>Priority: Major
> Fix For: 2.3
>
>
> Using ODBC 64 bit Linux driver.
> 'Invalid server handle' is returned and insert fails when using 
> SQLBindParameter/Prepare/Execute. The SQLExecute is done in a loop. It works 
> for a while, but fails within 10 minutes. Changed the program to reconnect 
> every 5 mins, but still seeing this error. It works on SQ.
> Have attached simple test program to recreate this. To run on SQ remove the 
> SQLExecDirect calls to set CQDs, those are specific to Traf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1172) LP Bug: 1444084 - Hybrid Query Cache: display interval boundaries in virtual table.

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1172:
--
Fix Version/s: (was: 2.2.0)
   2.4

> LP Bug: 1444084 - Hybrid Query Cache:  display interval boundaries in virtual 
> table.
> 
>
> Key: TRAFODION-1172
> URL: https://issues.apache.org/jira/browse/TRAFODION-1172
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Julie Thai
>Assignee: Howard Qin
>Priority: Major
> Fix For: 2.4
>
>
> For collapsing/merging intervals enhancement, displaying of interval 
> boundaries in virtual table would aid in verification of the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-480) LP Bug: 1349644 - Status array returned by batch operations contains wrong return value for T2

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-480:
-
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1349644 - Status array returned by batch operations contains wrong 
> return value for T2
> --
>
> Key: TRAFODION-480
> URL: https://issues.apache.org/jira/browse/TRAFODION-480
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-jdbc-t2, client-jdbc-t4
>Reporter: Aruna Sadashiva
>Assignee: RuoYu Zuo
>Priority: Major
> Fix For: 2.3
>
>
> The status array returned from T2 contains a different value compared to T4. 
> T4 returns -2 and T2 returns 1. 
> The Oracle JDBC documentation states:
> 0 or greater — the command was processed successfully and the value is an 
> update count indicating the number of rows in the database that were affected 
> by the command's execution.
> Statement.SUCCESS_NO_INFO — the command was processed successfully, but the 
> number of rows affected is unknown.
> Statement.SUCCESS_NO_INFO is defined as -2, so your result says 
> everything worked fine, but you won't get information on the number of 
> updated rows.
> For a prepared statement batch, it is not possible to know the number of rows 
> affected in the database by each individual statement in the batch. 
> Therefore, all array elements have a value of -2. According to the JDBC 2.0 
> specification, a value of -2 indicates that the operation was successful but 
> the number of rows affected is unknown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
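The portable way to consume the batch status array from TRAFODION-480 above is to treat any non-negative count and SUCCESS_NO_INFO (-2) alike as success, rather than comparing drivers' raw values. A small Python sketch of that interpretation — the constants mirror java.sql.Statement, the helper itself is hypothetical:

```python
# Constants from java.sql.Statement
SUCCESS_NO_INFO = -2   # statement succeeded, row count unknown (T4-style)
EXECUTE_FAILED = -3    # statement failed

def summarize_batch(status):
    """Interpret a JDBC batch-update status array portably.

    A driver may report per-row counts (positive values, as T2 does)
    or SUCCESS_NO_INFO (-2, as T4 does); both mean the statement
    succeeded, so callers should count them the same way.
    """
    ok = sum(1 for s in status if s >= 0 or s == SUCCESS_NO_INFO)
    failed = sum(1 for s in status if s == EXECUTE_FAILED)
    return ok, failed

# T4-style result for a 3-statement batch vs a T2-style one
t4 = summarize_batch([-2, -2, -2])
t2 = summarize_batch([1, 1, 1])
```

Under this reading, both drivers' arrays summarize to three successes; only the availability of per-row counts differs.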


[jira] [Updated] (TRAFODION-427) LP Bug: 1339541 - windows ODBC driver internal hp keyword cleanup

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-427:
-
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1339541 - windows ODBC driver internal hp keyword cleanup
> -
>
> Key: TRAFODION-427
> URL: https://issues.apache.org/jira/browse/TRAFODION-427
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-windows
>Reporter: Daniel Lu
>Assignee: Daniel Lu
>Priority: Major
> Fix For: 2.3
>
>
> The Windows ODBC driver still has some code that internally uses the hp keyword.
> For example:
>   E:\win-odbc64\odbcclient\drvr35\drvrglobal.h(99):#define ODBC_RESOURCE_DLL  
>"hp_ores0300.dll"
>   E:\win-odbc64\odbcclient\drvr35adm\drvr35adm.def(10):DESCRIPTION  'hp_oadm 
> Windows Dynamic Link Library'
>   E:\win-odbc64\odbcclient\Drvr35Res\Drvr35Res.def(3):LIBRARY  
> "hp_ores0300"
>   E:\win-odbc64\odbcclient\Drvr35Res\Drvr35Res.def(4):DESCRIPTION  
> 'hp_ores0300 Windows Dynamic Link Library'
>   E:\win-odbc64\odbcclient\drvr35\TCPIPV4\TCPIPV4.def(8):LIBRARY  
> "hp_tcpipv40300"
>   E:\win-odbc64\odbcclient\TranslationDll\TranslationDll.def(10):LIBRARY 
> hp_translation03
>   E:\win-odbc64\odbcclient\drvr35\TCPIPV6\TCPIPV6.def(8):LIBRARY  
> "hp_tcpipv60300"
>   E:\win-odbc64\Install\UpdateDSN\UpdateDSN\UpdateDSN.cpp(161):// 
> wcscat_s(NewDriver,L"\\hp_odbc0200.dll");



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1628) Implement T2 Driver's Rowsets ability to enhance the batch insert performance

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1628:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Implement T2 Driver's Rowsets ability to enhance the batch insert performance
> -
>
> Key: TRAFODION-1628
> URL: https://issues.apache.org/jira/browse/TRAFODION-1628
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: client-jdbc-t2
>Reporter: RuoYu Zuo
>Assignee: RuoYu Zuo
>Priority: Critical
>  Labels: features, performance
> Fix For: 2.3
>
>
> The JDBC T2 driver currently has very poor batch insert performance because it 
> lacks rowsets support. Implementing rowsets will allow the T2 driver to perform 
> batch inserts much faster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
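The rowsets idea in TRAFODION-1628 above — sending many rows in one bulk operation instead of one round trip per statement — is the same pattern as DB-API's executemany. A sketch follows, with sqlite3 standing in for a Trafodion connection; this is an illustration of the batching pattern, not the T2 driver itself:

```python
import sqlite3

# Batching inserts into one executemany() call is the rowsets idea:
# the client ships rows in bulk instead of one round trip per row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (c1 INTEGER, c2 TEXT)")
rows = [(i, f"row{i}") for i in range(1000)]
conn.executemany("INSERT INTO t1 VALUES (?, ?)", rows)  # one bulk call
conn.commit()
inserted = conn.execute("SELECT COUNT(*) FROM t1").fetchone()[0]
```

Without rowsets, the same load would issue 1000 separate execute calls, which is where the per-statement overhead the JIRA complains about comes from.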


[jira] [Updated] (TRAFODION-1053) LP Bug: 1430938 - In full explain output, begin/end key for char/varchar key column should be min/max if there is no predicated defined on the key column.

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1053:
--
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1430938 - In full explain output, begin/end key for char/varchar key 
> column should be min/max if there is no predicated defined on the key column.
> --
>
> Key: TRAFODION-1053
> URL: https://issues.apache.org/jira/browse/TRAFODION-1053
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Julie Thai
>Assignee: Howard Qin
>Priority: Major
> Fix For: 2.3
>
>
> In full explain output, the begin/end key for a char/varchar key column should 
> be min/max 
> if there is no predicate defined on the key column.
> Snippet from TRAFODION_SCAN below:
> key_columns  _SALT_, COLTS, COLVCHRUCS2, COLINTS
> begin_key .. (_SALT_ = %(9)), (COLTS = ),
>  (COLVCHRUCS2 = '洼硡'), (COLINTS = 
> )
> end_key  (_SALT_ = %(9)), (COLTS = ),
>  (COLVCHRUCS2 = '洼湩'), (COLINTS = 
> )
> Expected (COLVCHRUCS2 = <min>) and (COLVCHRUCS2 = <max>).
> SQL>create table salttbl3 (
> +>colintu int unsigned not null, colints int signed not null,
> +>colsintu smallint unsigned not null, colsints smallint signed not null,
> +>collint largeint not null, colnum numeric(11,3) not null,
> +>colflt float not null, coldec decimal(11,2) not null,
> +>colreal real not null, coldbl double precision not null,
> +>coldate date not null, coltime time not null,
> +>colts timestamp not null,
> +>colchriso char(90) character set iso88591 not null,
> +>colchrucs2 char(111) character set ucs2 not null,
> +>colvchriso varchar(113) character set iso88591 not null,
> +>colvchrucs2 varchar(115) character set ucs2 not null,
> +>PRIMARY KEY (colts ASC, colvchrucs2 DESC, colints ASC))
> +>SALT USING 9 PARTITIONS ON (colints, colvchrucs2, colts);
> --- SQL operation complete.
> SQL>LOAD INTO salttbl3 SELECT
> +>c1+c2*10+c3*100+c4*1000+c5*1,
> +>(c1+c2*10+c3*100+c4*1000+c5*1) - 5,
> +>mod(c1+c2*10+c3*100+c4*1000+c5*1, 65535),
> +>mod(c1+c2*10+c3*100+c4*1000+c5*1, 32767),
> +>(c1+c2*10+c3*100+c4*1000+c5*1) + 549755813888,
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as numeric(11,3)),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as float),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as decimal(11,2)),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as real),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as double precision),
> +>cast(converttimestamp(2106142992 +
> +>(864 * (c1+c2*10+c3*100+c4*1000+c5*1))) as date),
> +>time'00:00:00' + cast(mod(c1+c2*10+c3*100+c4*1000+c5*1,3)
> +>as interval minute),
> +>converttimestamp(2106142992 + (864 *
> +>(c1+c2*10+c3*100+c4*1000+c5*1)) + (100 * (c1+c2*10+c3*100)) +
> +>(6000 * (c1+c2*10)) + (36 * (c1+c2*10))),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as char(90) character set iso88591),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as char(111) character set ucs2),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as varchar(113) character set 
> iso88591),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as varchar(115) character set ucs2)
> +>from (values(1)) t
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c1
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c2
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c3
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c4
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c5;
> UTIL_OUTPUT
> 
> Task: LOAD Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  CLEANUP Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  CLEANUP Status: Ended  Object: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  DISABLE INDEXE  Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  DISABLE INDEXE  Status: Ended  Object: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  PREPARATION Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
>Rows Processed: 10
> Task:  PREPARATION Status: Ended  ET: 00:00:10.332
>   
> Task:  COMPLETION  Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  COMPLETION  Status: Ended  ET: 00:00:02.941
>   
> Task:  POPULATE INDEX  Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  POPULATE INDEX  Status: Ended  ET: 00:00:05.357
>   
> --- SQL operation complete.
> SQL>update 

[jira] [Updated] (TRAFODION-1438) Windows ODBC Driver is not able to create certificate file with long name length (over 30 bytes).

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1438:
--
Fix Version/s: (was: 2.2.0)
   2.4

> Windows ODBC Driver is not able to create certificate file with long name 
> length (over 30 bytes).
> -
>
> Key: TRAFODION-1438
> URL: https://issues.apache.org/jira/browse/TRAFODION-1438
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-windows
>Affects Versions: 2.0-incubating
> Environment: Windows
>Reporter: RuoYu Zuo
>Assignee: RuoYu Zuo
>Priority: Critical
> Fix For: 2.4
>
>
> The Windows ODBC driver stores the certificate file with the server name in its 
> file name; when the server name is long, the driver cannot handle it. Currently 
> the driver uses only a 30-byte char* buffer to build the file name, so copying 
> a long server name into the file name crashes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1442) Linux ODBC Driver is not able to create certificate file with long name length (over 30 bytes).

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1442:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Linux ODBC Driver is not able to create certificate file with long name 
> length (over 30 bytes).
> ---
>
> Key: TRAFODION-1442
> URL: https://issues.apache.org/jira/browse/TRAFODION-1442
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-linux
>Affects Versions: 2.0-incubating
> Environment: Linunx
>Reporter: RuoYu Zuo
>Assignee: RuoYu Zuo
>Priority: Critical
> Fix For: 2.3
>
>
> As in the Windows driver, the Linux driver also reserves only 30 bytes for the 
> certificate file name, so there is the potential of a crash.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (TRAFODION-1221) LP Bug: 1450853 - Hybrid Query Cache: query with equals predicate on INTERVAL datatype should not have a non-parameterized literal.

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah reassigned TRAFODION-1221:
-

Assignee: Suresh Subbiah  (was: Howard Qin)

> LP Bug: 1450853 - Hybrid Query Cache: query with equals predicate on INTERVAL 
> datatype should not have a non-parameterized literal.
> ---
>
> Key: TRAFODION-1221
> URL: https://issues.apache.org/jira/browse/TRAFODION-1221
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Julie Thai
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.3
>
>
> For a query with an equals predicate on an INTERVAL datatype, both parameterized 
> and non-parameterized literals appear in the HybridQueryCacheEntries virtual 
> table. The non-parameterized literal should be empty.
> SQL>prepare XX from select * from F00INTVL where colintvl = interval '39998' 
> day(6);
> *** WARNING[6008] Statistics for column (COLKEY) from table 
> TRAFODION.QUERYCACHE_HQC.F00INTVL were not available. As a result, the access 
> path chosen might not be the best possible. [2015-04-30 13:31:48]
> --- SQL command prepared.
> SQL>execute show_entries;
> HKEY: SELECT * FROM F00INTVL WHERE COLINTVL = INTERVAL #NP# DAY ( #NP# ) ;
> NUM_HITS: 0
> NUM_PLITERALS: 1
> PLITERALS (EXPR): INTERVAL '39998' DAY(6)
> NUM_NPLITERALS: 1
> NPLITERALS (EXPR): '39998'
> --- 1 row(s) selected.
> To reproduce:
> create table F00INTVL(
> colkey int not null primary key,
> colintvl interval day(6));
> load into F00INTVL select
> c1+c2*10+c3*100+c4*1000+c5*1+c6*10, --colkey
> cast(cast(mod(c1+c2*10+c3*100+c4*1000+c5*1+c6*10,99)
> as integer) as interval day(6)) --colintvl
> from (values(1)) t
> transpose 0,1,2,3,4,5,6,7,8,9 as c1
> transpose 0,1,2,3,4,5,6,7,8,9 as c2
> transpose 0,1,2,3,4,5,6,7,8,9 as c3
> transpose 0,1,2,3,4,5,6,7,8,9 as c4
> transpose 0,1,2,3,4,5,6,7,8,9 as c5
> transpose 0,1,2,3,4,5,6,7,8,9 as c6;
> update statistics for table F00INTVL on colintvl;
> prepare show_entries from select left(hkey,50), num_pliterals, 
> left(pliterals,15), num_npliterals, left(npliterals,15) from 
> table(HybridQueryCacheEntries('USER', 'LOCAL'));
> prepare XX from select * from F00INTVL where colintvl = interval '39998' 
> day(6);
> execute show_entries;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
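The hkey shown in TRAFODION-1221 above is the query text with literals parameterized out and collected to the side. A toy Python sketch of that normalization — a simplified stand-in for the Hybrid Query Cache's parameterizer, not Trafodion's code:

```python
import re

def parameterize(sql: str):
    """Replace literals with #NP# placeholders, HQC-style.

    The Hybrid Query Cache keys entries on query text with literals
    parameterized out; this toy version pulls quoted strings and
    bare numbers into a side list and substitutes a placeholder.
    """
    literals = []

    def repl(m):
        literals.append(m.group(0))
        return "#NP#"

    key = re.sub(r"'[^']*'|\b\d+\b", repl, sql)
    return key, literals

key, lits = parameterize(
    "SELECT * FROM F00INTVL WHERE COLINTVL = INTERVAL '39998' DAY(6)")
```

The bug in the JIRA is about which side list a literal lands in (parameterized vs non-parameterized); the sketch only shows the extraction step both lists rely on.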


[jira] [Updated] (TRAFODION-1221) LP Bug: 1450853 - Hybrid Query Cache: query with equals predicate on INTERVAL datatype should not have a non-parameterized literal.

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1221:
--
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1450853 - Hybrid Query Cache: query with equals predicate on INTERVAL 
> datatype should not have a non-parameterized literal.
> ---
>
> Key: TRAFODION-1221
> URL: https://issues.apache.org/jira/browse/TRAFODION-1221
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Julie Thai
>Assignee: Howard Qin
>Priority: Critical
> Fix For: 2.3
>
>
> For a query with an equals predicate on an INTERVAL datatype, both parameterized 
> and non-parameterized literals appear in the HybridQueryCacheEntries virtual 
> table. The non-parameterized literal should be empty.
> SQL>prepare XX from select * from F00INTVL where colintvl = interval '39998' 
> day(6);
> *** WARNING[6008] Statistics for column (COLKEY) from table 
> TRAFODION.QUERYCACHE_HQC.F00INTVL were not available. As a result, the access 
> path chosen might not be the best possible. [2015-04-30 13:31:48]
> --- SQL command prepared.
> SQL>execute show_entries;
> HKEY: SELECT * FROM F00INTVL WHERE COLINTVL = INTERVAL #NP# DAY ( #NP# ) ;
> NUM_HITS: 0
> NUM_PLITERALS: 1
> PLITERALS (EXPR): INTERVAL '39998' DAY(6)
> NUM_NPLITERALS: 1
> NPLITERALS (EXPR): '39998'
> --- 1 row(s) selected.
> To reproduce:
> create table F00INTVL(
> colkey int not null primary key,
> colintvl interval day(6));
> load into F00INTVL select
> c1+c2*10+c3*100+c4*1000+c5*1+c6*10, --colkey
> cast(cast(mod(c1+c2*10+c3*100+c4*1000+c5*1+c6*10,99)
> as integer) as interval day(6)) --colintvl
> from (values(1)) t
> transpose 0,1,2,3,4,5,6,7,8,9 as c1
> transpose 0,1,2,3,4,5,6,7,8,9 as c2
> transpose 0,1,2,3,4,5,6,7,8,9 as c3
> transpose 0,1,2,3,4,5,6,7,8,9 as c4
> transpose 0,1,2,3,4,5,6,7,8,9 as c5
> transpose 0,1,2,3,4,5,6,7,8,9 as c6;
> update statistics for table F00INTVL on colintvl;
> prepare show_entries from select left(hkey,50), num_pliterals, 
> left(pliterals,15), num_npliterals, left(npliterals,15) from 
> table(HybridQueryCacheEntries('USER', 'LOCAL'));
> prepare XX from select * from F00INTVL where colintvl = interval '39998' 
> day(6);
> execute show_entries;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1115) LP Bug: 1438934 - MXOSRVRs don't get released after interrupting execution of the client application (ODB)

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1115:
--
Fix Version/s: (was: 2.2.0)
   2.4

> LP Bug: 1438934 - MXOSRVRs don't get released after interrupting execution of 
> the client application (ODB)
> --
>
> Key: TRAFODION-1115
> URL: https://issues.apache.org/jira/browse/TRAFODION-1115
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Reporter: Chirag Bhalgami
>Assignee: Daniel Lu
>Priority: Critical
> Fix For: 2.4
>
>
> MXOSRVRs are not getting released when the ODB application is interrupted 
> during execution.
> After restarting DCS, it still shows that the odb app is occupying MXOSRVRs.
> Also, executing odb throws the following error message:
> -
> odb [2015-03-31 21:19:11]: starting ODBC connection(s)... (1) 1 2 3 4
> Connected to HP Database
> [3] 5,000 records inserted [commit]
> [2] odb [Oloadbuff(9477)] - Error (State: 25000, Native -8606)
> [Trafodion ODBC Driver][Trafodion Database] SQL ERROR:*** ERROR[8606] 
> Transaction subsystem TMF returned error 97 on a commit transaction. 
> [2015-03-31 21:39:47]
> [2] 0 records inserted [commit]
> [3] odb [Oloadbuff(9477)] - Error (State: 25000, Native -8606)
> [Trafodion ODBC Driver][Trafodion Database] SQL ERROR:*** ERROR[8606] 
> Transaction subsystem TMF returned error 97 on a commit transaction. 
> [2015-03-31 21:39:47]
> [3] 5,000 records inserted [commit]
> [4] odb [Oloadbuff(9477)] - Error (State: 25000, Native -8606)
> [Trafodion ODBC Driver][Trafodion Database] SQL ERROR:*** ERROR[8606] 
> Transaction subsystem TMF returned error 97 on a commit transaction. 
> [2015-03-31 21:39:47]
> [4] 0 records inserted [commit]
> [1] odb [Oloadbuff(9477)] - Error (State: 25000, Native -8606)
> [Trafodion ODBC Driver][Trafodion Database] SQL ERROR:*** ERROR[8606] 
> Transaction subsystem TMF returned error 97 on a commit transaction. 
> [2015-03-31 21:39:47]
> [1] 0 records inserted [commit]
> odb [sigcatch(4125)] - Received SIGINT. Exiting
> -
> Trafodion Build: Release [1.0.0-304-ga977ee7_Bld14], branch a977ee7-master, 
> date 20150329_083001)
> Hadoop Distro: HDP 2.2
> HBase Version: 0.98.4.2.2.0.0





[jira] [Updated] (TRAFODION-646) LP Bug: 1371442 - ODBC driver AppUnicodeType setting is not in DSN level

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-646:
-
Fix Version/s: (was: 2.2.0)
   2.4

> LP Bug: 1371442 - ODBC driver AppUnicodeType setting is not in DSN level
> 
>
> Key: TRAFODION-646
> URL: https://issues.apache.org/jira/browse/TRAFODION-646
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-linux
>Reporter: Daniel Lu
>Assignee: Daniel Lu
>Priority: Critical
> Fix For: 2.4
>
>
> Currently, the AppUnicodeType setting can only be set in the [ODBC] section of 
> TRAFDSN or odbc.ini, or via an environment variable. This makes it global, 
> affecting all applications that use the same driver. We need to make it a 
> DSN-level setting so that each application using the same driver can 
> independently choose Unicode or not.
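A hedged sketch of what the request could look like in a unixODBC-style odbc.ini; the DSN name is hypothetical and the value shown is illustrative:

```
; Today the setting is only honored globally:
[ODBC]
AppUnicodeType = utf16     ; affects every application using this driver

; Desired: honor it per data source as well, e.g.:
[TrafodionDSN]             ; hypothetical DSN name
Description    = Trafodion data source
AppUnicodeType = utf16     ; per-DSN choice, overriding the global default
```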





[jira] [Updated] (TRAFODION-598) LP Bug: 1365821 - select (insert) with prepared stmt fails with rowset

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-598:
-
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1365821 - select (insert) with prepared stmt fails with rowset
> --
>
> Key: TRAFODION-598
> URL: https://issues.apache.org/jira/browse/TRAFODION-598
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-dcs
>Reporter: Aruna Sadashiva
>Assignee: RuoYu Zuo
>Priority: Critical
> Fix For: 2.3
>
>
> This came out of : https://answers.launchpad.net/trafodion/+question/253796
> "select syskey from (insert into parts values(?,?,?)) x" does not work as 
> expected with an ODBC rowset. A rowset with a single row works, but with 
> multiple rows in the rowset, no rows get inserted.
> The workaround is to execute the select after the insert rowset operation.
> It also fails with a JDBC batch; the T4 driver throws a "select not supported 
> in batch" exception.





[jira] [Updated] (TRAFODION-1246) LP Bug: 1458011 - Change core file names in Sandbox

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1246:
--
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1458011 - Change core file names in Sandbox
> ---
>
> Key: TRAFODION-1246
> URL: https://issues.apache.org/jira/browse/TRAFODION-1246
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: installer
>Reporter: Amanda Moran
>Priority: Minor
> Fix For: 2.3
>
>
> When creating a sandbox, we should change the core file name pattern so that 
> users will not have to do it themselves.
> echo "/tmp/cores/core.%e.%p.%h.%t" > /proc/sys/kernel/core_pattern
> Reference: https://sigquit.wordpress.com/2009/03/13/the-core-pattern/
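For illustration, a minimal sketch (plain Python, not part of the installer) that expands the suggested core_pattern specifiers the way the kernel would; the process name, PID, host, and timestamp values are hypothetical:

```python
# core(5) format specifiers: %e = executable name, %p = PID,
# %h = hostname, %t = UNIX epoch timestamp of the dump.
pattern = "/tmp/cores/core.%e.%p.%h.%t"

def expand(pattern, exe, pid, host, epoch):
    """Substitute the core_pattern specifiers to show the resulting file name."""
    return (pattern.replace("%e", exe)
                   .replace("%p", str(pid))
                   .replace("%h", host)
                   .replace("%t", str(epoch)))

print(expand(pattern, "mxosrvr", 5017, "node1", 1520000000))
# /tmp/cores/core.mxosrvr.5017.node1.1520000000
```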





[jira] [Updated] (TRAFODION-2903) The COLUMN_SIZE fetched from mxosrvr is wrong

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2903:
--
Fix Version/s: (was: 2.2.0)
   2.3

> The COLUMN_SIZE fetched from mxosrvr is wrong
> -
>
> Key: TRAFODION-2903
> URL: https://issues.apache.org/jira/browse/TRAFODION-2903
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Affects Versions: any
>Reporter: XuWeixin
>Assignee: XuWeixin
>Priority: Major
> Fix For: 2.3
>
>
> 1. DDL: create table TEST (C1 date, C2 time, C3 timestamp)
> 2. 
> SQLColumns(hstmt,(SQLTCHAR*)"TRAFODION",SQL_NTS,(SQLTCHAR*)"SEABASE",SQL_NTS,(SQLTCHAR*)"TEST",SQL_NTS,(SQLTCHAR*)"%",SQL_NTS);
> 3. SQLBindCol(hstmt,7,SQL_C_LONG,,0,)
> 4. SQLFetch(hstmt)
> returns  DATE  ColPrec expected: 10, actual: 11
>  TIME  ColPrec expected: 8,  actual: 9
>  TIMESTAMP ColPrec expected: 19, actual: 20
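The expected values line up with the ODBC convention that COLUMN_SIZE for a datetime type is the length of its canonical character form (TIMESTAMP here with no fractional seconds); a quick illustrative check in plain Python, not driver code:

```python
# COLUMN_SIZE for datetime types equals the display length of the
# canonical literal; the report shows mxosrvr returning each value plus one.
expected_column_size = {
    "DATE":      len("yyyy-mm-dd"),           # 10
    "TIME":      len("hh:mm:ss"),             # 8
    "TIMESTAMP": len("yyyy-mm-dd hh:mm:ss"),  # 19
}
print(expected_column_size)
```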





[jira] [Updated] (TRAFODION-2899) Catalog API SQLColumns does not support ODBC2.x

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2899:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Catalog API SQLColumns does not support ODBC2.x
> ---
>
> Key: TRAFODION-2899
> URL: https://issues.apache.org/jira/browse/TRAFODION-2899
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Affects Versions: any
> Environment: Centos 6.7
>Reporter: XuWeixin
>Assignee: XuWeixin
>Priority: Major
> Fix For: 2.3
>
>
> When using ODBC 2.x to get the description of columns, the call fails but no 
> error is returned.





[jira] [Updated] (TRAFODION-2472) Alter table hbase options is not transaction enabled.

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2472:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Alter table hbase options is not transaction enabled.
> -
>
> Key: TRAFODION-2472
> URL: https://issues.apache.org/jira/browse/TRAFODION-2472
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Reporter: Prashanth Vasudev
>Assignee: Prashanth Vasudev
>Priority: Major
> Fix For: 2.3
>
>
> Transactional DDL for ALTER commands is currently disabled.
> However, a few statements, such as ALTER ... HBASE OPTIONS, are not disabled, 
> which results in unpredictable errors.
> The initial fix is to make these ALTER statements not use a DDL transaction.
> Following this, DDL transactions will be enhanced to support the ALTER TABLE 
> statement.





[jira] [Updated] (TRAFODION-2462) TRAFCI gui installer does not work

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2462:
--
Fix Version/s: (was: 2.2.0)
   2.3

> TRAFCI gui installer does not work
> --
>
> Key: TRAFODION-2462
> URL: https://issues.apache.org/jira/browse/TRAFODION-2462
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-ci
>Affects Versions: 2.1-incubating
>Reporter: Anuradha Hegde
>Assignee: Alex Peng
>Priority: Major
> Fix For: 2.3
>
>
> There are several issues with trafci 
> 1. The GUI installer on Windows does not work. Basically, the browse button to 
> upload the T4 jar file and to specify the location of the trafci install dir 
> does not function; hence, installation does not proceed.
> 2. After a successful install of trafci on Windows or *nix, we notice that the 
> lib directory contains the jdbcT4 and jline jar files. There is no need to 
> pre-package these files with the product.
> 3. Running any SQL statement from the TRAF_HOME folder returns the following 
> error:
> SQL>get tables;
> *** ERROR[1394] *** ERROR[16001] No message found for SQLCODE -1394.  
> MXID11292972123518900177319330906U300_877_SQL_CUR_2 
> [2017-01-25 20:44:03]
> But executing the same statement when you are in $TRAF_HOME/sql/scripts 
> folder works.
> 4. Executing the wrapper script 'trafci' returns a message as below and then 
> proceeds with a successful connection. You don't see this message when 
> executing trafci.sh:
> /core/sqf/sql/local_hadoop/dcs-2.1.0/bin/dcs-config.sh: line 
> 90: .: sqenv.sh: file not found
> 5. Executing SQL statements across multiple lines causes an additional SQL 
> prompt to be displayed:
> Connected to Apache Trafodion
> SQL>get tables
> +>SQL>
> 6. On successful connect and disconnect, when new mxosrvrs are picked up, the 
> default schema is changed from 'SEABASE' to 'USR'. (This might be a server-side 
> issue too, but we will need to debug to find out.)
> 7. The FC command does not work. See the trafci manual for examples of how the 
> FC command output is displayed; it should be shown with the SQL prompt:
> SQL>fc
> show remoteprocess;
> SQL>   i
> show re moteprocess;
> SQL>
> 8. Did the error message format change? This should have been a syntax error:
>   
> SQL>gett;
> *** ERROR[15001] *** ERROR[16001] No message found for SQLCODE -15001.
> gett;
>^ (4 characters from start of SQL statement) 
> MXID11086222123521382568755030206U300_493_SQL_CUR_4 
> [2017-01-25 21:14:18]





[jira] [Updated] (TRAFODION-206) LP Bug: 1297518 - DCS - SQLProcedures and SQLProcedureColumns need to be supported

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-206:
-
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1297518 - DCS - SQLProcedures and SQLProcedureColumns need to be 
> supported
> --
>
> Key: TRAFODION-206
> URL: https://issues.apache.org/jira/browse/TRAFODION-206
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-general
>Reporter: Aruna Sadashiva
>Assignee: Kevin Xu
>Priority: Critical
> Fix For: 2.3
>
>
> DCS needs to implement support for SQLProcedures and SQLProcedureColumns, 
> since Trafodion SQL now supports SPJs.





[jira] [Updated] (TRAFODION-2664) Instance will be down when the zookeeper on name node has been down

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2664:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Instance will be down when the zookeeper on name node has been down
> ---
>
> Key: TRAFODION-2664
> URL: https://issues.apache.org/jira/browse/TRAFODION-2664
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: foundation
>Affects Versions: 2.2.0
> Environment: Test Environment:
> CDH5.4.8: 10.10.23.19:7180, total 6 nodes.
> HDFS-HA and DCS-HA: enabled
> OS: CentOS 6.8, physical machine.
> SW Build: R2.2.3 (EsgynDB_Enterprise Release 2.2.3 (Build release [sbroeder], 
> branch 1ce8d39-xdc_nari, date 11Jun17)
>Reporter: Jarek
>Assignee: Gonzalo E Correa
>Priority: Critical
>  Labels: build
> Fix For: 2.3
>
>
> Description: Instance will be down when the zookeeper on name node has been 
> down
> Test Steps:
> Step 1. Start OE and 4 long queries with trafci on the first node 
> esggy-clu-n010
> Step 2. Wait several minutes and stop zookeeper on name node of node 
> esggy-clu-n010  in Cloudera Manager page.
> Step 3. With trafci, run a basic query and 4 long queries again.
> In Step 3 above, the whole instance goes down after a while. I tried this 
> scenario several times and always found the instance down.
> Timestamp:
> Test Start Time: 20170616132939
> Test End  Time: 20170616134350
> Stop zookeeper on name node of node esggy-clu-n010: 20170616133344
> Check logs:
> 1) Each node displays the following error:
> 2017-06-16 13:33:46,276, ERROR, MON, Node Number: 0,, PIN: 5017 , Process 
> Name: $MONITOR,,, TID: 5429, Message ID: 101371801, 
> [CZClient::IsZNodeExpired], zoo_exists() for 
> /trafodion/instance/cluster/esggy-clu-n010.esgyn.cn failed with error 
> ZCONNECTIONLOSS
> 2) Zookeeper displays:
> ls /trafodion/instance/cluster
> []
> So it seems the zclient connection has been lost on each node.
> Location of logs:
> esggy-clu-n010: 
> /data4/jarek/ha.interactive/trafodion_and_cluster_logs/cluster_logs.20170616134816.tar.gz
>  and trafodion_logs.20170616134816.tar.gz
> By the way, because the logs exceed the size limit, I cannot upload them as an 
> attachment to this JIRA.
> How many ZooKeeper quorum servers are in the cluster? Three in total.
>   
> dcs.zookeeper.quorum
> 
> esggy-clu-n010.esgyn.cn,esggy-clu-n011.esgyn.cn,esggy-clu-n012.esgyn.cn
>   
> How to access the cluster?
> 1. Login 10.10.10.8 from US machine: trafodion/traf123
> 2. Login 10.10.23.19 from 10.10.10.8: trafodion/traf123





[jira] [Updated] (TRAFODION-1923) executor/TEST106 hangs at drop table at times

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1923:
--
Fix Version/s: (was: 2.2.0)
   2.3

> executor/TEST106 hangs at drop table at times
> -
>
> Key: TRAFODION-1923
> URL: https://issues.apache.org/jira/browse/TRAFODION-1923
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.0-incubating
>Reporter: Selvaganesan Govindarajan
>Assignee: Prashanth Vasudev
>Priority: Critical
> Fix For: 2.3
>
>
> executor/TEST106 hangs at
> drop table t106a 
> Currently, executor/TEST106 is not run as part of the daily regression build.





[jira] [Updated] (TRAFODION-2307) Documentation update for REST and DCS

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2307:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Documentation update for REST and DCS
> -
>
> Key: TRAFODION-2307
> URL: https://issues.apache.org/jira/browse/TRAFODION-2307
> Project: Apache Trafodion
>  Issue Type: Improvement
>Affects Versions: any
>Reporter: Anuradha Hegde
>Assignee: Anuradha Hegde
>Priority: Major
> Fix For: 2.3
>
>
> As an improvement, the documentation for DCS and REST will be updated.





[jira] [Updated] (TRAFODION-2305) After a region split the transactions to check against list is not fully populated

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2305:
--
Fix Version/s: (was: 2.2.0)
   2.3

> After a region split the transactions to check against list is not fully 
> populated
> --
>
> Key: TRAFODION-2305
> URL: https://issues.apache.org/jira/browse/TRAFODION-2305
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Affects Versions: any
>Reporter: Sean Broeder
>Assignee: Sean Broeder
>Priority: Major
> Fix For: 2.3
>
>
> As part of a region split, all current transactions and their relationships to 
> one another are written out into a ZKNode entry and later read in by the 
> daughter regions. However, the transactionsToCheck list is not correctly 
> populated.





[jira] [Closed] (TRAFODION-1748) Error 97 received with large upsert and select statements

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah closed TRAFODION-1748.
-

> Error 97 received with large upsert and select statements
> -
>
> Key: TRAFODION-1748
> URL: https://issues.apache.org/jira/browse/TRAFODION-1748
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Affects Versions: 1.3-incubating
>Reporter: Sean Broeder
>Assignee: Sean Broeder
>Priority: Major
> Fix For: 2.0-incubating
>
>
> From Selva-
> The script has just upserted 100000 rows and is querying these 10000 rows 
> repeatedly. From the RS logs, it looks like the memstore got flushed. Currently, 
> I have made the process loop when it gets error 8606.
>  
> This query involves ESPs. This error is coming from sqlci at the time of 
> commit.  I assume sqlci must be looping. The looping ends after 3 minutes to 
> proceed further. You can also put sqlci into debug and set loopError=0 to 
> come out of the loop to proceed further.  I also created a core file of sqlci 
> at ~/selva/core.44100.
>  
> If the query is finished, you can do the following to reproduce this issue
>  
> cd ~/selva/LSEG/master/stream
> sqlci
> log traf_stream_run.log ;
> obey traf_stream_run.sql ;
> log ;
> 
> Looking at dtm tracing I can see the regions are throwing an 
> UnknownTransactionException at prepare time, which causes the TM to refresh 
> the RegionLocations and redrive the prepare messages.  These again fail and 
> the transaction is aborted and this eventually percolates back to SQL as an 
> error 97.





[jira] [Resolved] (TRAFODION-1748) Error 97 received with large upsert and select statements

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah resolved TRAFODION-1748.
---
Resolution: Fixed

> Error 97 received with large upsert and select statements
> -
>
> Key: TRAFODION-1748
> URL: https://issues.apache.org/jira/browse/TRAFODION-1748
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Affects Versions: 1.3-incubating
>Reporter: Sean Broeder
>Assignee: Sean Broeder
>Priority: Major
> Fix For: 2.0-incubating
>
>
> From Selva-
> The script has just upserted 100000 rows and is querying these 10000 rows 
> repeatedly. From the RS logs, it looks like the memstore got flushed. Currently, 
> I have made the process loop when it gets error 8606.
>  
> This query involves ESPs. This error is coming from sqlci at the time of 
> commit.  I assume sqlci must be looping. The looping ends after 3 minutes to 
> proceed further. You can also put sqlci into debug and set loopError=0 to 
> come out of the loop to proceed further.  I also created a core file of sqlci 
> at ~/selva/core.44100.
>  
> If the query is finished, you can do the following to reproduce this issue
>  
> cd ~/selva/LSEG/master/stream
> sqlci
> log traf_stream_run.log ;
> obey traf_stream_run.sql ;
> log ;
> 
> Looking at dtm tracing I can see the regions are throwing an 
> UnknownTransactionException at prepare time, which causes the TM to refresh 
> the RegionLocations and redrive the prepare messages.  These again fail and 
> the transaction is aborted and this eventually percolates back to SQL as an 
> error 97.





[jira] [Updated] (TRAFODION-2597) ESP cores seen during daily builds after hive tests run

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2597:
--
Fix Version/s: (was: 2.2.0)
   2.3

> ESP cores seen during daily builds after hive tests run
> ---
>
> Key: TRAFODION-2597
> URL: https://issues.apache.org/jira/browse/TRAFODION-2597
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Sandhya Sundaresan
>Priority: Major
> Fix For: 2.3
>
>
> After the Hive tests run and pass successfully, we sometimes see ESP core 
> files with the following trace:
> Thread 6 (Thread 0x7fe4f36e7700 (LWP 46076)):
> #0 0x7fe55ecb168c in pthread_cond_wait@@GLIBC_2.3.2 () from 
> /lib64/libpthread.so.0
> 001 0x7fe565841f1d in ExLobLock::wait (this=0x2dc0580) at 
> ../exp/ExpLOBaccess.cpp:3367
> 002 0x7fe565842f4a in ExLobGlobals::getHdfsRequest (this=0x2dc0550) 
> at ../exp/ExpLOBaccess.cpp:3464
> 003 0x7fe565846a31 in ExLobGlobals::doWorkInThread (this=0x2dc0550) 
> at ../exp/ExpLOBaccess.cpp:3494
> 004 0x7fe565846a69 in workerThreadMain (arg=) at 
> ../exp/ExpLOBaccess.cpp:3300
> 005 0x7fe55ecadaa1 in start_thread () from /lib64/libpthread.so.0
> 006 0x7fe561f1caad in clone () from /lib64/libc.so.6
> Thread 5 (Thread 0x7fe5532cd700 (LWP 45641)):
> #0 0x7fe561f1d0a3 in epoll_wait () from /lib64/libc.so.6
> 001 0x7fe561be08e1 in SB_Trans::Sock_Controller::epoll_wait 
> (this=0x7fe561e32de0, pp_where=0x7fe561c043a8 "Sock_Comp_Thread::run", 
> pv_timeout=-1) at sock.cpp:366
> 002 0x7fe561bdfcf3 in SB_Trans::Sock_Comp_Thread::run 
> (this=0x19190b0) at sock.cpp:108
> 003 0x7fe561bdfb2d in sock_comp_thread_fun (pp_arg=0x19190b0) at 
> sock.cpp:78
> 004 0x7fe5605ce71f in SB_Thread::Thread::disp (this=0x19190b0, 
> pp_arg=0x19190b0) at thread.cpp:214
> 005 0x7fe5605ceb77 in thread_fun (pp_arg=0x19190b0) at thread.cpp:310
> 006 0x7fe5605d1f3e in sb_thread_sthr_disp (pp_arg=0x1922240) at 
> threadl.cpp:270
> 007 0x7fe55ecadaa1 in start_thread () from /lib64/libpthread.so.0
> 008 0x7fe561f1caad in clone () from /lib64/libc.so.6
> Thread 4 (Thread 0x7fe553ecf700 (LWP 45627)):
> #0 0x7fe561e676dd in sigtimedwait () from /lib64/libc.so.6
> 001 0x7fe561ba578f in local_monitor_reader (pp_arg=0x28fd) at 
> ../../../monitor/linux/clio.cxx:291
> 002 0x7fe55ecadaa1 in start_thread () from /lib64/libpthread.so.0
> 003 0x7fe561f1caad in clone () from /lib64/libc.so.6
> Thread 3 (Thread 0x7fe4f5fdf700 (LWP 45725)):
> #0 0x7fe55ecb3a00 in sem_wait () from /lib64/libpthread.so.0
> 001 0x7fe563c78c41 in ?? () from 
> /usr/lib/jvm/java-1.8.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> 002 0x7fe563c6fa4a in ?? () from 
> /usr/lib/jvm/java-1.8.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> 003 0x7fe563db7335 in ?? () from 
> /usr/lib/jvm/java-1.8.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> 004 0x7fe563db7590 in ?? () from 
> /usr/lib/jvm/java-1.8.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> 005 0x7fe563c7a8b2 in ?? () from 
> /usr/lib/jvm/java-1.8.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> 006 0x7fe55ecadaa1 in start_thread () from /lib64/libpthread.so.0
> 007 0x7fe561f1caad in clone () from /lib64/libc.so.6
> Thread 2 (Thread 0x7fe567e2e920 (LWP 45584)):
> #0 0x7fe55ecb1a5e in pthread_cond_timedwait@@GLIBC_2.3.2 () from 
> /lib64/libpthread.so.0
> 001 0x7fe5605d136c in SB_Thread::CV::wait (this=0x1902b38, pv_sec=0, 
> pv_us=39) at 
> /home/jenkins/workspace/build-rh6-AdvEnt2.3-release@2/trafodion/core/sqf/export/include/seabed/int/thread.inl:652
> 002 0x7fe5605d1431 in SB_Thread::CV::wait (this=0x1902b38, 
> pv_lock=true, pv_sec=0, pv_us=39) at 
> /home/jenkins/workspace/build-rh6-AdvEnt2.3-release@2/trafodion/core/sqf/export/include/seabed/int/thread.inl:704
> 003 0x7fe561bb7c6b in SB_Ms_Event_Mgr::wait (this=0x1902a40, 
> pv_us=39) at mseventmgr.inl:354
> 004 0x7fe561bd8c6e in XWAIT_com (pv_mask=1280, pv_time=40, 
> pv_residual=false) at pctl.cpp:982
> 005 0x7fe561bd8a6f in XWAITNO0 (pv_mask=1280, pv_time=40) at 
> pctl.cpp:905
> 006 0x7fe564e2b59a in IpcSetOfConnections::waitOnSet 
> (this=0x7fe5532ce288, timeout=-1, calledByESP=1, timedout=0x7ffd57ec2c88) at 
> ../common/Ipc.cpp:1607
> 007 0x0040718c in waitOnAll (argc=3, argv=0x7ffd57ec2de8, 
> guaReceiveFastStart=0x0) at ../common/Ipc.h:3094
> 008 runESP (argc=3, argv=0x7ffd57ec2de8, guaReceiveFastStart=0x0) at 
> ../bin/ex_esp_main.cpp:416
> 009 0x004075d3 in main (argc=3, argv=0x7ffd57ec2de8) at 
> 

[jira] [Updated] (TRAFODION-1112) LP Bug: 1438888 - Error message incorrect when describing non existing procedure

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1112:
--
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1438888 - Error message incorrect when describing non existing 
> procedure
> 
>
> Key: TRAFODION-1112
> URL: https://issues.apache.org/jira/browse/TRAFODION-1112
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-security
>Reporter: Paul Low
>Assignee: Suresh Subbiah
>Priority: Minor
> Fix For: 2.3
>
>
> Minor issue.
> Users may be confused by the error message that is returned when trying to 
> execute 'showddl procedure T1' when T1 is not a procedure.
> T1 does not exist as a procedure, but T1 does exist as a table object.
> The text in the error message is technically incorrect because object T1 does 
> exist, just not as a procedure.
> SQL>create schema schema1;
> --- SQL operation complete.
> SQL>set schema schema1;
> --- SQL operation complete.
> SQL>create table t1 (c1 int not null primary key, c2 int);
> --- SQL operation complete.
> SQL>grant select on table t1 to qauser_sqlqaa;
> --- SQL operation complete.
> SQL>showddl procedure t1;
> *** ERROR[1389] Object T1 does not exist in Trafodion. 
> *** ERROR[4082] Object TRAFODION.SCHEMA1.T1 does not exist or is inaccessible
> SQL>drop schema schema1 cascade;
> --- SQL operation complete.





[jira] [Updated] (TRAFODION-1803) Range delete on tables with nullable key columns deletes fewer rows

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1803:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Range delete on tables with nullable key columns deletes fewer rows 
> 
>
> Key: TRAFODION-1803
> URL: https://issues.apache.org/jira/browse/TRAFODION-1803
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 1.2-incubating
>Reporter: Suresh Subbiah
>Assignee: Suresh Subbiah
>Priority: Major
> Fix For: 2.3
>
>
> When a table has nullable columns in the primary/store by key and these 
> columns have null values, delete and update statements may affect fewer rows 
> than intended.
> For example
> >>cqd allow_nullable_unique_key_constraint 'on' ;
> --- SQL operation complete.
> CREATE TABLE TRAFODION.JIRA.T1
>   (
> AINT DEFAULT NULL SERIALIZED
>   , BINT DEFAULT NULL SERIALIZED
>   , PRIMARY KEY (A ASC, B ASC)
>   )
> ;
> --- SQL operation complete.
> >>insert into t1 values (1, null) ;
> --- 1 row(s) inserted.
> >>delete from t1 where a = 1 ;
> --- 0 row(s) deleted.
> >>delete from t1 ;
> --- 0 row(s) deleted.
> >>delete from t1 where a =1 and b is null ;
> --- 1 row(s) deleted.
> >>explain delete from t1 where a =1  ;
> TRAFODION_DELETE ==  SEQ_NO 2NO CHILDREN
> TABLE_NAME ... TRAFODION.JIRA.T1
> REQUESTS_IN . 10
> ROWS/REQUEST . 1
> EST_OPER_COST  0.17
> EST_TOTAL_COST ... 0.17
> DESCRIPTION
>   max_card_est .. 99
>   fragment_id  0
>   parent_frag  (none)
>   fragment_type .. master
>   iud_type ... trafodion_delete TRAFODION.JIRA.T1
>   predicate .. (A = %(1)) and (B = B)
>   begin_key .. (A = %(1)) and (B = B)
>   end_key  (A = %(1)) and (B = B)
>  Similar issue can be seen for update statements too
>  
>  >>CREATE TABLE TRAFODION.JIRA.T2
>   (
> AINT DEFAULT NULL SERIALIZED
>   , BINT DEFAULT NULL SERIALIZED
>   , CINT DEFAULT NULL SERIALIZED
>   , PRIMARY KEY (A ASC, B ASC)
>   )
> ;
> --- SQL operation complete.
> >>
> >>
> >>insert into t2 values (1, null, 3) ;
> --- 1 row(s) inserted.
> >>update t2 set c = 30 where a = 1 ;
> --- 0 row(s) updated.
>  
> TRAFODION_UPDATE ==  SEQ_NO 2NO CHILDREN
> TABLE_NAME ... TRAFODION.JIRA.T2
> REQUESTS_IN .. 1
> ROWS_OUT . 1
> EST_OPER_COST  0
> EST_TOTAL_COST ... 0
> DESCRIPTION
>   max_card_est .. 99
>   fragment_id  0
>   parent_frag  (none)
>   fragment_type .. master
>   iud_type ... trafodion_update TRAFODION.JIRA.T2
>   new_rec_expr ... (C assign %(30))
>   predicate .. (A = %(1)) and (B = B)
>   begin_key .. (A = %(1)) and (B = B)
>   end_key  (A = %(1)) and (B = B)
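A minimal sketch (plain Python, not Trafodion code) of why the implicit key predicate (B = B) shown in these plans skips rows with NULL keys: under SQL's three-valued logic, comparing NULL to anything — including itself — yields UNKNOWN, not TRUE.

```python
# Model SQL equality: NULL compared to anything is UNKNOWN (here: None).
def sql_eq(a, b):
    if a is None or b is None:
        return None          # UNKNOWN, never satisfies a WHERE clause
    return a == b

rows = [(1, None)]           # the inserted row from the repro: A=1, B=NULL
# The planner's key predicate is (A = 1) AND (B = B); only TRUE qualifies.
matched = [r for r in rows
           if sql_eq(r[0], 1) is True and sql_eq(r[1], r[1]) is True]
print(len(matched))  # 0 -- so "delete from t1 where a = 1" deletes 0 rows
```

Adding an explicit `b is null` predicate, as in the repro's final delete, is what lets the row qualify.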





[jira] [Updated] (TRAFODION-1801) Inserting NULL for all key columns in a table causes a failure

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1801:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Inserting NULL for all key columns in a table causes a failure
> --
>
> Key: TRAFODION-1801
> URL: https://issues.apache.org/jira/browse/TRAFODION-1801
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.2-incubating
>Reporter: Suresh Subbiah
>Assignee: Suresh Subbiah
>Priority: Major
> Fix For: 2.3
>
>
> cqd allow_nullable_unique_key_constraint 'on' ;
> >>create table t1 (a int, b int, primary key (a,b)) ;
> --- SQL operation complete.
> >>showddl t1 ;
> CREATE TABLE TRAFODION.JIRA.T1
>   (
> AINT DEFAULT NULL SERIALIZED
>   , BINT DEFAULT NULL SERIALIZED
>   , PRIMARY KEY (A ASC, B ASC)
>   )
> ;
> --- SQL operation complete.
> >>insert into t1(a) values (1);
> --- 1 row(s) inserted.
> >>insert into t1(b) values (2) ;
> --- 1 row(s) inserted.
> >>select * from t1 ;
> AB  
> ---  ---
>   1?
>   ?2
> --- 2 row(s) selected.
> >>insert into t1(a) values(3) ;
> --- 1 row(s) inserted.
> >>select * from t1 ;
> AB  
> ---  ---
>   1?
>   3?
>   ?2
> --- 3 row(s) selected.
> -- fails
> >>insert into t1 values (null, null) ;
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::checkAndInsertRow returned error HBASE_ACCESS_ERROR(-706). 
> Cause: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=35, exceptions:
> Tue Feb 02 19:58:34 UTC 2016, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@4c2e0b96, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException





[jira] [Updated] (TRAFODION-777) LP Bug: 1394488 - Bulk load for volatile table gets FileNotFoundException

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-777:
-
Fix Version/s: (was: 2.2.0)
   2.4

> LP Bug: 1394488 - Bulk load for volatile table gets FileNotFoundException
> -
>
> Key: TRAFODION-777
> URL: https://issues.apache.org/jira/browse/TRAFODION-777
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Barry Fritchman
>Assignee: Suresh Subbiah
>Priority: Major
> Fix For: 2.4
>
>
> When attempting to perform a bulk load into a volatile table, like this:
> create volatile table vps primary key (ps_partkey, ps_suppkey) no load as 
> select * from partsupp;
> cqd comp_bool_226 'on';
> cqd TRAF_LOAD_PREP_TMP_LOCATION '/bulkload/';
> cqd TRAF_LOAD_TAKE_SNAPSHOT 'OFF';
> load into vps select * from partsupp;
> An error 8448 is raised due to a java.io.FileNotFoundException:
> Task: LOAD             Status: Started    Object: TRAFODION.HBASE.VPS
> Task:  CLEANUP         Status: Started    Object: TRAFODION.HBASE.VPS
> Task:  CLEANUP         Status: Ended      Object: TRAFODION.HBASE.VPS
> Task:  DISABLE INDEXE  Status: Started    Object: TRAFODION.HBASE.VPS
> Task:  DISABLE INDEXE  Status: Ended      Object: TRAFODION.HBASE.VPS
> Task:  PREPARATION     Status: Started    Object: TRAFODION.HBASE.VPS
>        Rows Processed: 160 
> Task:  PREPARATION     Status: Ended      ET: 00:01:20.660
> Task:  COMPLETION      Status: Started    Object: TRAFODION.HBASE.VPS
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::doBulkLoad returned error HBASE_DOBULK_LOAD_ERROR(-714). 
> Cause: 
> java.io.FileNotFoundException: File /bulkload/TRAFODION.HBASE.VPS does not 
> exist.
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:654)
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:102)
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708)
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:708)
> org.trafodion.sql.HBaseAccess.HBulkLoadClient.doBulkLoad(HBulkLoadClient.java:442)
> It appears that the presumed qualification of the volatile table name is 
> incorrect.
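The error text shows the COMPLETION phase listing `/bulkload/TRAFODION.HBASE.VPS`, i.e. the user-visible name rather than the name the volatile table was actually created under. A rough sketch of the suspected mismatch (the staging-path layout is inferred from the error message, and the volatile schema name below is purely illustrative):

```python
def staging_dir(tmp_location, qualified_name):
    """Staging directory the bulk load writes to and later reads from;
    layout assumed from the error text ('/bulkload/TRAFODION.HBASE.VPS')."""
    return tmp_location.rstrip("/") + "/" + qualified_name

# Hypothesis: PREPARATION writes under the hidden volatile-schema name,
# while COMPLETION looks under the name the user typed. Illustrative names:
prep_dir = staging_dir("/bulkload/", "TRAFODION.VOLATILE_SCHEMA_MXID001.VPS")
completion_dir = staging_dir("/bulkload/", "TRAFODION.HBASE.VPS")
assert prep_dir != completion_dir  # mismatch => FileNotFoundException
```

If that hypothesis holds, resolving the volatile table to one canonical qualified name before both phases would make the two directories agree.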





[jira] [Updated] (TRAFODION-1212) LP Bug: 1449732 - Drop schema cascade returns error 1069

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1212:
--
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1449732 - Drop schema cascade returns error 1069
> 
>
> Key: TRAFODION-1212
> URL: https://issues.apache.org/jira/browse/TRAFODION-1212
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmu
>Reporter: Weishiun Tsai
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.3
>
>
> The frequency of ‘drop schema cascade’ returning error 1069 is still pretty 
> high, even after several attempts to address this issue.  This is causing a 
> lot of headache for the QA regression testing.  After each regression testing 
> run, there are always several schemas that couldn’t be dropped and needed to 
> be manually cleaned up.
> Multiple issues may lead to this problem.  This just happens to be one 
> scenario that is quite reproducible now.  In this particular scenario, the 
> schema contains a TMUDF library qaTmudfLib and 2 TMUDF functions qa_tmudf1 
> and qa_tmudf2.  qa_tmudf1 is a valid function, while qa_tmudf2 has a bogus 
> external name and a call to it is expected to see an error.
> After invoking both, a drop schema cascade almost always returns error 1069.
> This is seen on the r1.1.0rc3 (v0427) build installed on a workstation and it 
> is fairly reproducible with this build.  To reproduce it:
> (1) Download the attached tar file and untar it to get the 3 files in there.
> Put the files in any directory.
> (2) Make sure that you have run ./sqenv.sh of your Trafodion instance first
> as building UDF needs $MY_SQROOT for the header files.
> (3) Run build.sh
> (4) Change the line “create library qaTmudfLib file
> '/qaTMUdfTest.so';” in mytest.sql and fill in the directory path.
> (5) From sqlci, obey mytest.sql
> Here is the execution output:
> >>log mytest.log clear;
> >>drop schema mytest cascade;
> *** ERROR[1003] Schema TRAFODION.MYTEST does not exist.
> --- SQL operation failed with errors.
> >>create schema mytest;
> --- SQL operation complete.
> >>set schema mytest;
> --- SQL operation complete.
> >>
> >>create library qaTmudfLib file '/qaTMUdfTest.so';
> --- SQL operation complete.
> >>
> >>create table mytable (a int, b int);
> --- SQL operation complete.
> >>insert into mytable values (1,1),(2,2);
> --- 2 row(s) inserted.
> >>
> >>create table_mapping function qa_tmudf1()
> +>external name 'QA_TMUDF'
> +>language cpp
> +>library qaTmudfLib;
> --- SQL operation complete.
> >>
> >>select * from UDF(qa_tmudf1(TABLE(select * from mytable)));
> A            B          
> -----------  -----------
>           1            1
>           2            2
> --- 2 row(s) selected.
> >>
> >>create table_mapping function qa_tmudf2()
> +>external name 'DONTEXIST'
> +>language cpp
> +>library qaTmudfLib;
> --- SQL operation complete.
> >>
> >>select * from UDF(qa_tmudf2(TABLE(select * from mytable)));
> *** ERROR[11246] An error occurred locating function 'DONTEXIST' in library 
> 'qaTMUdfTest.so'.
> *** ERROR[8822] The statement was not prepared.
> >>
> >>drop schema mytest cascade;
> *** ERROR[1069] Schema TRAFODION.MYTEST could not be dropped.
> --- SQL operation failed with errors.





[jira] [Updated] (TRAFODION-1145) LP Bug: 1441784 - UDF: Lack of checking for scalar UDF input/output values

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1145:
--
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1441784 - UDF: Lack of checking for scalar UDF input/output values
> --
>
> Key: TRAFODION-1145
> URL: https://issues.apache.org/jira/browse/TRAFODION-1145
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Weishiun Tsai
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.3
>
> Attachments: udf_bug (1).tar
>
>
> Ideally, input/output values for a scalar UDF should be verified at create
> function time, but this check is not in place right now.  As a result, many
> ill-constructed input/output values are left to be handled at run time, and
> the run-time behavior is haphazard at best.
> Here shows 3 examples of such behavior:
> (a) myudf1 defines 2 input values with the same name.  Create function does 
> not return an error.  But the invocation at the run time returns a perplexing 
> 4457 error indicating internal out-of-range index error.
> (b) myudf2 defines an input value and an output value with the same name.  
> Create function does not return an error.  But the invocation at the run time 
> returns a perplexing 4457 error complaining that there is no output value.
> (c) myudf3 defines 2 output values with the same name.  Create function does 
> not return an error.  The invocation at the run time simply ignores the 2nd 
> output value, as well as the fact that the C function only defines 1 output 
> value.  It returns one value as if the 2nd output value was never defined at 
> all.
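All three cases come down to a name-uniqueness check that CREATE FUNCTION could perform up front. A hedged Python sketch of such a check (illustrative only, not the Trafodion implementation):

```python
def check_udf_signature(in_params, out_params):
    """Validate a scalar-UDF signature at CREATE FUNCTION time:
    at least one output value must be declared, and every input and
    output name must be unique across the whole signature."""
    if not out_params:
        raise ValueError("at least one output value is required")
    names = list(in_params) + list(out_params)
    seen, dupes = set(), set()
    for n in names:
        (dupes if n in seen else seen).add(n)
    if dupes:
        raise ValueError("duplicate parameter names: " + ", ".join(sorted(dupes)))

# The three failing signatures from the report would all be rejected here:
#   myudf1: (INVAL, INVAL) -> (OUTVAL)     duplicate input names
#   myudf2: (INVAL)        -> (INVAL)      input name reused as output
#   myudf3: (INVAL)        -> (OUTVAL, OUTVAL)  duplicate output names
```

Rejecting these at registration time would replace the run-time 4457 errors (and the silently ignored second output of myudf3) with a single clear DDL error.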
> This is seen on the v0407 build installed on a workstation. To reproduce it:
> (1) Download the attached tar file and untar it to get the 3 files in there.
> Put the files in any directory.
> (2) Make sure that you have run ./sqenv.sh of your Trafodion instance first
> as building UDF needs $MY_SQROOT for the header files.
> (3) Run build.sh
> (4) Change the line “create library qa_udf_lib file '/myudf.so';” in
> mytest.sql and fill in the directory path.
> (5) From sqlci, obey mytest.sql
> 
> Here is the execution output:
> >>create schema mytest;
> --- SQL operation complete.
> >>set schema mytest;
> --- SQL operation complete.
> >>
> >>create library qa_udf_lib file '/myudf.so';
> --- SQL operation complete.
> >>
> >>create table mytable (a int, b int);
> --- SQL operation complete.
> >>insert into mytable values (1,1),(2,2),(3,3);
> --- 3 row(s) inserted.
> >>
> >>create function myudf1
> +>(INVAL int, INVAL int)
> +>returns (OUTVAL int)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_int32'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>select myudf1(a, b) from mytable;
> *** ERROR[4457] An error was encountered processing metadata for user-defined 
> function TRAFODION.MYTEST.MYUDF1.  Details: Internal error in 
> setInOrOutParam(): index position out of range..
> *** ERROR[8822] The statement was not prepared.
> >>
> >>create function myudf2
> +>(INVAL int)
> +>returns (INVAL int)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_int32'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>select myudf2(a) from mytable;
> *** ERROR[4457] An error was encountered processing metadata for user-defined 
> function TRAFODION.MYTEST.MYUDF2.  Details: User-defined functions must have 
> at least one registered output value.
> *** ERROR[8822] The statement was not prepared.
> >>
> >>create function myudf3
> +>(INVAL int)
> +>returns (OUTVAL int, OUTVAL int)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_int32'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>select myudf3(a) from mytable;
> OUTVAL     
> -----------
>           1
>           2
>           3
> --- 3 row(s) selected.
> >>
> >>drop function myudf1 cascade;
> --- SQL operation complete.
> >>drop function myudf2 cascade;
> --- SQL operation complete.
> >>drop function myudf3 cascade;
> --- SQL operation complete.
> >>drop library qa_udf_lib cascade;
> --- SQL operation complete.
> >>drop schema mytest cascade;
> --- SQL operation complete.





[jira] [Updated] (TRAFODION-1141) LP Bug: 1441378 - UDF: Multi-valued scalar UDF with clob/blob cores sqlci with SIGSEGV

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1141:
--
Fix Version/s: (was: 2.2.0)
   2.4

> LP Bug: 1441378 - UDF: Multi-valued scalar UDF with clob/blob cores sqlci 
> with SIGSEGV
> --
>
> Key: TRAFODION-1141
> URL: https://issues.apache.org/jira/browse/TRAFODION-1141
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Weishiun Tsai
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.4
>
> Attachments: udf_bug.tar
>
>
> While a single-valued scalar UDF works fine with the clob or blob data type, a
> multi-valued scalar UDF cores sqlci with SIGSEGV even with just 2 clob or blob
> output values.
> Since clob and blob data types require large buffers, I am assuming this type 
> of scalar UDF is stressing the heap used internally somewhere.  But a core is 
> always bad.  If there is a limit on how clob and blob can be handled in a 
> scalar UDF, a check should be put in place and an error should be returned 
> more gracefully.
> This is seen on the v0407 build installed on a workstation. To reproduce it:
> (1) Download the attached tar file and untar it to get the 3 files in there.
> Put the files in any directory.
> (2) Make sure that you have run ./sqenv.sh of your Trafodion instance first
> as building UDF needs $MY_SQROOT for the header files.
> (3) Run build.sh
> (4) Change the line “create library qa_udf_lib file '/myudf.so';” in
> mytest.sql and fill in the directory path.
> (5) From sqlci, obey mytest.sql
> ---
> Here is the execution output:
> >>create schema mytest;
> --- SQL operation complete.
> >>set schema mytest;
> --- SQL operation complete.
> >>
> >>create library qa_udf_lib file '/myudf.so';
> --- SQL operation complete.
> >>
> >>create function qa_udf_clob
> +>(INVAL clob)
> +>returns (c_clob clob)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_vcstruct'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>create function qa_udf_blob
> +>(INVAL blob)
> +>returns (c_blob blob)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_vcstruct'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>create function qa_udf_clob_mvf
> +>(INVAL clob)
> +>returns (c_clob1 clob, c_clob2 clob)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_vcstruct_mvf'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>create function qa_udf_blob_mvf
> +>(INVAL blob)
> +>returns (c_blob1 blob, c_blob2 blob)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_vcstruct_mvf'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>create table mytable (c_clob clob, c_blob blob);
> --- SQL operation complete.
> >>insert into mytable values ('CLOB_1', 'BLOB_1');
> --- 1 row(s) inserted.
> >>
> >>select
> +>cast(qa_udf_clob(c_clob) as char(10)),
> +>cast(qa_udf_blob(c_blob) as char(10))
> +>from mytable;
> (EXPR)      (EXPR)    
> ----------  ----------
> CLOB_1      BLOB_1    
> --- 1 row(s) selected.
> >>
> >>select qa_udf_clob_mvf(c_clob) from mytable;
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x74c5b9a2, pid=18680, tid=140737187650592
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_67-b01) (build 
> 1.7.0_67-b01)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libexecutor.so+0x2489a2]  ExSimpleSQLBuffer::init(NAMemory*)+0x92
> #
> # Core dump written. Default location: /core or core.18680
> #
> # An error report file with more information is saved as:
> # /hs_err_pid18680.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.sun.com/bugreport/crash.jsp
> #
> Aborted (core dumped)
> ---
> Here is the stack trace of the core.
> (gdb) bt
> #0  0x0039e28328a5 in raise () from /lib64/libc.so.6
> #1  0x0039e283400d in abort () from /lib64/libc.so.6
> #2  0x77120a55 in os::abort(bool) ()
>from /opt/home/tools/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #3  0x772a0f87 in VMError::report_and_die() ()
>from /opt/home/tools/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #4  0x772a150e 

[jira] [Updated] (TRAFODION-1014) LP Bug: 1421747 - SQL Upsert using load periodically not saving all rows

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1014:
--
Fix Version/s: (was: 2.2.0)
   2.4

> LP Bug: 1421747 - SQL Upsert using load periodically not saving all rows
> 
>
> Key: TRAFODION-1014
> URL: https://issues.apache.org/jira/browse/TRAFODION-1014
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Gary W Hall
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.4
>
>
> When running a script that initiates 32 parallel streams loading a table, we 
> have found that periodically there are gaps in the resulting saved data...for 
> example we will find that we are missing stock items #29485 thru #30847 
> inclusive for Warehouse #5.  The number of gaps found for a given load run 
> varies...normally none, but I've seen as many as eight gaps of missing data.
> The sql statement used in all streams is as follows:
> sql_statement = "upsert using load into " + stock_table_name
>   + " (S_I_ID, S_W_ID, S_QUANTITY, S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04,"
>   + " S_DIST_05, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10,"
>   + " S_YTD, S_ORDER_CNT, S_REMOTE_CNT, S_DATA)"
>   + " values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
> This is not easily repeatable…I’ve run the script to drop/create/load this 
> table 12 times today, resulting in some missing rows 4 of the 12 times.  
> Worst case we were missing 0.03% of the required rows in the table…obviously, 
> ANY missing data is not acceptable.
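Gaps like the missing stock items #29485 through #30847 can be confirmed after a load by scanning each warehouse's item ids for missing contiguous ranges. A small illustrative sketch (the id lists would come from a per-warehouse SELECT, which is assumed here):

```python
def find_gaps(ids, lo, hi):
    """Return inclusive (start, end) ranges within [lo, hi] that are
    missing from ids, i.e. runs of item numbers the load never saved."""
    present = set(ids)
    gaps, start = [], None
    for i in range(lo, hi + 1):
        if i not in present:
            if start is None:
                start = i          # a new gap opens
        elif start is not None:
            gaps.append((start, i - 1))  # the gap just closed
            start = None
    if start is not None:
        gaps.append((start, hi))   # gap runs to the end of the range
    return gaps

# The reported case (only the boundary rows survive) shows as one range:
find_gaps([29484, 30848], 29484, 30848)  # -> [(29485, 30847)]
```

Running this per warehouse after each load run would turn the "periodically missing rows" symptom into a concrete, countable list of ranges.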
> Our test environment control parameters (in case any are of value to you)...
> OrderEntryLoader
>    Load Starting : 2015-02-13 04:58:13
>     PropertyFile : trafodion.properties
>         Database : trafodion
>           Schema : trafodion.javabench
>      ScaleFactor : 512
>          Streams : 32
>         Maintain : true
>             Load : true
>       AutoCommit : true
>        BatchSize : 1000
>           Upsert : true
>        UsingLoad : true
>   IntervalLength : 60





[jira] [Updated] (TRAFODION-531) LP Bug: 1355034 - SPJ w result set failed with ERROR[8413]

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-531:
-
Fix Version/s: (was: 2.2.0)
   2.4

> LP Bug: 1355034 - SPJ w result set failed with ERROR[8413]
> --
>
> Key: TRAFODION-531
> URL: https://issues.apache.org/jira/browse/TRAFODION-531
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Chong Hsu
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.4
>
>
> Tested with Trafodion build, 20140801-0830.
> Calling a SPJ with result set:
>public static void NS786(String paramString, ResultSet[] 
> paramArrayOfResultSet)
>  throws Exception
>{
>  String str1 = "jdbc:default:connection";
>  
>  Connection localConnection = DriverManager.getConnection(str1);
>  String str2 = "select * from " + paramString;
>  Statement localStatement = localConnection.createStatement();
>  paramArrayOfResultSet[0] = localStatement.executeQuery(str2);
>}
> it failed with ERROR[8413]:
> *** ERROR[8413] The string argument contains characters that cannot be 
> converted. [2014-08-11 04:06:32]
> *** ERROR[8402] A string overflow occurred during the evaluation of a 
> character expression. Conversion of Source Type:LARGEINT(REC_BIN64_SIGNED) 
> Source Value:79341348341248 to Target Type:CHAR(REC_BYTE_F_ASCII). 
> [2014-08-11 04:06:32]
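The overflow itself is easy to see: the LARGEINT source value needs more characters than the CHAR target provides. A small illustrative check (the target widths below are assumed for demonstration, not taken from the failing table):

```python
def fits_in_char(value, char_width):
    """True if the decimal rendering of value fits a CHAR(char_width) target."""
    return len(str(value)) <= char_width

source_value = 79341348341248          # value from the error message: 14 digits
assert fits_in_char(source_value, 20)      # a wide enough CHAR target is fine
assert not fits_in_char(source_value, 10)  # a narrow CHAR target overflows
```

This is why the result-set column metadata matters: if the SPJ's result set describes the LARGEINT column as a short CHAR, every non-small value trips ERROR 8402.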
> The SPJ Jar file is attached. Here are the steps to produce the error:
>   
> set schema testspj;
> create library spjrs file '//Testrs.jar';
> create procedure RS786(varchar(100))
>language java 
>parameter style java  
>external name 'Testrs.NS786'
>dynamic result sets 1
>library spjrs;
> create table datetime_interval (
> date_key            date not null,
> date_col            date default date '0001-01-01',
> time_col            time default time '00:00:00',
> timestamp_col       timestamp default timestamp '0001-01-01:00:00:00.00',
> interval_year       interval year default interval '00' year,
> yr2_to_mo           interval year to month default interval '00-00' year to month,
> yr6_to_mo           interval year(6) to month default interval '00-00' year(6) to month,
> yr16_to_mo          interval year(16) to month default interval '-00' year(16) to month,
> year18              interval year(18) default interval '00' year(18),
> day2                interval day default interval '00' day,
> day18               interval day(18) default interval '00' day(18),
> day16_to_hr         interval day(16) to hour default interval ':00' day(16) to hour,
> day14_to_min        interval day(14) to minute default interval '00:00:00' day(14) to minute,
> day5_to_second6     interval day(5) to second(6) default interval '0:00:00:00.00' day(5) to second(6),
> hour2               interval hour default interval '00' hour,
> hour18              interval hour(18) default interval '00' hour(18),
> hour16_to_min       interval hour(16) to minute default interval ':00' hour(16) to minute,
> hour14_to_ss0       interval hour(14) to second(0) default interval '00:00:00' hour(14) to second(0),
> hour10_to_second4   interval hour(10) to second(4) default interval '00:00:00.' hour(10) to second(4),
> min2                interval minute default interval '00' minute,
> min18               interval minute(18) default interval '00' minute(18),
> min13_s3            interval minute(13) to second(3) default interval '0:00.000' minute(13) to second(3),
> min16_s0            interval minute(16) to second(0) default interval ':00' minute(16) to second(0),
> seconds             interval second default interval '00' second,
> seconds5            interval second(5) default interval '0' second(5),
> seconds18           interval second(18,0) default interval '00' second(18,0),
> seconds15           interval