[jira] [Resolved] (TRAFODION-3325) Inefficient plan when using a join to a tuple list

2019-09-17 Thread David Wayne Birdsall (Jira)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3325.
-
Fix Version/s: 2.4
   Resolution: Fixed

> Inefficient plan when using a join to a tuple list
> --
>
> Key: TRAFODION-3325
> URL: https://issues.apache.org/jira/browse/TRAFODION-3325
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
> Attachments: repro.sql.txt
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> In the example below, statement s1 gets a hash join plan that does a full 
> scan on table T1, even though a far more efficient nested join plan using 
> index IT1 is possible. The problem is that the Optimizer is not considering 
> an index scan using IT1 when the join predicates are between T1 and a tuple 
> list.
> In contrasting examples below where we use an IN list instead of a join to a 
> tuple list, and where we join to a table T2 instead of a tuple list, we do 
> get efficient index access plans.
> To reproduce, execute the attached script to create the test tables and 
> compile the test queries.
> When executed, we see the following plans:
> {quote}>>obey repro.sql(testit);
> >>
> >>prepare s1 from select t1.* from t1 join (values (3),(4)) f(b) on t1.b = 
> >>f.b;
> --- SQL command prepared.
> >>explain options 'f' s1;
> LC RC OP OPERATOR OPT DESCRIPTION CARD
> ---- ---- ---- --------------------  --------------------  ---------
> 5 . 6 root 4.00E+002
> 4 . 5 esp_exchange 1:2(hash2) 4.00E+002
> 3 2 4 hybrid_hash_join 4.00E+002
> . . 3 trafodion_scan T1 2.00E+006
> 1 . 2 esp_exchange 2(rep-b):1 2.00E+000
> . . 1 tuplelist 2.00E+000
> --- SQL operation complete.
> >>showshape select * from t1 join (values (3),(4)) f(b) on t1.b = f.b;
> control query shape esp_exchange(hybrid_hash_join(
> scan(path 'TRAFODION.SCH.T1', forward, blocks_per_access 489 , mdam off),
> esp_exchange(anything)));
> --- SQL operation complete.
> >>
> >>prepare s2 from select t1.* from t1 where b in (3,4);
> --- SQL command prepared.
> >>explain options 'f' s2;
> LC RC OP OPERATOR OPT DESCRIPTION CARD
> ---- ---- ---- --------------------  --------------------  ---------
> 3 . 4 root 4.00E+002
> 1 2 3 nested_join 4.00E+002
> . . 2 trafodion_vsbb_scan T1 1.00E+000
> . . 1 trafodion_index_scan IT1 4.00E+002
> --- SQL operation complete.
> >>showshape select * from t1 where b in (3,4);
> control query shape nested_join(scan(path 'TRAFODION.SCH.IT1', forward
> , blocks_per_access 1 , mdam forced, mdam_columns (dense, sparse)),
> scan(path 'TRAFODION.SCH.T1', forward, blocks_per_access 400 , mdam off)
> ,INDEXJOIN);
> --- SQL operation complete.
> >>
> >>prepare s3 from select t1.* from t1 join t2 on t1.b = t2.b where t2.b in 
> >>(3,4);
> --- SQL command prepared.
> >>explain options 'f' s3;
> LC RC OP OPERATOR OPT DESCRIPTION CARD
> ---- ---- ---- --------------------  --------------------  ---------
> 5 . 6 root 4.00E+002
> 1 4 5 nested_join 4.00E+002
> 2 3 4 nested_join 2.00E+002
> . . 3 trafodion_vsbb_scan T1 1.00E+000
> . . 2 trafodion_index_scan IT1 2.00E+002
> . . 1 trafodion_scan T2 2.00E+000
> --- SQL operation complete.
> >>showshape select t1.* from t1 join t2 on t1.b = t2.b where t2.b in (3,4);
> control query shape nested_join(scan(path 'TRAFODION.SCH.T2', forward
> , blocks_per_access 2 , mdam forced, mdam_columns all(dense)),nested_join(
> scan(path 'TRAFODION.SCH.IT1', forward, blocks_per_access 1 , mdam off),
> scan(path 'TRAFODION.SCH.T1', forward, blocks_per_access 400 , mdam off)
> ,INDEXJOIN));
> --- SQL operation complete.
> >>
> >>exit;
> End of MXCI Session
> {quote}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (TRAFODION-3326) Rewrite logsort utility

2019-09-16 Thread David Wayne Birdsall (Jira)
David Wayne Birdsall created TRAFODION-3326:
---

 Summary: Rewrite logsort utility
 Key: TRAFODION-3326
 URL: https://issues.apache.org/jira/browse/TRAFODION-3326
 Project: Apache Trafodion
  Issue Type: Improvement
  Components: dev-environment
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


The logsort utility is used by the Trafodion development regression tests to 
sort sqlci log output for SELECT statements that lack ORDER BY clauses. This is 
done so that non-determinism in the order in which rows are listed does not 
cause false failures when comparing expected results with actual results.

This utility was written (by me, as luck would have it) around 1990 or so in C. 
It is object-oriented in design, and so could easily be rewritten using C++ 
classes. It also manually implements things such as linked lists that could be 
replaced with C++ STL containers.

The most compelling reason for a rewrite is that it uses the equivalent of a 
bubble sort when sorting output. For most of our test results this is not an 
issue; the number of rows listed in a typical test is usually in the tens or 
fewer. But if there is a bug (either in the engine itself or in the testware), 
we may get millions of rows or more. When this happens, logsort may run for 
hours. So it would be good to replace the existing O(n^2) sort with a more 
efficient one, perhaps using the STL for this purpose.





[jira] [Work started] (TRAFODION-3325) Inefficient plan when using a join to a tuple list

2019-09-09 Thread David Wayne Birdsall (Jira)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on TRAFODION-3325 started by David Wayne Birdsall.
---
> Inefficient plan when using a join to a tuple list
> --
>
> Key: TRAFODION-3325
> URL: https://issues.apache.org/jira/browse/TRAFODION-3325
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Attachments: repro.sql.txt
>
>





[jira] [Updated] (TRAFODION-3325) Inefficient plan when using a join to a tuple list

2019-09-09 Thread David Wayne Birdsall (Jira)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall updated TRAFODION-3325:

Attachment: repro.sql.txt

> Inefficient plan when using a join to a tuple list
> --
>
> Key: TRAFODION-3325
> URL: https://issues.apache.org/jira/browse/TRAFODION-3325
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Attachments: repro.sql.txt
>
>





[jira] [Updated] (TRAFODION-3325) Inefficient plan when using a join to a tuple list

2019-09-09 Thread David Wayne Birdsall (Jira)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall updated TRAFODION-3325:

Description: 
In the example below, statement s1 gets a hash join plan that does a full scan 
on table T1, even though a far more efficient nested join plan using index IT1 
is possible. The problem is that the Optimizer is not considering an index scan 
using IT1 when the join predicates are between T1 and a tuple list.

In contrasting examples below where we use an IN list instead of a join to a 
tuple list, and where we join to a table T2 instead of a tuple list, we do get 
efficient index access plans.

To reproduce, execute the attached script to create the test tables and compile 
the test queries.

When executed, we see the following plans:
{quote}>>obey repro.sql(testit);
>>
>>prepare s1 from select t1.* from t1 join (values (3),(4)) f(b) on t1.b = f.b;

--- SQL command prepared.
>>explain options 'f' s1;

LC RC OP OPERATOR OPT DESCRIPTION CARD
---- ---- ---- --------------------  --------------------  ---------

5 . 6 root 4.00E+002
4 . 5 esp_exchange 1:2(hash2) 4.00E+002
3 2 4 hybrid_hash_join 4.00E+002
. . 3 trafodion_scan T1 2.00E+006
1 . 2 esp_exchange 2(rep-b):1 2.00E+000
. . 1 tuplelist 2.00E+000

--- SQL operation complete.
>>showshape select * from t1 join (values (3),(4)) f(b) on t1.b = f.b;
control query shape esp_exchange(hybrid_hash_join(
scan(path 'TRAFODION.SCH.T1', forward, blocks_per_access 489 , mdam off),
esp_exchange(anything)));

--- SQL operation complete.
>>
>>prepare s2 from select t1.* from t1 where b in (3,4);

--- SQL command prepared.
>>explain options 'f' s2;

LC RC OP OPERATOR OPT DESCRIPTION CARD
---- ---- ---- --------------------  --------------------  ---------

3 . 4 root 4.00E+002
1 2 3 nested_join 4.00E+002
. . 2 trafodion_vsbb_scan T1 1.00E+000
. . 1 trafodion_index_scan IT1 4.00E+002

--- SQL operation complete.
>>showshape select * from t1 where b in (3,4);
control query shape nested_join(scan(path 'TRAFODION.SCH.IT1', forward
, blocks_per_access 1 , mdam forced, mdam_columns (dense, sparse)),
scan(path 'TRAFODION.SCH.T1', forward, blocks_per_access 400 , mdam off)
,INDEXJOIN);

--- SQL operation complete.
>>
>>prepare s3 from select t1.* from t1 join t2 on t1.b = t2.b where t2.b in 
>>(3,4);

--- SQL command prepared.
>>explain options 'f' s3;

LC RC OP OPERATOR OPT DESCRIPTION CARD
---- ---- ---- --------------------  --------------------  ---------

5 . 6 root 4.00E+002
1 4 5 nested_join 4.00E+002
2 3 4 nested_join 2.00E+002
. . 3 trafodion_vsbb_scan T1 1.00E+000
. . 2 trafodion_index_scan IT1 2.00E+002
. . 1 trafodion_scan T2 2.00E+000

--- SQL operation complete.
>>showshape select t1.* from t1 join t2 on t1.b = t2.b where t2.b in (3,4);
control query shape nested_join(scan(path 'TRAFODION.SCH.T2', forward
, blocks_per_access 2 , mdam forced, mdam_columns all(dense)),nested_join(
scan(path 'TRAFODION.SCH.IT1', forward, blocks_per_access 1 , mdam off),
scan(path 'TRAFODION.SCH.T1', forward, blocks_per_access 400 , mdam off)
,INDEXJOIN));

--- SQL operation complete.
>>
>>exit;

End of MXCI Session
{quote}
 

> Inefficient plan when using a join to a tuple list
> --
>
> Key: TRAFODION-3325
> URL: https://issues.apache.org/jira/browse/TRAFODION-3325
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Attachments: repro.sql.txt
>
>

[jira] [Created] (TRAFODION-3325) Inefficient plan when using a join to a tuple list

2019-09-09 Thread David Wayne Birdsall (Jira)
David Wayne Birdsall created TRAFODION-3325:
---

 Summary: Inefficient plan when using a join to a tuple list
 Key: TRAFODION-3325
 URL: https://issues.apache.org/jira/browse/TRAFODION-3325
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-cmp
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall








[jira] [Resolved] (TRAFODION-3324) Make compiler a bit more robust when Hive DDL changes are done in Hive

2019-08-14 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3324.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Make compiler a bit more robust when Hive DDL changes are done in Hive
> --
>
> Key: TRAFODION-3324
> URL: https://issues.apache.org/jira/browse/TRAFODION-3324
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The scenario: Create a Hive table, populate it with some rows, then in 
> Trafodion perform UPDATE STATISTICS on it. Then, in Hive, drop the table and 
> create it again with fewer columns. Then in Trafodion, execute "select 
> count(*)" against the table. Trafodion will then core, because the old 
> histograms are still there and are not consistent with the new Hive DDL.
> The following scripts will reproduce it:
> In Hive, do:
> {quote}create table hive1 
> ( a int,
>  b int,
>  c int )
> stored as textfile;
> {quote}
> In Trafodion sqlci do:
> {quote}insert into hive.hive.hive1 values 
> (1,2,3),(2,2,3),(2,3,4),(3,3,5),(3,4,6),(3,6,8),(4,1,1);
> update statistics for table hive.hive.hive1 on every column;
> {quote}
> Then in Hive do:
> {quote}drop table if exists hive1;
> create table hive1 
> ( a int,
>  b int )
> stored as textfile;
> {quote}
> Then in Trafodion sqlci do:
> {quote}select count(*) from hive.hive.hive1;
> {quote}
> You will now get a sqlci core, with a stack trace like:
> {quote}(gdb) bt
> #0  0x7fe5b6976495 in raise () from /lib64/libc.so.6
> #1  0x7fe5b6977bfd in abort () from /lib64/libc.so.6
> #2  0x7fe5b63a93ea in assert_botch_abend (f=
>     0x7fe5b06459e8 "../common/Collections.cpp", l=903, m=
>     0x7fe5b06459c8 "List index exceeds # of entries", c=0x0)
>     at ../export/NAAbort.cpp:285
> #3  0x7fe5b63a9107 in NAAbort (filename=
>     0x7fe5b06459e8 "../common/Collections.cpp", lineno=903, msg=
>     0x7fe5b06459c8 "List index exceeds # of entries")
>     at ../export/NAAbort.cpp:207
> #4  0x7fe5b0455371 in NAList::operator[] (this=0x7fe5a3a45e88,
>     i=2) at ../common/Collections.cpp:903
> #5  0x7fe5afd95ca6 in HSHistogrmCursor::fetch (this=0x7fff381773a0, cs=
>     ..., cursor2=..., colmap=0x7fe594704428, fakeHistogram=0x7fe5947043a8,
>     emptyHistogram=0x7fe5947043c8, smallSampleHistogram=0x7fe5947043e8,
>     smallSampleSize=0x7fe594704408, fakeRowCount=@0x7fff38177f78, statsTime=
>     @0x7fe5a3a45fd8, allFakeStats=@0x7fff38178794, preFetch=1, offset=0,
>     tabDef=0x7fe594705238, cmpContextSwitched=1) at ../ustat/hs_read.cpp:1537
> #6  0x7fe5afd954ad in readHistograms (tabDef=0x7fe594705238, fullQualName=
>     ..., histogramTableName=..., histintsTableName=..., specialTable=0, type=
>     ExtendedQualName::NORMAL_TABLE, colArray=..., statsTime=@0x7fe5a3a45fd8,
>     allFakeStat=@0x7fff38178794, preFetch=1, fakeHistogram=0x7fe5947043a8,
>     emptyHistogram=0x7fe5947043c8, smallSampleHistogram=0x7fe5947043e8,
> ---Type  to continue, or q  to quit---
>     smallSampleSize=0x7fe594704408, colmap=0x7fe594704428, histogramRowCount=
>     @0x7fff38177f78, cs=0x7fff38177d10, offset=0) at ../ustat/hs_read.cpp:1330
> #7  0x7fe5afd93d3f in FetchHistograms (qualifiedName=..., type=
>     ExtendedQualName::NORMAL_TABLE, colArray=..., colStatsList=...,
>     isSQLMPTable=0, heap=0x7fe5946e3228, statsTime=@0x7fe5a3a45fd8,
>     allFakeStat=@0x7fff38178794, preFetch=1, createStatsSize=0)
>     at ../ustat/hs_read.cpp:962
> #8  0x7fe5ae795d23 in HistogramCache::createColStatsList (this=
>     0x7fe5a3a6cac8, table=..., cachedHistograms=0x0)
>     at ../optimizer/NATable.cpp:497
> #9  0x7fe5ae7958ec in HistogramCache::getHistograms (this=0x7fe5a3a6cac8,
>     table=...) at ../optimizer/NATable.cpp:327
> #10 0x7fe5ae7a8d6d in NATable::getStatistics (this=0x7fe5a3a45be0)
>     at ../optimizer/NATable.cpp:5980
> #11 0x7fe5aea95bae in TableDesc::getTableColStats (this=0x7fe5946eb4d8)
>     at ../optimizer/TableDesc.cpp:373
> #12 0x7fe5b0631eda in TableDesc::tableColStats (this=0x7fe5946eb4d8)
>     at ../optimizer/TableDesc.h:134
> #13 0x7fe5ae892f6e in Scan::synthLogProp (this=0x7fe5946c86b0, normWAPtr=
>     0x7fff3817b270) at ../optimizer/OptLogRelExpr.cpp:5193
> #14 0x7fe5ae880279 in RelExpr::synthLogProp (this=0x7fe5946f70f0,
>     normWAPtr=0x7fff3817b270) at ../optimizer/OptLogRelExpr.cpp:622
> #15 0x7fe5ae88ff09 in GroupByAgg::synthLogProp (this=0x7fe5946f70f0,
> ---Type  to continue, or q  to quit---
> {quote}
> Note: Trafodion has the feature that it can execute Hive DDL. So, the CREATE 
> TABLE and DROP TABLE statements could be done through sqlci directly. (The 
> table name has to be qualified as hive.hive.hive1 however.) When done through 
> sqlci, any Trafodion histograms are cleaned up and this issue does not occur.

[jira] [Work started] (TRAFODION-3324) Make compiler a bit more robust when Hive DDL changes are done in Hive

2019-08-12 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on TRAFODION-3324 started by David Wayne Birdsall.
---
> Make compiler a bit more robust when Hive DDL changes are done in Hive
> --
>
> Key: TRAFODION-3324
> URL: https://issues.apache.org/jira/browse/TRAFODION-3324
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>

[jira] [Resolved] (TRAFODION-3323) Remove obsolete line from odbc.ini file in documentation

2019-08-12 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3323.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Remove obsolete line from odbc.ini file in documentation
> 
>
> Key: TRAFODION-3323
> URL: https://issues.apache.org/jira/browse/TRAFODION-3323
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: documentation
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In the Client Installation Guide, in section 9.2.2, Configure Data Sources, 
> it describes an odbc.ini file to use with the Unix ODBC driver and the odb 
> utility. It contains the line "AppUnicodeType=utf16" which does not work 
> correctly with odb anymore. This line should be removed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (TRAFODION-3324) Make compiler a bit more robust when Hive DDL changes are done in Hive

2019-08-08 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3324:
---

 Summary: Make compiler a bit more robust when Hive DDL changes are 
done in Hive
 Key: TRAFODION-3324
 URL: https://issues.apache.org/jira/browse/TRAFODION-3324
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-cmp
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


The scenario: Create a Hive table, populate it with some rows, then in 
Trafodion perform UPDATE STATISTICS on it. Then, in Hive, drop the table and 
create it again with fewer columns. Then in Trafodion, execute "select 
count(*)" against the table. Trafodion will then core, because the old 
histograms are still there and are not consistent with the new Hive DDL.

The following scripts will reproduce it:

In Hive, do:
{quote}create table hive1 
( a int,
 b int,
 c int )
stored as textfile;
{quote}
In Trafodion sqlci do:
{quote}insert into hive.hive.hive1 values 
(1,2,3),(2,2,3),(2,3,4),(3,3,5),(3,4,6),(3,6,8),(4,1,1);

update statistics for table hive.hive.hive1 on every column;
{quote}
Then in Hive do:
{quote}drop table if exists hive1;

create table hive1 
( a int,
 b int )
stored as textfile;
{quote}
Then in Trafodion sqlci do:
{quote}select count(*) from hive.hive.hive1;
{quote}
You will now get a sqlci core, with a stack trace like:
{quote}(gdb) bt
#0  0x7fe5b6976495 in raise () from /lib64/libc.so.6
#1  0x7fe5b6977bfd in abort () from /lib64/libc.so.6
#2  0x7fe5b63a93ea in assert_botch_abend (f=
    0x7fe5b06459e8 "../common/Collections.cpp", l=903, m=
    0x7fe5b06459c8 "List index exceeds # of entries", c=0x0)
    at ../export/NAAbort.cpp:285
#3  0x7fe5b63a9107 in NAAbort (filename=
    0x7fe5b06459e8 "../common/Collections.cpp", lineno=903, msg=
    0x7fe5b06459c8 "List index exceeds # of entries")
    at ../export/NAAbort.cpp:207
#4  0x7fe5b0455371 in NAList::operator[] (this=0x7fe5a3a45e88,
    i=2) at ../common/Collections.cpp:903
#5  0x7fe5afd95ca6 in HSHistogrmCursor::fetch (this=0x7fff381773a0, cs=
    ..., cursor2=..., colmap=0x7fe594704428, fakeHistogram=0x7fe5947043a8,
    emptyHistogram=0x7fe5947043c8, smallSampleHistogram=0x7fe5947043e8,
    smallSampleSize=0x7fe594704408, fakeRowCount=@0x7fff38177f78, statsTime=
    @0x7fe5a3a45fd8, allFakeStats=@0x7fff38178794, preFetch=1, offset=0,
    tabDef=0x7fe594705238, cmpContextSwitched=1) at ../ustat/hs_read.cpp:1537
#6  0x7fe5afd954ad in readHistograms (tabDef=0x7fe594705238, fullQualName=
    ..., histogramTableName=..., histintsTableName=..., specialTable=0, type=
    ExtendedQualName::NORMAL_TABLE, colArray=..., statsTime=@0x7fe5a3a45fd8,
    allFakeStat=@0x7fff38178794, preFetch=1, fakeHistogram=0x7fe5947043a8,
    emptyHistogram=0x7fe5947043c8, smallSampleHistogram=0x7fe5947043e8,
---Type  to continue, or q  to quit---
    smallSampleSize=0x7fe594704408, colmap=0x7fe594704428, histogramRowCount=
    @0x7fff38177f78, cs=0x7fff38177d10, offset=0) at ../ustat/hs_read.cpp:1330
#7  0x7fe5afd93d3f in FetchHistograms (qualifiedName=..., type=
    ExtendedQualName::NORMAL_TABLE, colArray=..., colStatsList=...,
    isSQLMPTable=0, heap=0x7fe5946e3228, statsTime=@0x7fe5a3a45fd8,
    allFakeStat=@0x7fff38178794, preFetch=1, createStatsSize=0)
    at ../ustat/hs_read.cpp:962
#8  0x7fe5ae795d23 in HistogramCache::createColStatsList (this=
    0x7fe5a3a6cac8, table=..., cachedHistograms=0x0)
    at ../optimizer/NATable.cpp:497
#9  0x7fe5ae7958ec in HistogramCache::getHistograms (this=0x7fe5a3a6cac8,
    table=...) at ../optimizer/NATable.cpp:327
#10 0x7fe5ae7a8d6d in NATable::getStatistics (this=0x7fe5a3a45be0)
    at ../optimizer/NATable.cpp:5980
#11 0x7fe5aea95bae in TableDesc::getTableColStats (this=0x7fe5946eb4d8)
    at ../optimizer/TableDesc.cpp:373
#12 0x7fe5b0631eda in TableDesc::tableColStats (this=0x7fe5946eb4d8)
    at ../optimizer/TableDesc.h:134
#13 0x7fe5ae892f6e in Scan::synthLogProp (this=0x7fe5946c86b0, normWAPtr=
    0x7fff3817b270) at ../optimizer/OptLogRelExpr.cpp:5193
#14 0x7fe5ae880279 in RelExpr::synthLogProp (this=0x7fe5946f70f0,
    normWAPtr=0x7fff3817b270) at ../optimizer/OptLogRelExpr.cpp:622
#15 0x7fe5ae88ff09 in GroupByAgg::synthLogProp (this=0x7fe5946f70f0,
---Type  to continue, or q  to quit---
{quote}
Note: Trafodion can execute Hive DDL, so the CREATE TABLE and DROP TABLE 
statements could be issued through sqlci directly. (The table name must be 
qualified as hive.hive.hive1, however.) When done through sqlci, any Trafodion 
histograms are cleaned up and this issue does not occur.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (TRAFODION-3323) Remove obsolete line from odbc.ini file in documentation

2019-08-07 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3323:
---

 Summary: Remove obsolete line from odbc.ini file in documentation
 Key: TRAFODION-3323
 URL: https://issues.apache.org/jira/browse/TRAFODION-3323
 Project: Apache Trafodion
  Issue Type: Bug
  Components: documentation
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


In the Client Installation Guide, in section 9.2.2, Configure Data Sources, it 
describes an odbc.ini file to use with the Unix ODBC driver and the odb 
utility. It contains the line "AppUnicodeType=utf16" which does not work 
correctly with odb anymore. This line should be removed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (TRAFODION-3322) odb incorrectly generates 0 rows in a certain scenario

2019-08-07 Thread David Wayne Birdsall (JIRA)


[ 
https://issues.apache.org/jira/browse/TRAFODION-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902483#comment-16902483
 ] 

David Wayne Birdsall commented on TRAFODION-3322:
-

I debugged this. The bug seems to be in the Oload function in module odb.c. The 
variable "len" is used for several purposes, and in one code path it is 
essentially left uninitialized. In the following code:
{quote}{{ /* Reading input file */}}
{{ pstats = 1; /* From now on print stats on exit */}}
{{ Odp = &etab[eid].Orowsetl[m*etab[eid].s];}}
{{ while ( go ) {}}
{{    if ( isgz ) { /* is a gzipped file */}}
{{        if ( gzstream.avail_in ) { /* continue inflating previous gzbuff into buff */}}
{{            gzstream.avail_out = (unsigned int) etab[eid].buffsz ;}}
{{            gzstream.next_out = (unsigned char *)buff ;}}
{{            gzret = inflate (&gzstream, Z_NO_FLUSH) ;}}
{{            switch ( gzret ) {}}
{{            case Z_OK:}}
{{                break ; /* everything is ok - continue */}}
{{            case Z_STREAM_END:}}
{{                inflateReset ( &gzstream ) ;}}
{{                break;}}
{{            default:}}
{{                fprintf(stderr, "odb [Oload(%d)] - Error during deflate: [%d]\n",}}
{{                __LINE__, gzret);}}
{{                goto oload_exit ;}}
{{                break;}}
{{            }}}
{{            len = etab[eid].buffsz - gzstream.avail_out ;}}
{{        } else {}}
{{            if ( fl ) { /* read new data from normal file-system into gzbuff */}}
{{            len = fread ( gzbuff, 1, (size_t)etab[eid].buffsz, fl);}}
{{#ifdef HDFS}}
{{            } else if ( fhl ) { /* read new data from HDFS into gzbuff */}}
{{                len = (size_t)(*hdfsread)(hfs, fhl, (void *)buff, etab[eid].buffsz);}}
{{#endif   }}
{{            }}}
{{            gzstream.avail_in = (unsigned int)len ;}}
{{            gzstream.next_in = (unsigned char *)gzbuff ;}}
{{            if ( len )}}
{{                continue ;}}
{{        }}}
{{    } else if ( fl ) {}}
{{        len = fread ( buff, 1, (size_t)etab[eid].buffsz, fl);}}
{{#ifdef HDFS}}
{{    } else if ( fhl ) {}}
{{        len = (size_t)(*hdfsread)(hfs, fhl, (void *)buff, etab[eid].buffsz);}}
{{#endif}}
{{    }}}
{{    if ( len == 0 ) { /* EOF */}}
{{        if ( ( k + 1 ) == mff && !(fg & 0004) ) { /* complete last row & insert */}}
{{            ch = -1; /* insert this block */}}
{{            goto oload_lastrow;}}
{{        } else if ( m ) { /* rows to be inserted */}}
{{            goto oload_insert;}}
{{        } else { /* exit loop */}}
{{            break;}}
{{        }}}
{{    }}}
{{    nb += len; /* update bytes read from file */}}
{{    p = 0; /* reset buffer index */}}
{{    while (lts && p < len) { /* skip initial lines */}}
{{        if (buff[p++] == lrs) {}}
{{            --lts;}}
{{        }}}
{{    }}}
{{    if ( ccl ) { /* continue cleaning rest of line */}}
{{        while ( p < len && buff[p] != lrs ) /* ... skip the rest of the line */}}
{{            p++;}}
{{        if ( buff[p] == lrs ) { /* if a record separator has been found */}}
{{            ccl = 0; /* switch the continue cleaning flag off */}}
{{            p++; /* skip the record separator */}}
{{        }}}
{{    }}}
{{    for ( ; p < len ; p++ ) {}}
{{        ch = buff[p];}}
{quote}
 

In the loop above, after "while ( go )", a chain of if / else if blocks sets 
len to the number of bytes read from the input file. But when there is no 
input file (src=nofile), none of the branches sets len, so it retains whatever 
value it last held. In our failing test case, len happens to hold 1 when 
control reaches "for ( ; p < len ; p++ )", which causes the "for" loop to be 
exited prematurely.

I found that if I added an "else" case setting len = 2, then the code works 
correctly. (The value p is always reset to 0 within the "for" loop so the test 
"p < len" always succeeds.)

The patched code is below (added code is in red bolded italics):
{quote}{{/* Reading input file */}}
{{pstats = 1; /* From now on print stats on exit */}}
{{Odp = &etab[eid].Orowsetl[m*etab[eid].s];}}
{{while ( go ) {}}
{{    if ( isgz ) { /* is a gzipped file */}}
{{        if ( gzstream.avail_in ) { /* continue inflating previous gzbuff into buff */}}
{{            gzstream.avail_out = (unsigned int) etab[eid].buffsz ;}}
{{            gzstream.next_out = (unsigned char *)buff ;}}
{{            gzret = inflate (&gzstream, Z_NO_FLUSH) ;}}
{{            switch ( gzret ) {}}
{{            case Z_OK:}}
{{                break ; /* everything is ok - continue */}}
{{            case Z_STREAM_END:}}
{{                inflateReset ( &gzstream ) ;}}
{{                break;}}
{{            default:}}
{{                fprintf(stderr, "odb [Oload(%d)] - Error during deflate: [%d]\n",}}
{{                __LINE__, gzret);}}
{{                goto oload_exit ;}}
{{                break;}}
{{            }}}
{{            

[jira] [Updated] (TRAFODION-3322) odb incorrectly generates 0 rows in a certain scenario

2019-08-07 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall updated TRAFODION-3322:

Description: 
In the following scenario, odb generates 0 rows when it should generate 5 rows.

DDL:

create table toyodb2 (a int not null, b int, c varchar(100), d varchar(100), 
primary key (a));

Map file (here named try2OdbCrandFails.map):

A:SEQ:0
B:IRAND:3:12
C:DSRAND:carMakes.txt
D:CRAND:10

The file carMakes.txt contains:

Chevrolet
Dodge
Toyota
Nissan
Suzuki
Hyundai

odb command:

odb64luo -u db__root -p traf123 -d traf -l 
src=nofile:tgt=trafodion.sch.toyodb2:max=100:map=try2OdbCrandFails.map:rows=100:truncate

In this example, 0 rows will be generated, when 5 should have been.

If you change the map file to the following, it will succeed in generating 5 
rows:

A:SEQ:0
B:IRAND:3:12
C:CRAND:10
D:DSRAND:carMakes.txt

 

 

> odb incorrectly generates 0 rows in a certain scenario
> --
>
> Key: TRAFODION-3322
> URL: https://issues.apache.org/jira/browse/TRAFODION-3322
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: db-utility-odb
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Priority: Major
>
> In the following scenario, odb generates 0 rows when it should generate 5 
> rows.
> DDL:
> create table toyodb2 (a int not null, b int, c varchar(100), d varchar(100), 
> primary key (a));
> Map file (here named, try2OdbCrandFails.map):
> A:SEQ:0
> B:IRAND:3:12
> C:DSRAND:carMakes.txt
> D:CRAND:10
> The file carMakes.txt contains:
> Chevrolet
> Dodge
> Toyota
> Nissan
> Suzuki
> Hyundai
> odb command:
> odb64luo -u db__root -p traf123 -d traf -l 
> src=nofile:tgt=trafodion.sch.toyodb2:max=100:map=try2OdbCrandFails.map:rows=100:truncate
> In this example, 0 rows will be generated, when 5 should have been.
> If you change the map file to the following, it will succeed in generating 5 
> rows:
> A:SEQ:0
> B:IRAND:3:12
> C:CRAND:10
> D:DSRAND:carMakes.txt
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (TRAFODION-3322) odb incorrectly generates 0 rows in a certain scenario

2019-08-07 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3322:
---

 Summary: odb incorrectly generates 0 rows in a certain scenario
 Key: TRAFODION-3322
 URL: https://issues.apache.org/jira/browse/TRAFODION-3322
 Project: Apache Trafodion
  Issue Type: Bug
  Components: db-utility-odb
Affects Versions: 2.4
Reporter: David Wayne Birdsall






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (TRAFODION-3321) odb Generate and Load Data documentation incorrect

2019-08-07 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3321.
-
   Resolution: Fixed
Fix Version/s: 2.4

> odb Generate and Load Data documentation incorrect
> --
>
> Key: TRAFODION-3321
> URL: https://issues.apache.org/jira/browse/TRAFODION-3321
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In section 5.5 Generate and Load Data in the odb User Guide, there is an 
> example of a mapping file with all the columns on the same line:
> {{FNAME:1 BDATE:CDATE COMMENT:4}}
> {{This doesn't work. Instead, each column should be listed on a separate 
> line.}}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (TRAFODION-3321) odb Generate and Load Data documentation incorrect

2019-08-05 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall updated TRAFODION-3321:

Description: 
In section 5.5 Generate and Load Data in the odb User Guide, there is an 
example of a mapping file with all the columns on the same line:

{{FNAME:1 BDATE:CDATE COMMENT:4}}

This doesn't work. Instead, each column should be listed on a separate line.

 

 

> odb Generate and Load Data documentation incorrect
> --
>
> Key: TRAFODION-3321
> URL: https://issues.apache.org/jira/browse/TRAFODION-3321
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>
> In section 5.5 Generate and Load Data in the odb User Guide, there is an 
> example of a mapping file with all the columns on the same line:
> {{FNAME:1 BDATE:CDATE COMMENT:4}}
> {{This doesn't work. Instead, each column should be listed on a separate 
> line.}}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (TRAFODION-3316) Fix some issues with incremental UPDATE STATISTICS

2019-07-10 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3316.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Fix some issues with incremental UPDATE STATISTICS
> --
>
> Key: TRAFODION-3316
> URL: https://issues.apache.org/jira/browse/TRAFODION-3316
> Project: Apache Trafodion
>  Issue Type: Improvement
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Fix some issues with incremental UPDATE STATISTICS.
>  # The Hive TIMESTAMP datatype has precision 9. When processing a Hive 
> TIMESTAMP column, UPDATE STATISTICS abends.
>  # After fixing that, when testing a small example, the sampling ratio for 
> the sample table is made incorrectly small, resulting in 9207 errors (sample 
> is empty).
>  # Sometimes, when populating the sample table, the UPSERT fails because 
> garbage is used for the table name. This is because the table name is taken 
> from an NAString that has gone out of scope.
> The following test script can be used to test these conditions. In Hive, do 
> the following:
> {quote}create database if not exists h_increUpdatestats;
> use h_increUpdatestats;
> drop table if exists hive_sequencefile ;
> create table hive_sequencefile(C_CUSTKEY int,C_NAME char(20),C_ADDRESS 
> varchar(20), C_NATIONKEY int, C_TIMESTAMP timestamp,C_PHONE char(20), 
> C_ACCTBAL string, 
> C_MKTSEGMENT char(20), C_DATE date) stored as sequencefile;
> insert into hive_sequencefile values 
> (1,'x','xx',1,'2018-08-22 12:00:00.123','xxx','','x','2018-08-22'),
> (2,'x','xx',2,'2018-08-23 12:00:00.123','xxx','','x','2018-08-23'),
> (3,'x','xx',3,'2018-08-24 12:00:00.123','xxx','','x','2018-08-24'),
> (4,'x','xx',4,'2018-08-25 12:00:00.123','xxx','','x','2018-08-25'),
> (5,'x','xx',5,'2018-08-26 12:00:00.123','xxx','','x','2018-08-26');
> {quote}
> Then in Trafodion do the following:
> {quote}update statistics for table hive.h_increUpdatestats.hive_sequencefile 
> remove sample;
> update statistics for table hive.h_increUpdatestats.hive_sequencefile create 
> sample random 100 percent;
> update statistics for table hive.h_increUpdatestats.hive_sequencefile on 
> every column;
> update statistics for table hive.h_increUpdatestats.hive_sequencefile on 
> existing columns incremental where C_CUSTKEY>=0;
> {quote}
> Without any fixes, the third UPDATE STATISTICS command abends.
> With a fix for the first problem, the last UPDATE STATISTICS command 
> sometimes fails with error 2109, and always fails with error 9207.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (TRAFODION-3316) Fix some issues with incremental UPDATE STATISTICS

2019-07-09 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on TRAFODION-3316 started by David Wayne Birdsall.
---
> Fix some issues with incremental UPDATE STATISTICS
> --
>
> Key: TRAFODION-3316
> URL: https://issues.apache.org/jira/browse/TRAFODION-3316
> Project: Apache Trafodion
>  Issue Type: Improvement
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Fix some issues with incremental UPDATE STATISTICS.
>  # The Hive TIMESTAMP datatype has precision 9. When processing a Hive 
> TIMESTAMP column, UPDATE STATISTICS abends.
>  # After fixing that, when testing a small example, the sampling ratio for 
> the sample table is made incorrectly small, resulting in 9207 errors (sample 
> is empty).
>  # Sometimes, when populating the sample table, the UPSERT fails because 
> garbage is used for the table name. This is because the table name is taken 
> from an NAString that has gone out of scope.
> The following test script can be used to test these conditions. In Hive, do 
> the following:
> {quote}create database if not exists h_increUpdatestats;
> use h_increUpdatestats;
> drop table if exists hive_sequencefile ;
> create table hive_sequencefile(C_CUSTKEY int,C_NAME char(20),C_ADDRESS 
> varchar(20), C_NATIONKEY int, C_TIMESTAMP timestamp,C_PHONE char(20), 
> C_ACCTBAL string, 
> C_MKTSEGMENT char(20), C_DATE date) stored as sequencefile;
> insert into hive_sequencefile values 
> (1,'x','xx',1,'2018-08-22 12:00:00.123','xxx','','x','2018-08-22'),
> (2,'x','xx',2,'2018-08-23 12:00:00.123','xxx','','x','2018-08-23'),
> (3,'x','xx',3,'2018-08-24 12:00:00.123','xxx','','x','2018-08-24'),
> (4,'x','xx',4,'2018-08-25 12:00:00.123','xxx','','x','2018-08-25'),
> (5,'x','xx',5,'2018-08-26 12:00:00.123','xxx','','x','2018-08-26');
> {quote}
> Then in Trafodion do the following:
> {quote}update statistics for table hive.h_increUpdatestats.hive_sequencefile 
> remove sample;
> update statistics for table hive.h_increUpdatestats.hive_sequencefile create 
> sample random 100 percent;
> update statistics for table hive.h_increUpdatestats.hive_sequencefile on 
> every column;
> update statistics for table hive.h_increUpdatestats.hive_sequencefile on 
> existing columns incremental where C_CUSTKEY>=0;
> {quote}
> Without any fixes, the third UPDATE STATISTICS command abends.
> With a fix for the first problem, the last UPDATE STATISTICS command 
> sometimes fails with error 2109, and always fails with error 9207.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (TRAFODION-3316) Fix some issues with incremental UPDATE STATISTICS

2019-07-08 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3316:
---

 Summary: Fix some issues with incremental UPDATE STATISTICS
 Key: TRAFODION-3316
 URL: https://issues.apache.org/jira/browse/TRAFODION-3316
 Project: Apache Trafodion
  Issue Type: Improvement
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


Fix some issues with incremental UPDATE STATISTICS.
 # The Hive TIMESTAMP datatype has precision 9. When processing a Hive 
TIMESTAMP column, UPDATE STATISTICS abends.
 # After fixing that, when testing a small example, the sampling ratio for the 
sample table is made incorrectly small, resulting in 9207 errors (sample is 
empty).
 # Sometimes, when populating the sample table, the UPSERT fails because 
garbage is used for the table name. This is because the table name is taken 
from an NAString that has gone out of scope.

The following test script can be used to test these conditions. In Hive, do the 
following:
{quote}create database if not exists h_increUpdatestats;
use h_increUpdatestats;
drop table if exists hive_sequencefile ;
create table hive_sequencefile(C_CUSTKEY int,C_NAME char(20),C_ADDRESS 
varchar(20), C_NATIONKEY int, C_TIMESTAMP timestamp,C_PHONE char(20), C_ACCTBAL 
string, 
C_MKTSEGMENT char(20), C_DATE date) stored as sequencefile;

insert into hive_sequencefile values 
(1,'x','xx',1,'2018-08-22 12:00:00.123','xxx','','x','2018-08-22'),
(2,'x','xx',2,'2018-08-23 12:00:00.123','xxx','','x','2018-08-23'),
(3,'x','xx',3,'2018-08-24 12:00:00.123','xxx','','x','2018-08-24'),
(4,'x','xx',4,'2018-08-25 12:00:00.123','xxx','','x','2018-08-25'),
(5,'x','xx',5,'2018-08-26 12:00:00.123','xxx','','x','2018-08-26');
{quote}
Then in Trafodion do the following:
{quote}update statistics for table hive.h_increUpdatestats.hive_sequencefile 
remove sample;
update statistics for table hive.h_increUpdatestats.hive_sequencefile create 
sample random 100 percent;
update statistics for table hive.h_increUpdatestats.hive_sequencefile on every 
column;
update statistics for table hive.h_increUpdatestats.hive_sequencefile on 
existing columns incremental where C_CUSTKEY>=0;
{quote}
Without any fixes, the third UPDATE STATISTICS command abends.

With a fix for the first problem, the last UPDATE STATISTICS command sometimes 
fails with error 2109, and always fails with error 9207.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (TRAFODION-3315) OSIM gives no error detail when Hive failures occur

2019-07-08 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3315.
-
Resolution: Fixed

> OSIM gives no error detail when Hive failures occur
> ---
>
> Key: TRAFODION-3315
> URL: https://issues.apache.org/jira/browse/TRAFODION-3315
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-cmp
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> OSIM load under the covers uses Hive to load histogram data into the 
> histogram metadata tables.
> When there is a failure at the Hive level, OSIM reports only:
> *** ERROR[6009] The Optimizer Simulator (OSIM): Error running hive SQL.
> This is not at all helpful in diagnosing the problem.
> We should change OSIM so that it also reports the error information returned 
> from Hive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (TRAFODION-3315) OSIM gives no error detail when Hive failures occur

2019-07-03 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on TRAFODION-3315 started by David Wayne Birdsall.
---
> OSIM gives no error detail when Hive failures occur
> ---
>
> Key: TRAFODION-3315
> URL: https://issues.apache.org/jira/browse/TRAFODION-3315
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-cmp
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> OSIM load under the covers uses Hive to load histogram data into the 
> histogram metadata tables.
> When there is a failure at the Hive level, OSIM reports only:
> *** ERROR[6009] The Optimizer Simulator (OSIM): Error running hive SQL.
> This is not at all helpful in diagnosing the problem.
> We should change OSIM so that it also reports the error information returned 
> from Hive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (TRAFODION-3314) OSIM generates redundant DDL for unique constraints

2019-07-03 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3314.
-
   Resolution: Fixed
Fix Version/s: 2.4

> OSIM generates redundant DDL for unique constraints
> ---
>
> Key: TRAFODION-3314
> URL: https://issues.apache.org/jira/browse/TRAFODION-3314
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-cmp
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Unique constraints are usually implemented by creation of a unique index 
> under the covers. When doing an OSIM CAPTURE of a query on a table that has a 
> unique constraint, OSIM generates both an ALTER TABLE ADD CONSTRAINT 
> statement and a CREATE UNIQUE INDEX statement in the CREATE_TABLE_DDLS.txt 
> file. When this is replayed in OSIM LOAD, we get an error when creating the 
> unique index. 
> To fix this, perhaps OSIM should generate the CREATE UNIQUE INDEX statement 
> only in commented-out form.
> {{To reproduce, run the following script in sqlci:}}
> {{?section setup}}
> {{drop table if exists t1 cascade;}}
> {{create table t1 (a int not null, b int not null, c int not null, primary 
> key (a));}}
> {{alter table t1 add constraint uniqueb unique(b);}}
> {{?section osimgen}}
> {{control osim capture location 'osimgen';}}
> {{prepare xx from select b,count(*) from t1 group by b;}}
> {{control osim capture stop;}}
> {{Then in a separate session, do:}}
> {{control osim load from 'osimgen';}}
> {{This will fail with the following output:}}
> {{>>control osim load from '.';}}
> {{[OSIM]loading tables and views ...}}
> {{[OSIM]DROP TABLE IF EXISTS TRAFODION.SCH.T1 CASCADE;}}
> {{[OSIM]Step 1 Create Schemas:}}
> {{[OSIM]CREATE SCHEMA IF NOT EXISTS TRAFODION.SCH}}
> {{[OSIM]Step 2 Create Tables:}}
> {{[OSIM]CREATE TABLE TRAFODION.SCH.T1}}
> {{[OSIM] The following index is a system created index --CREATE UNIQUE INDEX 
> UNIQUEB ON TRAFODION.SCH.T1}}
> {{[OSIM]ALTER TABLE TRAFODION.SCH.T1 ADD CONSTRAINT TRAFODION.SCH.UNIQUEB 
> UNIQUE}}
> {{*** ERROR[1043] Constraint TRAFODION.SCH.UNIQUEB already exists.}}
> {{*** ERROR[6009] The Optimizer Simulator (OSIM): Create Table Error: -1043}}
> {{*** ERROR[8822] The statement was not prepared.}}
> {{>>exit;}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (TRAFODION-3315) OSIM gives no error detail when Hive failures occur

2019-07-03 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3315:
---

 Summary: OSIM gives no error detail when Hive failures occur
 Key: TRAFODION-3315
 URL: https://issues.apache.org/jira/browse/TRAFODION-3315
 Project: Apache Trafodion
  Issue Type: Improvement
  Components: sql-cmp
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall
 Fix For: 2.4


OSIM load under the covers uses Hive to load histogram data into the histogram 
metadata tables.

When there is a failure at the Hive level, OSIM reports only:

*** ERROR[6009] The Optimizer Simulator (OSIM): Error running hive SQL.

This is not at all helpful in diagnosing the problem.

We should change OSIM so that it also reports the error information returned 
from Hive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (TRAFODION-3314) OSIM generates redundant DDL for unique constraints

2019-07-02 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on TRAFODION-3314 started by David Wayne Birdsall.
---
> OSIM generates redundant DDL for unique constraints
> ---
>
> Key: TRAFODION-3314
> URL: https://issues.apache.org/jira/browse/TRAFODION-3314
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-cmp
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Unique constraints are usually implemented by creation of a unique index 
> under the covers. When doing an OSIM CAPTURE of a query on a table that has a 
> unique constraint, OSIM generates both an ALTER TABLE ADD CONSTRAINT 
> statement and a CREATE UNIQUE INDEX statement in the CREATE_TABLE_DDLS.txt 
> file. When this is replayed in OSIM LOAD, we get an error when creating the 
> unique index. 
> To fix this, perhaps OSIM should generate the CREATE UNIQUE INDEX statement 
> only in commented-out form.
> {{To reproduce, run the following script in sqlci:}}
> {{?section setup}}
> {{drop table if exists t1 cascade;}}
> {{create table t1 (a int not null, b int not null, c int not null, primary 
> key (a));}}
> {{alter table t1 add constraint uniqueb unique(b);}}
> {{?section osimgen}}
> {{control osim capture location 'osimgen';}}
> {{prepare xx from select b,count(*) from t1 group by b;}}
> {{control osim capture stop;}}
> {{Then in a separate session, do:}}
> {{control osim load from 'osimgen';}}
> {{This will fail with the following output:}}
> {{>>control osim load from '.';}}
> {{[OSIM]loading tables and views ...}}
> {{[OSIM]DROP TABLE IF EXISTS TRAFODION.SCH.T1 CASCADE;}}
> {{[OSIM]Step 1 Create Schemas:}}
> {{[OSIM]CREATE SCHEMA IF NOT EXISTS TRAFODION.SCH}}
> {{[OSIM]Step 2 Create Tables:}}
> {{[OSIM]CREATE TABLE TRAFODION.SCH.T1}}
> {{[OSIM] The following index is a system created index --CREATE UNIQUE INDEX 
> UNIQUEB ON TRAFODION.SCH.T1}}
> {{[OSIM]ALTER TABLE TRAFODION.SCH.T1 ADD CONSTRAINT TRAFODION.SCH.UNIQUEB 
> UNIQUE}}
> {{*** ERROR[1043] Constraint TRAFODION.SCH.UNIQUEB already exists.}}
> {{*** ERROR[6009] The Optimizer Simulator (OSIM): Create Table Error: -1043}}
> {{*** ERROR[8822] The statement was not prepared.}}
> {{>>exit;}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (TRAFODION-3314) OSIM generates redundant DDL for unique constraints

2019-07-01 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3314:
---

 Summary: OSIM generates redundant DDL for unique constraints
 Key: TRAFODION-3314
 URL: https://issues.apache.org/jira/browse/TRAFODION-3314
 Project: Apache Trafodion
  Issue Type: Improvement
  Components: sql-cmp
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


Unique constraints are usually implemented by creation of a unique index under 
the covers. When doing an OSIM CAPTURE of a query on a table that has a unique 
constraint, OSIM generates both an ALTER TABLE ADD CONSTRAINT statement and a 
CREATE UNIQUE INDEX statement in the CREATE_TABLE_DDLS.txt file. When this is 
replayed in OSIM LOAD, we get an error when creating the unique index. 

To fix this, perhaps OSIM should generate the CREATE UNIQUE INDEX statement 
only in commented-out form.

To reproduce, run the following script in sqlci:

{{?section setup}}

{{drop table if exists t1 cascade;}}

{{create table t1 (a int not null, b int not null, c int not null, primary key 
(a));}}

{{alter table t1 add constraint uniqueb unique(b);}}

{{?section osimgen}}

{{control osim capture location 'osimgen';}}

{{prepare xx from select b,count(*) from t1 group by b;}}

{{control osim capture stop;}}

Then in a separate session, do:

{{control osim load from 'osimgen';}}

This will fail with the following output:

{{>>control osim load from '.';}}
{{[OSIM]loading tables and views ...}}
{{[OSIM]DROP TABLE IF EXISTS TRAFODION.SCH.T1 CASCADE;}}
{{[OSIM]Step 1 Create Schemas:}}
{{[OSIM]CREATE SCHEMA IF NOT EXISTS TRAFODION.SCH}}
{{[OSIM]Step 2 Create Tables:}}
{{[OSIM]CREATE TABLE TRAFODION.SCH.T1}}
{{[OSIM] The following index is a system created index --CREATE UNIQUE INDEX 
UNIQUEB ON TRAFODION.SCH.T1}}
{{[OSIM]ALTER TABLE TRAFODION.SCH.T1 ADD CONSTRAINT TRAFODION.SCH.UNIQUEB 
UNIQUE}}

{{*** ERROR[1043] Constraint TRAFODION.SCH.UNIQUEB already exists.}}

{{*** ERROR[6009] The Optimizer Simulator (OSIM): Create Table Error: -1043}}

{{*** ERROR[8822] The statement was not prepared.}}

{{>>exit;}}

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (TRAFODION-3309) Improve analyzeMessagesGuide.py for errors in 2000 range

2019-05-21 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3309.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Improve analyzeMessagesGuide.py for errors in 2000 range
> 
>
> Key: TRAFODION-3309
> URL: https://issues.apache.org/jira/browse/TRAFODION-3309
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-cmp
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> File arkcmp/CmpErrors.h contains some "static const int" variables that are 
> used as error number identifiers, particularly for errors in the 2000 range. 
> If we convert those to an enum, and add this file to analyzeMessagesGuide.py, 
> then the latter script will be able to do a better job identifying where 
> these errors are generated.





[jira] [Resolved] (TRAFODION-3308) Uninformative error messages when executables are unavailable

2019-05-16 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3308.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Uninformative error messages when executables are unavailable
> -
>
> Key: TRAFODION-3308
> URL: https://issues.apache.org/jira/browse/TRAFODION-3308
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> When the tdm_arkcmp executable is unavailable on a Trafodion node due to a 
> file system error, we get the following less-than-informative error:
> {quote}>>create schema scythians;
> *** ERROR[2012] Server process tdm_arkcmp could not be created on \\NSK - 
> Operating system error 4022, TPCError = 53, error detail = 0.  (See variants 
> of Seabed procedure msg_mon_start_process for details).
> *** ERROR[2013] Server process tdm_arkcmp could not be created on \\NSK - 
> Operating system error 4022.
> *** ERROR[2002] Internal error: cannot create compiler.
> *** ERROR[8822] The statement was not prepared.
> --- SQL operation failed with errors.
> >> 
> {quote}
> We get a similarly uninformative series of errors when the tdm_arkesp 
> executable is unavailable:
> {quote}>>prepare s1 from select * From t1 where b = 6;
> --- SQL command prepared.
> >>explain options 'f' s1;
> LC   RC   OP   OPERATOR         OPT   DESCRIPTION    CARD
> ---- ---- ---- ---------------- ----- -------------- ---------
> 2    .    3    root                                  1.30E+004
> 1    .    2    esp_exchange           1:2(hash2)     1.30E+004
> .    .    1    trafodion_scan         T1             1.30E+004
> --- SQL operation complete.
> >>execute s1;
> *** ERROR[2012] Server process tdm_arkesp could not be created on \NSK cpu 0 
> - Operating system error 4022, TPCError = 53, error detail = 0.  (See 
> variants of Seabed procedure msg_mon_start_process for details).
> *** ERROR[2013] Server process tdm_arkesp could not be created on \NSK cpu 0 
> - Operating system error 4022.
> *** ERROR[2012] Server process tdm_arkesp could not be created on \NSK cpu 0 
> - Operating system error 4022, TPCError = 53, error detail = 0.  (See 
> variants of Seabed procedure msg_mon_start_process for details).
> --- 0 row(s) selected.
> >> 
> {quote}
> Among the issues with these error messages:
>  # They do not give the correct node name where we were trying to create the 
> process, but instead report NSK.
>  # Error 2013 is completely redundant; everything it says is in error 2012.
>  # Error 2012 could be much more informative. Text could be added explaining 
> the meaning of the error codes given.
>  # In the tdm_arkcmp case, error 2002 adds no information at all.
> To reproduce these issues on a development instance, first create a table T1 
> with one million rows (so a parallel plan will be picked for the tdm_arkesp 
> example). Then go to the trafodion/core/sql/lib/linux/64bit/debug directory 
> and rename the tdm_arkcmp and tdm_arkesp executables to something else. Try 
> any DDL command to get the tdm_arkcmp failure. Try any parallel DML statement 
> to get the tdm_arkesp failure.
> It is likely that similar issues exist for other processes, e.g. tdm_udrserv.





[jira] [Created] (TRAFODION-3309) Improve analyzeMessagesGuide.py for errors in 2000 range

2019-05-16 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3309:
---

 Summary: Improve analyzeMessagesGuide.py for errors in 2000 range
 Key: TRAFODION-3309
 URL: https://issues.apache.org/jira/browse/TRAFODION-3309
 Project: Apache Trafodion
  Issue Type: Improvement
  Components: sql-cmp
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


File arkcmp/CmpErrors.h contains some "static const int" variables that are 
used as error number identifiers, particularly for errors in the 2000 range. If 
we convert those to an enum, and add this file to analyzeMessagesGuide.py, then 
the latter script will be able to do a better job identifying where these 
errors are generated.





[jira] [Created] (TRAFODION-3308) Uninformative error messages when executables are unavailable

2019-05-14 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3308:
---

 Summary: Uninformative error messages when executables are 
unavailable
 Key: TRAFODION-3308
 URL: https://issues.apache.org/jira/browse/TRAFODION-3308
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-cmp
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


When the tdm_arkcmp executable is unavailable on a Trafodion node due to a file 
system error, we get the following less-than-informative error:
{quote}>>create schema scythians;

*** ERROR[2012] Server process tdm_arkcmp could not be created on \\NSK - 
Operating system error 4022, TPCError = 53, error detail = 0.  (See variants of 
Seabed procedure msg_mon_start_process for details).

*** ERROR[2013] Server process tdm_arkcmp could not be created on \\NSK - 
Operating system error 4022.

*** ERROR[2002] Internal error: cannot create compiler.

*** ERROR[8822] The statement was not prepared.

--- SQL operation failed with errors.

>> 
{quote}
We get a similarly uninformative series of errors when the tdm_arkesp 
executable is unavailable:
{quote}>>prepare s1 from select * From t1 where b = 6;

--- SQL command prepared.

>>explain options 'f' s1;

> LC   RC   OP   OPERATOR         OPT   DESCRIPTION    CARD
> ---- ---- ---- ---------------- ----- -------------- ---------
> 2    .    3    root                                  1.30E+004
> 1    .    2    esp_exchange           1:2(hash2)     1.30E+004
> .    .    1    trafodion_scan         T1             1.30E+004

--- SQL operation complete.

>>execute s1;

*** ERROR[2012] Server process tdm_arkesp could not be created on \NSK cpu 0 - 
Operating system error 4022, TPCError = 53, error detail = 0.  (See variants of 
Seabed procedure msg_mon_start_process for details).

*** ERROR[2013] Server process tdm_arkesp could not be created on \NSK cpu 0 - 
Operating system error 4022.

*** ERROR[2012] Server process tdm_arkesp could not be created on \NSK cpu 0 - 
Operating system error 4022, TPCError = 53, error detail = 0.  (See variants of 
Seabed procedure msg_mon_start_process for details).

--- 0 row(s) selected.

>> 
{quote}
Among the issues with these error messages:
 # They do not give the correct node name where we were trying to create the 
process, but instead report NSK.
 # Error 2013 is completely redundant; everything it says is in error 2012.
 # Error 2012 could be much more informative. Text could be added explaining 
the meaning of the error codes given.
 # In the tdm_arkcmp case, error 2002 adds no information at all.

To reproduce these issues on a development instance, first create a table T1 
with one million rows (so a parallel plan will be picked for the tdm_arkesp 
example). Then go to the trafodion/core/sql/lib/linux/64bit/debug directory and 
rename the tdm_arkcmp and tdm_arkesp executables to something else. Try any DDL 
command to get the tdm_arkcmp failure. Try any parallel DML statement to get 
the tdm_arkesp failure.

It is likely that similar issues exist for other processes, e.g. tdm_udrserv.





[jira] [Resolved] (TRAFODION-3300) ROUND function on extreme data types fails with numeric overflows

2019-04-24 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3300.
-
   Resolution: Fixed
Fix Version/s: 2.4

> ROUND function on extreme data types fails with numeric overflows
> -
>
> Key: TRAFODION-3300
> URL: https://issues.apache.org/jira/browse/TRAFODION-3300
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp, sql-exe
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The following script reproduces the failures.
> {quote}drop table if exists t;
> create table t (
> c1 numeric(128),
> c2 numeric(128,1),
> c3 numeric(128,2));
> insert into t values (
> 12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678,
> 1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567.8,
> 123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456.78
> );
> select * from t;
> select ROUND(c1, 1) from t;
> select ROUND(c2, 1) from t;
> select ROUND(c3, 1) from t;
> select ROUND(c1, 2) from t;
> select ROUND(c2, 2) from t;
> select ROUND(c3, 2) from t;
> select ROUND(c1, 3) from t;
> select ROUND(c2, 3) from t;
> select ROUND(c3, 3) from t;
> {quote}
> When run, some of the ROUND functions fail with numeric overflows. They all 
> should execute successfully.
> {quote}>>drop table if exists t;
> --- SQL operation complete.
> >>create table t (
> +>c1 numeric(128),
> +>c2 numeric(128,1),
> +>c3 numeric(128,2));
> --- SQL operation complete.
> >>
> >>insert into t values (
> +>12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678,
> +>1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567.8,
> +>123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456.78
> +>);
> --- 1 row(s) inserted.
> >>select * from t;
> C1 C2 C3
> -
>  
> --
>  
> --
> 12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
>  
> 1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567.8
>  
> 123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456.78
> --- 1 row(s) selected.
> >>
> >>select ROUND(c1, 1) from t;
> (EXPR)
> -
> 12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
> --- 1 row(s) selected.
> >>select ROUND(c2, 1) from t;
> *** ERROR[8411] A numeric overflow occurred during an arithmetic computation 
> or data conversion. Intermediate conversion of Source 
> Type:CHAR(REC_BYTE_F_ASCII,129 BYTES,ISO88591) Source 
> Value:123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456780
>  to Target Type:NUMERIC(REC_NUM_BIG_SIGNED).
> --- 0 row(s) selected.
> >>select ROUND(c3, 1) from t;
> (EXPR)
> --
> 123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456.80
> --- 1 row(s) selected.
> >>
> >>select ROUND(c1, 2) from t;
> (EXPR)
> -
> 12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
> --- 1 row(s) selected.
> >>select ROUND(c2, 2) from t;
> *** ERROR[8411] A numeric overflow occurred during an arithmetic computation 
> or data conversion. Intermediate conversion of Source 
> 

[jira] [Resolved] (TRAFODION-3299) Update Messages Guide for still more binder messages

2019-04-24 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3299.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Update Messages Guide for still more binder messages
> 
>
> Key: TRAFODION-3299
> URL: https://issues.apache.org/jira/browse/TRAFODION-3299
> Project: Apache Trafodion
>  Issue Type: Documentation
>  Components: documentation
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>






[jira] [Created] (TRAFODION-3300) ROUND function on extreme data types fails with numeric overflows

2019-04-18 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3300:
---

 Summary: ROUND function on extreme data types fails with numeric 
overflows
 Key: TRAFODION-3300
 URL: https://issues.apache.org/jira/browse/TRAFODION-3300
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-cmp, sql-exe
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


The following script reproduces the failures.
{quote}drop table if exists t;
create table t (
c1 numeric(128),
c2 numeric(128,1),
c3 numeric(128,2));

insert into t values (
12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678,
1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567.8,
123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456.78
);
select * from t;

select ROUND(c1, 1) from t;
select ROUND(c2, 1) from t;
select ROUND(c3, 1) from t;

select ROUND(c1, 2) from t;
select ROUND(c2, 2) from t;
select ROUND(c3, 2) from t;

select ROUND(c1, 3) from t;
select ROUND(c2, 3) from t;
select ROUND(c3, 3) from t;
{quote}
When run, some of the ROUND functions fail with numeric overflows. They all 
should execute successfully.
{quote}>>drop table if exists t;

--- SQL operation complete.
>>create table t (
+>c1 numeric(128),
+>c2 numeric(128,1),
+>c3 numeric(128,2));

--- SQL operation complete.
>>
>>insert into t values (
+>12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678,
+>1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567.8,
+>123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456.78
+>);

--- 1 row(s) inserted.
>>select * from t;

C1 C2 C3
-
 
--
 
--

12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
 
1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567.8
 
123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456.78

--- 1 row(s) selected.
>>
>>select ROUND(c1, 1) from t;

(EXPR)
-

12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678

--- 1 row(s) selected.
>>select ROUND(c2, 1) from t;

*** ERROR[8411] A numeric overflow occurred during an arithmetic computation or 
data conversion. Intermediate conversion of Source 
Type:CHAR(REC_BYTE_F_ASCII,129 BYTES,ISO88591) Source 
Value:123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456780
 to Target Type:NUMERIC(REC_NUM_BIG_SIGNED).

--- 0 row(s) selected.
>>select ROUND(c3, 1) from t;

(EXPR)
--

123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456.80

--- 1 row(s) selected.
>>
>>select ROUND(c1, 2) from t;

(EXPR)
-

12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678

--- 1 row(s) selected.
>>select ROUND(c2, 2) from t;

*** ERROR[8411] A numeric overflow occurred during an arithmetic computation or 
data conversion. Intermediate conversion of Source 
Type:CHAR(REC_BYTE_F_ASCII,130 BYTES,ISO88591) Source 
Value:1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567800
 to Target Type:NUMERIC(REC_NUM_BIG_SIGNED).

--- 0 row(s) selected.
>>select ROUND(c3, 2) from t;

*** ERROR[8411] A numeric overflow occurred during an arithmetic computation or 
data conversion. Intermediate conversion of Source 
Type:CHAR(REC_BYTE_F_ASCII,129 BYTES,ISO88591) Source 

[jira] [Created] (TRAFODION-3299) Update Messages Guide for still more binder messages

2019-04-17 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3299:
---

 Summary: Update Messages Guide for still more binder messages
 Key: TRAFODION-3299
 URL: https://issues.apache.org/jira/browse/TRAFODION-3299
 Project: Apache Trafodion
  Issue Type: Documentation
  Components: documentation
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall








[jira] [Resolved] (TRAFODION-3296) Subquery with [first n] + ORDER BY gives wrong answer

2019-04-16 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3296.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Subquery with [first n] + ORDER BY gives wrong answer
> -
>
> Key: TRAFODION-3296
> URL: https://issues.apache.org/jira/browse/TRAFODION-3296
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> The following test script illustrates the problem:
> {quote}?section setup
> drop table if exists t1;
> drop table if exists t2;
> create table t1 (val integer);
> create table t2 (val integer);
> insert into t1 values(994707150),(1923979352),(-1865644273);
> insert into t2 select * from t1;
> ?section testit
> select [first 1] val from t2 order by val;
> prepare xx from select val from t1 where val in (select [first 1] val from t2 
> order by val);
> explain options 'f' xx;
> -- should return -1865644273, but returns something different
> execute xx;
> {quote}
> When run, the script shows:
> {quote}>>?section testit
> >>
> >>select [first 1] val from t2 order by val;
> VAL 
> ---
> -1865644273
> --- 1 row(s) selected.
> >>
> >>prepare xx from select val from t1 where val in (select [first 1] val from 
> >>t2 order by val);
> --- SQL command prepared.
> >>
> >>explain options 'f' xx;
> LC   RC   OP   OPERATOR         OPT   DESCRIPTION    CARD
> ---- ---- ---- ---------------- ----- -------------- ---------
> 5    .    6    root                                  1.00E+002
> 3    4    5    nested_join                           1.00E+002
> .    .    4    trafodion_scan         T1             1.00E+002
> 2    .    3    hash_groupby                          1.00E+000
> 1    .    2    firstn                                1.00E+000
> .    .    1    trafodion_scan         T2             1.00E+002
> --- SQL operation complete.
> >>
> >>-- should return -1865644273, but returns something different
> >>execute xx;
> VAL 
> ---
> 994707150
> --- 1 row(s) selected.
> >>
> {quote}





[jira] [Assigned] (TRAFODION-3294) Update Messages Guide for some binder messages

2019-04-15 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall reassigned TRAFODION-3294:
---

Assignee: David Wayne Birdsall

> Update Messages Guide for some binder messages
> --
>
> Key: TRAFODION-3294
> URL: https://issues.apache.org/jira/browse/TRAFODION-3294
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Updating the Messages Guide to make it consistent with the code for some 
> Binder messages (4000 range).





[jira] [Resolved] (TRAFODION-3297) Update Messages Guide for some more binder messages

2019-04-15 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3297.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Update Messages Guide for some more binder messages
> ---
>
> Key: TRAFODION-3297
> URL: https://issues.apache.org/jira/browse/TRAFODION-3297
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: documentation
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Continuing work on Messages Guide for binder messages (4000-4999).





[jira] [Created] (TRAFODION-3297) Update Messages Guide for some more binder messages

2019-04-12 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3297:
---

 Summary: Update Messages Guide for some more binder messages
 Key: TRAFODION-3297
 URL: https://issues.apache.org/jira/browse/TRAFODION-3297
 Project: Apache Trafodion
  Issue Type: Improvement
  Components: documentation
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


Continuing work on Messages Guide for binder messages (4000-4999).





[jira] [Created] (TRAFODION-3296) Subquery with [first n] + ORDER BY gives wrong answer

2019-04-09 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3296:
---

 Summary: Subquery with [first n] + ORDER BY gives wrong answer
 Key: TRAFODION-3296
 URL: https://issues.apache.org/jira/browse/TRAFODION-3296
 Project: Apache Trafodion
  Issue Type: Bug
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


The following test script illustrates the problem:
{quote}?section setup

drop table if exists t1;

drop table if exists t2;

create table t1 (val integer);

create table t2 (val integer);

insert into t1 values(994707150),(1923979352),(-1865644273);

insert into t2 select * from t1;

?section testit

select [first 1] val from t2 order by val;

prepare xx from select val from t1 where val in (select [first 1] val from t2 
order by val);

explain options 'f' xx;

-- should return -1865644273, but returns something different
execute xx;
{quote}
When run, the script shows:
{quote}>>?section testit
>>
>>select [first 1] val from t2 order by val;

VAL 
---

-1865644273

--- 1 row(s) selected.
>>
>>prepare xx from select val from t1 where val in (select [first 1] val from t2 
>>order by val);

--- SQL command prepared.
>>
>>explain options 'f' xx;

LC   RC   OP   OPERATOR         OPT   DESCRIPTION    CARD
---- ---- ---- ---------------- ----- -------------- ---------
5    .    6    root                                  1.00E+002
3    4    5    nested_join                           1.00E+002
.    .    4    trafodion_scan         T1             1.00E+002
2    .    3    hash_groupby                          1.00E+000
1    .    2    firstn                                1.00E+000
.    .    1    trafodion_scan         T2             1.00E+002

--- SQL operation complete.
>>
>>-- should return -1865644273, but returns something different
>>execute xx;

VAL 
---

994707150

--- 1 row(s) selected.
>>
{quote}





[jira] [Resolved] (TRAFODION-3294) Update Messages Guide for some binder messages

2019-04-08 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3294.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Update Messages Guide for some binder messages
> --
>
> Key: TRAFODION-3294
> URL: https://issues.apache.org/jira/browse/TRAFODION-3294
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Updating the Messages Guide to make it consistent with the code for some 
> Binder messages (4000 range).





[jira] [Resolved] (TRAFODION-3291) UPDATE STATISTICS on wide tables causes a tdm_arkcmp core

2019-04-05 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3291.
-
   Resolution: Fixed
Fix Version/s: 2.4

> UPDATE STATISTICS on wide tables causes a tdm_arkcmp core
> -
>
> Key: TRAFODION-3291
> URL: https://issues.apache.org/jira/browse/TRAFODION-3291
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The following script creates a table with 500+ columns. When we attempt 
> UPDATE STATISTICS on all columns, we get a core in tdm_arkcmp.
> {quote}drop schema if exists mytest cascade;
> create schema mytest;
> set schema mytest;
> create table mytable (id int, c0 char(10) default null, c1 char(10) default 
> null, c2 char(10) default null, c3 char(10) default null, c4 char(10) default 
> null, c5 char(10) default null, c6 char(10) default null, c7 char(10) default 
> null, c8 char(10) default null, c9 char(10) default null, c10 char(10) 
> default null, c11 char(10) default null, c12 char(10) default null, c13 
> char(10) default null, c14 char(10) default null, c15 char(10) default null, 
> c16 char(10) default null, c17 char(10) default null, c18 char(10) default 
> null, c19 char(10) default null, c20 char(10) default null, c21 char(10) 
> default null, c22 char(10) default null, c23 char(10) default null, c24 
> char(10) default null, c25 char(10) default null, c26 char(10) default null, 
> c27 char(10) default null, c28 char(10) default null, c29 char(10) default 
> null, c30 char(10) default null, c31 char(10) default null, c32 char(10) 
> default null, c33 char(10) default null, c34 char(10) default null, c35 
> char(10) default null, c36 char(10) default null, c37 char(10) default null, 
> c38 char(10) default null, c39 char(10) default null, c40 char(10) default 
> null, c41 char(10) default null, c42 char(10) default null, c43 char(10) 
> default null, c44 char(10) default null, c45 char(10) default null, c46 
> char(10) default null, c47 char(10) default null, c48 char(10) default null, 
> c49 char(10) default null, c50 char(10) default null, c51 char(10) default 
> null, c52 char(10) default null, c53 char(10) default null, c54 char(10) 
> default null, c55 char(10) default null, c56 char(10) default null, c57 
> char(10) default null, c58 char(10) default null, c59 char(10) default null, 
> c60 char(10) default null, c61 char(10) default null, c62 char(10) default 
> null, c63 char(10) default null, c64 char(10) default null, c65 char(10) 
> default null, c66 char(10) default null, c67 char(10) default null, c68 
> char(10) default null, c69 char(10) default null, c70 char(10) default null, 
> c71 char(10) default null, c72 char(10) default null, c73 char(10) default 
> null, c74 char(10) default null, c75 char(10) default null, c76 char(10) 
> default null, c77 char(10) default null, c78 char(10) default null, c79 
> char(10) default null, c80 char(10) default null, c81 char(10) default null, 
> c82 char(10) default null, c83 char(10) default null, c84 char(10) default 
> null, c85 char(10) default null, c86 char(10) default null, c87 char(10) 
> default null, c88 char(10) default null, c89 char(10) default null, c90 
> char(10) default null, c91 char(10) default null, c92 char(10) default null, 
> c93 char(10) default null, c94 char(10) default null, c95 char(10) default 
> null, c96 char(10) default null, c97 char(10) default null, c98 char(10) 
> default null, c99 char(10) default null, c100 char(10) default null, c101 
> char(10) default null, c102 char(10) default null, c103 char(10) default 
> null, c104 char(10) default null, c105 char(10) default null, c106 char(10) 
> default null, c107 char(10) default null, c108 char(10) default null, c109 
> char(10) default null, c110 char(10) default null, c111 char(10) default 
> null, c112 char(10) default null, c113 char(10) default null, c114 char(10) 
> default null, c115 char(10) default null, c116 char(10) default null, c117 
> char(10) default null, c118 char(10) default null, c119 char(10) default 
> null, c120 char(10) default null, c121 char(10) default null, c122 char(10) 
> default null, c123 char(10) default null, c124 char(10) default null, c125 
> char(10) default null, c126 char(10) default null, c127 char(10) default 
> null, c128 char(10) default null, c129 char(10) default null, c130 char(10) 
> default null, c131 char(10) default null, c132 char(10) default null, c133 
> char(10) default null, c134 char(10) default null, c135 char(10) default 
> null, c136 char(10) default null, c137 char(10) 

[jira] [Created] (TRAFODION-3294) Update Messages Guide for some binder messages

2019-04-05 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3294:
---

 Summary: Update Messages Guide for some binder messages
 Key: TRAFODION-3294
 URL: https://issues.apache.org/jira/browse/TRAFODION-3294
 Project: Apache Trafodion
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.4
Reporter: David Wayne Birdsall


Updating the Messages Guide to make it consistent with the code for some Binder 
messages (4000 range).





[jira] [Resolved] (TRAFODION-3290) Creating a sample table on an all-LOB table fails with syntax error

2019-04-04 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3290.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Creating a sample table on an all-LOB table fails with syntax error
> ---
>
> Key: TRAFODION-3290
> URL: https://issues.apache.org/jira/browse/TRAFODION-3290
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following script reproduces the problem:
> {quote}control query default TRAF_BLOB_AS_VARCHAR 'OFF';
> control query default TRAF_CLOB_AS_VARCHAR 'OFF';
> drop table if exists t1;
> create table t1 (c_clob clob, c_blob blob);
> insert into t1 values (stringtolob('clob_1'), stringtolob('blob_1'));
> update statistics for table t1 on (c_clob, c_blob) sample random 50 percent 
> persistent;
> update statistics for table t1 create sample random 100 percent;
> update statistics for table t1 on every column sample random 100 percent 
> persistent;
> update statistics for table t1 on every key sample random 100 percent 
> persistent;
> update statistics for table t1 on existing column sample random 100 percent 
> persistent;
> update statistics for table t1 on necessary column sample random 100 percent 
> persistent;
> {quote}
> When run, attempts to create a sample table fail with a syntax error. Instead 
> we should fail with error 9246, as shown below:
> {quote}>>obey repro.sql;
> >>control query default TRAF_BLOB_AS_VARCHAR 'OFF';
> --- SQL operation complete.
> >>control query default TRAF_CLOB_AS_VARCHAR 'OFF';
> --- SQL operation complete.
> >>
> >>drop table if exists t1;
> --- SQL operation complete.
> >>
> >>create table t1 (c_clob clob, c_blob blob);
> --- SQL operation complete.
> >>insert into t1 values (stringtolob('clob_1'), stringtolob('blob_1'));
> --- 1 row(s) inserted.
> >>
> >>update statistics for table t1 on (c_clob, c_blob) sample random 50 percent 
> >>persistent;
> *** ERROR[9246] UPDATE STATISTICS is not supported on LOB columns. Column 
> C_BLOB is a LOB column.
> --- SQL operation failed with errors.
> >>
> >>update statistics for table t1 create sample random 100 percent;
> *** ERROR[9214] Object 
> TRAFODION.SCH.TRAF_SAMPLE_00469125659506823465_1553720877_834150 could not be 
> created.
> *** ERROR[15001] A syntax error occurred at or before: 
> create table TRAFODION.SCH.TRAF_SAMPLE_00469125659506823465_1553720877_834150 
>  ( ) ATTRIBUTES ALIGNED FORMAT;
>  ^ (86 characters from start of SQL statement)
> *** ERROR[8822] The statement was not prepared.
> *** ERROR[8839] Transaction was aborted.
> --- SQL operation failed with errors.
> >>update statistics for table t1 on every column sample random 100 percent 
> >>persistent;
> *** ERROR[9214] Object 
> TRAFODION.SCH.TRAF_SAMPLE_00469125659506823465_1553720916_953637 could not be 
> created.
> *** ERROR[15001] A syntax error occurred at or before: 
> create table TRAFODION.SCH.TRAF_SAMPLE_00469125659506823465_1553720916_953637 
>  ( ) ATTRIBUTES ALIGNED FORMAT;
>  ^ (86 characters from start of SQL statement)
> *** ERROR[8822] The statement was not prepared.
> *** ERROR[8839] Transaction was aborted.
> --- SQL operation failed with errors.
> >>update statistics for table t1 on every key sample random 100 percent 
> >>persistent;
> *** ERROR[9214] Object 
> TRAFODION.SCH.TRAF_SAMPLE_00469125659506823465_1553720950_701762 could not be 
> created.
> *** ERROR[15001] A syntax error occurred at or before: 
> create table TRAFODION.SCH.TRAF_SAMPLE_00469125659506823465_1553720950_701762 
>  ( ) ATTRIBUTES ALIGNED FORMAT;
>  ^ (86 characters from start of SQL statement)
> *** ERROR[8822] The statement was not prepared.
> *** ERROR[8839] Transaction was aborted.
> --- SQL operation failed with errors.
> >>update statistics for table t1 on existing column sample random 100 percent 
> >>persistent;
> *** WARNING[9220] The table is empty, so no persistent sample table was 
> created.
> *** WARNING[9217] The statement will have no effect because no histograms are 
> currently maintained for the table.
> --- SQL operation completed with warnings.
> >>update statistics for table t1 on necessary column sample random 100 
> >>percent persistent;
> *** WARNING[9220] The table is empty, so no persistent sample table was 
> created.
> *** WARNING[9218] The statement will have no effect because no histograms 
> need to be updated.
> --- SQL operation completed with warnings.
> >>exit;
> End of MXCI Session
> {quote}





[jira] [Created] (TRAFODION-3291) UPDATE STATISTICS on wide tables causes a tdm_arkcmp core

2019-03-27 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3291:
---

 Summary: UPDATE STATISTICS on wide tables causes a tdm_arkcmp core
 Key: TRAFODION-3291
 URL: https://issues.apache.org/jira/browse/TRAFODION-3291
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-cmp
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


The following script creates a table with 500+ columns. When we attempt UPDATE 
STATISTICS on all columns, we get a core in tdm_arkcmp.
{quote}drop schema if exists mytest cascade;
create schema mytest;
set schema mytest;

create table mytable (id int, c0 char(10) default null, c1 char(10) default 
null, c2 char(10) default null, c3 char(10) default null, c4 char(10) default 
null, c5 char(10) default null, c6 char(10) default null, c7 char(10) default 
null, c8 char(10) default null, c9 char(10) default null, c10 char(10) default 
null, c11 char(10) default null, c12 char(10) default null, c13 char(10) 
default null, c14 char(10) default null, c15 char(10) default null, c16 
char(10) default null, c17 char(10) default null, c18 char(10) default null, 
c19 char(10) default null, c20 char(10) default null, c21 char(10) default 
null, c22 char(10) default null, c23 char(10) default null, c24 char(10) 
default null, c25 char(10) default null, c26 char(10) default null, c27 
char(10) default null, c28 char(10) default null, c29 char(10) default null, 
c30 char(10) default null, c31 char(10) default null, c32 char(10) default 
null, c33 char(10) default null, c34 char(10) default null, c35 char(10) 
default null, c36 char(10) default null, c37 char(10) default null, c38 
char(10) default null, c39 char(10) default null, c40 char(10) default null, 
c41 char(10) default null, c42 char(10) default null, c43 char(10) default 
null, c44 char(10) default null, c45 char(10) default null, c46 char(10) 
default null, c47 char(10) default null, c48 char(10) default null, c49 
char(10) default null, c50 char(10) default null, c51 char(10) default null, 
c52 char(10) default null, c53 char(10) default null, c54 char(10) default 
null, c55 char(10) default null, c56 char(10) default null, c57 char(10) 
default null, c58 char(10) default null, c59 char(10) default null, c60 
char(10) default null, c61 char(10) default null, c62 char(10) default null, 
c63 char(10) default null, c64 char(10) default null, c65 char(10) default 
null, c66 char(10) default null, c67 char(10) default null, c68 char(10) 
default null, c69 char(10) default null, c70 char(10) default null, c71 
char(10) default null, c72 char(10) default null, c73 char(10) default null, 
c74 char(10) default null, c75 char(10) default null, c76 char(10) default 
null, c77 char(10) default null, c78 char(10) default null, c79 char(10) 
default null, c80 char(10) default null, c81 char(10) default null, c82 
char(10) default null, c83 char(10) default null, c84 char(10) default null, 
c85 char(10) default null, c86 char(10) default null, c87 char(10) default 
null, c88 char(10) default null, c89 char(10) default null, c90 char(10) 
default null, c91 char(10) default null, c92 char(10) default null, c93 
char(10) default null, c94 char(10) default null, c95 char(10) default null, 
c96 char(10) default null, c97 char(10) default null, c98 char(10) default 
null, c99 char(10) default null, c100 char(10) default null, c101 char(10) 
default null, c102 char(10) default null, c103 char(10) default null, c104 
char(10) default null, c105 char(10) default null, c106 char(10) default null, 
c107 char(10) default null, c108 char(10) default null, c109 char(10) default 
null, c110 char(10) default null, c111 char(10) default null, c112 char(10) 
default null, c113 char(10) default null, c114 char(10) default null, c115 
char(10) default null, c116 char(10) default null, c117 char(10) default null, 
c118 char(10) default null, c119 char(10) default null, c120 char(10) default 
null, c121 char(10) default null, c122 char(10) default null, c123 char(10) 
default null, c124 char(10) default null, c125 char(10) default null, c126 
char(10) default null, c127 char(10) default null, c128 char(10) default null, 
c129 char(10) default null, c130 char(10) default null, c131 char(10) default 
null, c132 char(10) default null, c133 char(10) default null, c134 char(10) 
default null, c135 char(10) default null, c136 char(10) default null, c137 
char(10) default null, c138 char(10) default null, c139 char(10) default null, 
c140 char(10) default null, c141 char(10) default null, c142 char(10) default 
null, c143 char(10) default null, c144 char(10) default null, c145 char(10) 
default null, c146 char(10) default null, c147 char(10) default null, c148 
char(10) default null, c149 char(10) default null, c150 char(10) default null, 
c151 char(10) default null, c152 char(10) default null, c153 char(10) default 

[jira] [Created] (TRAFODION-3290) Creating a sample table on an all-LOB table fails with syntax error

2019-03-27 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3290:
---

 Summary: Creating a sample table on an all-LOB table fails with 
syntax error
 Key: TRAFODION-3290
 URL: https://issues.apache.org/jira/browse/TRAFODION-3290
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-cmp
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


The following script reproduces the problem:
{quote}control query default TRAF_BLOB_AS_VARCHAR 'OFF';
control query default TRAF_CLOB_AS_VARCHAR 'OFF';

drop table if exists t1;

create table t1 (c_clob clob, c_blob blob);
insert into t1 values (stringtolob('clob_1'), stringtolob('blob_1'));

update statistics for table t1 on (c_clob, c_blob) sample random 50 percent 
persistent;

update statistics for table t1 create sample random 100 percent;
update statistics for table t1 on every column sample random 100 percent 
persistent;
update statistics for table t1 on every key sample random 100 percent 
persistent;
update statistics for table t1 on existing column sample random 100 percent 
persistent;
update statistics for table t1 on necessary column sample random 100 percent 
persistent;
{quote}
When run, the attempts to create a sample table fail with a syntax error. 
Instead, they should fail with error 9246, as shown below:
{quote}>>obey repro.sql;
>>control query default TRAF_BLOB_AS_VARCHAR 'OFF';

--- SQL operation complete.
>>control query default TRAF_CLOB_AS_VARCHAR 'OFF';

--- SQL operation complete.
>>
>>drop table if exists t1;

--- SQL operation complete.
>>
>>create table t1 (c_clob clob, c_blob blob);

--- SQL operation complete.
>>insert into t1 values (stringtolob('clob_1'), stringtolob('blob_1'));

--- 1 row(s) inserted.
>>
>>update statistics for table t1 on (c_clob, c_blob) sample random 50 percent 
>>persistent;

*** ERROR[9246] UPDATE STATISTICS is not supported on LOB columns. Column 
C_BLOB is a LOB column.

--- SQL operation failed with errors.
>>
>>update statistics for table t1 create sample random 100 percent;

*** ERROR[9214] Object 
TRAFODION.SCH.TRAF_SAMPLE_00469125659506823465_1553720877_834150 could not be 
created.

*** ERROR[15001] A syntax error occurred at or before: 
create table TRAFODION.SCH.TRAF_SAMPLE_00469125659506823465_1553720877_834150 
 ( ) ATTRIBUTES ALIGNED FORMAT;
 ^ (86 characters from start of SQL statement)

*** ERROR[8822] The statement was not prepared.

*** ERROR[8839] Transaction was aborted.

--- SQL operation failed with errors.
>>update statistics for table t1 on every column sample random 100 percent 
>>persistent;

*** ERROR[9214] Object 
TRAFODION.SCH.TRAF_SAMPLE_00469125659506823465_1553720916_953637 could not be 
created.

*** ERROR[15001] A syntax error occurred at or before: 
create table TRAFODION.SCH.TRAF_SAMPLE_00469125659506823465_1553720916_953637 
 ( ) ATTRIBUTES ALIGNED FORMAT;
 ^ (86 characters from start of SQL statement)

*** ERROR[8822] The statement was not prepared.

*** ERROR[8839] Transaction was aborted.

--- SQL operation failed with errors.
>>update statistics for table t1 on every key sample random 100 percent 
>>persistent;

*** ERROR[9214] Object 
TRAFODION.SCH.TRAF_SAMPLE_00469125659506823465_1553720950_701762 could not be 
created.

*** ERROR[15001] A syntax error occurred at or before: 
create table TRAFODION.SCH.TRAF_SAMPLE_00469125659506823465_1553720950_701762 
 ( ) ATTRIBUTES ALIGNED FORMAT;
 ^ (86 characters from start of SQL statement)

*** ERROR[8822] The statement was not prepared.

*** ERROR[8839] Transaction was aborted.

--- SQL operation failed with errors.
>>update statistics for table t1 on existing column sample random 100 percent 
>>persistent;

*** WARNING[9220] The table is empty, so no persistent sample table was created.

*** WARNING[9217] The statement will have no effect because no histograms are 
currently maintained for the table.

--- SQL operation completed with warnings.
>>update statistics for table t1 on necessary column sample random 100 percent 
>>persistent;

*** WARNING[9220] The table is empty, so no persistent sample table was created.

*** WARNING[9218] The statement will have no effect because no histograms need 
to be updated.

--- SQL operation completed with warnings.
>>exit;

End of MXCI Session
{quote}
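The `create table ... ( ) ATTRIBUTES ALIGNED FORMAT` text in the errors above shows the sample-table DDL being generated with an empty column list once the LOB columns are excluded. A minimal Python sketch of the intended behavior (hypothetical names and signatures; the real generator is C++ code inside the compiler) checks for that case and raises error 9246 up front:

```python
def sample_table_ddl(sample_name, columns):
    # LOB columns cannot be sampled, so they are excluded from the sample
    # table; if nothing remains, emitting "create table ... ( )" would be
    # a syntax error, so raise error 9246 instead.
    usable = [(name, dtype) for name, dtype in columns
              if dtype.lower() not in ('blob', 'clob')]
    if not usable:
        raise ValueError("ERROR[9246] UPDATE STATISTICS is not supported "
                         "on LOB columns.")
    col_list = ', '.join('%s %s' % (name, dtype) for name, dtype in usable)
    return 'create table %s ( %s ) ATTRIBUTES ALIGNED FORMAT' % (
        sample_name, col_list)
```

With the all-LOB table t1 above, this raises 9246 directly instead of handing the parser an empty column list.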





[jira] [Resolved] (TRAFODION-3284) SUM aggregate overflow criteria are inconsistent with rest of product

2019-03-12 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3284.
-
   Resolution: Fixed
Fix Version/s: 2.4

> SUM aggregate overflow criteria are inconsistent with rest of product
> -
>
> Key: TRAFODION-3284
> URL: https://issues.apache.org/jira/browse/TRAFODION-3284
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following script demonstrates the problem:
> {quote}drop table if exists t;
> create table t (
> c1 float,
> c2 double precision,
> c3 float,
> c4 double precision
> );
> insert into t values (-1.234, -1.234, -1.42085740198313120E-309, 
> -1.42085740198313120E-309);
> select * from t;
> select avg(c1) from t;
> select avg(c2) from t;
> select avg(c3) from t;
> select avg(c4) from t;
> select count(c1) from t;
> select count(c2) from t;
> select count(c3) from t;
> select count(c4) from t;
> select max(c1) from t;
> select max(c2) from t;
> select max(c3) from t;
> select max(c4) from t;
> select min(c1) from t;
> select min(c2) from t;
> select min(c3) from t;
> select min(c4) from t;
> select stddev(c1) from t;
> select stddev(c2) from t;
> select stddev(c3) from t;
> select stddev(c4) from t;
> select sum(c1) from t;
> select sum(c2) from t;
> select sum(c3) from t;
> select sum(c4) from t;
> select variance(c1) from t;
> select variance(c2) from t;
> select variance(c3) from t;
> select variance(c4) from t;
> {quote}
>  
> When run, the AVG, SUM, STDDEV and VARIANCE aggregates report error 8411 
> (floating point overflow) on columns C3 and C4. For STDDEV and VARIANCE this 
> is correct, since calculating these aggregates requires squaring the column 
> value. For SUM and AVG, we are merely summing the value, which should not 
> cause an overflow.





[jira] [Created] (TRAFODION-3284) SUM aggregate overflow criteria are inconsistent with rest of product

2019-03-11 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3284:
---

 Summary: SUM aggregate overflow criteria are inconsistent with 
rest of product
 Key: TRAFODION-3284
 URL: https://issues.apache.org/jira/browse/TRAFODION-3284
 Project: Apache Trafodion
  Issue Type: Bug
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


The following script demonstrates the problem:
{quote}drop table if exists t;

create table t (
c1 float,
c2 double precision,
c3 float,
c4 double precision
);

insert into t values (-1.234, -1.234, -1.42085740198313120E-309, 
-1.42085740198313120E-309);
select * from t;

select avg(c1) from t;
select avg(c2) from t;
select avg(c3) from t;
select avg(c4) from t;

select count(c1) from t;
select count(c2) from t;
select count(c3) from t;
select count(c4) from t;

select max(c1) from t;
select max(c2) from t;
select max(c3) from t;
select max(c4) from t;

select min(c1) from t;
select min(c2) from t;
select min(c3) from t;
select min(c4) from t;

select stddev(c1) from t;
select stddev(c2) from t;
select stddev(c3) from t;
select stddev(c4) from t;

select sum(c1) from t;
select sum(c2) from t;
select sum(c3) from t;
select sum(c4) from t;

select variance(c1) from t;
select variance(c2) from t;
select variance(c3) from t;
select variance(c4) from t;
{quote}
 

When run, the AVG, SUM, STDDEV and VARIANCE aggregates report error 8411 
(floating point overflow) on columns C3 and C4. For STDDEV and VARIANCE this is 
correct, since calculating these aggregates requires squaring the column value. 
For SUM and AVG, we are merely summing the value, which should not cause an 
overflow.
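The repro value -1.42085740198313120E-309 is a subnormal double: nonzero, but smaller in magnitude than the smallest normal double, which is presumably what trips the range check behind error 8411. The following Python sketch illustrates why such a check is appropriate for STDDEV and VARIANCE but not for SUM and AVG:

```python
import sys

x = -1.42085740198313120e-309   # the subnormal value from the repro script

# Subnormal: nonzero, yet below the smallest *normal* double.
assert x != 0.0
assert abs(x) < sys.float_info.min

# Summing such values cannot overflow; the result is still representable.
total = x + x
assert total == 2 * x

# Squaring them, as STDDEV and VARIANCE must, leaves the representable
# range entirely (here it underflows all the way to zero).
assert x * x == 0.0
```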





[jira] [Resolved] (TRAFODION-3282) Buffer overrun in ExHdfsScan::work in certain conditions

2019-03-05 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3282.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Buffer overrun in ExHdfsScan::work in certain conditions
> 
>
> Key: TRAFODION-3282
> URL: https://issues.apache.org/jira/browse/TRAFODION-3282
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> If we have a large enough Hive text table with string columns, and the string 
> columns have values that are longer than CQD HIVE_MAX_STRING_LENGTH_IN_BYTES, 
> and there is no external table definition with longer column sizes given, we 
> may core in ExHdfsScan::work with a buffer overrun.
> The following test case reproduces the behavior.
> First, use the following python script, called datagen.py:
> {quote}#! /usr/bin/env python
> import sys
> if len(sys.argv) != 5 or \
>    sys.argv[1].lower() == '-h' or \
>    sys.argv[1].lower() == '-help':
>     print 'Usage: ' + sys.argv[0] + ' <output_file> <num_rows> <num_cols> <col_width>'
>     sys.exit()
> f = open(sys.argv[1], "w+")
> marker = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ')
> for num_rows in range(0, int(sys.argv[2])):
>     f.write(str(num_rows) + '|')
>     for num_cols in range(0, int(sys.argv[3])):
>         f.write(marker[num_rows % len(marker)])
>         for i in range(1, int(sys.argv[4])):
>             f.write(str(i % 10))
>         f.write('|')
>     f.write(str(num_rows))
>     f.write('\n')
> f.close()
> {quote}
> Run this script as follows:
> {quote}chmod 755 ./datagen.py
> ./datagen.py ./data_lgvc.10rows_512KB.txt 10 2 524288
> {quote}
> Next, perform the following commands in a Hive shell:
> {quote}drop table if exists lgvc_base_table;
> create table lgvc_base_table(c_int int, c_string1 string, c_string2 string, 
> p_int int) row format delimited fields terminated by '|';
> load data local inpath './data_lgvc.10rows_512KB.txt' overwrite into table 
> lgvc_base_table;
> {quote}
> Finally, do the following in sqlci:
> {quote}CQD HDFS_IO_BUFFERSIZE '2048';
> prepare s1 from select * from hive.hive.lgvc_base_table where c_int > 10;
> execute s1;
> {quote}
> (The point of the CQD is to reduce the HDFS read buffer size from its default 
> of 65 MB to 2 MB, so the test will fail with a smaller input file.)
> When this test case is run, we get a core with the following stack trace:
> {quote}(gdb) bt
> #0 0x75116495 in raise () from /lib64/libc.so.6
> #1 0x75117c75 in abort () from /lib64/libc.so.6
> #2 0x76f02935 in ?? ()
>  from /usr/lib/jvm/java-1.7.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> #3 0x7707bfdf in ?? ()
>  from /usr/lib/jvm/java-1.7.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> #4 0x76f077c2 in JVM_handle_linux_signal ()
>  from /usr/lib/jvm/java-1.7.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> #5  <signal handler called>
> #6 0x7516d753 in memcpy () from /lib64/libc.so.6
> #7 0x735b4dd5 in ExHdfsScanTcb::work (this=0x77e99148)
>  at ../executor/ExHdfsScan.cpp:601
> #8 0x7333d7a1 in ex_tcb::sWork (tcb=0x77e99148)
>  at ../executor/ex_tcb.h:102
> #9 0x7350dba7 in ExSubtask::work (this=0x77e99ad0)
>  at ../executor/ExScheduler.cpp:757
> #10 0x7350cbf1 in ExScheduler::work (this=0x77e98cb0, 
> prevWaitTime=
>  0) at ../executor/ExScheduler.cpp:280
> #11 0x733a41c7 in ex_root_tcb::execute (this=0x77e99b78, 
>  cliGlobals=0xba5970, glob=0x77ea5d40, input_desc=0x77ee1178, 
>  diagsArea=@0x7ffee020, reExecute=0) at ../executor/ex_root.cpp:928
> #12 0x74e4c452 in Statement::execute (this=0x77e84f40, cliGlobals=
>  0xba5970, input_desc=0x77ee1178, diagsArea=..., execute_state=
> ---Type <return> to continue, or q <return> to quit---q
> Statement:Quit
> (gdb)
> {quote}
>  





[jira] [Created] (TRAFODION-3282) Buffer overrun in ExHdfsScan::work in certain conditions

2019-03-04 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3282:
---

 Summary: Buffer overrun in ExHdfsScan::work in certain conditions
 Key: TRAFODION-3282
 URL: https://issues.apache.org/jira/browse/TRAFODION-3282
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-exe
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


If we have a large enough Hive text table with string columns, and the string 
columns have values that are longer than CQD HIVE_MAX_STRING_LENGTH_IN_BYTES, 
and there is no external table definition with longer column sizes given, we 
may core in ExHdfsScan::work with a buffer overrun.
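The overrun occurs where the executor copies a field value longer than the buffer sized from CQD HIVE_MAX_STRING_LENGTH_IN_BYTES. A tiny Python sketch of the defensive pattern involved (hypothetical function; the real code is the memcpy in ExHdfsScan.cpp) clamps the copy to the destination size:

```python
def copy_field(dest, src):
    # Clamp the copy length to the destination buffer's size rather than
    # trusting the field length found in the data file, so an oversized
    # Hive string value is truncated instead of overrunning the buffer.
    n = min(len(src), len(dest))
    dest[:n] = src[:n]
    return n
```

For example, copying a 10-byte value into `bytearray(8)` moves only 8 bytes.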

The following test case reproduces the behavior.

First, use the following python script, called datagen.py:
{quote}#! /usr/bin/env python
import sys

if len(sys.argv) != 5 or \
   sys.argv[1].lower() == '-h' or \
   sys.argv[1].lower() == '-help':
    print 'Usage: ' + sys.argv[0] + ' <output_file> <num_rows> <num_cols> <col_width>'
    sys.exit()

f = open(sys.argv[1], "w+")

marker = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ')
for num_rows in range(0, int(sys.argv[2])):
    f.write(str(num_rows) + '|')
    for num_cols in range(0, int(sys.argv[3])):
        f.write(marker[num_rows % len(marker)])
        for i in range(1, int(sys.argv[4])):
            f.write(str(i % 10))
        f.write('|')
    f.write(str(num_rows))
    f.write('\n')

f.close()
{quote}
Run this script as follows:
{quote}chmod 755 ./datagen.py
./datagen.py ./data_lgvc.10rows_512KB.txt 10 2 524288
{quote}
Next, perform the following commands in a Hive shell:
{quote}drop table if exists lgvc_base_table;

create table lgvc_base_table(c_int int, c_string1 string, c_string2 string, 
p_int int) row format delimited fields terminated by '|';
load data local inpath './data_lgvc.10rows_512KB.txt' overwrite into table 
lgvc_base_table;
{quote}
Finally, do the following in sqlci:
{quote}CQD HDFS_IO_BUFFERSIZE '2048';

prepare s1 from select * from hive.hive.lgvc_base_table where c_int > 10;

execute s1;
{quote}
(The point of the CQD is to reduce the HDFS read buffer size from its default 
of 65 MB to 2 MB, so the test will fail with a smaller input file.)

When this test case is run, we get a core with the following stack trace:
{quote}(gdb) bt
#0 0x75116495 in raise () from /lib64/libc.so.6
#1 0x75117c75 in abort () from /lib64/libc.so.6
#2 0x76f02935 in ?? ()
 from /usr/lib/jvm/java-1.7.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
#3 0x7707bfdf in ?? ()
 from /usr/lib/jvm/java-1.7.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
#4 0x76f077c2 in JVM_handle_linux_signal ()
 from /usr/lib/jvm/java-1.7.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
#5  <signal handler called>
#6 0x7516d753 in memcpy () from /lib64/libc.so.6
#7 0x735b4dd5 in ExHdfsScanTcb::work (this=0x77e99148)
 at ../executor/ExHdfsScan.cpp:601
#8 0x7333d7a1 in ex_tcb::sWork (tcb=0x77e99148)
 at ../executor/ex_tcb.h:102
#9 0x7350dba7 in ExSubtask::work (this=0x77e99ad0)
 at ../executor/ExScheduler.cpp:757
#10 0x7350cbf1 in ExScheduler::work (this=0x77e98cb0, prevWaitTime=
 0) at ../executor/ExScheduler.cpp:280
#11 0x733a41c7 in ex_root_tcb::execute (this=0x77e99b78, 
 cliGlobals=0xba5970, glob=0x77ea5d40, input_desc=0x77ee1178, 
 diagsArea=@0x7ffee020, reExecute=0) at ../executor/ex_root.cpp:928
#12 0x74e4c452 in Statement::execute (this=0x77e84f40, cliGlobals=
 0xba5970, input_desc=0x77ee1178, diagsArea=..., execute_state=
---Type <return> to continue, or q <return> to quit---q
Statement:Quit
(gdb)
{quote}
 





[jira] [Resolved] (TRAFODION-3276) Fix regression caused by [TRAFODION-3270]

2019-02-14 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3276.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Fix regression caused by [TRAFODION-3270]
> -
>
> Key: TRAFODION-3276
> URL: https://issues.apache.org/jira/browse/TRAFODION-3276
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following script demonstrates the issue.
> {quote}drop table if exists t1 cascade;
> drop table if exists t2 cascade;
> create table t1 (
> t1_pk integer,
> t1_na integer
> );
> insert into t1 values (1,1),(2,null);
> create table t2 (
> t2_pk integer not null,
> t2_na integer,
> primary key (t2_pk)
> );
> insert into t2 values (2,null),(3,3);
> create table t3 (
> t3_pk integer not null,
> t3_na integer
> );
> insert into t3 values (2,null),(3,3);
> select t1_na, t2_pk from t1 left join t2 on t1_na = t2_pk and t1_na is null;
> select t1_na, t3_pk from t1 left join t3 on t1_na = t3_pk and t1_na is null;
> {quote}
>  
> When run, we see the following failure, which was introduced by the fix for 
> TRAFODION-3270:
> {quote}>>select t1_na, t2_pk from t1 left join t2 on t1_na = t2_pk and t1_na 
> is null;
> *** ERROR[4099] A NULL operand is not allowed in predicate 
> (TRAFODION.SEABASE.T2.T2_PK = NULL).
> *** ERROR[4099] A NULL operand is not allowed in predicate 
> (TRAFODION.SEABASE.T2.T2_PK = NULL).
> *** ERROR[2006] Internal error: assertion failure (vdesc) in file 
> ../optimizer/ValueDesc.cpp at line 118.
> *** ERROR[8822] The statement was not prepared.
> >>
> {quote}





[jira] [Created] (TRAFODION-3276) Fix regression caused by [TRAFODION-3270]

2019-02-13 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3276:
---

 Summary: Fix regression caused by [TRAFODION-3270]
 Key: TRAFODION-3276
 URL: https://issues.apache.org/jira/browse/TRAFODION-3276
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-cmp
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


The following script demonstrates the issue.
{quote}drop table if exists t1 cascade;
drop table if exists t2 cascade;

create table t1 (
t1_pk integer,
t1_na integer
);
insert into t1 values (1,1),(2,null);

create table t2 (
t2_pk integer not null,
t2_na integer,
primary key (t2_pk)
);
insert into t2 values (2,null),(3,3);

create table t3 (
t3_pk integer not null,
t3_na integer
);
insert into t3 values (2,null),(3,3);

select t1_na, t2_pk from t1 left join t2 on t1_na = t2_pk and t1_na is null;

select t1_na, t3_pk from t1 left join t3 on t1_na = t3_pk and t1_na is null;
{quote}
 

When run, we see the following failure, which was introduced by the fix for 
TRAFODION-3270:
{quote}>>select t1_na, t2_pk from t1 left join t2 on t1_na = t2_pk and t1_na is 
null;

*** ERROR[4099] A NULL operand is not allowed in predicate 
(TRAFODION.SEABASE.T2.T2_PK = NULL).

*** ERROR[4099] A NULL operand is not allowed in predicate 
(TRAFODION.SEABASE.T2.T2_PK = NULL).

*** ERROR[2006] Internal error: assertion failure (vdesc) in file 
../optimizer/ValueDesc.cpp at line 118.

*** ERROR[8822] The statement was not prepared.

>>
{quote}
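The ON clause `t1_na = t2_pk and t1_na is null` can never evaluate to TRUE, but in a LEFT JOIN that only means no right row matches; the left rows must still appear NULL-padded. The sketch below (plain Python, with None standing in for SQL NULL) shows why the predicate is legal and must not be constant-folded into the invalid form `t2_pk = NULL`:

```python
NULL = None

def eval_equals(a, b):
    # SQL three-valued logic: comparing anything with NULL is UNKNOWN.
    if a is NULL or b is NULL:
        return None
    return a == b

def left_join(left, right, on):
    # A LEFT JOIN keeps every left row; an ON predicate that evaluates to
    # UNKNOWN simply fails to match -- it does not make the query invalid.
    out = []
    for l in left:
        matched = False
        for r in right:
            if on(l, r) is True:
                out.append((l, r))
                matched = True
        if not matched:
            out.append((l, NULL))
    return out
```

With left rows [1, None] and right rows [2, 3], every left row is unmatched and comes back NULL-padded, which matches the expected result of the repro queries.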





[jira] [Resolved] (TRAFODION-3273) Cli can go off the end of a wide descriptor

2019-02-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3273.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Cli can go off the end of a wide descriptor
> ---
>
> Key: TRAFODION-3273
> URL: https://issues.apache.org/jira/browse/TRAFODION-3273
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The Trafodion CLI (Call Level Interface) supports two styles of descriptors, 
> "wide" and "non-wide". Non-wide descriptors are sufficient for all SQL 
> statements except the CALL statement, because for non-CALL statements the set 
> of input and output columns is fixed by the semantics of the statement and 
> therefore known exactly at compile time.
> For non-wide descriptors, method InputOutputExpr::outputValues 
> (cli/CliExpExchange.cpp) checks to see if the output descriptor entry count 
> matches the statement output column count, and raises an 8893 error if they 
> do not match.
> Wide descriptors on the other hand are permitted to be wider than the number 
> of statement columns. This is needed for CALL statements; stored procedures 
> are permitted to produce result sets of arbitrary kinds and sizes that are 
> not known at compile time.
> Trafodion ODBC and JDBC (via the mxosrvr process) use wide descriptors all 
> the time.
> If we happen to have more statement output columns than there are output 
> descriptor entries, and we are using a wide descriptor, method 
> InputOutputExpr::outputValues will go off the end of the output descriptor, 
> with unpredictable results. We may get an error; we may core.





[jira] [Resolved] (TRAFODION-3261) Improve logsort utility to sort output from GET SCHEMA etc. statements

2019-02-12 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3261.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Improve logsort utility to sort output from GET SCHEMA etc. statements
> --
>
> Key: TRAFODION-3261
> URL: https://issues.apache.org/jira/browse/TRAFODION-3261
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dev-environment
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The logsort utility is used by the developer regression test suite to sort 
> output in a sqlci log file when a SELECT statement lacks an ORDER BY. This is 
> useful because it eliminates false failures in regression testing due to 
> non-determinism of row order.
> The GET SCHEMA and other metadata GET statements do not order their output 
> either. Occasionally changes to the GET implementations cause differences in 
> output order.
> So it would be good if logsort could be improved to sort output from GET 
> statements as well.
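One way to sketch such an extension (hypothetical code; the real logsort is part of the regression test harness) is to buffer the lines between a GET statement and its terminating status line, and emit them sorted:

```python
def sort_get_output(log_lines):
    # Buffer the result lines that follow a metadata GET statement and
    # emit them sorted when the "--- SQL" status line is reached.
    out, block, in_block = [], [], False
    for line in log_lines:
        stripped = line.lstrip('>').strip().lower()
        if stripped.startswith('get '):
            in_block = True
            out.append(line)
        elif in_block and line.startswith('--- SQL'):
            out.extend(sorted(block))
            block, in_block = [], False
            out.append(line)
        elif in_block:
            block.append(line)
        else:
            out.append(line)
    out.extend(sorted(block))
    return out
```

For a log fragment `['>>get schemas;', 'SCH_B', 'SCH_A', '--- SQL operation complete.']`, the schema names come back in sorted order while the statement and status lines stay in place.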





[jira] [Created] (TRAFODION-3273) Cli can go off the end of a wide descriptor

2019-02-07 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3273:
---

 Summary: Cli can go off the end of a wide descriptor
 Key: TRAFODION-3273
 URL: https://issues.apache.org/jira/browse/TRAFODION-3273
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-exe
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


The Trafodion CLI (Call Level Interface) supports two styles of descriptors, 
"wide" and "non-wide". Non-wide descriptors are sufficient for all SQL 
statements except the CALL statement, because for non-CALL statements the set 
of input and output columns is fixed by the semantics of the statement and 
therefore known exactly at compile time.

For non-wide descriptors, method InputOutputExpr::outputValues 
(cli/CliExpExchange.cpp) checks to see if the output descriptor entry count 
matches the statement output column count, and raises an 8893 error if they do 
not match.

Wide descriptors on the other hand are permitted to be wider than the number of 
statement columns. This is needed for CALL statements; stored procedures are 
permitted to produce result sets of arbitrary kinds and sizes that are not 
known at compile time.

Trafodion ODBC and JDBC (via the mxosrvr process) use wide descriptors all the 
time.

If we happen to have more statement output columns than there are output 
descriptor entries, and we are using a wide descriptor, method 
InputOutputExpr::outputValues will go off the end of the output descriptor, 
with unpredictable results. We may get an error; we may core.
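A minimal sketch of the missing bounds check, with invented names standing in for the real CLI types (this is not the actual InputOutputExpr::outputValues code):

```cpp
// Hypothetical sketch of the bounds check outputValues needs: with a wide
// descriptor the entry count may legitimately exceed the statement's output
// column count, but the copy loop must still never index past the end of
// the descriptor.
#include <cassert>
#include <cstddef>

enum RetCode { SUCCESS = 0, ERROR = -1 };

RetCode copyOutputValues(std::size_t stmtOutputCols,
                         std::size_t descEntryCount,
                         bool isWideDescriptor) {
    // Non-wide: counts must match exactly (the existing 8893 error check).
    if (!isWideDescriptor && descEntryCount != stmtOutputCols)
        return ERROR;

    // Wide descriptors may have extra entries; copy only what both sides
    // have, instead of walking off the end of the shorter one.
    std::size_t n = stmtOutputCols < descEntryCount ? stmtOutputCols
                                                    : descEntryCount;
    for (std::size_t i = 0; i < n; ++i) {
        // ... copy statement column i into descriptor entry i ...
    }
    return SUCCESS;
}
```

The key case is the last one: more statement output columns than descriptor entries under a wide descriptor, which previously ran off the end.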



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (TRAFODION-3270) Query that has mismatched datatypes on an IN predicate cores on debug builds

2019-02-06 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3270.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Query that has mismatched datatypes on an IN predicate cores on debug builds
> 
>
> Key: TRAFODION-3270
> URL: https://issues.apache.org/jira/browse/TRAFODION-3270
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The following query causes a core on debug builds:
>  
> >>select object_name from "_MD_".objects 
> +> where object_name in (select table_uid from sb_histograms);
> Aborted (core dumped)
>  
> (Note: I meant to type "object_uid in (select ... )" but I mistyped my query, 
> stumbling into this abend.)
>  
> The stack trace is:
>  
> (gdb) bt
> #0 0x75033207 in raise () from /lib64/libc.so.6
> #1 0x750348f8 in abort () from /lib64/libc.so.6
> #2 0x7fffec1333a6 in GeneratorAbort (
>  file=0x7fffec2dae44 "../generator/GenPreCode.cpp", line=9195, 
>  message=0x7fffec2db658 "source type and target type incompatible in NARROW") 
> at ../generator/Generator.cpp:3057
> #3 0x7fffec1c3631 in Cast::preCodeGen (this=0x7fffcfb84f88, 
>  generator=0x7ffebdf0) at ../generator/GenPreCode.cpp:9194
> #4 0x7fffec14fdb8 in ExpGenerator::generateKeyCast (this=0x7fffcfb70368, 
>  vid=..., dataConversionErrorFlag=0x7fffcfb852f0, desc_flag=0, 
>  tf=ExpTupleDesc::SQLMX_KEY_FORMAT, possibleErrorCount=@0x7ffe767c: 0, 
>  allChosenPredsAreEqualPreds=0, castVarcharToAnsiChar=0)
>  at ../generator/GenExpGenerator.cpp:3338
> #5 0x7fffec150aa8 in ExpGenerator::generateKeyExpr (this=0x7fffcfb70368, 
>  indexKeyColumns=..., val_id_list=..., atp=1, atp_index=2, 
>  dataConversionErrorFlag=0x7fffcfb852f0, tf=ExpTupleDesc::SQLMX_KEY_FORMAT, 
>  keyLen=@0x7ffe79dc: 1, key_expr=0x7ffe79d0, 
>  allChosenPredsAreEqualPreds=0) at ../generator/GenExpGenerator.cpp:3486
> #6 0x7fffec190754 in ExpGenerator::buildKeyInfo (this=0x7fffcfb70368, 
>  keyInfo=0x7ffe8348, generator=0x7ffebdf0, keyColumns=..., 
>  listOfKeyColumns=..., beginKeyPred=..., endKeyPred=..., searchKey=
>  0x7fffcfb6c7f8, mdamKeyPtr=0x0, reverseScan=0, 
>  tf=ExpTupleDesc::SQLMX_KEY_FORMAT) at ../generator/GenKey.cpp:146
> #7 0x7fffec25ae82 in HbaseAccess::codeGen (this=0x7fffcfb6b330, 
>  generator=0x7ffebdf0) at ../generator/GenRelScan.cpp:2468
> #8 0x7fffec1e1025 in ProbeCache::codeGen (this=0x7fffcfb71c30, 
>  generator=0x7ffebdf0) at ../generator/GenProbeCache.cpp:121
> #9 0x7fffec22a771 in NestedJoin::codeGen (this=0x7fffcfb5b7e8, 
>  generator=0x7ffebdf0) at ../generator/GenRelJoin.cpp:3314
> #10 0x7fffec232cbd in RelRoot::codeGen (this=0x7fffcfb4b6f0, 
>  generator=0x7ffebdf0) at ../generator/GenRelMisc.cpp:1256
> #11 0x7fffec12b4cb in Generator::genCode (this=0x7ffebdf0, 
>  source=0x7fffd0ca63f0 "select object_name from \"_MD_\".objects where 
> object_name in (select table_uid from sb_histograms);", 
> expr_node=0x7fffcfb4b6f0)
>  at ../generator/Generator.cpp:572
> #12 0x7fffeeb0827e in CmpMain::compile (this=0x7ffedd60, 
>  input_str=0x7fffd0ca63f0 "select object_name from \"_MD_\".objects where 
> object_name in (select table_uid from sb_histograms);", charset=15, 
>  queryExpr=@0x7ffedbc8: 0x7fffcfb4b6f0, gen_code=0x7fffd0c5d1c8, 
>  gen_code_len=0x7fffd0c5d1c0, heap=0x7fffd0c6bb70, phase=CmpMain::END, 
>  fragmentDir=0x7ffeddf8, op=3004, useQueryCache=CmpMain::NORMAL, 
>  cacheable=0x7ffedbb4, begTime=0x7ffedbd0, shouldLog=0)
>  at ../sqlcomp/CmpMain.cpp:2344
> #13 0x7fffeeb05eb8 in CmpMain::sqlcomp (this=0x7ffedd60, 
>  input_str=0x7fffd0ca63f0 "select object_name from \"_MD_\".objects where 
> object_name in (select table_uid from sb_histograms);", charset=15, 
>  queryExpr=@0x7ffedbc8: 0x7fffcfb4b6f0, gen_code=0x7fffd0c5d1c8, 
>  gen_code_len=0x7fffd0c5d1c0, heap=0x7fffd0c6bb70, phase=CmpMain::END, 
>  fragmentDir=0x7ffeddf8, op=3004, useQueryCache=CmpMain::NORMAL, 
>  cacheable=0x7ffedbb4, begTime=0x7ffedbd0, shouldLog=0)
>  at ../sqlcomp/CmpMain.cpp:1658
> #14 0x7fffeeb037e7 in CmpMain::sqlcomp (this=0x7ffedd60, input=..., 
>  gen_code=0x7fffd0c5d1c8, gen_code_len=0x7fffd0c5d1c0, heap=0x7fffd0c6bb70, 
>  phase=CmpMain::END, fragmentDir=0x7ffeddf8, op=3004, 
>  useQueryCache=CmpMain::NORMAL) at ../sqlcomp/CmpMain.cpp:774
> #15 0x742b496d in CmpStatement::process (this=0x7fffd0ca5c70, 
>  sqltext=...) at ../arkcmp/CmpStatement.cpp:499

[jira] [Commented] (TRAFODION-3270) Query that has mismatched datatypes on an IN predicate cores on debug builds

2019-02-06 Thread David Wayne Birdsall (JIRA)


[ 
https://issues.apache.org/jira/browse/TRAFODION-3270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16761946#comment-16761946
 ] 

David Wayne Birdsall commented on TRAFODION-3270:
-

The logic that handles incompatible types is implemented in the Binder code. 
So, I added a call to BiRelat::bindNode for the key expression, so that this 
logic also runs for the key predicate.
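The essence of the check that binding provides can be illustrated abstractly. The types below are invented stand-ins, not the actual compiler classes (e.g. NAType); the point is only that comparability must be verified at bind time, before the generator ever tries to build a NARROW cast between incomparable types:

```cpp
// Illustrative sketch: a bind-time comparability check. Incompatible pairs
// (e.g. a character column compared to a numeric subquery result, as in
// the query above) should surface as a user-visible bind error rather than
// a generator-time abort.
#include <cassert>

enum TypeClass { CHARACTER, NUMERIC };

struct SqlType {
    TypeClass cls;  // stand-in for the real type descriptor
};

bool isComparable(const SqlType& a, const SqlType& b) {
    return a.cls == b.cls;
}
```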

> Query that has mismatched datatypes on an IN predicate cores on debug builds
> 
>
> Key: TRAFODION-3270
> URL: https://issues.apache.org/jira/browse/TRAFODION-3270
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The following query causes a core on debug builds:
>  
> >>select object_name from "_MD_".objects 
> +> where object_name in (select table_uid from sb_histograms);
> Aborted (core dumped)
>  
> (Note: I meant to type "object_uid in (select ... )" but I mistyped my query, 
> stumbling into this abend.)
>  

[jira] [Created] (TRAFODION-3270) Query that has mismatched datatypes on an IN predicate cores on debug builds

2019-02-04 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3270:
---

 Summary: Query that has mismatched datatypes on an IN predicate 
cores on debug builds
 Key: TRAFODION-3270
 URL: https://issues.apache.org/jira/browse/TRAFODION-3270
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-cmp
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


The following query causes a core on debug builds:
 
>>select object_name from "_MD_".objects 
+> where object_name in (select table_uid from sb_histograms);
Aborted (core dumped)
 
(Note: I meant to type "object_uid in (select ... )" but I mistyped my query, 
stumbling into this abend.)
 
The stack trace is:
 
(gdb) bt
#0 0x75033207 in raise () from /lib64/libc.so.6
#1 0x750348f8 in abort () from /lib64/libc.so.6
#2 0x7fffec1333a6 in GeneratorAbort (
 file=0x7fffec2dae44 "../generator/GenPreCode.cpp", line=9195, 
 message=0x7fffec2db658 "source type and target type incompatible in NARROW") 
at ../generator/Generator.cpp:3057
#3 0x7fffec1c3631 in Cast::preCodeGen (this=0x7fffcfb84f88, 
 generator=0x7ffebdf0) at ../generator/GenPreCode.cpp:9194
#4 0x7fffec14fdb8 in ExpGenerator::generateKeyCast (this=0x7fffcfb70368, 
 vid=..., dataConversionErrorFlag=0x7fffcfb852f0, desc_flag=0, 
 tf=ExpTupleDesc::SQLMX_KEY_FORMAT, possibleErrorCount=@0x7ffe767c: 0, 
 allChosenPredsAreEqualPreds=0, castVarcharToAnsiChar=0)
 at ../generator/GenExpGenerator.cpp:3338
#5 0x7fffec150aa8 in ExpGenerator::generateKeyExpr (this=0x7fffcfb70368, 
 indexKeyColumns=..., val_id_list=..., atp=1, atp_index=2, 
 dataConversionErrorFlag=0x7fffcfb852f0, tf=ExpTupleDesc::SQLMX_KEY_FORMAT, 
 keyLen=@0x7ffe79dc: 1, key_expr=0x7ffe79d0, 
 allChosenPredsAreEqualPreds=0) at ../generator/GenExpGenerator.cpp:3486
#6 0x7fffec190754 in ExpGenerator::buildKeyInfo (this=0x7fffcfb70368, 
 keyInfo=0x7ffe8348, generator=0x7ffebdf0, keyColumns=..., 
 listOfKeyColumns=..., beginKeyPred=..., endKeyPred=..., searchKey=
 0x7fffcfb6c7f8, mdamKeyPtr=0x0, reverseScan=0, 
 tf=ExpTupleDesc::SQLMX_KEY_FORMAT) at ../generator/GenKey.cpp:146
#7 0x7fffec25ae82 in HbaseAccess::codeGen (this=0x7fffcfb6b330, 
 generator=0x7ffebdf0) at ../generator/GenRelScan.cpp:2468
#8 0x7fffec1e1025 in ProbeCache::codeGen (this=0x7fffcfb71c30, 
 generator=0x7ffebdf0) at ../generator/GenProbeCache.cpp:121
#9 0x7fffec22a771 in NestedJoin::codeGen (this=0x7fffcfb5b7e8, 
 generator=0x7ffebdf0) at ../generator/GenRelJoin.cpp:3314
#10 0x7fffec232cbd in RelRoot::codeGen (this=0x7fffcfb4b6f0, 
 generator=0x7ffebdf0) at ../generator/GenRelMisc.cpp:1256
#11 0x7fffec12b4cb in Generator::genCode (this=0x7ffebdf0, 
 source=0x7fffd0ca63f0 "select object_name from \"_MD_\".objects where 
object_name in (select table_uid from sb_histograms);", 
expr_node=0x7fffcfb4b6f0)
 at ../generator/Generator.cpp:572
#12 0x7fffeeb0827e in CmpMain::compile (this=0x7ffedd60, 
 input_str=0x7fffd0ca63f0 "select object_name from \"_MD_\".objects where 
object_name in (select table_uid from sb_histograms);", charset=15, 
 queryExpr=@0x7ffedbc8: 0x7fffcfb4b6f0, gen_code=0x7fffd0c5d1c8, 
 gen_code_len=0x7fffd0c5d1c0, heap=0x7fffd0c6bb70, phase=CmpMain::END, 
 fragmentDir=0x7ffeddf8, op=3004, useQueryCache=CmpMain::NORMAL, 
 cacheable=0x7ffedbb4, begTime=0x7ffedbd0, shouldLog=0)
 at ../sqlcomp/CmpMain.cpp:2344
#13 0x7fffeeb05eb8 in CmpMain::sqlcomp (this=0x7ffedd60, 
 input_str=0x7fffd0ca63f0 "select object_name from \"_MD_\".objects where 
object_name in (select table_uid from sb_histograms);", charset=15, 
 queryExpr=@0x7ffedbc8: 0x7fffcfb4b6f0, gen_code=0x7fffd0c5d1c8, 
 gen_code_len=0x7fffd0c5d1c0, heap=0x7fffd0c6bb70, phase=CmpMain::END, 
 fragmentDir=0x7ffeddf8, op=3004, useQueryCache=CmpMain::NORMAL, 
 cacheable=0x7ffedbb4, begTime=0x7ffedbd0, shouldLog=0)
 at ../sqlcomp/CmpMain.cpp:1658
#14 0x7fffeeb037e7 in CmpMain::sqlcomp (this=0x7ffedd60, input=..., 
 gen_code=0x7fffd0c5d1c8, gen_code_len=0x7fffd0c5d1c0, heap=0x7fffd0c6bb70, 
 phase=CmpMain::END, fragmentDir=0x7ffeddf8, op=3004, 
 useQueryCache=CmpMain::NORMAL) at ../sqlcomp/CmpMain.cpp:774
#15 0x742b496d in CmpStatement::process (this=0x7fffd0ca5c70, 
 sqltext=...) at ../arkcmp/CmpStatement.cpp:499
#16 0x742a0455 in CmpContext::compileDirect (this=0x7fffe23b1090, 
 data=0x77e961b0 "h", data_len=208, outHeap=0x77ef0cf0, charset=15, 
 op=CmpMessageObj::SQLTEXT_COMPILE, gen_code=@0x7ffee290: 0x0, 
 gen_code_len=@0x7ffee29c: 0, parserFlags=0, parentQid=0x0, 
 parentQidLen=0, diagsArea=@0x7ffee268: 0x0)
 at ../arkcmp/CmpContext.cpp:829
#17 0x74d60c20 in Statement::prepare2 (this=0x77e81f40, 
 source=0x77e76f08 "select object_name from \"_MD_\".objects 

[jira] [Resolved] (TRAFODION-3266) UPDATE STATISTICS should use SKIP CONFLICT ACCESS when accessing the base table

2019-01-31 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3266.
-
   Resolution: Fixed
Fix Version/s: 2.4

> UPDATE STATISTICS should use SKIP CONFLICT ACCESS when accessing the base 
> table
> ---
>
> Key: TRAFODION-3266
> URL: https://issues.apache.org/jira/browse/TRAFODION-3266
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-cmu
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Today, UPDATE STATISTICS uses READ UNCOMMITTED ACCESS when accessing the base 
> table. Unfortunately, our engine maps this to READ COMMITTED ACCESS. In a 
> scenario where data ingest happens continuously, this may lead to transaction 
> conflicts between the full table scan done by UPDATE STATISTICS and data 
> ingest transactions that wish to commit.
> By changing UPDATE STATISTICS to use SKIP CONFLICT ACCESS, we can avoid such 
> conflicts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (TRAFODION-3266) UPDATE STATISTICS should use SKIP CONFLICT ACCESS when accessing the base table

2019-01-29 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3266:
---

 Summary: UPDATE STATISTICS should use SKIP CONFLICT ACCESS when 
accessing the base table
 Key: TRAFODION-3266
 URL: https://issues.apache.org/jira/browse/TRAFODION-3266
 Project: Apache Trafodion
  Issue Type: Improvement
  Components: sql-cmu
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


Today, UPDATE STATISTICS uses READ UNCOMMITTED ACCESS when accessing the base 
table. Unfortunately, our engine maps this to READ COMMITTED ACCESS. In a 
scenario where data ingest happens continuously, this may lead to transaction 
conflicts between the full table scan done by UPDATE STATISTICS and data ingest 
transactions that wish to commit.

By changing UPDATE STATISTICS to use SKIP CONFLICT ACCESS, we can avoid such 
conflicts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (TRAFODION-3261) Improve logsort utility to sort output from GET SCHEMA etc. statements

2019-01-22 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3261:
---

 Summary: Improve logsort utility to sort output from GET SCHEMA 
etc. statements
 Key: TRAFODION-3261
 URL: https://issues.apache.org/jira/browse/TRAFODION-3261
 Project: Apache Trafodion
  Issue Type: Bug
  Components: dev-environment
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


The logsort utility is used by the developer regression test suite to sort 
output in a sqlci log file when a SELECT statement lacks an ORDER BY. This is 
useful because it eliminates false failures in regression testing due to 
non-determinism of row order.

The GET SCHEMA and other metadata GET statements do not order their output 
either. Occasionally changes to the GET implementations cause differences in 
output order.

So it would be good if logsort could be improved to sort output from GET 
statements as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (TRAFODION-3256) Missing error return in Statement::doHiveTableSimCheck

2019-01-16 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3256.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Missing error return in Statement::doHiveTableSimCheck
> --
>
> Key: TRAFODION-3256
> URL: https://issues.apache.org/jira/browse/TRAFODION-3256
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/trafodion/pull/1766] rewrote much of 
> Statement::doHiveTableSimCheck (cli/Statement.cpp). In one of the error 
> cases, an 8577 error is raised but the method still returns SUCCESS. That can 
> cause an infinite loop in the caller, Statement::execute. This error path 
> should contain a "return ERROR" statement.
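A minimal illustration (invented names, not the real Statement::doHiveTableSimCheck code) of why the missing return matters: if the error path falls through to SUCCESS, the caller's retry logic in Statement::execute never sees the failure and can spin forever.

```cpp
// Hypothetical sketch of the fixed error path. Raising the diagnostic and
// returning ERROR must go together; raising it and then returning SUCCESS
// is what caused the infinite loop.
#include <cassert>

enum RetCode { SUCCESS = 0, ERROR = -1 };

RetCode doSimCheck(bool timestampMismatch, int& diagsCount) {
    if (timestampMismatch) {
        ++diagsCount;   // stands in for raising error 8577 into the diags area
        return ERROR;   // the fix: without this line we fall through to SUCCESS
    }
    return SUCCESS;
}
```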



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (TRAFODION-3257) SPLIT_PART behavior for null values is inconsistent

2019-01-15 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3257:
---

 Summary: SPLIT_PART behavior for null values is inconsistent
 Key: TRAFODION-3257
 URL: https://issues.apache.org/jira/browse/TRAFODION-3257
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-cmp
Affects Versions: 2.4
Reporter: David Wayne Birdsall


The following script demonstrates the inconsistencies:

>>drop table if exists temp1;

--- SQL operation complete.
>>create table temp1(a int not null, b char(40), c char(1), d int);

--- SQL operation complete.
>>insert into temp1 values (1,'hi there!','i',1),
+> (2,null,'i',1),
+> (3,'hi there!',null,1),
+> (4,'hi there!','i',null);

--- 4 row(s) inserted.
>>
>>select split_part('hi there!','i',1) from dual;

(EXPR) 
-

h

--- 1 row(s) selected.
>>
>>select split_part(null,'i',1) from dual;

*** ERROR[4097] A NULL operand is not allowed in function (SPLIT_PART(NULL, 
'i', 1)).

*** ERROR[8822] The statement was not prepared.

>>
>>select split_part('hi there!',null,1) from dual;

*** ERROR[4097] A NULL operand is not allowed in function (SPLIT_PART('hi 
there!', NULL, 1)).

*** ERROR[8822] The statement was not prepared.

>>
>>select split_part('hi there!','i',null) from dual;

*** ERROR[4097] A NULL operand is not allowed in function (SPLIT_PART('hi 
there!', 'i', NULL)).

*** ERROR[8822] The statement was not prepared.

>>
>>select a,split_part(b,c,d) from temp1;

A (EXPR) 
--- 

1 h
2 ?
3 ?
4 ?

--- 4 row(s) selected.
>>

So, if NULL is explicitly specified, we get error 4097. But if a null value is 
passed via a column, split_part returns null. 

Almost all functions in Trafodion SQL return null in both cases. So I think the 
error 4097 behavior is incorrect; instead the statement should compile and 
split_part should return null.
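The proposed null-tolerant semantics can be sketched like this. This is an illustration following PostgreSQL-style SPLIT_PART behavior, not Trafodion's implementation; std::optional stands in for SQL nullability:

```cpp
// Sketch of the proposed semantics: NULL in any operand yields NULL out
// (matching the column case above), instead of a compile-time error when a
// literal NULL appears.
#include <cassert>
#include <optional>
#include <string>

std::optional<std::string> splitPart(const std::optional<std::string>& text,
                                     const std::optional<std::string>& delim,
                                     std::optional<int> field) {
    if (!text || !delim || !field)
        return std::nullopt;  // NULL in -> NULL out

    const std::string& s = *text;
    std::size_t start = 0;
    for (int i = 1; ; ++i) {
        std::size_t pos = s.find(*delim, start);
        if (i == *field)  // found the requested field
            return s.substr(start,
                            pos == std::string::npos ? pos : pos - start);
        if (pos == std::string::npos)
            return std::string("");  // requested field is past the last one
        start = pos + delim->size();
    }
}
```

With this behavior, both `split_part(null,'i',1)` and the column case `split_part(b,c,d)` with a null row would return null, making the two cases consistent.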



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (TRAFODION-3256) Missing error return in Statement::doHiveTableSimCheck

2019-01-15 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on TRAFODION-3256 started by David Wayne Birdsall.
---
> Missing error return in Statement::doHiveTableSimCheck
> --
>
> Key: TRAFODION-3256
> URL: https://issues.apache.org/jira/browse/TRAFODION-3256
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/trafodion/pull/1766] rewrote much of 
> Statement::doHiveTableSimCheck (cli/Statement.cpp). In one of the error 
> cases, an 8577 error is raised but the method still returns SUCCESS. That can 
> cause an infinite loop in the caller, Statement::execute. This error path 
> should contain a "return ERROR" statement.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (TRAFODION-3256) Missing error return in Statement::doHiveTableSimCheck

2019-01-15 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3256:
---

 Summary: Missing error return in Statement::doHiveTableSimCheck
 Key: TRAFODION-3256
 URL: https://issues.apache.org/jira/browse/TRAFODION-3256
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-exe
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


[https://github.com/apache/trafodion/pull/1766] rewrote much of 
Statement::doHiveTableSimCheck (cli/Statement.cpp). In one of the error cases, 
an 8577 error is raised but the method still returns SUCCESS. That can cause an 
infinite loop in the caller, Statement::execute. This error path should contain 
a "return ERROR" statement.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (TRAFODION-3237) Incorrect results for INTERVAL addition in an uncommon case

2018-12-12 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3237.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Incorrect results for INTERVAL addition in an uncommon case
> ---
>
> Key: TRAFODION-3237
> URL: https://issues.apache.org/jira/browse/TRAFODION-3237
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.4
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>
> The following script illustrates the problem:
> {quote}?section ddl
> drop table if exists mytableF_base;
> create table mytableF_base (
> val int,
> "c9" interval day(7) to second default NULL,
> "c10" interval day(13) to second(0) default NULL,
> "key c10" interval day(13) to second(0) not null default interval 
> '1:01:11:11' day(13) to second(0),
> unique ("key c10")
> );
> ?section populateit
> insert into mytableF_base values (
> 0,
> interval '0 00:00:00' day(7) to second,
> interval '0 00:00:00' day(13) to second(0),
> interval '0 00:00:00' day(13) to second(0));
> insert into mytableF_base 
> select 1, 
> "c9" + interval '1' day,
> "c10" + interval '1' day, -- gives incorrect result 
> "key c10" + interval '1' day -- gives incorrect result 
> from mytableF_base
> where val = 0;
> insert into mytableF_base 
> select 23, 
> "c9" + interval '1' day,
> "c10" + interval '2' day, -- gives correct result
> "key c10" + interval '3' day -- gives correct result
> from mytableF_base
> where val = 0;
> select "c10","key c10" from mytableF_base;
> {quote}
> When run, the last SELECT gives:
> {quote}>>select "c10","key c10" from mytableF_base;
> c10 key c10 
> --- ---
> 0 00:00:00 0 00:00:00
>  100 00:00:00 100 00:00:00
>  2 00:00:00 3 00:00:00
> --- 3 row(s) selected.
> >>exit;
> {quote}
> The second line is incorrect; the values are off by a factor of one million.
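One plausible way an off-by-10^6 of this kind arises, sketched under the assumption that a DAY TO SECOND interval is stored as an integer count of units of 10^-fp seconds, where fp is the type's fractional-seconds precision. The helper below is illustrative, not Trafodion's actual representation: adding `interval '1' day` requires rescaling the literal into the target's units, and using fp 6 (the default SECOND precision) where the target has fp 0 inflates the addend by exactly one million.

```cpp
// Illustrative scaling helper for DAY TO SECOND intervals assumed to be
// stored as integer counts of 10^-fp seconds.
#include <cassert>
#include <cstdint>

int64_t pow10(int n) {
    int64_t r = 1;
    while (n-- > 0) r *= 10;
    return r;
}

// Convert a count of days into the internal units of an interval type
// with fractional-seconds precision fp. Using the wrong fp here is the
// kind of mistake that yields a result off by a factor of one million.
int64_t daysToInternal(int64_t days, int fp) {
    return days * 24 * 60 * 60 * pow10(fp);
}
```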



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (TRAFODION-3243) Dereference of deallocated NAString in UPDATE STATISTICS can cause cores

2018-12-05 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3243:
---

 Summary: Dereference of deallocated NAString in UPDATE STATISTICS 
can cause cores
 Key: TRAFODION-3243
 URL: https://issues.apache.org/jira/browse/TRAFODION-3243
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-cmp
Affects Versions: 2.3, 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


In ustat/hs_globals.cpp, the HSColGroupStruct destructor deletes the colNames 
member (an NAString), then calls HSColGroupStruct::freeISMemory. If logging is 
on (which is the default nowadays), the latter method dereferences colNames, 
calling its data() method. NAString::data() adds a null terminator to the end 
of its string; on a deleted NAString, this can corrupt a heap boundary tag. 
This seems to happen only when the original string did not fit into the small_ 
buffer of the NAString (which happens when column names are sufficiently long, 
e.g. for multi-column histograms), and then only rarely.

The fix is to defer deleting colNames until after freeISMemory is called.
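The bug pattern, reduced to a generic illustration (invented names, not the actual ustat classes): a member is deleted first and then dereferenced by a later cleanup routine, so the fix is simply to run the cleanup that still reads the member before deleting it.

```cpp
// Generic illustration of the destructor-ordering fix: any routine that
// still reads a member must run before that member is deleted.
#include <cassert>
#include <string>

struct GroupStruct {
    std::string* colNames = new std::string("T.A, T.B");
    bool logged = false;

    void freeISMemory() {
        // The logging path reads colNames; it must run while the string
        // is still alive.
        logged = !colNames->empty();
    }

    ~GroupStruct() {
        freeISMemory();   // fixed order: cleanup that reads colNames first...
        delete colNames;  // ...then delete it (the bug was the reverse order)
    }
};
```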



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (TRAFODION-3237) Incorrect results for INTERVAL addition in an uncommon case

2018-11-26 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3237:
---

 Summary: Incorrect results for INTERVAL addition in an uncommon 
case
 Key: TRAFODION-3237
 URL: https://issues.apache.org/jira/browse/TRAFODION-3237
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-cmp
Affects Versions: 2.4
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


The following script illustrates the problem:
{quote}?section ddl

drop table if exists mytableF_base;

create table mytableF_base (
val int,
"c9" interval day(7) to second default NULL,
"c10" interval day(13) to second(0) default NULL,
"key c10" interval day(13) to second(0) not null default interval 
'1:01:11:11' day(13) to second(0),
unique ("key c10")
);


?section populateit

insert into mytableF_base values (
0,
interval '0 00:00:00' day(7) to second,
interval '0 00:00:00' day(13) to second(0),
interval '0 00:00:00' day(13) to second(0));

insert into mytableF_base 
select 1, 
"c9" + interval '1' day,
"c10" + interval '1' day, -- gives incorrect result 
"key c10" + interval '1' day -- gives incorrect result 
from mytableF_base
where val = 0;

insert into mytableF_base 
select 23, 
"c9" + interval '1' day,
"c10" + interval '2' day, -- gives correct result
"key c10" + interval '3' day -- gives correct result
from mytableF_base
where val = 0;

select "c10","key c10" from mytableF_base;
{quote}
When run, the last SELECT gives:
{quote}>>select "c10","key c10" from mytableF_base;

c10 key c10 
--- ---

0 00:00:00 0 00:00:00
 100 00:00:00 100 00:00:00
 2 00:00:00 3 00:00:00

--- 3 row(s) selected.
>>exit;
{quote}
The second line is incorrect; the values are off by a factor of one million.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (TRAFODION-3223) Row count estimation code works poorly on time-ordered aged-out data

2018-10-18 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall resolved TRAFODION-3223.
-
   Resolution: Fixed
Fix Version/s: 2.4

> Row count estimation code works poorly on time-ordered aged-out data
> 
>
> Key: TRAFODION-3223
> URL: https://issues.apache.org/jira/browse/TRAFODION-3223
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: any
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.4
>
>
> The estimateRowCountBody method in module HBaseClient.java samples cells from 
> the first 500 rows from the first HFile it sees in order to estimate the 
> number of rows in a Trafodion table. If the table happens to have a 
> time-ordered key, and data are aged out over time, we can get large clumps of 
> "delete" tombstones in one or more HFiles. If estimateRowCountBody happens to 
> look at such an HFile, it will incorrectly conclude that most cells are 
> "delete" tombstones and therefore drastically underestimate the row count.
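One possible mitigation, sketched under the assumptions above (illustrative names and structure, not the actual HBaseClient.java logic, and in C++ for consistency with the other sketches here): sample from every HFile and pool the counts, so a single tombstone-heavy file cannot dominate the live-cell ratio.

```cpp
// Sketch: derive the live-row estimate from pooled per-HFile samples
// rather than from the first 500 rows of a single HFile.
#include <cassert>
#include <cstdint>
#include <vector>

struct HFileSample {
    int64_t sampledPuts;     // live cells seen in this file's sample
    int64_t sampledDeletes;  // "delete" tombstone cells seen in the sample
    int64_t totalEntries;    // entry count from the file's metadata
};

int64_t estimateRows(const std::vector<HFileSample>& files,
                     int64_t cellsPerRow) {
    int64_t puts = 0, dels = 0, entries = 0;
    for (const HFileSample& f : files) {
        puts    += f.sampledPuts;
        dels    += f.sampledDeletes;
        entries += f.totalEntries;
    }
    if (puts + dels == 0 || cellsPerRow == 0)
        return 0;
    // Fraction of live cells across *all* sampled files, applied to the
    // aggregate entry count.
    double liveFraction = double(puts) / double(puts + dels);
    return int64_t(entries * liveFraction) / cellsPerRow;
}
```

With pooled samples, a file that is all tombstones merely lowers the ratio in proportion to its share of the data, instead of driving the whole estimate toward zero.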



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (TRAFODION-3223) Row count estimation code works poorly on time-ordered aged-out data

2018-10-16 Thread David Wayne Birdsall (JIRA)
David Wayne Birdsall created TRAFODION-3223:
---

 Summary: Row count estimation code works poorly on time-ordered 
aged-out data
 Key: TRAFODION-3223
 URL: https://issues.apache.org/jira/browse/TRAFODION-3223
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-cmp
Affects Versions: any
Reporter: David Wayne Birdsall
Assignee: David Wayne Birdsall


The estimateRowCountBody method in module HBaseClient.java samples cells from 
the first 500 rows from the first HFile it sees in order to estimate the number 
of rows in a Trafodion table. If the table happens to have a time-ordered key, 
and data are aged out over time, we can get large clumps of "delete" tombstones 
in one or more HFiles. If estimateRowCountBody happens to look at such an 
HFile, it will incorrectly conclude that most cells are "delete" tombstones and 
therefore drastically underestimate the row count.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (TRAFODION-3153) CREATE EXTERNAL TABLE doesn't report error if Hive table already exists

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3153.
---

> CREATE EXTERNAL TABLE doesn't report error if Hive table already exists
> ---
>
> Key: TRAFODION-3153
> URL: https://issues.apache.org/jira/browse/TRAFODION-3153
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The following session demonstrates the problem. The second CREATE EXTERNAL 
> TABLE statement should fail and report that the object already exists, but 
> instead it reports success.
> {quote}
> >>drop external table if exists hive.hive.test1;
> --- SQL operation complete.
> >>
> >>create external table hive.hive.test1
> +>( a date, b char(20) )
> +>row format delimited fields terminated by '|' 
> +>location '/user/trafodion/hive/exttables/test1';
> --- SQL operation complete.
> >>
> >>select count(*) from hive.hive.test1;
> (EXPR) 
> 
> 0
> --- 1 row(s) selected.
> >>
> >>-- should fail with error 1387, but reports success
> >>create external table hive.hive.test1
> +>( a date, b char(20) )
> +>row format delimited fields terminated by '|' 
> +>location '/user/trafodion/hive/exttables/test1';
> --- SQL operation complete.
> >>exit;
> End of MXCI Session
> {quote}
>  





[jira] [Closed] (TRAFODION-3152) UPDATE STATISTICS ... CREATE SAMPLE failure on Hive table

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3152.
---

> UPDATE STATISTICS ... CREATE SAMPLE failure on Hive table
> -
>
> Key: TRAFODION-3152
> URL: https://issues.apache.org/jira/browse/TRAFODION-3152
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Minor
> Fix For: 2.3
>
>
> On a fresh Trafodion instance where the "_HIVESTATS_" schema does not yet 
> exist, if one does an UPDATE STATISTICS ... CREATE SAMPLE (or REMOVE SAMPLE) 
> on a Hive table, it fails with an ugly error, as the following example shows:
> {quote}>>get schemas;
>  
> Schemas in Catalog TRAFODION
> 
>  
> SEABASE
> _LIBMGR_
> _MD_
> _PRIVMGR_MD_
> _REPOS_
>  
> --- SQL operation complete.
> >>update statistics for table hive.hive.customer create sample random 10 
> >>percent;
>  
> *** ERROR[4082] Object TRAFODION."_HIVESTATS_".SB_PERSISTENT_SAMPLES does not 
> exist or is inaccessible.
>  
> *** ERROR[1003] Schema TRAFODION._HIVESTATS_ does not exist.
>  
> *** ERROR[8839] Transaction was aborted.
>  
> *** ERROR[9200] UPDATE STATISTICS for table HIVE.HIVE.CUSTOMER encountered an 
> error (8609) from statement Process_Query.
>  
> *** ERROR[8609] Waited rollback performed without starting a transaction.
>  
> --- SQL operation failed with errors.
> >>update statistics for table hive.hive.customer remove sample;
>  
> *** ERROR[4082] Object TRAFODION."_HIVESTATS_".SB_PERSISTENT_SAMPLES does not 
> exist or is inaccessible.
>  
> *** ERROR[1003] Schema TRAFODION._HIVESTATS_ does not exist.
>  
> *** ERROR[8839] Transaction was aborted.
>  
> *** ERROR[9200] UPDATE STATISTICS for table HIVE.HIVE.CUSTOMER encountered an 
> error (8609) from statement Process_Query.
>  
> *** ERROR[8609] Waited rollback performed without starting a transaction.
>  
> --- SQL operation failed with errors.
> {quote}
> There is a simple work-around: perform a normal UPDATE STATISTICS on any 
> Hive table, and Trafodion will create the "_HIVESTATS_" schema under the 
> covers. Alternatively, perform a REGISTER command for any Hive table; again 
> Trafodion will create "_HIVESTATS_" under the covers. The CREATE SAMPLE / 
> REMOVE SAMPLE commands will then work.
> To reproduce, you need a fresh Trafodion instance (e.g. created using 
> INITIALIZE TRAFODION).





[jira] [Closed] (TRAFODION-3138) Remove obsolete code from UPDATE STATISTICS

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3138.
---

> Remove obsolete code from UPDATE STATISTICS
> ---
>
> Key: TRAFODION-3138
> URL: https://issues.apache.org/jira/browse/TRAFODION-3138
> Project: Apache Trafodion
>  Issue Type: Improvement
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> UPDATE STATISTICS contains code from the predecessor product that uses static 
> SQL for certain operations. Trafodion does not support static SQL, so this 
> code can be removed.





[jira] [Closed] (TRAFODION-3140) Apache trafodion main page news updates

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3140.
---

> Apache trafodion main page news updates
> ---
>
> Key: TRAFODION-3140
> URL: https://issues.apache.org/jira/browse/TRAFODION-3140
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: website
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> On the Apache Trafodion web site main page [https://trafodion.apache.org/,] 
> the QQ number should be updated to 233105278. (This change needs to be made 
> to the manuals as well).
> The news on the web site is out-of-date. Release 2.2 has been released, and 
> we are now working on 2.3.





[jira] [Closed] (TRAFODION-3135) Remove some obsolete cruft from the SQL parser and related modules

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3135.
---

> Remove some obsolete cruft from the SQL parser and related modules
> --
>
> Key: TRAFODION-3135
> URL: https://issues.apache.org/jira/browse/TRAFODION-3135
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> Modules parser/ParKeyWords.cpp and common/ComResWords.cpp contain code that 
> is specific to SQL/MP, a predecessor product. That code is obsolete. Removing 
> this code will make parser maintenance a bit simpler.





[jira] [Closed] (TRAFODION-3131) Use of reserved words as names fails in many places

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3131.
---

> Use of reserved words as names fails in many places
> ---
>
> Key: TRAFODION-3131
> URL: https://issues.apache.org/jira/browse/TRAFODION-3131
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
> Attachments: jira.sql.txt
>
>
> It should be possible to use a reserved word (e.g., "YEAR") as an identifier 
> if it is made into a delimited identifier (that is, upper-cased and 
> surrounded with double-quotes). The sqlci session below demonstrates several 
> examples where this fails:
> {quote}>>obey jira.sql;
> >>-- script to reproduce various problems with reserved words 
> >>
> >>?section tablename
> >>
> >>-- reserved word as a table name fails in various places
> >>
> >>create table "DELETE" (c1 int);
> --- SQL operation complete.
> >>invoke "DELETE";
> *** ERROR[15001] A syntax error occurred at or before: 
> TABLE TRAFODION.SCH.DELETE;
>  ^ (26 characters from start of SQL statement)
> *** ERROR[15001] A syntax error occurred at or before: 
> TABLE ;
>  ^ (7 characters from start of SQL statement)
> *** ERROR[8822] The statement was not prepared.
> >>showddl "DELETE";
> CREATE TABLE TRAFODION.SCH.DELETE
>  (
>  C1 INT DEFAULT NULL NOT SERIALIZED
>  )
>  ATTRIBUTES ALIGNED FORMAT
> ;
> -- GRANT SELECT, INSERT, DELETE, UPDATE, REFERENCES ON TRAFODION.SCH.DELETE 
> TO DB__ROOT WITH GRANT OPTION;
> --- SQL operation complete.
> >>insert into "DELETE" values (1);
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::checkAndInsertRow returned error HBASE_ACCESS_ERROR(-706). 
> Cause: java.lang.IllegalArgumentException: Table qualifier must not be empty
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:179)
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:149)
> org.apache.hadoop.hbase.TableName.<init>(TableName.java:322)
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:358)
> org.apache.hadoop.hbase.TableName.valueOf(TableName.java:418)
> org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:274)
> org.apache.hadoop.hbase.client.transactional.TransactionalTable.<init>(TransactionalTable.java:138)
> org.apache.hadoop.hbase.client.transactional.RMInterface.<init>(RMInterface.java:159)
> org.trafodion.sql.HTableClient.init(HTableClient.java:348)
> org.trafodion.sql.HBaseClient.getHTableClient(HBaseClient.java:1008)
> org.trafodion.sql.HBaseClient.insertRow(HBaseClient.java:1964).
> --- 0 row(s) inserted.
> >>delete from "DELETE" where c1=1;
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::scanOpen returned error HBASE_OPEN_ERROR(-704). Cause: 
> java.lang.IllegalArgumentException: Table qualifier must not be empty
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:179)
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:149)
> org.apache.hadoop.hbase.TableName.<init>(TableName.java:322)
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:358)
> org.apache.hadoop.hbase.TableName.valueOf(TableName.java:418)
> org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:274)
> org.apache.hadoop.hbase.client.transactional.TransactionalTable.<init>(TransactionalTable.java:138)
> org.apache.hadoop.hbase.client.transactional.RMInterface.<init>(RMInterface.java:159)
> org.trafodion.sql.HTableClient.init(HTableClient.java:348)
> org.trafodion.sql.HBaseClient.getHTableClient(HBaseClient.java:1008).
> --- 0 row(s) deleted.
> >>update "DELETE" set c1=2 where c1=1;
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::scanOpen returned error HBASE_OPEN_ERROR(-704). Cause: 
> java.lang.IllegalArgumentException: Table qualifier must not be empty
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:179)
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:149)
> org.apache.hadoop.hbase.TableName.<init>(TableName.java:322)
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:358)
> org.apache.hadoop.hbase.TableName.valueOf(TableName.java:418)
> org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:274)
> org.apache.hadoop.hbase.client.transactional.TransactionalTable.<init>(TransactionalTable.java:138)
> org.apache.hadoop.hbase.client.transactional.RMInterface.<init>(RMInterface.java:159)
> org.trafodion.sql.HTableClient.init(HTableClient.java:348)
> org.trafodion.sql.HBaseClient.getHTableClient(HBaseClient.java:1008).
> --- 0 row(s) updated.
> >>select * 

[jira] [Closed] (TRAFODION-3128) INTERVAL SECOND(m,n) in multi-column key results in 6003 warnings

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3128.
---

> INTERVAL SECOND(m,n) in multi-column key results in 6003 warnings
> -
>
> Key: TRAFODION-3128
> URL: https://issues.apache.org/jira/browse/TRAFODION-3128
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The following script reproduces the problem:
> drop table if exists tproblem;
> create table tproblem (a interval second(2,6) not null, b int not null, c int,
>  primary key (a,b));
> insert into tproblem values (interval '12.345678' second(2,6),7,7),
>  (interval '23.456789' second(2,6),8,8);
> update statistics for table tproblem on every column;
> -- give RMS a little time to invalidate cached stats for tproblem
> sh sleep 20;
> select * from tproblem where b = 8;
> When run, the select statement gets a 6003 warning:
> >>select * from tproblem where b = 8;
> *** WARNING[6003] The metadata table HISTOGRAM_INTERVALS contains invalid 
> boundary value (INTERVAL '12.345678' SECOND(2, 6),7) for column 
> TRAFODION.SCH.TPROBLEM.A. If you have manually modified the metadata table, 
> then you should undo your changes using the CLEAR option in UPDATE STATISTICS 
> and regenerate the statistics.
> A B C 
> -- --- ---
> 23.456789 8 8
> --- 1 row(s) selected.
> >>
> The warning happens because EncodedValue::constructorFunction 
> (optimizer/EncodedValue.cpp) is not handling the INTERVAL literal in the 
> histogram intervals correctly.
>  





[jira] [Closed] (TRAFODION-3088) Multi-value BETWEEN on interval key column gives wrong result

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3088.
---

> Multi-value BETWEEN on interval key column gives wrong result
> -
>
> Key: TRAFODION-3088
> URL: https://issues.apache.org/jira/browse/TRAFODION-3088
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The sqlci session below demonstrates the problem. When we do a multi-column 
> BETWEEN where one of the columns is an INTERVAL key column and the values in 
> the BETWEEN predicate for that column are equal, we get a data conversion 
> error.
> {quote}>>obey jira.sql;
> >>?section setup
> >>
> >>drop table if exists t1;
> --- SQL operation complete.
> >>
> >>create table t1 ( a interval second not null,
> +> b int,
> +> primary key (a) );
> --- SQL operation complete.
> >>
> >>insert into table t1 values ( interval '41' second, 41 );
> --- 1 row(s) inserted.
> >>
> >>?section testit
> >>
> >>select * from t1;
> A B 
> -- ---
> 41.00 41
> --- 1 row(s) selected.
> >>
> >>select * from t1 where a = interval '41' second;
> A B 
> -- ---
> 41.00 41
> --- 1 row(s) selected.
> >>
> >>-- fails with a data conversion error
> >>select * from t1 where (a,b) between 
> +> (interval '41' second, 40) and (interval '41' second, 42);
> *** ERROR[8413] The string argument contains characters that cannot be 
> converted. Source data(in hex): 494e
> --- 0 row(s) selected.
> >>
> >>select * from t1 where a = interval '41' second
> +> and b between 40 and 42;
> A B 
> -- ---
> 41.00 41
> --- 1 row(s) selected.
> >>
> >>log;
> {quote}





[jira] [Closed] (TRAFODION-2926) Link to JIRA should use https:

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-2926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-2926.
---

> Link to JIRA should use https:
> --
>
> Key: TRAFODION-2926
> URL: https://issues.apache.org/jira/browse/TRAFODION-2926
> Project: Apache Trafodion
>  Issue Type: Improvement
>Reporter: Sebb
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> As the subject says.





[jira] [Closed] (TRAFODION-3083) Trafodion compiles fail on CentOS7

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3083.
---

> Trafodion compiles fail on CentOS7
> --
>
> Key: TRAFODION-3083
> URL: https://issues.apache.org/jira/browse/TRAFODION-3083
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dev-environment
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The "make all" step when building Trafodion on a CentOS7 instance fails with 
> the following errors:
> {quote}/lib64/libpthread.so.0: error adding symbols: DSO missing from command 
> line ##(SQF)
> collect2: error: ld returned 1 exit status ##(SQF)
> {quote}





[jira] [Closed] (TRAFODION-3071) DATEDIFF function gives strange results when executed on interval data types

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3071.
---

> DATEDIFF function gives strange results when executed on interval data types
> 
>
> Key: TRAFODION-3071
> URL: https://issues.apache.org/jira/browse/TRAFODION-3071
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> According to the SQL reference manual, 
> [http://trafodion.apache.org/docs/sql_reference/index.html#datediff_function,]
>  the DATEDIFF function is limited to DATE and TIMESTAMP operands.
> If one attempts DATEDIFF on INTERVAL data types, the errors are strange and 
> non-intuitive. In some cases, DATEDIFF even succeeds on an INTERVAL data 
> type. The following session output illustrates:
> {quote}>>drop table if exists t;
> --- SQL operation complete.
> >>create table t (c1 interval year, c2 interval year to month, c3 interval 
> >>month);
> --- SQL operation complete.
> >>insert into t values (interval '11' year, interval '22-02' year to month, 
> >>interval '33' month);
> --- 1 row(s) inserted.
> >>
> >>select DATEDIFF(MONTH, c1, c1) from t;
> *** ERROR[4037] Field MONTH cannot be extracted from a source of type 
> INTERVAL YEAR(2).
> *** ERROR[4062] The preceding error actually occurred in function DATEDIFF.
> *** ERROR[8822] The statement was not prepared.
> >>select DATEDIFF(MONTH, c2, c2) from t;
> (EXPR) 
> ---
> 0
> --- 1 row(s) selected.
> >>select DATEDIFF(MONTH, c3, c3) from t;
> *** ERROR[4037] Field YEAR cannot be extracted from a source of type INTERVAL 
> MONTH(2).
> *** ERROR[4062] The preceding error actually occurred in function DATEDIFF.
> *** ERROR[8822] The statement was not prepared.
> >>
> {quote}





[jira] [Closed] (TRAFODION-3052) Core in pCode evaluator for query on million-row non-partitioned table

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3052.
---

> Core in pCode evaluator for query on million-row non-partitioned table
> --
>
> Key: TRAFODION-3052
> URL: https://issues.apache.org/jira/browse/TRAFODION-3052
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.2.0, 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The following script reproduces the problem. When preparing the query, we 
> obtain a core with the stack trace shown below.
> {quote}?section setup
> drop table if exists PCodeCore;
> create table PCodeCore ( id varchar(20) not null,
>  PCodeCorexid varchar(20),
>  PCodeCoreyid varchar(20),
>  otherstuff char(10),
>  primary key (id) );
> upsert using load into PCodeCore
> select cast(c0+10*c1+100*c2+1000*c3+1*c4+10*c5 as varchar(20)), 
>  cast(c1+10*c2+100*c3+1000*c4+1*c5+10*c0 as varchar(20)), 
>  cast(c2+10*c3+100*c4+1000*c5+1*c0+10*c1 as varchar(20)),
>  cast(c0+7*c1+49*c2+343*c3+2401*c4+16807*c5 as char(10))
> from (values(1)) t
> transpose 0,1,2,3,4,5,6,7,8,9 as c0
> transpose 0,1,2,3,4,5,6,7,8,9 as c1
> transpose 0,1,2,3,4,5,6,7,8,9 as c2
> transpose 0,1,2,3,4,5,6,7,8,9 as c3
> transpose 0,1,2,3,4,5,6,7,8,9 as c4
> transpose 0,1,2,3,4,5,6,7,8,9 as c5;
> update statistics for table PCodeCore on every column;
> ?section s1
> -- causes a core
> prepare s1 from select otherstuff,count(*) from PCodeCore group by otherstuff;
> {quote}
> The stack trace follows:
> {quote}(gdb) bt
> #0 0x7efeb774f495 in raise () from /lib64/libc.so.6
> #1 0x7efeb7750bfd in abort () from /lib64/libc.so.6
> #2 0x7efeb774860e in __assert_fail_base () from /lib64/libc.so.6
> #3 0x7efeb77486d0 in __assert_fail () from /lib64/libc.so.6
> #4 0x7efeb3d7e919 in ex_expr::evalPCode (this=0x7efe95309eb8, 
>  pCode32=0x7efe95309fd8, atp1=0x7efe9530a240, atp2=0x0, datalen=-1, 
>  rowLen=0x0) at ../exp/exp_eval.cpp:2352
> #5 0x7efeb61a8f06 in ex_expr::eval (this=0x7efe95309eb8, 
>  atp1=0x7efe9530a240, atp2=0x0, exHeap=0x7efe95866280, datalen=-1, 
>  rowLen=0x0, lastFldIndex=0x0, fetchedDataPtr=0x0) at ../exp/exp_expr.h:371
> #6 0x7efeafb71c42 in ValueIdList::evalAtCompileTime (this=0x7fffa642fb70, 
>  addConvNodes=1, tf=ExpTupleDesc::SQLMX_KEY_FORMAT, resultBuffer=0x0, 
>  resultBufferLength=0, length=0x7fffa642fda8, offset=0x7fffa642fdac, 
>  diagsArea=0x7efea4bb2208) at ../optimizer/ValueDesc.cpp:4689
> #7 0x7efeafb71f98 in ValueIdList::evaluateTree (root=0x7efe952ff8d0, 
>  encodedKeyBuffer=0x0, encodedKeyLength=0, length=0x7fffa642fda8, 
>  offset=0x7fffa642fdac, diagsArea=0x7efea4bb2208)
>  at ../optimizer/ValueDesc.cpp:4741
> #8 0x7efeaf82ec48 in createRangePartitionBoundariesFromStats (idesc=
>  0x7efe958353a8, hist=..., numberOfPartitions=4, partColArray=..., 
>  partitioningKeyColumnsOrder=..., statsColsCount=1, heap=0x7efe95866280)
>  at ../optimizer/NATable.cpp:2327
> #9 0x7efeaf949f18 in FileScan::createRangePartFuncForHbaseTableUsingStats
>  (this=0x7efe952f7b88, partns=@0x7fffa64301bc, partitioningKeyColumns=..., 
>  partitioningKeyColumnsList=..., partitioningKeyColumnsOrder=...)
>  at ../optimizer/OptPhysRelExpr.cpp:14795
> #10 0x7efeaf94a863 in FileScan::synthHbaseScanPhysicalProperty (
>  this=0x7efe952f7b88, context=0x7efe952e82b0, planNumber=-1, 
>  sortOrderVEG=...) at ../optimizer/OptPhysRelExpr.cpp:14977
> #11 0x7efeaf94b351 in FileScan::synthPhysicalProperty (
>  this=0x7efe952f7b88, myContext=0x7efe952e82b0, planNumber=-1, 
>  pws=0x7efe952eade8) at ../optimizer/OptPhysRelExpr.cpp:15197
> #12 0x7efeaf9268c3 in RelExpr::createPlan (this=0x7efe952f7b88, 
>  myContext=0x7efe952e82b0, pws=0x7efe952eade8, rule=0x7efea4bdfcd0, 
>  guidance=0x0, guidanceForChild=@0x7fffa6430578)
>  at ../optimizer/OptPhysRelExpr.cpp:216
> #13 0x7efeafb24da1 in CreatePlanTask::perform (this=0x7efe952ead98, 
>  taskId=24) at ../optimizer/tasks.cpp:1486
> #14 0x7efeaf8d926d in QueryOptimizerDriver::optimizeAPassHelper (
>  this=0x7fffa6434b50, context=0x7efe952f04e0) at ../optimizer/opt.cpp:7042
> #15 0x7efeaf8d8e00 in QueryOptimizerDriver::optimizeAPass (
>  this=0x7fffa6434b50, context=0x7efe952f04e0) at ../optimizer/opt.cpp:6975
> #16 0x7efeaf8d8a24 in QueryOptimizerDriver::doPass2PerhapsPass1 (
>  this=0x7fffa6434b50, relExpr=0x7efe95815608, context=0x7efe952f04e0, 
>  original=0x7efe952e32b0) at ../optimizer/opt.cpp:6902
> #17 0x7efeaf8d7a0c in RelExpr::optimize2 (this=0x7efe95815608)
>  at ../optimizer/opt.cpp:6598
> #18 0x7efeaf90b4bc in RelExpr::optimizeNode (this=0x7efe95815608)

[jira] [Closed] (TRAFODION-3066) Deeply nested subqueries may have warning 2053 at compile time

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3066.
---

> Deeply nested subqueries may have warning 2053 at compile time
> --
>
> Key: TRAFODION-3066
> URL: https://issues.apache.org/jira/browse/TRAFODION-3066
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
> Attachments: ReproJira.sql
>
>
> The following sqlci session demonstrates the problem. A query with three 
> levels of nested subqueries gets a 2053 warning when compiled. An equivalent 
> query (where the subqueries have been rewritten as joins + group bys) does 
> not show this problem. The attached script is the one used to create this 
> sqlci session.
> {quote}>>obey reproJira.sql;
> >>?section setup
> >>
> >>drop schema if exists mytest cascade;
> --- SQL operation complete.
> >>create schema mytest;
> --- SQL operation complete.
> >>set schema mytest;
> --- SQL operation complete.
> >>
> >>create table t1 (pic_x_1 char(1) not null not droppable) no partition;
> --- SQL operation complete.
> >>showddl t1;
> CREATE TABLE TRAFODION.MYTEST.T1
>  (
>  PIC_X_1 CHAR(1) CHARACTER SET ISO88591 COLLATE
>  DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
>  )
>  ATTRIBUTES ALIGNED FORMAT
> ;
> -- GRANT SELECT, INSERT, DELETE, UPDATE, REFERENCES ON TRAFODION.MYTEST.T1 TO 
> DB__ROOT WITH GRANT OPTION;
> --- SQL operation complete.
> >>
> >>insert into t1 values ('B'), ('C'), ('B'), ('Q'), ('B'), ('C'), ('Q'), 
> >>('Q');
> --- 8 row(s) inserted.
> >>update statistics for table t1 on every column;
> --- SQL operation complete.
> >>
> >>create table t2 like t1;
> --- SQL operation complete.
> >>insert into t2 select * From t1;
> --- 8 row(s) inserted.
> >>update statistics for table t2 on every column;
> --- SQL operation complete.
> >>
> >>create table t3 like t1;
> --- SQL operation complete.
> >>insert into t3 select * From t1;
> --- 8 row(s) inserted.
> >>update statistics for table t3 on every column;
> --- SQL operation complete.
> >>
> >>create table t4 like t1;
> --- SQL operation complete.
> >>insert into t4 select * From t1;
> --- 8 row(s) inserted.
> >>update statistics for table t4 on every column;
> --- SQL operation complete.
> >>
> >>?section testit
> >>
> >>-- reproduces problem
> >>
> >>set schema mytest;
> --- SQL operation complete.
> >>
> >>prepare s1 from
> +>--display
> +>select * from mytest.t1
> +>where pic_x_1 in (select pic_x_1 from mytest.t2
> +>where pic_x_1 in (select pic_x_1 from mytest.t3
> +>where pic_x_1 in (select pic_x_1 from mytest.t4
> +>))) ;
> *** WARNING[2053] Optimizer pass two assertion failure ((gb_ == NULL_CA_ID) 
> || (gb_ == othersGB)) in file ../optimizer/Analyzer.cpp at line 4907. 
> Attempting to recover and produce a plan.
> --- SQL command prepared.
> >>
> >>?section rewrite
> >>
> >>set schema mytest;
> {quote}
> --- SQL operation complete.
> >>
> >>-- a rewrite of the query, equivalent to what the semi-join
> >>-- to inner-join transformation accomplishes
> >>
> >>prepare s2 from
> +>select t1.* from mytest.t1 join
> +> (select distinct t2.pic_x_1 from mytest.t2 join
> +> (select distinct t3.pic_x_1 from mytest.t3 join
> +> (select distinct t4.pic_x_1 from mytest.t4) as t4
> +> on t3.pic_x_1 = t4.pic_x_1) as t3
> +> on t2.pic_x_1 = t3.pic_x_1) as t2
> +>on t1.pic_x_1 = t2.pic_x_1;
> --- SQL command prepared.
> >>
> >>
> >>exit;
> End of MXCI Session





[jira] [Closed] (TRAFODION-3056) UPDATE STATISTICS fails with error 8838 when RMS is not running

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3056.
---

> UPDATE STATISTICS fails with error 8838 when RMS is not running
> ---
>
> Key: TRAFODION-3056
> URL: https://issues.apache.org/jira/browse/TRAFODION-3056
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Minor
> Fix For: 2.3
>
>
> If RMS is not running, UPDATE STATISTICS fails with a warning 2024 + error 
> 8838 as in the log below:
> {quote}>>update statistics for table t1 on every column;
> *** WARNING[2024] Server Process $ZSM0 is not running or could not be 
> created. O
> perating System Error 14 was returned.
> *** ERROR[8838] Unable to receive reply from Compiler, possibly caused by 
> intern
> al errors when compiling SQL statements, processing DDL statements, or 
> executing
>  the builtin stored procedures.
> --- SQL operation failed with errors.
> >>
> {quote}
>  
> Also, the tdm_arkcmp process cores. A sample stack trace is below:
> {quote}(gdb) bt
> #0 0x7f7a3d0ce495 in raise () from /lib64/libc.so.6
> #1 0x7f7a3d0cfc75 in abort () from /lib64/libc.so.6
> #2 0x7f7a403ef935 in ?? ()
>  from /usr/lib/jvm/java-1.7.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> #3 0x7f7a40568fdf in ?? ()
>  from /usr/lib/jvm/java-1.7.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> #4 0x7f7a403f47c2 in JVM_handle_linux_signal ()
>  from /usr/lib/jvm/java-1.7.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> #5 <signal handler called>
> #6 0x7f7a47218e29 in tupp::release (this=0x7f7a481c7c68)
>  at ../exp/ExpSqlTupp.h:285
> #7 0x7f7a42d27c94 in atp_struct::release (this=0x7f7a481c7c38)
>  at ../exp/ExpAtp.h:303
> #8 0x7f7a42da5486 in ex_queue::deallocateAtps (this=0x7f7a481c7ab8)
>  at ../executor/ex_queue.cpp:416
> #9 0x7f7a42da4ae0 in ex_queue::~ex_queue (this=0x7f7a481c7ab8, 
>  __in_chrg=<optimized out>) at ../executor/ex_queue.cpp:115
> #10 0x7f7a42da4b50 in ex_queue::~ex_queue (this=0x7f7a481c7ab8, 
>  __in_chrg=<optimized out>) at ../executor/ex_queue.cpp:118
> #11 0x7f7a42dd146f in ExSortTcb::freeResources (this=0x7f7a481c77e0)
>  at ../executor/ex_sort.cpp:529
> #12 0x7f7a42dd0ccd in ExSortTcb::~ExSortTcb (this=0x7f7a481c77e0, 
> ---Type <return> to continue, or q <return> to quit--- 
>  __in_chrg=<optimized out>) at ../executor/ex_sort.cpp:470
> #13 0x7f7a42dd0e28 in ExSortTcb::~ExSortTcb (this=0x7f7a481c77e0, 
>  __in_chrg=<optimized out>) at ../executor/ex_sort.cpp:478
> #14 0x7f7a42d67588 in ex_globals::cleanupTcbs (this=0x7f7a481c4348)
>  at ../executor/ex_globals.cpp:193
> #15 0x7f7a42d673f3 in ex_globals::deleteMe (this=0x7f7a481c4348, 
>  fatalError=0) at ../executor/ex_globals.cpp:136
> #16 0x7f7a42d43688 in ExExeStmtGlobals::deleteMe (this=0x7f7a481c4348, 
>  fatalError=0) at ../executor/ex_exe_stmt_globals.cpp:321
> #17 0x7f7a42d440e5 in ExMasterStmtGlobals::deleteMe (this=0x7f7a481c4348, 
>  fatalError=0) at ../executor/ex_exe_stmt_globals.cpp:671
> #18 0x7f7a42dabeb2 in ex_root_tcb::deallocAndDelete (this=0x7f7a481c7e38, 
>  glob=0x7f7a481c4348, fragTable=0x7f7a481c61d8)
>  at ../executor/ex_root.cpp:2324
> #19 0x7f7a4756e5e3 in Statement::releaseTcbs (this=0x7f7a481bc140, 
>  closeAllOpens=0) at ../cli/Statement.cpp:4308
> #20 0x7f7a4756e7a4 in Statement::dealloc (this=0x7f7a481bc140, 
>  closeAllOpens=0) at ../cli/Statement.cpp:4356
> #21 0x7f7a47564110 in Statement::~Statement (this=0x7f7a481bc140, 
>  __in_chrg=<optimized out>) at ../cli/Statement.cpp:549
> #22 0x7f7a47564bca in Statement::~Statement (this=0x7f7a481bc140, 
>  __in_chrg=<optimized out>) at ../cli/Statement.cpp:716
> ---Type <return> to continue, or q <return> to quit--- 
> #23 0x7f7a475173bc in ContextCli::deallocStmt (this=0x7f7a48218cd0, 
>  statement_id=0x7f7a2743b740, deallocStaticStmt=0)
>  at ../cli/Context.cpp:1113
> #24 0x7f7a474e24d1 in SQLCLI_DeallocStmt (cliGlobals=0x1f6e9e0, 
>  statement_id=0x7f7a2743b740) at ../cli/Cli.cpp:1416
> #25 0x7f7a47581278 in SQL_EXEC_DeallocStmt (statement_id=0x7f7a2743b740)
>  at ../cli/CliExtern.cpp:1611
> #26 0x7f7a45a17d19 in HSCursor::~HSCursor (this=0x7ffce370e260, 
>  __in_chrg=<optimized out>) at ../ustat/hs_cli.cpp:2660
> #27 0x7f7a45a25f97 in printPlan (stmt=0x7f7a2743d650)
>  at ../ustat/hs_cli.cpp:5800
> #28 0x7f7a45a1a63e in HSCursor::prepare (this=0x7ffce370e660, clistr=
>  0x3087828 "SELECT COUNT(*) FROM TRAFODION.SEABASE.T1 FOR READ UNCOMMITTED 
> ACCESS", outDescEntries=1) at ../ustat/hs_cli.cpp:3029
> #29 0x7f7a45a1de33 in HSCursor::fetchNumColumn (this=0x7ffce370e660, 
>  clistr=
>  0x3087828 "SELECT COUNT(*) FROM TRAFODION.SEABASE.T1 FOR READ UNCOMMITTED 
> ACCESS", pSmallValue=0x0, 

[jira] [Closed] (TRAFODION-3023) Add more enum files to analyzeMessageGuide.py

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3023.
---

> Add more enum files to analyzeMessageGuide.py
> -
>
> Key: TRAFODION-3023
> URL: https://issues.apache.org/jira/browse/TRAFODION-3023
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: dev-environment
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> Recently on the dev e-mail list there was a discussion about adding new SQL 
> diagnostic messages. While researching an answer for that, I came across some 
> error message enum files that analyzeMessageGuide.py doesn't know about. So, 
> this Jira has been filed to add those files to analyzeMessageGuide.py.
> The e-mail discussion is here: 
> http://mail-archives.apache.org/mod_mbox/trafodion-dev/201804.mbox/%3CMWHPR1001MB2350B631BC10A20733862D39E4BE0%40MWHPR1001MB2350.namprd10.prod.outlook.com%3E
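The kind of scan described above can be sketched in a few lines of Python. This is illustrative only: the regex and the enum-file layout are assumptions about the sort of C++ message-enum header analyzeMessageGuide.py reads, not that script's actual parsing logic.

```python
import re

def collect_sqlcodes(path):
    """Pull numeric error codes out of a C++ message-enum header.

    Sketch only: assumes enumerators of the form 'NAME = 1234,'.
    """
    codes = []
    pattern = re.compile(r"=\s*(\d{4})\b")
    with open(path) as f:
        for line in f:
            m = pattern.search(line)
            if m:
                codes.append(int(m.group(1)))
    return codes
```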



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (TRAFODION-3043) DATE_PART(YEARQUARTER|YEARMONTH|YEARWEEK, ) should return a proper error

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3043.
---

> DATE_PART(YEARQUARTER|YEARMONTH|YEARWEEK, ) should return a proper 
> error
> --
>
> Key: TRAFODION-3043
> URL: https://issues.apache.org/jira/browse/TRAFODION-3043
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> If we run the following script:
> drop table if exists t;
>  create table t (c1 interval year to month, c2 interval month);
>  insert into t values (interval '11-01' year to month, interval '11' month);
> select DATE_PART('YEARQUARTER', c1) from t;
>  select DATE_PART('YEARQUARTER', c2) from t;
> select DATE_PART('YEARMONTH', c1) from t;
>  select DATE_PART('YEARMONTH', c2) from t;
> select DATE_PART('YEARWEEK', c1) from t;
>  select DATE_PART('YEARWEEK', c2) from t;
> We get correct 4037 error messages on column c2. For YEARQUARTER and 
> YEARMONTH on c1, though, we get an internal error 8001 at run time instead. For 
> YEARWEEK on c1, we get a compile-time error 4035 instead.
>  
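The expected behavior can be sketched as a small binder-style check: the composite fields only make sense for datetime operands, so an interval operand should get the same 4037 error that column c2 already gets. The field names come from the Jira; the function and error text below are illustrative, not Trafodion's actual code.

```python
# Fields of DATE_PART that apply only to datetime operands (from the Jira)
DATETIME_ONLY_FIELDS = {"YEARQUARTER", "YEARMONTH", "YEARWEEK"}

def check_date_part_operand(field, operand_type):
    # operand_type is "DATETIME" or "INTERVAL" in this sketch
    if field.upper() in DATETIME_ONLY_FIELDS and operand_type != "DATETIME":
        raise ValueError(
            "4037: %s is not applicable to an %s operand"
            % (field, operand_type))
    return True
```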





[jira] [Closed] (TRAFODION-3031) Query with nested subqueries chooses bad plan

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3031.
---

> Query with nested subqueries chooses bad plan
> -
>
> Key: TRAFODION-3031
> URL: https://issues.apache.org/jira/browse/TRAFODION-3031
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
> Attachments: jira.log, jira.sql
>
>
> The attached files demonstrate the problem. The file jira.sql is a script 
> that reproduces the problem, while jira.log is a sqlci showing the results.
> The query in question does an IN-subquery from T1 to T2, then T2 has an 
> =-subquery back to T1. T2 contains two indexes, one each on the join columns.
> The default plan uses a hybrid hash join of T1 to T2 and is very slow. It 
> does a full scan of both T1 and T2.
> If we set CQD SEMIJOIN_TO_INNERJOIN_TRANSFORMATION 'ON', the plan is a little 
> bit better. We get a nested join of T1 to T2. But it is inefficient; we still 
> do a full scan of T2.
> If we rename the index T2A to T2Y, and we still have the CQD set, we get a 
> good nested join plan that uses the index T2Y and reads just one row at each 
> level. This is very fast.
> So, there are two issues here.
>  # We could do a better job of deciding when to do the semi-join to join 
> transformation. When the inner table is small, it is profitable to do this.
>  # The index elimination logic is mistakenly eliminating index T2A, so the 
> Optimizer misses the chance to use it and therefore does not find the efficient 
> nested join plan.
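Issue 1 amounts to a cost-based decision that can be sketched as a one-line heuristic. This is a hedged illustration of the idea in the Jira, not the compiler's logic, and the threshold value is invented.

```python
def should_transform_semijoin(inner_table_rows, threshold=1000):
    """Decide whether to rewrite a semi-join as an inner join.

    The rewrite is profitable when the inner table is small, since
    the duplicate elimination the rewrite requires is then cheap.
    The threshold is illustrative, not a Trafodion default.
    """
    return inner_table_rows <= threshold
```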





[jira] [Closed] (TRAFODION-3002) UPDATE STATS fails on certain INTERVAL columns

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3002.
---

> UPDATE STATS fails on certain INTERVAL columns
> --
>
> Key: TRAFODION-3002
> URL: https://issues.apache.org/jira/browse/TRAFODION-3002
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The following script demonstrates the failures:
> {{drop schema if exists mytest cascade;}}
> {{create schema mytest;}}
> {{set schema mytest;}}
> {{create table t1 (c1 interval day(13) to second(0));}}
> {{insert into t1 values}}
> {{(interval '1:11:11:11' day(13) to second(0)),}}
> {{(interval '2:22:22:22' day(13) to second(0));}}
> {{select * from t1 order by 1;}}
> {{update statistics for table t1 clear;}}
> {{update statistics for table t1 on every column;}}
> {{create table t2 (c1 interval day(8) to second(0));}}
> {{insert into t2 values}}
> {{(interval ':11:11:11' day(8) to second(0)),}}
> {{(interval ':22:22:22' day(8) to second(0));}}
> {{select * from t2 order by 1;}}
> {{update statistics for table t2 clear;}}
> {{update statistics for table t2 on every column;}}
> When executed, the UPDATE STATISTICS command on T1 fails as follows:
> {{>>update statistics for table t1 on every column;}}
> {{*** ERROR[9200] UPDATE STATISTICS for table TRAFODION.MYTEST.T1 encountered 
> an error (3044) from statement PREPARE_TO_READ_COLS_INTO_MEM.}}
> {{*** ERROR[3044] The interval SECOND(18,6) is not valid.}}
> {{*** ERROR[8822] The statement was not prepared.}}
> {{*** ERROR[9200] UPDATE STATISTICS for table TRAFODION.MYTEST.T1 encountered 
> an error (3044) from statement HSCursor::fetchRowset().}}
> {{*** ERROR[3044] The interval SECOND(18,6) is not valid.}}
> {{*** ERROR[8822] The statement was not prepared.}}
> {{*** ERROR[8845] An internal error occurred: root_tcb is null.}}
> {{*** ERROR[9200] UPDATE STATISTICS for table TRAFODION.MYTEST.T1 encountered 
> an error (3044) from statement READ_COLS_INTO_MEM.}}
> {{*** ERROR[3044] The interval SECOND(18,6) is not valid.}}
> {{*** ERROR[8822] The statement was not prepared.}}
> {{*** ERROR[8845] An internal error occurred: root_tcb is null.}}
> {{--- SQL operation failed with errors.}}
> {{>>}}
> The UPDATE STATISTICS on table T2 on the other hand, fails with an executor 
> assertion (and a core).





[jira] [Closed] (TRAFODION-3000) MERGE DELETE on table with unique index fails with error 4002

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3000.
---

> MERGE DELETE on table with unique index fails with error 4002
> -
>
> Key: TRAFODION-3000
> URL: https://issues.apache.org/jira/browse/TRAFODION-3000
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The following session illustrates the problem:
> {{>>drop table if exists t1 cascade;}}
> {{--- SQL operation complete.}}
> {{>>}}
> {{>>create table t1 (c1 int not null, c2 int, primary key (c1), unique 
> (c2));}}
> {{--- SQL operation complete.}}
> {{>>insert into t1 values (1,1),(2,2),(3,3);}}
> {{--- 3 row(s) inserted.}}
> {{>>}}
> {{>>merge into t1 on c2=1 when matched then delete;}}
> {{*** ERROR[4002] Column "NEW@".C2 is not found. Table "NEW@" not exposed. 
> Tables in scope: "OLD@". Default schema: TRAFODION.SEABASE.}}
> {{*** ERROR[8822] The statement was not prepared.}}
> {{>>}}
> If the "unique" constraint is absent in the CREATE TABLE statement, the MERGE 
> statement completes successfully.
>  





[jira] [Closed] (TRAFODION-3018) Select of UPPER on upshifted column in join predicate cores

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3018.
---

> Select of UPPER on upshifted column in join predicate cores
> ---
>
> Key: TRAFODION-3018
> URL: https://issues.apache.org/jira/browse/TRAFODION-3018
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The following script reproduces the problem. First create the table and 
> populate it:
> drop table if exists mytable cascade;
> create table mytable (
> val int,
> c1 char(1) upshift,
> c2 char(1)
> );
> insert into mytable values (1, 'a', 'a');
> select * from mytable;
> If we now do any of the following SELECTs, we'll get an internal error on 
> release builds or a core on debug builds:
> select t1.val from mytable t1, mytable t2 where UCASE(t1.c1) = UCASE(t2.c1);
> select t1.val from mytable t1, mytable t2 where UPPER(t1.c1) = UPPER(t2.c1);
> select t1.val from mytable t1, mytable t2 where UPSHIFT(t1.c1) = 
> UPSHIFT(t2.c1);
> The first several frames on the core stack trace look like:
> (gdb) bt
> #0 0x7511a495 in raise () from /lib64/libc.so.6
> #1 0x7511bc75 in abort () from /lib64/libc.so.6
> #2 0x7fffec50986d in GeneratorAbort (file=
>  0x7fffec68fce1 "../generator/Generator.cpp", line=2655, message=
>  0x7ffe8dc0 "\nValueId 7 (TRAFODION.SCH.MYTABLE.C1...) not found in 
> MapTable 0x7ffeb460") at ../generator/Generator.cpp:3029
> #3 0x7fffec508caa in Generator::getMapInfo (this=0x7ffeb460, value_id=
>  ..., mapTable=0x0) at ../generator/Generator.cpp:2655
> #4 0x7fffec5093a7 in Generator::getAttr (this=0x7ffeb460, ie=
>  0x7fffd300ac80) at ../generator/Generator.cpp:2876
> #5 0x7fffec551e37 in ItemExpr::codegen_and_set_attributes (this=
>  0x7fffd09fd860, generator=0x7ffeb460, attr=0x7fffd0a015b8, num_attrs=3)
>  at ../generator/GenItemExpr.cpp:127
> #6 0x7fffec52e2c8 in ExpGenerator::genItemExpr (this=0x7fffd09e2d08, 
>  item_expr=0x7fffd09fd860, out_attr=0x7ffe90a0, num_attrs=3, gen_child=
>  -1) at ../generator/GenExpGenerator.cpp:5389
> #7 0x7fffec553920 in BiRelat::codeGen (this=0x7fffd09fd860, generator=
>  0x7ffeb460) at ../generator/GenItemExpr.cpp:560
> #8 0x7fffec5598b1 in BoolResult::codeGen (this=0x7fffd09fe028, generator=
>  0x7ffeb460) at ../generator/GenItemFunc.cpp:864
> #9 0x7fffec5255a2 in ExpGenerator::generateExpr (this=0x7fffd09e2d08, 
>  val_id=..., node_type=ex_expr::exp_SCAN_PRED, expr=0x7ffe98b8)
>  at ../generator/GenExpGenerator.cpp:2870
> #10 0x7fffec5ff210 in HashJoin::codeGen (this=0x7fffd09d7590, generator=
>  0x7ffeb460) at ../generator/GenRelJoin.cpp:1185
> #11 0x7fffec60f66e in RelRoot::codeGen (this=0x7fffd09d57c0, generator=
>  0x7ffeb460) at ../generator/GenRelMisc.cpp:1253
> #12 0x7fffec501535 in Generator::genCode (this=0x7ffeb460, source=
>  0x7fffd3036ba8 "select t1.val from mytable t1, mytable t2 where UCASE(t1.c1) 
> = UCASE(t2.c1);", expr_node=0x7fffd09d57c0) at ../generator/Generator.cpp:557
> #13 0x7fffeee01eae in CmpMain::compile (this=0x7ffeed60, input_str=
>  0x7fffd3036ba8 "select t1.val from mytable t1, mytable t2 where UCASE(t1.c1) 
> = UCASE(t2.c1);", charset=15, queryExpr=@0x7ffeec58, gen_code=
>  0x7fffd2ff69a8, gen_code_len=0x7fffd2ff69a0, heap=0x7fffd30391d8, phase=
>  CmpMain::END, fragmentDir=0x7ffeeeb8, op=3004, useQueryCache=
>  CmpMain::NORMAL, cacheable=0x7ffeec44, begTime=0x7ffeec60, 
>  shouldLog=0) at ../sqlcomp/CmpMain.cpp:2339
> #14 0x7fffeedff94d in CmpMain::sqlcomp (this=0x7ffeed60, input_str=
>  0x7fffd3036ba8 "select t1.val from mytable t1, mytable t2 where UCASE(t1.c1) 
> = UCASE(t2.c1);", charset=15, queryExpr=@0x7ffeec58, gen_code=
>  0x7fffd2ff69a8, gen_code_len=0x7fffd2ff69a0, heap=0x7fffd30391d8, phase=
>  CmpMain::END, fragmentDir=0x7ffeeeb8, op=3004, useQueryCache=
>  CmpMain::NORMAL, cacheable=0x7ffeec44, begTime=0x7ffeec60, 
>  shouldLog=0) at ../sqlcomp/CmpMain.cpp:1653
> #15 0x7fffeedfd23d in CmpMain::sqlcomp (this=0x7ffeed60, input=..., 
>  gen_code=0x7fffd2ff69a8, gen_code_len=0x7fffd2ff69a0, heap=0x7fffd30391d8, 
> (gdb)





[jira] [Closed] (TRAFODION-2977) Trafodion allows creating objects with longer key lengths than HBase supports

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-2977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-2977.
---

> Trafodion allows creating objects with longer key lengths than HBase supports
> -
>
> Key: TRAFODION-2977
> URL: https://issues.apache.org/jira/browse/TRAFODION-2977
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The following illustrates the problem:
> {quote}
> >>create table t0 (c1 char(16777216));
> --- SQL operation complete.
> >>
> >>insert into t0 values ('A');
> --- 1 row(s) inserted.
> >>
> >>create table t1 (c1 char(32768) unique);
> --- SQL operation complete.
> >>
> >>-- next statement fails
> >>insert into t1 values ('A');
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::checkAndInsertRow returned error HBASE_ACCESS_ERROR(-706). 
> Cause: java.lang.IllegalArgumentException: Row length 32770 is > 32767
> org.apache.hadoop.hbase.client.Mutation.checkRow(Mutation.java:537)
> org.apache.hadoop.hbase.client.Put.<init>(Put.java:110)
> org.apache.hadoop.hbase.client.Put.<init>(Put.java:68)
> org.apache.hadoop.hbase.client.Put.<init>(Put.java:58)
> org.trafodion.sql.HTableClient.putRow(HTableClient.java:1543)
> org.trafodion.sql.HBaseClient.insertRow(HBaseClient.java:1931).
> --- 0 row(s) inserted.
> >>
> >>create table t2 (c1 char(32768) not null primary key);
> --- SQL operation complete.
> >>
> >>-- next statement fails
> >>insert into t2 values ('A');
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::checkAndInsertRow returned error HBASE_ACCESS_ERROR(-706). 
> Cause: java.lang.IllegalArgumentException: Row length 32768 is > 32767
> org.apache.hadoop.hbase.client.Mutation.checkRow(Mutation.java:537)
> org.apache.hadoop.hbase.client.Put.<init>(Put.java:110)
> org.apache.hadoop.hbase.client.Put.<init>(Put.java:68)
> org.apache.hadoop.hbase.client.Put.<init>(Put.java:58)
> org.trafodion.sql.HTableClient.putRow(HTableClient.java:1543)
> org.trafodion.sql.HBaseClient.insertRow(HBaseClient.java:1931).
> --- 0 row(s) inserted.
> >>
> >>create table t3 (c1 char(32768) check (c1!=''));
> --- SQL operation complete.
> >>
> >>insert into t3 values ('A');
> --- 1 row(s) inserted.
> >>
> {quote}
>  
> In the example above, table T1 has an implicit index with a key length 
> greater than 32767. Table T2 has a primary key length greater than 32767. 
> HBase 1.0.0 imposes a limit on key length in the hbase-client module 
> Mutation.java at Put time (as seen in the stack traces). The T0 and T3 
> examples succeed because the long character value occurs in the value part of 
> an HBase KeyValue, not in the key part.
> The behavior above is unfriendly from a user perspective. It would be better 
> to catch the over-long key at CREATE time and raise an error there.
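The proposed CREATE-time check can be sketched as follows. This is a simplification under stated assumptions: real key sizing would also have to count salt bytes, null indicators, and varchar length fields, which this ignores; the 32767 limit is the one Mutation.checkRow enforces in the stack traces above.

```python
HBASE_MAX_ROW_KEY_BYTES = 32767  # limit enforced by HBase Mutation.checkRow()

def check_key_length(key_column_bytes):
    """Reject an over-long key at CREATE time, as the Jira proposes.

    key_column_bytes: per-column encoded key sizes in bytes (sketch).
    """
    total = sum(key_column_bytes)
    if total > HBASE_MAX_ROW_KEY_BYTES:
        raise ValueError(
            "key length %d exceeds the maximum allowed key length of %d"
            % (total, HBASE_MAX_ROW_KEY_BYTES))
    return total
```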
>  





[jira] [Closed] (TRAFODION-3005) CREATE INDEX on certain long chars fails with Java exceptions

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-3005.
---

> CREATE INDEX on certain long chars fails with Java exceptions
> -
>
> Key: TRAFODION-3005
> URL: https://issues.apache.org/jira/browse/TRAFODION-3005
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The following session illustrates the issue. Creating an index on a 
> CHAR(32757) succeeds. Creating an index on a CHAR(32766) fails with a good 
> error message (1141). But creating an index on intermediate sizes fails with 
> a Java exception, pointing to some bug with near-boundary conditions.
> {{>>drop table if exists mytable;}}{{--- SQL operation complete.}}
> {{>>create table mytable (}}
> {{+>c1 char(32757),}}
> {{+>c2 char(32758),}}
> {{+>c3 char(32765),}}
> {{+>c4 char(32766));}}{{--- SQL operation complete.}}
> {{>>insert into mytable values ('A','A','A','A');}}{{--- 1 row(s) inserted.}}
> {{>>create index idx1 on mytable (c1);}}{{--- SQL operation complete.}}
> {{>>create index idx2 on mytable (c2);}}{{*** ERROR[8448] Unable to access 
> Hbase interface. Call to ExpHbaseInterface::addToHFile returned error 
> HBASE_ADD_TO_HFILE_ERROR(-713). Cause: java.lang.NegativeArraySizeException}}
> {{org.trafodion.sql.HBulkLoadClient.addToHFile(HBulkLoadClient.java:247).}}
> {{*** ERROR[1081] Loading of index TRAFODION.SCH.IDX2 failed unexpectedly.}}
> {{--- SQL operation failed with errors.}}
> {{>>create index idx3 on mytable (c3);}}
> {{*** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::addToHFile returned error HBASE_ADD_TO_HFILE_ERROR(-713). 
> Cause: java.lang.NegativeArraySizeException}}
> {{org.trafodion.sql.HBulkLoadClient.addToHFile(HBulkLoadClient.java:247).}}
> {{*** ERROR[1081] Loading of index TRAFODION.SCH.IDX3 failed unexpectedly.}}
> {{--- SQL operation failed with errors.}}
> {{>>create index idx4 on mytable (c4);}}
> {{*** ERROR[1141] Key length 32768 exceeds the maximum allowed key length of 
> 32767.}}
> {{--- SQL operation failed with errors.}}
> {{>>}}





[jira] [Closed] (TRAFODION-2990) CREATE TABLE LIKE fails with long numeric default value

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-2990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-2990.
---

> CREATE TABLE LIKE fails with long numeric default value
> ---
>
> Key: TRAFODION-2990
> URL: https://issues.apache.org/jira/browse/TRAFODION-2990
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The following script reproduces the problem:
> drop table if exists t1 cascade;
> drop table if exists new_t1 cascade;
> create table t1 (
> val int,
> c1 numeric(128,127) default 
> 1.01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
> );
> insert into t1(val) values (1);
> select * from t1;
> create table new_t1 like t1;
> When run, the CREATE TABLE LIKE statement fails as follows:
> >>create table new_t1 like t1;
> *** ERROR[15001] A syntax error occurred at or before:
> create table TRAFODION.SEABASE.NEW_T1 ( "VAL"
>  INT DEFAULT NULL NOT SERIALIZED , "C1" NUMERIC(12
> 8, 127) DEFAULT
>   1.
>   
> 0123456789012345678901234567890123456789012345678901234567890123456789012
> 3456789012345678901234567890123456789
> ^ (299 characters from start of SQL 
> statement)
> *** ERROR[8822] The statement was not prepared.
> --- SQL operation failed with errors.





[jira] [Closed] (TRAFODION-2969) Subquery with [first 1] in select list results in Normalizer internal error

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-2969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-2969.
---

> Subquery with [first 1] in select list results in Normalizer internal error
> ---
>
> Key: TRAFODION-2969
> URL: https://issues.apache.org/jira/browse/TRAFODION-2969
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The following script illustrates the error:
> {quote}?section setup
> CREATE TABLE subquery_table
>  ( 
>  x int no default not null,
>  y int,
>  primary key ( x )
>  );
> CREATE TABLE main_table
>  ( 
>  a int no default not null,
>  b int,
>  primary key (a)
>  );
> ?section testit
> prepare s1 from
>  select 
>  (select [FIRST 1] y aa from subquery_table b where b.x = a.b) as company_id
>  from main_table a;
> ?section testit2
> prepare s1 from
>  select 
>  (select y aa from subquery_table b where b.x = a.b) as company_id
>  from main_table a;
> {quote}
> If we run section setup to create the tables, then run section testit, the 
> prepare fails with this error:
>  *** ERROR[2006] Internal error: assertion failure 
> (CmpCommon::getDefault(COMP_BOOL_137) == DF_ON) in file 
> ../optimizer/NormItemExpr.cpp at line 6028.
> The second query, in section testit2, on the other hand compiles and executes 
> just fine.
>  





[jira] [Closed] (TRAFODION-2983) Query fails with an internal error in the generator

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-2983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-2983.
---

> Query fails with an internal error in the generator
> ---
>
> Key: TRAFODION-2983
> URL: https://issues.apache.org/jira/browse/TRAFODION-2983
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The following script reproduces the problem:
> {quote}?section setup
> drop table if exists t1;
> drop table if exists t2;
> drop table if exists t3;
> create table t1
>  ( 
>  id1 varchar(50 bytes) no default not null not droppable not serialized
>  , yn varchar(1 byte) default null not serialized
>  )
>  attributes aligned format 
> ;
> create unique index t1pk ON t1
>  ( 
>  id1 asc
>  )
>  attributes aligned format 
> ;
> create table t2
>  ( 
>  id1 varchar(50 bytes) no default not null not droppable not serialized
>  , id3 varchar(50 bytes) no default not null not droppable not serialized
>  , id2 varchar(4 bytes) default null not serialized
>  , c1 varchar(50 bytes) default null not serialized
>  )
>  attributes aligned format 
> ;
> create unique index t2pk ON t2
>  ( 
>  id1 asc
>  , id3 asc
>  )
>  attributes aligned format 
> ;
>  
> create table t3
>  ( 
>  id2 varchar(50 bytes) no default not null not droppable not serialized
>  , nm varchar(100 bytes) no default not null not droppable not serialized
>  )
>  attributes aligned format 
> ;
> ?section s1
> -- reproduces the error
> prepare s1 from
> select t1.id1, 
>  t2.c1, 
>  decode(t1.yn, 'Y', (select t3.nm
>  from t3
>  where t3.id2 = t2.id2),
>  'N') yn
>  from t1
>  left outer join t2 
>  on t2.id1 = t1.id1
>  where t1.id1 = 'this is a dummy ID string';
> {quote}
> When run, the PREPARE fails with an internal error in the generator, 7000. 
> (On debug builds, it fails with a core.)
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (TRAFODION-2964) New MDAM costing code incorrectly assumes key column is always on the left

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-2964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-2964.
---

> New MDAM costing code incorrectly assumes key column is always on the left
> --
>
> Key: TRAFODION-2964
> URL: https://issues.apache.org/jira/browse/TRAFODION-2964
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> With Jira TRAFODION-2645, the MDAM costing code has been rewritten. The new 
> code is turned off by default at the moment until it can undergo large-scale 
> testing.
> But a bug has been noticed in this code. Method 
> NewMDAMOptimalDisjunctPrefixWA::calculateMetricsFromKeyPred assumes in its 
> predicate analysis that the key column is always on the left. (This is 
> significant only for <, <=, > and >= predicates.) But it turns out this is 
> not always true.
> For example, in the following theta-join query, when MDAM is considered on 
> the inner table of a nested loop join, and FACT1 is the inner table, the key 
> column is on the right of the join predicate:
> select  * from district d join fact1 f  on d.district_number > 
> f.district_number where f.serial_number > 100;
> The DDL used for this example is:
> ```
> >>showddl fact1;
> CREATE TABLE TRAFODION.SCH.FACT1
>  (
>  DISTRICT_NUMBER INT NO DEFAULT NOT NULL NOT DROPPABLE NOT
>  SERIALIZED
>  , SERIAL_NUMBER INT NO DEFAULT NOT NULL NOT DROPPABLE NOT
>  SERIALIZED
>  , STUFF INT DEFAULT NULL NOT SERIALIZED
>  , PRIMARY KEY (DISTRICT_NUMBER ASC, SERIAL_NUMBER ASC)
>  )
>  SALT USING 4 PARTITIONS
>  ATTRIBUTES ALIGNED FORMAT
> ;
> -- GRANT SELECT, INSERT, DELETE, UPDATE, REFERENCES ON TRAFODION.SCH.FACT1 TO 
> DB__ROOT WITH GRANT OPTION;
> --- SQL operation complete.
> >>showddl district;
> CREATE TABLE TRAFODION.SCH.DISTRICT
>  (
>  DISTRICT_NAME CHAR(20) CHARACTER SET ISO88591 COLLATE
>  DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
>  , DISTRICT_NUMBER INT NO DEFAULT NOT NULL NOT DROPPABLE NOT
>  SERIALIZED
>  , PRIMARY KEY (DISTRICT_NAME ASC)
>  )
>  ATTRIBUTES ALIGNED FORMAT
> ;
> -- GRANT SELECT, INSERT, DELETE, UPDATE, REFERENCES ON TRAFODION.SCH.DISTRICT 
> TO DB__ROOT WITH GRANT OPTION;
> --- SQL operation complete.
> >>
> ```
> Here, DISTRICT was populated with 10 rows, FACT1 with 2 million rows. UPDATE 
> STATISTICS was done on both. Notice that FACT1 is salted but DISTRICT is not.
> CQD MDAM_COSTING_REWRITE 'ON' was in force to get the new costing code. The 
> following CQDs also were in force (these force a horrible plan, by the way, 
> but they illustrate the bug):
> cqd keyless_nested_joins 'ON';
> cqd MDAM_UNDER_NJ_PROBES_THRESHOLD '1000';
> cqd hash_joins 'OFF';
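The fix the Jira implies can be sketched as a predicate normalization step: before analyzing a key predicate, put the key column on the left, reversing the comparison operator when it starts on the right. The names and representation below are illustrative, not NewMDAMOptimalDisjunctPrefixWA's actual data structures.

```python
# Operator to use after swapping the two sides of a comparison
REVERSED_OP = {"<": ">", "<=": ">=", ">": "<", ">=": "<=", "=": "="}

def normalize_key_pred(lhs, op, rhs, key_col):
    """Return an equivalent predicate with key_col on the left."""
    if lhs == key_col:
        return lhs, op, rhs
    if rhs == key_col:
        return rhs, REVERSED_OP[op], lhs
    raise ValueError("key column does not appear in the predicate")
```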
>  





[jira] [Closed] (TRAFODION-2967) UPDATE STATISTICS sometimes cores in CommonLogger::buildMsgBuffer

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-2967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-2967.
---

> UPDATE STATISTICS sometimes cores in CommonLogger::buildMsgBuffer
> -
>
> Key: TRAFODION-2967
> URL: https://issues.apache.org/jira/browse/TRAFODION-2967
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> Jira TRAFODION-2927 refactored the UPDATE STATISTICS logging logic to use the 
> CommonLogger class. Unfortunately, this introduced a problem that sometimes 
> causes cores in CommonLogger::buildMsgBuffer. The UPDATE STATISTICS code 
> passes raw data via a "template" parameter to the logging layers. The logging 
> layers interpret this template like a printf template. So, if the data 
> happens to have a substring such as %d or %n in it, the logger code expects 
> an additional argument to supply a substitution value for it. But none is 
> supplied, so we abend.
> A workaround is to turn automatic logging off, e.g. by setting CQD 
> USTAT_AUTOMATIC_LOGGING 'OFF' in the "_MD_".DEFAULTS table.
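The failure mode can be sketched in a few lines of Python (illustrative only; CommonLogger is C++ and the names here are invented). The point is that the first argument is always interpreted as a printf-style template, so raw data containing `%d` demands an argument the caller never supplied.

```python
def log_event(template, *args):
    """Mimics a printf-style logging layer: the first argument is
    always treated as a format template (sketch, not CommonLogger)."""
    return template % args

# Raw ustat data that happens to contain a stray %d
raw = "histogram reached 100%delta"
```

Calling `log_event(raw)` fails because `%d` expects a substitution value; the safe pattern, mirroring the fix, is to route raw data through a fixed template: `log_event("%s", raw)`.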





[jira] [Closed] (TRAFODION-2840) ORDER BY clause on a view circumvents [first n] updatability check

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-2840.
---

> ORDER BY clause on a view circumvents [first n] updatability check
> --
>
> Key: TRAFODION-2840
> URL: https://issues.apache.org/jira/browse/TRAFODION-2840
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
> Environment: All
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The following script fails:
> >>create table t1 (a int not null, b int, primary key (a));
> --- SQL operation complete.
> >>
> >>insert into t1 values (1,1),(2,2),(3,3),(4,4),(5,5),(6,6);
> --- 6 row(s) inserted.
> >>
> >>create view v1 as select [first 5] * from t1 order by a;
> --- SQL operation complete.
> >>
> >>create view v2 as select [first 5] * from t1;
> --- SQL operation complete.
> >>
> >>update v1 set b = 6;
> --- 6 row(s) updated.
> >> -- should fail; v1 should be non-updatable
> >>
> >>update v2 set b = 7;
> *** ERROR[4028] Table or view TRAFODION.SEABASE.V2 is not updatable.
> *** ERROR[8822] The statement was not prepared.
> >>-- does fail; v2 is non-updatable (correctly)
> >>
> It seems the presence of the ORDER BY clause in the view definition 
> circumvents the [first n] updatability check.
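The check being bypassed can be stated in one line. This is a sketch of the intended rule, not the compiler's real logic: a [first n] clause makes a view non-updatable whether or not an ORDER BY is present, and the bug was that the ORDER BY path dropped the [first n] marker before the check ran.

```python
def view_is_updatable(has_first_n, has_order_by):
    """Intended updatability rule for views (sketch).

    The ORDER BY flag must not affect the outcome; only [first n]
    makes the view non-updatable in this respect.
    """
    return not has_first_n
```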





[jira] [Closed] (TRAFODION-2645) MDAM costing overestimates I/O cost

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-2645.
---

> MDAM costing overestimates I/O cost
> ---
>
> Key: TRAFODION-2645
> URL: https://issues.apache.org/jira/browse/TRAFODION-2645
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.2.0
> Environment: All
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> The method MDAMCostWA::compute (optimizer/ScanOptimizer.cpp) has logic to 
> calculate the total cost of an MDAM plan and compare it to the cost of a 
> single subset plan. In the case of a Trafodion table, the cost logic for MDAM 
> cost has an error. It is adding the total number of probes (totalRqsts) to 
> the number of seeks (totalSeeks) in order to calculate I/O costs.
> TotalSeeks models direct access disk seeks. These happen at the block level 
> and are an I/O cost.
> TotalRqsts models the number of probes, which is a CPU cost. A probe is a 
> direct access within a block that has already been read into memory. This is 
> a vastly smaller unit of cost than an I/O operation.
> Note that TotalRqsts is already modeled as a message cost since it is added 
> to TotalRows.
> The effect of the bug is to vastly penalize MDAM plans having large numbers 
> of probes. This will show up the most on the largest tables.
> The fix is simply not to add totalRqsts to totalSeeks. However, in making 
> this fix, it is likely that other MDAM-related parameters will need tuning. 
> So this change needs to be done carefully.





[jira] [Closed] (TRAFODION-2966) New MDAM costing code does not always respect CQS forcing MDAM

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-2966.
---

> New MDAM costing code does not always respect CQS forcing MDAM
> --
>
> Key: TRAFODION-2966
> URL: https://issues.apache.org/jira/browse/TRAFODION-2966
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> I ran the full regression suite but with CQD MDAM_COSTING_REWRITE set to 'ON' 
> in the system defaults table. The suite passed except for test 
> seabase/TEST010. There, several queries switched from MDAM plans to single 
> subset scans. They did so in spite of CONTROL QUERY SHAPE statements that 
> force an MDAM plan. So there is a bug in the new costing code: it does not 
> respect these CQS statements.





[jira] [Closed] (TRAFODION-2927) Keep log information for UPDATE STATISTICS in case of errors

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-2927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-2927.
---

> Keep log information for UPDATE STATISTICS in case of errors
> 
>
> Key: TRAFODION-2927
> URL: https://issues.apache.org/jira/browse/TRAFODION-2927
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> Presently, UPDATE STATISTICS keeps a detailed log of its internal activities 
> if one has specified "update statistics log on" in advance.
> In production scenarios, this is typically not done. That means when a 
> long-running UPDATE STATISTICS command fails on a large table, one has to 
> redo it with logging turned on in order to troubleshoot.
> A better practice might be to always log, and delete the log if the operation 
> succeeds.
> Another issue with UPDATE STATISTICS logs is their location. The directory is 
> different from that of other Trafodion logs and is sometimes hard to find. As part of 
> this JIRA, consideration should be given to writing the logs to the Trafodion 
> logs directory instead.





[jira] [Closed] (TRAFODION-2928) Add news articles about Trafodion to Trafodion web site

2018-08-13 Thread David Wayne Birdsall (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wayne Birdsall closed TRAFODION-2928.
---

> Add news articles about Trafodion to Trafodion web site
> ---
>
> Key: TRAFODION-2928
> URL: https://issues.apache.org/jira/browse/TRAFODION-2928
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: website
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Fix For: 2.3
>
>
> Add recent articles about Trafodion as a top-level project to the Trafodion 
> web site.
> [http://globenewswire.com/news-release/2018/01/10/1286517/0/en/The-Apache-Software-Foundation-Announces-Apache-Trafodion-as-a-Top-Level-Project.html]
> [https://thenewstack.io/sql-hadoop-database-trafodion-bridges-transactions-analysis-divide/]
>  




