[jira] [Closed] (TRAFODION-174) LP Bug: 1274651 - Getting TM error 97 when our tables split or get moved.
[ https://issues.apache.org/jira/browse/TRAFODION-174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Atanu Mishra closed TRAFODION-174.
Resolution: Fixed

Key: TRAFODION-174
URL: https://issues.apache.org/jira/browse/TRAFODION-174
Project: Apache Trafodion
Issue Type: Bug
Components: dtm
Reporter: Guy Groulx
Assignee: John de Roo
Priority: Blocker
Fix For: 1.1 (pre-incubation)

Testing with transactions enabled. Our system uses Hortonworks and runs HBase 0.94 across 12 nodes. Our HBase max store size was 1 GB. We noticed during loading of large tables that we would get error 97 returned from the TM and that the batch of rows was not added. It turned out that our table was being split, which the TM does not handle at the moment. We also found that after a split, the HBase balancer would move the new region to another region server. When this happened, we got more error 97s.

WORKAROUND:
- We changed the max store size to 100 GB.
- We changed the split policy to org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy, which causes a split to happen only once the max size is reached. The default for HBase 0.94 and up is the new power-of-2 policy, which causes splits more often.
- We turned off the HBase balancer via the hbase shell.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
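The configuration side of the workaround above can be sketched as follows. This is an illustrative fragment, not taken from the bug report: the property names hbase.hregion.max.filesize and hbase.regionserver.region.split.policy are the standard HBase configuration keys for the max store size and split policy, and the output path is hypothetical.

```shell
# Illustrative hbase-site.xml fragment for the workaround; the path is
# hypothetical, the property names are standard HBase configuration keys.
cat > /tmp/hbase-site-workaround.xml <<'EOF'
<property>
  <name>hbase.hregion.max.filesize</name>
  <!-- 100 GB instead of the 1 GB that was triggering splits -->
  <value>107374182400</value>
</property>
<property>
  <name>hbase.regionserver.region.split.policy</name>
  <value>org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy</value>
</property>
EOF

# The balancer was disabled interactively; from the hbase shell:
#   balancer_switch false

grep -c '<property>' /tmp/hbase-site-workaround.xml
```

For reference, the default split policy in HBase 0.94 is IncreasingToUpperBoundRegionSplitPolicy, which is the "POWERof2" behavior the reporter describes.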
[jira] [Commented] (TRAFODION-174) LP Bug: 1274651 - Getting TM error 97 when our tables split or get moved.
[ https://issues.apache.org/jira/browse/TRAFODION-174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14692598#comment-14692598 ]

Atanu Mishra commented on TRAFODION-174:

Oliver Bucaojit (oliver-bucaojit) wrote on 2015-03-31: #3
Our latest change to set the closing flag seems to have fixed the split issues we had been seeing. I also have not heard of any new problems related to this bug, so I'll go ahead and mark it as resolved. The closing flag change was checked into the stable 1.0.1 branch on Feb 17 and into mainline on Feb 14, so it is in the released code.

Changed in trafodion: status: In Progress → Fix Released
[jira] [Closed] (TRAFODION-226) LP Bug: 1308243 - DCS - ODBC driver - Error messages needs to be cleaned up
[ https://issues.apache.org/jira/browse/TRAFODION-226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Atanu Mishra closed TRAFODION-226.
Resolution: Fixed
Assignee: Aruna Sadashiva (was: Apache Trafodion)
Fix Version/s: 1.0 (pre-incubation)

Key: TRAFODION-226
URL: https://issues.apache.org/jira/browse/TRAFODION-226
Project: Apache Trafodion
Issue Type: Bug
Components: client-odbc-linux
Reporter: Aruna Sadashiva
Assignee: Aruna Sadashiva
Priority: Blocker
Fix For: 1.0 (pre-incubation)

ODBC error messages need to be cleaned up for Trafodion; they still refer to Neoview and the HP ODBC driver:

[HP][HP ODBC Driver][HP Neoview Database] SQL ERROR: *** ERROR[4082] Object TRAFODION.ODBCTEST.LR5GT3YMEY does not exist or is inaccessible.
[HP][HP ODBC Driver] GENERAL WARNING. CONNECTED TO THE DEFAULT DATA SOURCE: [01!S!]. (0)

Assigned to LaunchPad user Rajeswari Muddu
[jira] [Commented] (TRAFODION-226) LP Bug: 1308243 - DCS - ODBC driver - Error messages needs to be cleaned up
[ https://issues.apache.org/jira/browse/TRAFODION-226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694408#comment-14694408 ]

Atanu Mishra commented on TRAFODION-226:

Aruna Sadashiva (aruna-sadashiva) on 2014-07-16
Changed in trafodion: status: Fix Committed → Fix Released
[jira] [Commented] (TRAFODION-278) LP Bug: 1320397 - update statistics for hive table reports ERROR[9200], ERROR[1002]
[ https://issues.apache.org/jira/browse/TRAFODION-278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14708652#comment-14708652 ]

Atanu Mishra commented on TRAFODION-278:

Trafodion-Gerrit (neo-devtools) wrote on 2015-02-25: Fix proposed to core (master) #3
Fix proposed to branch: master
Review: https://review.trafodion.org/1181

Trafodion-Gerrit (neo-devtools) wrote on 2015-02-25: Fix merged to core (master) #4
Reviewed: https://review.trafodion.org/1181
Committed: https://github.com/trafodion/core/commit/20c1470100060d32e38ba442fd45f794260d5fa8
Submitter: Trafodion Jenkins
Branch: master

commit 20c1470100060d32e38ba442fd45f794260d5fa8
Author: Barry Fritchman <email address hidden>
Date: Wed Feb 25 06:35:30 2015 +

    Errors reported for Update Stats on Hive tables

    Attempts to execute Update Statistics statements on Hive tables fail
    with error 1002 (catalog does not exist). The catalog used to store
    Hive stats was no longer being successfully created on demand. The
    Trafodion catalog is now used instead, with a schema for Hive stats
    created in it on first use. The tables for histograms and histogram
    intervals for Hive tables are created in that schema on first use.

    Change-Id: Ib57d0af4a3da6f52f0544d6c2fce3e77d2e823c1
    Closes-Bug: #1320397

Changed in trafodion: status: In Progress → Fix Committed

Julie Thai (julie-y-thai) wrote on 2015-04-14: #5
Verified on Traf 1.1.0rc0 build:

SQL> update statistics for table hive.tpch2x.customer on every column;
--- SQL operation complete.
SQL> showstats for table hive.tpch2x.customer on existing columns;

Histogram data for Table HIVE.TPCH2X.CUSTOMER
Table ID: 0

   Hist ID  # Ints  Rowcount       UEC  Colname(s)
==========  ======  ========  ========  ============
1864945772      25        30        25  C_NATIONKEY
1864945777      62        30        30  C_ADDRESS
1864945782      62        30        30  C_NAME
1864945787      48        30        30  C_CUSTKEY
1864945791      62        30    280754  C_COMMENT
1864945796       5        30         5  C_MKTSEGMENT
1864945801      36        30    262499  C_ACCTBAL
1864945806      62        30        30  C_PHONE

--- SQL operation complete.
Changed in trafodion: status: Fix Committed → Fix Released

Key: TRAFODION-278
URL: https://issues.apache.org/jira/browse/TRAFODION-278
Project: Apache Trafodion
Issue Type: Bug
Components: sql-cmp
Reporter: Julie Thai
Assignee: Barry Fritchman
Priority: Critical
Fix For: 1.1 (pre-incubation)

select count(*) from hive.hive.nation;

(EXPR)
    25

--- 1 row(s) selected.

cqd ustat_log 'updstat_hive';
--- SQL operation complete.
update statistics log on;
--- SQL operation complete.
update statistics for table hive.hive.nation on every column;

*** ERROR[9200] UPDATE STATISTICS for table HIVE.HIVE.NATION encountered an error (1002) from statement Process_Query.
*** ERROR[1002] Catalog HIVESTATS does not exist or has not been registered on node .
*** ERROR[8822] The statement was not prepared.

--- SQL operation failed with errors.

update statistics for table hive.hive.mynation on every column;

*** ERROR[9200] UPDATE STATISTICS for table HIVE.HIVE.MYNATION encountered an error (1002) from statement Process_Query.
*** ERROR[1002] Catalog HIVESTATS does not exist or has not been registered on node .
*** ERROR[8822] The statement was not prepared.

--- SQL operation failed with errors.

update statistics log off;
--- SQL operation complete.
log off;

Encountered on workstation, datalake_64_1 v40596:
MY_SQROOT=/opt/home/thaiju/datalake_64_1
who@host=tha...@g4t3029.houston.hp.com
JAVA_HOME=/opt/home/tools/jdk1.7.0_09_64
linux=2.6.32-279.el6.x86_64
redhat=6.3
Release 0.7.0 (Build release [40596], branch 40596-project/datalake_64_1, date 16May14)

To reproduce, see attached hive_updstats.tar. Contents:
1. In hive, create and load tables nation/mynation; see the hive_setup and nation.tbl files.
2. In sqlci, obey traf.sql.
[jira] [Closed] (TRAFODION-278) LP Bug: 1320397 - update statistics for hive table reports ERROR[9200], ERROR[1002]
[ https://issues.apache.org/jira/browse/TRAFODION-278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Atanu Mishra closed TRAFODION-278.
Resolution: Fixed
Assignee: (was: Barry Fritchman)
[jira] [Commented] (TRAFODION-254) LP Bug: 1317729 - Table salted with a float key can’t be invoked or dropped
[ https://issues.apache.org/jira/browse/TRAFODION-254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14708612#comment-14708612 ]

Atanu Mishra commented on TRAFODION-254:

Hans Zeller (hans-zeller) on 2014-06-23
Changed in trafodion: status: New → In Progress

Hans Zeller (hans-zeller) on 2014-07-29
Changed in trafodion: status: In Progress → Fix Committed

Hans Zeller (hans-zeller) wrote on 2014-07-29: #5
After an earlier fix by Anoop allowed salting on float columns, on July 17 I checked in a fix that returns an error instead: https://review.trafodion.org/108

Weishiun Tsai (wei-shiun-tsai) wrote on 2014-08-01: #6
Verified on the 0801_0830 daily build. Creating table td1 with a float column in the salt key now returns error 1120:

set schema mytest;
--- SQL operation complete.

create table td1 (a largeint not null, b smallint not null, c float(10) not null, d double precision) store by (a, b, c) salt using 2 partitions on (c, b);

*** ERROR[1120] Use of approximate numeric datatype (float, real, double precision) in a partitioning key or salt clause is not allowed.

--- SQL operation failed with errors.

invoke td1;

*** ERROR[4082] Object TRAFODION.MYTEST.TD1 does not exist or is inaccessible.

--- SQL operation failed with errors.

drop table td1;

*** ERROR[1389] Object TRAFODION.MYTEST.TD1 does not exist in Trafodion.

--- SQL operation failed with errors.

create table sd1 (a largeint not null, b smallint not null, c float(10) not null, d double precision) store by (a, b, c);
--- SQL operation complete.

invoke sd1;

-- Definition of Trafodion table TRAFODION.MYTEST.SD1
-- Definition current Fri Aug 1 21:04:44 2014

  (
    SYSKEY LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
  , A LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
  , B SMALLINT NO DEFAULT NOT NULL NOT DROPPABLE
  , C DOUBLE PRECISION NO DEFAULT NOT NULL NOT DROPPABLE
  , D DOUBLE PRECISION DEFAULT NULL
  )

--- SQL operation complete.

drop table sd1;
--- SQL operation complete.
Changed in trafodion: status: Fix Committed → Fix Released

Key: TRAFODION-254
URL: https://issues.apache.org/jira/browse/TRAFODION-254
Project: Apache Trafodion
Issue Type: Bug
Components: sql-exe
Reporter: Weishiun Tsai
Assignee: Hans Zeller
Priority: Critical
Fix For: 0.8 (pre-incubation)

When a table was created salted on a float key, the create operation completed successfully, but the table could neither be invoked nor dropped: the invoke statement returned error 1120 and the drop statement returned error 1389. The following output shows td1 created salted on a float key and the errors from invoke and drop. sd1 was created with the same DDL except for the salting; invoke and drop work fine on sd1. This is seen on the datalake v40174 build installed on a workstation.

set schema mytest;
--- SQL operation complete.

create table td1 (a largeint not null, b smallint not null, c float(10) not null, d double precision) store by (a, b, c) salt using 2 partitions on (c, b);
--- SQL operation complete.

invoke td1;
*** ERROR[1120] Use of float datatype in a partitioning key is not allowed.
--- SQL operation failed with errors.

drop table td1;
*** ERROR[1389] Object TRAFODION.MYTEST.TD1 does not exist in Trafodion.

create table sd1 (a largeint not null, b smallint not null, c float(10) not null, d double precision) store by (a, b, c);
--- SQL operation complete.

invoke sd1;

-- Definition of Trafodion table TRAFODION.MYTEST.SD1
-- Definition current Fri May 9 03:49:04 2014

  (
    SYSKEY LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
  , A LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
  , B SMALLINT NO DEFAULT NOT NULL NOT DROPPABLE
  , C DOUBLE PRECISION NO DEFAULT NOT NULL NOT DROPPABLE
  , D DOUBLE PRECISION DEFAULT NULL
  )

--- SQL operation complete.

drop table sd1;
--- SQL operation complete.
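A minimal illustration (not Trafodion code) of a conventional rationale for rejecting approximate numeric types from partitioning and salt keys: a decimal literal such as 0.1 has no exact binary floating-point representation, so the value actually stored, and therefore hashed into a salt bucket or compared against a partition boundary, is not the value that was written.

```shell
# Not Trafodion code: show that the stored double for the literal 0.1
# differs from the exact decimal 0.1 once enough digits are printed.
awk 'BEGIN { printf "%.20f\n", 0.1 }'
```

Error 1120's final wording ("approximate numeric datatype ... not allowed") states the resulting rule directly.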
[jira] [Commented] (TRAFODION-255) LP Bug: 1317736 - Update statement crashes sqlci with a core at HbaseAccessUMDTcb::work()
[ https://issues.apache.org/jira/browse/TRAFODION-255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14708613#comment-14708613 ]

Atanu Mishra commented on TRAFODION-255:

Anoop Sharma (anoop-sharma) on 2014-07-09
Changed in trafodion: status: New → In Progress

Anoop Sharma (anoop-sharma) wrote on 2014-07-11: #2
Fixed in July RC1 build.
Changed in trafodion: status: In Progress → Fix Committed

Weishiun Tsai (wei-shiun-tsai) wrote on 2014-07-15: #3
Verified on the v0.8.3rc1 build, this problem has been fixed:

[...]
--- SQL operation complete.
CREATE TABLE TABLE6d(Col_var1 VARCHAR(30), +Col_fix2 CHAR(10), +Col_var3 VARCHAR(20), +Col_fix4 INT, +Col_var5 VARCHAR(25), +Col_fix6 DECIMAL(5,1) SIGNED, +Col_var7 VARCHAR(50), +Col_fix8 LARGEINT, +Col_var9 VARCHAR(25), +Col_fix10 NUMERIC(10), +Col_var11 VARCHAR(30), +Col_var12 VARCHAR(35) +) no partition;
--- SQL operation complete.
CREATE VIEW VWTAB6c AS SELECT * FROM TABLE6c;
--- SQL operation complete.
CREATE VIEW VWTAB6d AS SELECT * FROM TABLE6d;
--- SQL operation complete.
CREATE INDEX i6cvar7 ON TABLE6c(col_var7);
--- SQL operation complete.
CREATE INDEX i6cvar9 ON TABLE6c(col_var9);
--- SQL operation complete.
CREATE INDEX i6cvar11 ON TABLE6c(col_var11);
--- SQL operation complete.
CREATE INDEX i6dvar1 ON TABLE6d(col_var1);
--- SQL operation complete.
CREATE INDEX i6dvar3 ON TABLE6d(col_var3);
--- SQL operation complete.
CREATE INDEX i6dvar5 ON TABLE6d(col_var5);
--- SQL operation complete.
CREATE INDEX i6dvar7 ON TABLE6d(col_var7);
--- SQL operation complete.
CREATE INDEX i6dvar9 ON TABLE6d(col_var9);
--- SQL operation complete.
CREATE INDEX i6dvar11 ON TABLE6d(col_var11);
--- SQL operation complete.
CREATE INDEX i6dvar12 ON TABLE6d(col_var12);
--- SQL operation complete.
INSERT INTO TABLE6c VALUES('Karen', +'XIONG', +'Female', +001, +'LOC251', +2200, +'San Jose State', +980520, +'China', +94, +'Texas', +1997 +);
--- 1 row(s) inserted.
INSERT INTO TABLE6c VALUES('Lalitha', +'Maruvada', +'Female', +002, +'LOC252', +2130, +'University of Colorado', +970320, +'India', +93, +'Colorado', +1997 +);
--- 1 row(s) inserted.
INSERT INTO TABLE6c VALUES('Jerry', +'Zheng', +'Male', +003, +'LOC201', +1320, +'Cornell University', +960302, +'Taiwan', +92, +'New York', +1995 +);
--- 1 row(s) inserted.
INSERT INTO TABLE6d VALUES('Karen', +'XIONG', +'Female', +001, +'LOC251', +2200, +'San Jose State', +980520, +'China', +94, +'Texas', +'University relations' +);
--- 1 row(s) inserted.
INSERT INTO TABLE6d VALUES('Lalitha', +'Maruvada', +'Female', +002, +'LOC252', +2130, +'University of Colorado', +970320, +'India', +93, +'Colorado', +'Job Fair' +);
--- 1 row(s) inserted.
INSERT INTO TABLE6d VALUES('Jerry', +'Zheng', +'Male', +003, +'LOC201', +1320, +'Cornell University', +960302, +'Taiwan', +92, +'New York', +'Internal Transfer' +);
--- 1 row(s) inserted.
UPDATE TABLE6c +SET col_fix2 = 'LAST_NAME', +col_var3= 'GENGER', +col_var5 = 'LOCATION', +col_var7 = 'SCHOOL', +col_var9 = 'COUNTRY' +WHERE EXISTS +(select TABLE6c.col_fix8 from TABLE6c, TABLE6d +where TABLE6c.col_fix8 = TABLE6d.col_fix8) +;
--- 3 row(s) updat...

Changed in trafodion: status: Fix Committed → Fix Released
[jira] [Closed] (TRAFODION-255) LP Bug: 1317736 - Update statement crashes sqlci with a core at HbaseAccessUMDTcb::work()
[ https://issues.apache.org/jira/browse/TRAFODION-255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Atanu Mishra closed TRAFODION-255.
Resolution: Fixed
Assignee: (was: Anoop Sharma)
Fix Version/s: 1.0 (pre-incubation)
               0.9 (pre-incubation)

Key: TRAFODION-255
URL: https://issues.apache.org/jira/browse/TRAFODION-255
Project: Apache Trafodion
Issue Type: Bug
Components: sql-exe
Reporter: Weishiun Tsai
Priority: Critical
Fix For: 0.9 (pre-incubation), 1.0 (pre-incubation)

The following sequence of statements crashes sqlci with a core at HbaseAccessUMDTcb::work(). This is fairly reproducible on the datalake v40174 build installed on a workstation.

Here is the entire script to reproduce this problem:

log mytest.log clear;
drop view VWTAB6c;
drop view VWTAB6d;
drop table table6c cascade;
drop table table6d cascade;

CREATE TABLE TABLE6c(Col_var1 VARCHAR(30) NOT NULL, Col_fix2 CHAR(20), Col_var3 VARCHAR(20), Col_fix4 INT, Col_var5 VARCHAR(30), Col_fix6 DECIMAL(5,1) SIGNED, Col_var7 VARCHAR(30), Col_fix8 LARGEINT, Col_var9 VARCHAR(25), Col_fix10 NUMERIC(10), Col_var11 Varchar(40) NOT NULL, Col_fix12 SMALLINT, PRIMARY KEY (Col_var1, Col_var11)) STORE BY PRIMARY KEY;

CREATE TABLE TABLE6d(Col_var1 VARCHAR(30), Col_fix2 CHAR(10), Col_var3 VARCHAR(20), Col_fix4 INT, Col_var5 VARCHAR(25), Col_fix6 DECIMAL(5,1) SIGNED, Col_var7 VARCHAR(50), Col_fix8 LARGEINT, Col_var9 VARCHAR(25), Col_fix10 NUMERIC(10), Col_var11 VARCHAR(30), Col_var12 VARCHAR(35)) no partition;

CREATE VIEW VWTAB6c AS SELECT * FROM TABLE6c;
CREATE VIEW VWTAB6d AS SELECT * FROM TABLE6d;

CREATE INDEX i6cvar7 ON TABLE6c(col_var7);
CREATE INDEX i6cvar9 ON TABLE6c(col_var9);
CREATE INDEX i6cvar11 ON TABLE6c(col_var11);
CREATE INDEX i6dvar1 ON TABLE6d(col_var1);
CREATE INDEX i6dvar3 ON TABLE6d(col_var3);
CREATE INDEX i6dvar5 ON TABLE6d(col_var5);
CREATE INDEX i6dvar7 ON TABLE6d(col_var7);
CREATE INDEX i6dvar9 ON TABLE6d(col_var9);
CREATE INDEX i6dvar11 ON TABLE6d(col_var11);
CREATE INDEX i6dvar12 ON TABLE6d(col_var12);

INSERT INTO TABLE6c VALUES('Karen', 'XIONG', 'Female', 001, 'LOC251', 2200, 'San Jose State', 980520, 'China', 94, 'Texas', 1997);
INSERT INTO TABLE6c VALUES('Lalitha', 'Maruvada', 'Female', 002, 'LOC252', 2130, 'University of Colorado', 970320, 'India', 93, 'Colorado', 1997);
INSERT INTO TABLE6c VALUES('Jerry', 'Zheng', 'Male', 003, 'LOC201', 1320, 'Cornell University', 960302, 'Taiwan', 92, 'New York', 1995);
INSERT INTO TABLE6d VALUES('Karen', 'XIONG', 'Female', 001, 'LOC251', 2200, 'San Jose State', 980520, 'China', 94, 'Texas', 'University relations');
INSERT INTO TABLE6d VALUES('Lalitha', 'Maruvada', 'Female', 002, 'LOC252', 2130, 'University of Colorado', 970320, 'India', 93, 'Colorado', 'Job Fair');
INSERT INTO TABLE6d VALUES('Jerry', 'Zheng', 'Male', 003, 'LOC201', 1320, 'Cornell University', 960302, 'Taiwan', 92, 'New York', 'Internal Transfer');

UPDATE TABLE6c SET col_fix2 = 'LAST_NAME', col_var3= 'GENGER', col_var5 = 'LOCATION', col_var7 = 'SCHOOL', col_var9 = 'COUNTRY' WHERE EXISTS (select TABLE6c.col_fix8 from TABLE6c, TABLE6d where TABLE6c.col_fix8 = TABLE6d.col_fix8);

drop view VWTAB6c;
drop view VWTAB6d;
drop table table6c cascade;
drop table table6d cascade;

Here is the output of the script execution:

obey mytest.sql;
log mytest.log clear;
drop view VWTAB6c;
--- SQL operation complete.
drop view VWTAB6d;
--- SQL operation complete.
drop table table6c cascade;
--- SQL operation complete.
drop table table6d cascade;
--- SQL operation complete.
CREATE TABLE TABLE6c(Col_var1 VARCHAR(30) NOT NULL, +Col_fix2 CHAR(20), +Col_var3 VARCHAR(20), +Col_fix4 INT, +Col_var5 VARCHAR(30), +Col_fix6 DECIMAL(5,1) SIGNED, +Col_var7 VARCHAR(30), +Col_fix8 LARGEINT, +Col_var9 VARCHAR(25), +Col_fix10 NUMERIC(10), +Col_var11 Varchar(40) NOT NULL, +Col_fix12 SMALLINT, +PRIMARY KEY (Col_var1, Col_var11)) +STORE BY PRIMARY KEY;
--- SQL operation complete.
CREATE TABLE TABLE6d(Col_var1 VARCHAR(30), +Col_fix2 CHAR(10), +Col_var3 VARCHAR(20), +Col_fix4 INT, +Col_var5 VARCHAR(25), +Col_fix6 DECIMAL(5,1) SIGNED, +Col_var7 VARCHAR(50), +Col_fix8 LARGEINT, +Col_var9 VARCHAR(25), +Col_fix10 NUMERIC(10), +Col_var11 VARCHAR(30), +Col_var12 VARCHAR(35) +) no partition;
--- SQL operation complete.
CREATE VIEW VWTAB6c AS SELECT * FROM TABLE6c;
--- SQL operation
[jira] [Closed] (TRAFODION-254) LP Bug: 1317729 - Table salted with a float key can’t be invoked or dropped
[ https://issues.apache.org/jira/browse/TRAFODION-254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Atanu Mishra closed TRAFODION-254.
Resolution: Fixed
Assignee: (was: Hans Zeller)
Fix Version/s: 0.8 (pre-incubation)
[jira] [Commented] (TRAFODION-251) LP Bug: 1317301 - Unsupported column-level constraints should return errors instead of being ignored
[ https://issues.apache.org/jira/browse/TRAFODION-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14708610#comment-14708610 ]

Atanu Mishra commented on TRAFODION-251:

Anoop Sharma (anoop-sharma) on 2014-05-08
Changed in trafodion: status: New → In Progress

Weishiun Tsai (wei-shiun-tsai) on 2014-05-08
tags: added: sql-exe

Weishiun Tsai (wei-shiun-tsai) wrote on 2014-05-27: #1
Verified on the datalake v40963 build, this problem has been fixed:

create table a12tab1 (int1 int not null not droppable, vch2 varchar(3), primary key (int1) not droppable);
--- SQL operation complete.
create table a12tab2 (int1 int not null, vch2 varchar(3)) store by (int1);
--- SQL operation complete.
alter table a12tab1 add c7 interval second check (c7 '100');
*** ERROR[4041] Type INTERVAL SECOND(2,6) cannot be compared with type CHAR(3).
--- SQL operation failed with errors.
alter table a12tab1 add c90 pic x(3) references a12tab2 (vch2);
*** ERROR[1044] Constraint TRAFODION.SEABASE.A12TAB1_212397978_5514 could not be created because the referenced columns in the referenced table are not part of a unique constraint.
--- SQL operation failed with errors.
alter table a12tab1 add c91 int references a12tab2(int1);
*** ERROR[1044] Constraint TRAFODION.SEABASE.A12TAB1_446697978_5514 could not be created because the referenced columns in the referenced table are not part of a unique constraint.
--- SQL operation failed with errors.
alter table a12tab1 add c92 int unique references a12tab2 (int1);
*** ERROR[1044] Constraint TRAFODION.SEABASE.A12TAB1_297118978_5514 could not be created because the referenced columns in the referenced table are not part of a unique constraint.
--- SQL operation failed with errors.

Changed in trafodion: status: In Progress → Fix Released

Key: TRAFODION-251
URL: https://issues.apache.org/jira/browse/TRAFODION-251
Project: Apache Trafodion
Issue Type: Bug
Components: sql-exe
Reporter: Weishiun Tsai
Assignee: Anoop Sharma
Priority: Critical
Fix For: 0.8 (pre-incubation)

Trafodion does not support column-level constraints right now, but they are silently ignored instead of returning an error. Each of the following examples should produce one error or another, but they all just return "SQL operation complete." They should return errors indicating that the feature is not supported.

create table a12tab1 (int1 int not null not droppable, vch2 varchar(3), primary key (int1) not droppable);
--- SQL operation complete.
create table a12tab2 (int1 int not null, vch2 varchar(3)) store by (int1);
--- SQL operation complete.

-- SQ sees ERROR 4041 Type a cannot be compared with type b
alter table a12tab1 add c7 interval second check (c7 '100');
--- SQL operation complete.

-- SQ sees error 1044 Constraint could not be created because the
-- referenced columns in the referenced table are not part of a
-- unique constraint.
alter table a12tab1 add c90 pic x(3) references a12tab2 (vch2);
--- SQL operation complete.

-- SQ sees error 1044 Constraint could not be created because the
-- referenced columns in the referenced table are not part of a
-- unique constraint.
alter table a12tab1 add c91 int references a12tab2(int1);
--- SQL operation complete.

-- SQ sees error 1042 All PRIMARY KEY or UNIQUE constraint columns
-- must be NOT NULL
alter table a12tab1 add c92 int unique references a12tab2 (int1);
--- SQL operation complete.
[jira] [Closed] (TRAFODION-249) LP Bug: 1316767 - update involving mod() func results in data corruption
[ https://issues.apache.org/jira/browse/TRAFODION-249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-249. -- Resolution: Fixed Assignee: (was: Anoop Sharma) Fix Version/s: 0.8 (pre-incubation) LP Bug: 1316767 - update involving mod() func results in data corruption Key: TRAFODION-249 URL: https://issues.apache.org/jira/browse/TRAFODION-249 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Julie Thai Priority: Critical Fix For: 0.8 (pre-incubation) Table contains 1000 rows, primary key (integer, numeric(11,3)). Update of primary key int column using mod() function, returns 3 row(s) updated.; expected 1000 row(s) updated. Subsequent select [count(*)|count(distinct colintk)] returns incorrect rowcount. -- integer primary key column -- expect: 1000 SELECT COUNT(DISTINCT colintk) FROM f00; (EXPR) 1000 --- 1 row(s) selected. prepare XX from UPDATE f00 SET colintk = MOD(colintk, 100); --- SQL command prepared. -- expect: 1000 row(s) updated. -- but instead get 3 row(s) updated??? execute XX; --- 3 row(s) updated. -- expect: 100 -- but instead get 999 SELECT COUNT(DISTINCT colintk) FROM f00; (EXPR) 999 --- 1 row(s) selected. -- expect: 1000 -- but instead get 999 SELECT COUNT(*) FROM f00; (EXPR) 999 --- 1 row(s) selected. 
To reproduce, see contents of attachment, updcorrupt.tar: - obey file upd_pkey.sql or: DROP TABLE f00; CREATE TABLE f00( colintk int not null, colint int not null, collint largeint not null, colnum numeric(11,3) not null, primary key (colintk, colnum)) ; UPSERT WITH NO ROLLBACK INTO f00 SELECT c1+c2*10+c3*100+c4*1000+c5*1, c1+c2*10+c3*100+c4*1000+c5*1, (c1+c2*10+c3*100+c4*1000+c5*1) + 549755813888, cast(c1+c2*10+c3*100+c4*1000+c5*1 as numeric(11,3)) from (values(1)) t transpose 0,1,2,3,4,5,6,7,8,9 as c1 transpose 0,1,2,3,4,5,6,7,8,9 as c2 transpose 0,1,2,3,4,5,6,7,8,9 as c3 --transpose 0,1,2,3,4,5,6,7,8,9 as c4 transpose 0 as c4 --transpose 0,1,2,3,4,5,6,7,8,9 as c5 transpose 0 as c5 ; UPDATE STATISTICS FOR TABLE f00 ON EVERY COLUMN; -- integer primary key column -- expect: 1000 SELECT COUNT(DISTINCT colintk) FROM f00; prepare XX from UPDATE f00 SET colintk = MOD(colintk, 100); -- expect: 1000 row(s) updated. -- but instead get 3 row(s) updated??? execute XX; -- expect: 100 -- but instead get 999 SELECT COUNT(DISTINCT colintk) FROM f00; -- expect: 1000 -- but instead get 999 SELECT COUNT(*) FROM f00; Attached updcorrupt.tar also contains logs generated without/with explain output (see upd_pkey.out[_wexp]). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
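The expected counts in the repro above follow from simple arithmetic: the table holds keys 0..999, so `MOD(colintk, 100)` maps them onto 100 residues, while the row count stays 1000 because the primary key `(colintk, colnum)` keeps rows distinct through `colnum`. A small Python simulation of the repro (illustrative only, not Trafodion code) confirms the counts the reporter expected:

```python
# Simulate the f00 repro: 1000 rows keyed by (colintk, colnum), then apply
# the UPDATE f00 SET colintk = MOD(colintk, 100) transformation.
rows = [(k, float(k)) for k in range(1000)]        # (colintk, colnum)
updated = [(k % 100, num) for (k, num) in rows]    # MOD(colintk, 100)

assert len(updated) == 1000                        # expect: 1000 row(s) updated
assert len({k for (k, _) in updated}) == 100       # expect: COUNT(DISTINCT colintk) = 100
assert len(set(updated)) == 1000                   # expect: COUNT(*) = 1000 (colnum keeps rows unique)
```

Any other result, such as the reported "3 row(s) updated" and counts of 999, indicates lost or collided rows.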
[jira] [Commented] (TRAFODION-249) LP Bug: 1316767 - update involving mod() func results in data corruption
[ https://issues.apache.org/jira/browse/TRAFODION-249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708609#comment-14708609 ] Atanu Mishra commented on TRAFODION-249: Anoop Sharma (anoop-sharma) on 2014-05-30 Changed in trafodion: status: In Progress → Fix Committed Julie Thai (julie-y-thai) wrote on 2014-06-02: #2 Verified on traf_0601: CREATE TABLE f00( colintk int not null, colint int not null, collint largeint not null, colnum numeric(11,3) not null, primary key (colintk, colnum)) ; ++ --- SQL operation complete. UPSERT WITH NO ROLLBACK INTO f00 SELECT c1+c2*10+c3*100+c4*1000+c5*1, c1+c2*10+c3*100+c4*1000+c5*1, (c1+c2*10+c3*100+c4*1000+c5*1) + 549755813888, cast(c1+c2*10+c3*100+c4*1000+c5*1 as numeric(11,3)) from (values(1)) t transpose 0,1,2,3,4,5,6,7,8,9 as c1 transpose 0,1,2,3,4,5,6,7,8,9 as c2 transpose 0,1,2,3,4,5,6,7,8,9 as c3 --transpose 0,1,2,3,4,5,6,7,8,9 as c4 transpose 0 as c4 --transpose 0,1,2,3,4,5,6,7,8,9 as c5 transpose 0 as c5 ; + --- 1000 row(s) inserted. UPDATE STATISTICS FOR TABLE f00 ON EVERY COLUMN; --- SQL operation complete. SELECT COUNT(DISTINCT colintk) FROM f00; (EXPR) 1000 --- 1 row(s) selected. prepare XX from UPDATE f00 SET colintk = MOD(colintk, 100); --- SQL command prepared. execute XX; --- 1000 row(s) updated. SELECT COUNT(DISTINCT colintk) FROM f00; (EXPR) 100 --- 1 row(s) selected. SELECT COUNT(*) FROM f00; (EXPR) 1000 --- 1 row(s) selected. Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1316767 - update involving mod() func results in data corruption Key: TRAFODION-249 URL: https://issues.apache.org/jira/browse/TRAFODION-249 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Julie Thai Assignee: Anoop Sharma Priority: Critical Table contains 1000 rows, primary key (integer, numeric(11,3)). Update of primary key int column using mod() function, returns 3 row(s) updated.; expected 1000 row(s) updated. 
Subsequent select [count(*)|count(distinct colintk)] returns incorrect rowcount. -- integer primary key column -- expect: 1000 SELECT COUNT(DISTINCT colintk) FROM f00; (EXPR) 1000 --- 1 row(s) selected. prepare XX from UPDATE f00 SET colintk = MOD(colintk, 100); --- SQL command prepared. -- expect: 1000 row(s) updated. -- but instead get 3 row(s) updated??? execute XX; --- 3 row(s) updated. -- expect: 100 -- but instead get 999 SELECT COUNT(DISTINCT colintk) FROM f00; (EXPR) 999 --- 1 row(s) selected. -- expect: 1000 -- but instead get 999 SELECT COUNT(*) FROM f00; (EXPR) 999 --- 1 row(s) selected. To reproduce, see contents of attachment, updcorrupt.tar: - obey file upd_pkey.sql or: DROP TABLE f00; CREATE TABLE f00( colintk int not null, colint int not null, collint largeint not null, colnum numeric(11,3) not null, primary key (colintk, colnum)) ; UPSERT WITH NO ROLLBACK INTO f00 SELECT c1+c2*10+c3*100+c4*1000+c5*1, c1+c2*10+c3*100+c4*1000+c5*1, (c1+c2*10+c3*100+c4*1000+c5*1) + 549755813888, cast(c1+c2*10+c3*100+c4*1000+c5*1 as numeric(11,3)) from (values(1)) t transpose 0,1,2,3,4,5,6,7,8,9 as c1 transpose 0,1,2,3,4,5,6,7,8,9 as c2 transpose 0,1,2,3,4,5,6,7,8,9 as c3 --transpose 0,1,2,3,4,5,6,7,8,9 as c4 transpose 0 as c4 --transpose 0,1,2,3,4,5,6,7,8,9 as c5 transpose 0 as c5 ; UPDATE STATISTICS FOR TABLE f00 ON EVERY COLUMN; -- integer primary key column -- expect: 1000 SELECT COUNT(DISTINCT colintk) FROM f00; prepare XX from UPDATE f00 SET colintk = MOD(colintk, 100); -- expect: 1000 row(s) updated. -- but instead get 3 row(s) updated??? execute XX; -- expect: 100 -- but instead get 999 SELECT COUNT(DISTINCT colintk) FROM f00; -- expect: 1000 -- but instead get 999 SELECT COUNT(*) FROM f00; Attached updcorrupt.tar also contains logs generated without/with explain output (see upd_pkey.out[_wexp]). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-276) LP Bug: 1320385 - update of hive table generates sqlci core
[ https://issues.apache.org/jira/browse/TRAFODION-276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-276. -- Resolution: Fixed Assignee: (was: Suresh Subbiah) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1320385 - update of hive table generates sqlci core --- Key: TRAFODION-276 URL: https://issues.apache.org/jira/browse/TRAFODION-276 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Julie Thai Priority: Critical Fix For: 1.0 (pre-incubation) update stmt on hive table generated sqlci core on workstation, datalake_64_1 v40596. Also tried to prepare-update stmt and sqlci core generated too. MY_SQROOT=/opt/home/thaiju/datalake_64_1 who@host=tha...@g4t3029.houston.hp.com JAVA_HOME=/opt/home/tools/jdk1.7.0_09_64 linux=2.6.32-279.el6.x86_64 redhat=6.3 Release 0.7.0 (Build release [40596], branch 40596-project/datalake_64_1, date 16May14) From sqlci: Trafodion Conversational Interface 0.7.0 (c) Copyright 2014 Hewlett-Packard Development Company, LP. obey traf.sql; select count(*) from hive.hive.nation; (EXPR) 25 --- 1 row(s) selected. insert into hive.hive.nation values (999, 'nines name', , 'eights comment'); --- 1 row(s) inserted. select count(*) from hive.hive.nation; (EXPR) 25 --- 1 row(s) selected. select n_nationkey from hive.hive.nation order by n_nationkey asc; N_NATIONKEY --- 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 999 --- 26 row(s) selected. update hive.hive.nation set n_regionkey = where n_nationkey = 999; # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x7fffefbf5210, pid=2352, tid=140737180482304 # # JRE version: 7.0_09-b05 # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.5-b02 mixed mode linux-amd64 compressed oops) # Problematic frame: # C [liboptimizer.so+0x394210] GenericUpdate::bindUpdateExpr(BindWA*, ItemExpr*, ItemExprList, RelExpr*, Scan*, NASetshort, int)+0x120 # # Core dump written. 
Default location: /opt/home/thaiju/hive_upd/core or core.2352 # # An error report file with more information is saved as: # /opt/home/thaiju/hive_upd/hs_err_pid2352.log # # If you would like to submit a bug report, please visit: # http://bugreport.sun.com/bugreport/crash.jsp # The crash happened outside the Java Virtual Machine in native code. # See problematic frame for where to report the bug. # Aborted (core dumped) Callstack from core.g4t3029.houston.hp.com.2352.sqlci: Core was generated by `sqlci'. Program terminated with signal 6, Aborted. #0 0x0033088328a5 in raise () from /lib64/libc.so.6 Missing separate debuginfos, use: debuginfo-install boost-filesystem-1.41.0-11.el6_1.2.x86_64 boost-program-options-1.41.0-11.el6_1.2.x86_64 boost-system-1.41.0-11.el6_1.2.x86_64 cyrus-sasl-lib-2.1.23-13.el6.x86_64 glibc-2.12-1.107.el6.x86_64 keyutils-libs-1.4-4.el6.x86_64 krb5-libs-1.9-33.el6.x86_64 libcom_err-1.41.12-12.el6.x86_64 libgcc-4.4.6-4.el6.x86_64 libselinux-2.0.94-5.3.el6.x86_64 libstdc++-4.4.6-4.el6.x86_64 libuuid-2.17.2-12.7.el6.x86_64 nss-softokn-freebl-3.12.9-11.el6.x86_64 openssl-1.0.0-20.el6_2.5.x86_64 protobuf-2.3.0-7.el6.x86_64 qpid-cpp-client-0.14-16.el6.x86_64 zlib-1.2.3-27.el6.x86_64 (gdb) where #0 0x0033088328a5 in raise () from /lib64/libc.so.6 #1 0x003308834085 in abort () from /lib64/libc.so.6 #2 0x76d46455 in os::abort(bool) () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #3 0x76ea6717 in VMError::report_and_die() () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #4 0x76d49f60 in JVM_handle_linux_signal () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #5 signal handler called #6 0x7fffefbf5210 in getColumnListCount (this=0x7fffd493bb80, bindWA= 0x7fff14c0, recExpr=0x7fffd493a658, assignList=..., boundView=0x0, scanNode= 0x7fffd493b108, stoiColumnSet=..., onRollback=0) at ../optimizer/BindRelExpr.cpp:10489 #7 GenericUpdate::bindUpdateExpr (this=0x7fffd493bb80, bindWA=0x7fff14c0, 
recExpr=0x7fffd493a658, assignList=..., boundView=0x0, scanNode=0x7fffd493b108, stoiColumnSet=..., onRollback=0) at ../optimizer/BindRelExpr.cpp:10489 #8 0x7fffefc0a2a7 in GenericUpdate::bindNode
[jira] [Closed] (TRAFODION-253) LP Bug: 1317709 - Delete from a table with constraints crashes sqlci/tdm_arkcmp with cores
[ https://issues.apache.org/jira/browse/TRAFODION-253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-253. -- Resolution: Fixed Assignee: (was: Anoop Sharma) Fix Version/s: 0.8 (pre-incubation) LP Bug: 1317709 - Delete from a table with constraints crashes sqlci/tdm_arkcmp with cores -- Key: TRAFODION-253 URL: https://issues.apache.org/jira/browse/TRAFODION-253 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Weishiun Tsai Priority: Critical Fix For: 0.8 (pre-incubation) The following sequence of statements crashed sqlci at the delete statement with a core at ExHbaseAccessTcb::setupUniqueKeyAndCols(). After that, the database entered into an inconsistent state. Dropping and recreating the same tables complained about duplicate unique constraints and generated tdm_arkcmp cores at createConstraintInfo(). This is seen on the datalake build v40174 installed on a workstation. Here is the entire script to reproduce it: -bash-4.1$ cat mytest.sql log mytest.log; set schema mytest2; drop table Female_actors cascade; drop table Male_actors cascade; drop table Directors cascade; drop table Movie_titles cascade; create table Female_actors ( f_no int not null not droppable, f_name varchar(30) not null, f_realname varchar(50) default null, f_birthday date constraint md1 check (f_birthday date '1900-01-01'), primary key (f_no) ); create table Male_actors ( m_no int not null not droppable unique, m_name varchar(30) not null, m_realname varchar(50) default null, m_birthday date constraint md2 check (m_birthday date '1900-01-01') ) no partition; create table Directors ( d_no int not null not droppable, d_name varchar(30) not null, d_specialty varchar(15) not null unique, primary key (d_no), constraint td1 check (d_specialty 'Music Video'), unique (d_no, d_specialty) ); Create table Movie_titles ( mv_no int not null not droppable, mv_name varchar (40) not null, mv_malestar int default NULL constraint ma_fk references 
male_actors(m_no), mv_femalestar int default NULL, mv_director int default 0 not null, mv_yearmade int check (mv_yearmade 1901), mv_star_rating char(4), mv_movietype varchar(15), primary key (mv_no), constraint fa_fk foreign key (mv_femalestar) references female_actors, constraint d_fk foreign key (mv_director, mv_movietype) references directors (d_no, d_specialty) ); insert into male_actors values (1444,'Mike Myers','Mike Myers',date '1963-05-23'); insert into male_actors values (6555,'Jimmy Stewart','James Maitland Stewart',date '1908-05-20'); delete from male_actors where m_no = 6555; drop table Female_actors cascade; drop table Male_actors cascade; drop table Directors cascade; drop table Movie_titles cascade; = Here is the output of the 1st run with the sqlci core: obey mytest.sql; log mytest.log; set schema mytest2; --- SQL operation complete. drop table Female_actors cascade; *** ERROR[1389] Object TRAFODION.MYTEST2.FEMALE_ACTORS does not exist in Trafodion. --- SQL operation failed with errors. drop table Male_actors cascade; *** ERROR[1389] Object TRAFODION.MYTEST2.MALE_ACTORS does not exist in Trafodion. --- SQL operation failed with errors. drop table Directors cascade; *** ERROR[1389] Object TRAFODION.MYTEST2.DIRECTORS does not exist in Trafodion. --- SQL operation failed with errors. drop table Movie_titles cascade; *** ERROR[1389] Object TRAFODION.MYTEST2.MOVIE_TITLES does not exist in Trafodion. --- SQL operation failed with errors. create table Female_actors ( +f_no int not null not droppable, +f_name varchar(30) not null, +f_realname varchar(50) default null, +f_birthday date constraint md1 check (f_birthday date '1900-01-01'), +primary key (f_no) +); --- SQL operation complete. create table Male_actors ( +m_no int not null not droppable unique, +m_name varchar(30) not null, +m_realname varchar(50) default null, +m_birthday date constraint md2 check (m_birthday date '1900-01-01') +) no partition; --- SQL operation complete. 
create table Directors ( +d_no int not null not droppable, +d_name varchar(30) not null, +d_specialty varchar(15) not null unique, +primary key (d_no), +constraint td1 check (d_specialty 'Music Video'), +unique (d_no, d_specialty) +); --- SQL operation complete. Create table Movie_titles ( +mv_no int not null not droppable, +mv_name varchar (40) not null,
[jira] [Commented] (TRAFODION-253) LP Bug: 1317709 - Delete from a table with constraints crashes sqlci/tdm_arkcmp with cores
[ https://issues.apache.org/jira/browse/TRAFODION-253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708611#comment-14708611 ] Atanu Mishra commented on TRAFODION-253: Anoop Sharma (anoop-sharma) on 2014-05-30 Changed in trafodion: status: In Progress → Fix Committed Weishiun Tsai (wei-shiun-tsai) wrote on 2014-05-30: #1 Verified on the GIT 0529_1530 build. This problem has been fixed: set schema mytest2; --- SQL operation complete. drop table Female_actors cascade; *** ERROR[1389] Object TRAFODION.MYTEST2.FEMALE_ACTORS does not exist in Trafodion. --- SQL operation failed with errors. drop table Male_actors cascade; *** ERROR[1389] Object TRAFODION.MYTEST2.MALE_ACTORS does not exist in Trafodion. --- SQL operation failed with errors. drop table Directors cascade; *** ERROR[1389] Object TRAFODION.MYTEST2.DIRECTORS does not exist in Trafodion. --- SQL operation failed with errors. drop table Movie_titles cascade; *** ERROR[1389] Object TRAFODION.MYTEST2.MOVIE_TITLES does not exist in Trafodion. --- SQL operation failed with errors. create table Female_actors ( + f_no int not null not droppable, + f_name varchar(30) not null, + f_realname varchar(50) default null, + f_birthday date constraint md1 check (f_birthday date '1900-01-01'), + primary key (f_no) + ); --- SQL operation complete. create table Male_actors ( + m_no int not null not droppable unique, + m_name varchar(30) not null, + m_realname varchar(50) default null, + m_birthday date constraint md2 check (m_birthday date '1900-01-01') + ) no partition; --- SQL operation complete. create table Directors ( + d_no int not null not droppable, + d_name varchar(30) not null, + d_specialty varchar(15) not null unique, + primary key (d_no), + constraint td1 check (d_specialty 'Music Video'), + unique (d_no, d_specialty) + ); --- SQL operation complete. 
Create table Movie_titles ( + mv_no int not null not droppable, + mv_name varchar (40) not null, + mv_malestar int default NULL constraint ma_fk + references male_actors(m_no), + mv_femalestar int default NULL, + mv_director int default 0 not null, + mv_yearmade int check (mv_yearmade 1901), + mv_star_rating char(4), + mv_movietype varchar(15), + primary key (mv_no), + constraint fa_fk foreign key (mv_femalestar) + references female_actors, + constraint d_fk foreign key (mv_director, mv_movietype) + references directors (d_no, d_specialty) + ); --- SQL operation complete. insert into male_actors values (1444,'Mike Myers','Mike Myers',date '1963-05-23'); --- 1 row(s) inserted. insert into male_actors values (6555,'Jimmy Stewart','James Maitland Stewart',date '1908-05-20'); --- 1 row(s) inserted. delete from male_actors where m_no = 6555; --- 1 row(s) deleted. drop table Female_actors cascade; --- SQL operation complete. drop table Male_actors cascade; --- SQL operation complete. drop table Directors cascade; --- SQL operation complete. drop table Movie_titles cascade; --- SQL operation complete. Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1317709 - Delete from a table with constraints crashes sqlci/tdm_arkcmp with cores -- Key: TRAFODION-253 URL: https://issues.apache.org/jira/browse/TRAFODION-253 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Weishiun Tsai Assignee: Anoop Sharma Priority: Critical The following sequence of statements crashed sqlci at the delete statement with a core at ExHbaseAccessTcb::setupUniqueKeyAndCols(). After that, the database entered into an inconsistent state. Dropping and recreating the same tables complained about duplicate unique constraints and generated tdm_arkcmp cores at createConstraintInfo(). This is seen on the datalake build v40174 installed on a workstation. 
Here is the entire script to reproduce it: -bash-4.1$ cat mytest.sql log mytest.log; set schema mytest2; drop table Female_actors cascade; drop table Male_actors cascade; drop table Directors cascade; drop table Movie_titles cascade; create table Female_actors ( f_no int not null not droppable, f_namevarchar(30) not null, f_realnamevarchar(50) default null, f_birthdaydate constraint md1 check (f_birthday date '1900-01-01'), primary key (f_no) ); create table Male_actors ( m_no int not null not droppable unique, m_namevarchar(30) not null, m_realnamevarchar(50) default null, m_birthdaydate constraint md2 check (m_birthday date '1900-01-01') ) no partition; create table Directors ( d_no int not null not droppable, d_namevarchar(30) not null, d_specialty varchar(15) not null unique, primary key (d_no), constraint td1
[jira] [Closed] (TRAFODION-271) LP Bug: 1320017 - DCS local-servers.sh script doesn't recognize --config switch
[ https://issues.apache.org/jira/browse/TRAFODION-271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-271. -- Resolution: Fixed Assignee: (was: Matt Brown) Fix Version/s: 0.8 (pre-incubation) LP Bug: 1320017 - DCS local-servers.sh script doesn't recognize --config switch --- Key: TRAFODION-271 URL: https://issues.apache.org/jira/browse/TRAFODION-271 Project: Apache Trafodion Issue Type: Bug Reporter: Matt Brown Priority: Critical Fix For: 0.7 (pre-incubation), 0.8 (pre-incubation) For development environments it's necessary for the script to recognize the --config conf switch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-272) LP Bug: 1320023 - Create table .. like .. store by () does not recognize column
[ https://issues.apache.org/jira/browse/TRAFODION-272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-272. -- Resolution: Fixed Assignee: (was: Anoop Sharma) Fix Version/s: 0.8 (pre-incubation) LP Bug: 1320023 - Create table .. like .. store by () does not recognize column --- Key: TRAFODION-272 URL: https://issues.apache.org/jira/browse/TRAFODION-272 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Weishiun Tsai Priority: Critical Fix For: 0.8 (pre-incubation) Create table .. like .. store by () does not recognize column names from the first table. As shown in the following output, when specifying column a in the store by clause, it returns error 1009. This is seen on the datalake v40535 build. create table t (a int not null not droppable primary key, b int); --- SQL operation complete. create table t1 like t store by (a); *** ERROR[1009] Column A does not exist in the specified table. --- SQL operation failed with errors. Here is the same sequence of statements run on SQ: create table t (a int not null not droppable primary key, b int); --- SQL operation complete. create table t1 like t store by (a); --- SQL operation complete. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-272) LP Bug: 1320023 - Create table .. like .. store by () does not recognize column
[ https://issues.apache.org/jira/browse/TRAFODION-272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708646#comment-14708646 ] Atanu Mishra commented on TRAFODION-272: Anoop Sharma (anoop-sharma) on 2014-05-30 Changed in trafodion: status: In Progress → Fix Committed Weishiun Tsai (wei-shiun-tsai) wrote on 2014-05-30: #1 Verified on the GIT 0529_1530 build. This problem has been fixed: -bash-4.1$ sqlci Trafodion Conversational Interface 0.7.0 (c) Copyright 2014 Hewlett-Packard Development Company, LP. create table t (a int not null not droppable primary key, b int); --- SQL operation complete. create table t1 like t store by (a); --- SQL operation complete. Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1320023 - Create table .. like .. store by () does not recognize column --- Key: TRAFODION-272 URL: https://issues.apache.org/jira/browse/TRAFODION-272 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Weishiun Tsai Assignee: Anoop Sharma Priority: Critical Fix For: 0.8 (pre-incubation) Create table .. like .. store by () does not recognize column names from the first table. As shown in the following output, when specifying column a in the store by clause, it returns error 1009. This is seen on the datalake v40535 build. create table t (a int not null not droppable primary key, b int); --- SQL operation complete. create table t1 like t store by (a); *** ERROR[1009] Column A does not exist in the specified table. --- SQL operation failed with errors. Here is the same sequence of statements run on SQ: create table t (a int not null not droppable primary key, b int); --- SQL operation complete. create table t1 like t store by (a); --- SQL operation complete. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-271) LP Bug: 1320017 - DCS local-servers.sh script doesn't recognize --config switch
[ https://issues.apache.org/jira/browse/TRAFODION-271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708644#comment-14708644 ] Atanu Mishra commented on TRAFODION-271: Matt Brown (mattbrown-2) wrote on 2014-05-15: #1 Script now recognizes the --config switch Changed in trafodion: status: In Progress → Fix Committed Stacey Johnson (sjohnson-w) on 2014-06-10 information type: Proprietary → Public Matt Brown (mattbrown-2) on 2014-09-08 Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1320017 - DCS local-servers.sh script doesn't recognize --config switch --- Key: TRAFODION-271 URL: https://issues.apache.org/jira/browse/TRAFODION-271 Project: Apache Trafodion Issue Type: Bug Reporter: Matt Brown Assignee: Matt Brown Priority: Critical Fix For: 0.7 (pre-incubation), 0.8 (pre-incubation) For development environments it's necessary for the script to recognize the --config conf switch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-247) LP Bug: 1315567 - insert into salt table fails with ERROR[1123]
[ https://issues.apache.org/jira/browse/TRAFODION-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708608#comment-14708608 ] Atanu Mishra commented on TRAFODION-247: Hans Zeller (hans-zeller) wrote on 2014-07-02: #3 Download full text (6.5 KiB) Looks like the initial problems, reported on 05-02 and 05-14 are fixed in the meantime, probably by fixes for UCS2 columns that Qifan added. The last problem, the core dump reported on 06-09, remains. What happens there is that for the descending UCS2 column, we put a 0x wide character into one of the split key constants, and that gets interpreted in the Unicode lexer as an EOF. The fix is probably not to use 0x as a character, but instead to use the hex string form. We only see this issue on the second statement, because only during the second statement do we parse this string. Here is the stack trace: yyULexer::constructStringLiteralWithCharSet (this=0x1be2af0, isHex=0, cs=CharInfo::UNICODE, lvalp=0x7ffd93d0, quote=39 L'\047') at ../parser/ulexer.cpp:848 #1 0x7058dd9a in yyULexer::arkcmplex (this=0x1be2af0, lvalp=0x7ffd93d0) at ../parser/ulexer.cpp:1094 #2 0x712ad5c3 in Parser::arkcmplex (this=0x7ffef160, lvalp=0x7ffd93d0) at ../sqlcomp/parser.h:166 #3 0x712acf3c in arkcmplex (lvalp=0x7ffd93d0) at ../sqlcomp/parser.cpp:1646 #4 0x704bbe9a in arkcmpparse () at parser/linux/64bit/debug/sqlparser.cpp:41442 #5 0x712a9ad7 in Parser::parseSQL (this=0x7ffef160, node=0x7ffef010, internalExpr=1, paramItemList=0x0) at ../sqlcomp/parser.cpp:748 #6 0x712aa78b in Parser::parseDML (this=0x7ffef160, instr=0x7fffd80e5060 2,_UCS2'\357\277\210, '\357\277\277' repeats 49 times, ';, inlen=160, charset=CharInfo::UTF8, node=0x7ffef010, internalExpr=1, paramItemList=0x0) at ../sqlcomp/parser.cpp:952 #7 0x712ab52c in Parser::getExprTree (this=0x7ffef160, str=0x7fffd80e5060 2,_UCS2'\357\277\210, '\357\277\277' repeats 49 times, ';, strlength=160, strCharSet=CharInfo::UTF8, num_params=0, p1=0x0, p2=0x0, 
p3=0x0, p4=0x0, p5=0x0, p6=0x0, otherParams=0x0, internal_expr=1) at ../sqlcomp/parser.cpp:1174 #8 0x712ab92b in Parser::getItemExprTree (this=0x7ffef160, str=0x7fffd80e5060 2,_UCS2'\357\277\210, '\357\277\277' repeats 49 times, ';, len=160, strCharSet=CharInfo::UTF8, num_params=0, p1=0x0, p2=0x0, p3=0x0, p4=0x0, p5=0x0, p6=0x0, paramItemList=0x0) at ../sqlcomp/parser.cpp:1260 #9 0x7fffef9dd298 in getRangePartitionBoundaryValues (keyValueBuffer=0x1bdc2e8 2,_UCS2'\357\277\210, '\357\277\277' repeats 49 times, ', keyValueBufferSize=159, heap=0x7fffd812a988, strCharSet=CharInfo::UTF8) at ../optimizer/NATable.cpp:1359 #10 0x7fffefb1d2c1 in RangePartitionBoundaries::setupForStatement (this=0x7fffd8127a50, useStringVersion=1) at ../optimizer/PartFunc.cpp:3601 #11 0x7fffefb2098c in RangePartitioningFunction::setupForStatement (this=0x7fffd81039a8) at ../optimizer/PartFunc.cpp:4573 #12 0x7fffef9cee43 in NAFileSet::setupForStatement (this=0x7fffd8103c20) at ../optimizer/NAFileSet.cpp:249 #13 0x7fffef9ec37c in NATable::setupForStatement (this=0x7fffd8126268) at ../optimizer/NATable.cpp:6146 #14 0x7fffef9f0773 in NATableDB::get (this=0x7fffe7b09660, corrName=..., bindWA=0x7fff1430, inTableDescStruct=0x0) at ../optimizer/NATable.cpp:7229 #15 0x7fffef75727a in BindWA::getNATable (this=0x7fff1430,... Read more... 
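The fix Hans describes is to stop embedding the raw wide character in the generated split-key constant and use a hex string literal form instead, so the Unicode lexer never sees a code unit it could mistake for EOF. A hypothetical Python illustration of that encoding idea (the `ucs2_hex_literal` helper is an assumption for illustration, not a Trafodion API):

```python
# Hypothetical sketch: render a UCS2 key constant as a hex string literal
# (UTF-16BE code units) so the emitted SQL text contains only printable
# ASCII, avoiding raw wide characters a lexer may misread as EOF.
def ucs2_hex_literal(s: str) -> str:
    """Render s as a _UCS2 X'...' hex string literal."""
    return "_UCS2 X'" + s.encode("utf-16-be").hex().upper() + "'"

print(ucs2_hex_literal("\uffff"))  # → _UCS2 X'FFFF'
print(ucs2_hex_literal("ab"))      # → _UCS2 X'00610062'
```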
Hans Zeller (hans-zeller) wrote on 2014-07-29: #4 Fix was delivered on July 24: https://review.trafodion.org/116 Changed in trafodion: status: In Progress → Fix Committed Stacey Johnson (sjohnson-w) on 2014-10-16 Changed in trafodion: milestone: none → r0.8 status: Fix Committed → Fix Released Stacey Johnson (sjohnson-w) on 2014-10-16 Changed in trafodion: milestone: r0.8 → r0.9 LP Bug: 1315567 - insert into salt table fails with ERROR[1123] --- Key: TRAFODION-247 URL: https://issues.apache.org/jira/browse/TRAFODION-247 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Julie Thai Assignee: Hans Zeller Priority: Critical Fix For: 0.9 (pre-incubation) Create salt table with primary key on varchar ucs2 column. Insert into salt table fails with ERROR[1123]. From sqlci: CREATE TABLE f00( + colkey int not null, + colvchrucs2 varchar(100) character set ucs2 not null, + primary key (colvchrucs2)) + salt using 3 partitions; --- SQL operation complete. INSERT INTO f00 VALUES (1, 'abcde'), (2, 'fghij'), (3, 'klmnopqrs'), (4, 'tuvwxyz'), (5, 'abc@#$%^*()efgh'); *** ERROR[1123] Not all of the partition key values () for object TRAFODION.SEABASE.F00 could be processed. Please verify that the
[jira] [Closed] (TRAFODION-247) LP Bug: 1315567 - insert into salt table fails with ERROR[1123]
[ https://issues.apache.org/jira/browse/TRAFODION-247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-247. -- Resolution: Fixed Assignee: (was: Hans Zeller) LP Bug: 1315567 - insert into salt table fails with ERROR[1123] --- Key: TRAFODION-247 URL: https://issues.apache.org/jira/browse/TRAFODION-247 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Julie Thai Priority: Critical Fix For: 0.9 (pre-incubation) Create salt table with primary key on varchar ucs2 column. Insert into salt table fails with ERROR[1123]. From sqlci: CREATE TABLE f00( + colkey int not null, + colvchrucs2 varchar(100) character set ucs2 not null, + primary key (colvchrucs2)) + salt using 3 partitions; --- SQL operation complete. INSERT INTO f00 VALUES (1, 'abcde'), (2, 'fghij'), (3, 'klmnopqrs'), (4, 'tuvwxyz'), (5, 'abc@#$%^*()efgh'); *** ERROR[1123] Not all of the partition key values () for object TRAFODION.SEABASE.F00 could be processed. Please verify that the correct key value data types were specified. *** ERROR[8822] The statement was not prepared. CREATE TABLE f01( + colkey int not null, + colvchrucs2 varchar(100) character set ucs2 not null, + primary key (colvchrucs2)); --- SQL operation complete. INSERT INTO f01 VALUES (1, 'abcde'), (2, 'fghij'), (3, 'klmnopqrs'), (4, 'tuvwxyz'), (5, 'abc@#$%^*()efgh'); --- 5 row(s) inserted. 
MY_SQROOT=/opt/home/thaiju/datalake_64_1 who@host=tha...@g4t3029.houston.hp.com JAVA_HOME=/opt/home/tools/jdk1.7.0_09_64 linux=2.6.32-279.el6.x86_64 redhat=6.3 Release 0.7.0 (Build release [39961], branch 39961-project/datalake_64_1, date 02May14) To reproduce, in sqlci issue: DROP TABLE f00; DROP TABLE f01; CREATE TABLE f00( colkey int not null, colvchrucs2 varchar(100) character set ucs2 not null, primary key (colvchrucs2)) salt using 3 partitions; INSERT INTO f00 VALUES (1, 'abcde'), (2, 'fghij'), (3, 'klmnopqrs'), (4, 'tuvwxyz'), (5, 'abc@#$%^*()efgh'); CREATE TABLE f01( colkey int not null, colvchrucs2 varchar(100) character set ucs2 not null, primary key (colvchrucs2)); INSERT INTO f01 VALUES (1, 'abcde'), (2, 'fghij'), (3, 'klmnopqrs'), (4, 'tuvwxyz'), (5, 'abc@#$%^*()efgh'); Also, try showddl and invoke. Then rerun reproducible script again but this time with primary key DESCENDING. Again, try showddl and invoke. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
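The SALT USING 3 PARTITIONS clause in the repro above hashes the primary key into one of three buckets so rows spread across region servers; the bug surfaced because the generated partition boundary constants for the UCS2 key column could not be processed. A rough Python sketch of the salting idea (zlib.crc32 is only a deterministic stand-in here; Trafodion's actual hash function differs):

```python
# Illustrative sketch of hash-based salting: derive a bucket number from the
# key so rows distribute across N partitions. The hash is a stand-in, not
# Trafodion's real salt function; utf-16-be mirrors the ucs2 column encoding.
import zlib

def salt_bucket(key: str, num_partitions: int = 3) -> int:
    return zlib.crc32(key.encode("utf-16-be")) % num_partitions

keys = ['abcde', 'fghij', 'klmnopqrs', 'tuvwxyz', 'abc@#$%^*()efgh']
buckets = {k: salt_bucket(k) for k in keys}
assert all(0 <= b < 3 for b in buckets.values())
```

The salt value effectively becomes a leading key column, which is why the partition boundary constants must encode UCS2 key values correctly.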
[jira] [Closed] (TRAFODION-268) LP Bug: 1319508 - Got assertion failure (0) in file ../common/NAString.cpp
[ https://issues.apache.org/jira/browse/TRAFODION-268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-268. -- Resolution: Fixed Fix Version/s: 0.8 (pre-incubation) LP Bug: 1319508 - Got assertion failure (0) in file ../common/NAString.cpp - Key: TRAFODION-268 URL: https://issues.apache.org/jira/browse/TRAFODION-268 Project: Apache Trafodion Issue Type: Bug Reporter: Apache Trafodion Priority: Critical Fix For: 0.8 (pre-incubation) SQL>create table t6a064 +( +a int not null, +b int not null not droppable, +Y123456789B123456789C123456789D123456789E123456789F123456789G123456789H123456789I123456789J123456789K123456789L123456789M1234567 int not null unique, +primary key ( b asc, a)) +store by (b asc) +attributes extent (16, 16); --- SQL operation complete. SQL>-- #expect any *18 row(s) inserted.* SQL>insert into t6a064 values (1,1,11); *** ERROR[2006] Internal error: assertion failure (0) in file ../common/NAString.cpp at line 396. [2014-05-14 10:57:57] SQL>select * from t6a064; --- 0 row(s) selected. test script - create table t6a064 ( a int not null, b int not null not droppable, Y123456789B123456789C123456789D123456789E123456789F123456789G123456789H123456789I123456789J123456789K123456789L123456789M1234567 int not null unique, primary key ( b asc, a)) store by (b asc) attributes extent (16, 16); -- #expect any *18 row(s) inserted.* -- ERROR[2006] Internal error: assertion failure (0) in file ../common/NAString.cpp at line 396. [2014-05-13 20:07:43] insert into t6a064 values (1,1,11),(1,11,12),(1,12,13), (1,2,21),(1,21,22),(1,22,23), (2,1,31),(2,11,32),(2,12,33), (2,2,41),(2,21,42),(2,22,43), (3,1,51),(3,11,52),(3,12,53), (3,2,61),(3,21,62),(3,22,63); select * from t6a064; -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-269) LP Bug: 1319524 - Update Statistics performance is poor.
[ https://issues.apache.org/jira/browse/TRAFODION-269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-269. -- Resolution: Fixed Assignee: (was: Barry Fritchman) Fix Version/s: 0.8 (pre-incubation) LP Bug: 1319524 - Update Statistics performance is poor. Key: TRAFODION-269 URL: https://issues.apache.org/jira/browse/TRAFODION-269 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Barry Fritchman Priority: Critical Fix For: 0.8 (pre-incubation) Update Statistics currently takes several times longer to complete than for Seaquest. The performance was improved somewhat by the partial fix for LP1301023, which caused the correct plan to be used for internally generated queries, but the performance is still not adequate. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-270) LP Bug: 1319965 - Regionserver looping on active transaction.
[ https://issues.apache.org/jira/browse/TRAFODION-270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708643#comment-14708643 ] Atanu Mishra commented on TRAFODION-270: Oliver Bucaojit (oliver-bucaojit) wrote on 2014-05-15: #1 In the transactional region, we delay the split from occurring until there are no active transactions. The transaction state is held by each region, and work would need to be done to move the necessary data to the daughter region on split if we are to support region splitting. Our current fix for this issue is to have regions delay splitting when there are preparing or active transactions during a split, which is why the log message "Preparing to close the transactional region" is repeated while the region is stuck. One solution on the user side is to check for open sqlci or JDBC connections with transactions running and close them. This will allow the region to split immediately. Another solution, which I've implemented, disables the split delaying; transactions that are in flight will then be aborted, since they will not be able to communicate with the region after it has been split and relocated. This is useful in development, or if we want a split to proceed and don't want the region to get stuck in a loop. I have seen cases where we get stuck in a loop because the C++ side DTM aborts or gets killed and a transaction remains on the HBase region. The property to disable the split delay is below, and is added to conf/hbase-site.xml:
  <property>
    <name>hbase.regionserver.region.split.delay</name>
    <value>false</value>
  </property>
If there is a case where no sqlci sessions or transactions are running, the HBase region is stuck in this loop waiting for active transactions, and the TM is still running, then there is a bug and we will need to gather more logging and process information to debug it.
One way to easily check if there is a transaction running from the TM perspective is through dtmci. Running dtmci and using the 'list' command will print out the current transactions and their states. Bouncing the system will also get HBase back into a normal state because there will be no active transactions at that point. If there were any prepared transactions then it would go through the recovery flow and get redriven to abort or commit. Atanu Mishra (atanu-mishra) wrote on 2014-05-19: #2 Analysis by Narendra -- Later today, I would be checking in these changes to the datalake branch. Basically, only use the TransactionalRegion when the transid is not 0. That way, we will not ‘implicitly’ start a transaction with transid=0. This should help with the Launchpad bug 1319965. What happens is that once we start a transaction (in the region server) with transid=0, it stays put (as the user did not start a transaction with ‘transid=0’), and hence when the user tries to split the region manually, the splitting does not happen (as the object transactionsById is not empty). [I applied the same code updates to the other aggregator methods in this class (getMax/Min/Sum…)] I am hoping that it would help with the bug 1309121 too. Changed in trafodion: status: New → In Progress Atanu Mishra (atanu-mishra) on 2014-05-20 Changed in trafodion: status: In Progress → Fix Committed Stacey Johnson (sjohnson-w) on 2014-06-10 information type: Proprietary → Public Alice Chen (alchen) on 2014-10-15 Changed in trafodion: milestone: none → r0.8 status: Fix Committed → Fix Released LP Bug: 1319965 - Regionserver looping on active transaction. - Key: TRAFODION-270 URL: https://issues.apache.org/jira/browse/TRAFODION-270 Project: Apache Trafodion Issue Type: Bug Components: dtm Reporter: Guy Groulx Assignee: Oliver Bucaojit Priority: Critical Fix For: 0.8 (pre-incubation) I was trying to create a scenario for Trina on the errors we are getting when a split and/or balance happens.
So I tried to use the split command on a table to force a split. It looked like the split command was not working. I started looking at the various logs and found that the regionserver containing my table goes into a loop about a transaction not completing. Here’s my table (Table Regions):
Name | Region Server | Start Key | End Key | Requests
TRAFODION.MXOLTP.TBL500,,1400169591672.616691fe476f230bfebe6b7a3907f0b8. | n006.cm.cluster:60030 | | \x00\x07\xBF\x14 |
TRAFODION.MXOLTP.TBL500,\x00\x07\xBF\x14,1400169591672.dc2f54cc3e9aee81aa5635e704d17753. | n006.cm.cluster:60030 | \x00\x07\xBF\x14 | |
I.e., currently both regions are on n006. Starting trafci: SQLset schema trafodion.mxoltp; --- SQL
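The transid guard Narendra describes in comment #2 above can be sketched as follows. Apart from transactionsById, which the comment itself mentions, the class and method names here are illustrative stand-ins, not the actual Trafodion region-server code.

```java
// Sketch of the fix: requests carrying transid == 0 (no user transaction)
// bypass the transactional path, so nothing is ever registered in
// transactionsById and a manual region split is not blocked.
// Hypothetical names; not the real Trafodion sources.
import java.util.HashMap;
import java.util.Map;

public class TransidGuard {
    private final Map<Long, String> transactionsById = new HashMap<>();

    // Returns true when the transactional code path was used.
    public boolean handleRequest(long transid) {
        if (transid == 0) {
            // Non-transactional request: serve it directly, register nothing.
            return false;
        }
        // Real transaction: track it until commit/abort removes it.
        transactionsById.put(transid, "ACTIVE");
        return true;
    }

    // The region may split only when no transactions are registered.
    public boolean canSplit() {
        return transactionsById.isEmpty();
    }

    public static void main(String[] args) {
        TransidGuard region = new TransidGuard();
        region.handleRequest(0);                 // implicit transid=0 request
        System.out.println(region.canSplit());   // split not blocked
        region.handleRequest(42);                // genuine transaction
        System.out.println(region.canSplit());   // split now delayed
    }
}
```

Before the fix, the transid=0 entry "stayed put" in transactionsById, which is exactly why the split loop in this bug never terminated.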
[jira] [Commented] (TRAFODION-269) LP Bug: 1319524 - Update Statistics performance is poor.
[ https://issues.apache.org/jira/browse/TRAFODION-269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708641#comment-14708641 ] Atanu Mishra commented on TRAFODION-269: Barry Fritchman (barry-fritchman) wrote on 2014-05-30: #1 The performance degradation relative to Seaquest is unquestionable, but I'm not so sure it is limited to Update Stats. I tried using the same query we use to retrieve data from which histograms are constructed for a given column outside the ustat context, i.e., directly from sqlci. The query is SELECT FMTVAL, SUMVAL FROM (SELECT d, TRIM(TRAILING FROM CAST(d AS VARCHAR(30) CHARACTER SET UCS2)), COUNT(*) FROM cube2 GROUP BY d FOR READ UNCOMMITTED ACCESS) T(d, FMTVAL, SUMVAL) ORDER BY d; where d is the column name in this case. On a 1-million row table with 4 partitions, the execution time for this query in Trafodion was 130 seconds, whereas it takes about 2.5 seconds in Seaquest. The plans were very similar in both environments. Barry Fritchman (barry-fritchman) wrote on 2014-05-31: #2 I asked Taoufik to try to reproduce the results I referred to previously, since I ran the SQ and Traf sides of the test on different workstations. His results for SQ were the same, but the time on Trafodion was significantly faster -- 20 seconds as opposed to my 130 seconds. So the disparity is not as severe as I first thought, but still represents an order of magnitude difference, which roughly parallels the performance difference for Update Stats on SQ/Traf. For reference, here are the plans used for SQ and Traf.
Plan for SQ:
LC RC OP OPERATOR OPT DESCRIPTION CARD
8 . 9 root 1.00E+001
7 . 8 esp_exchange sm 1:4(hash2) (m) 1.00E+001
6 . 7 sort_partial_groupby 1.00E+001
5 . 6 sort 1.00E+001
4 . 5 esp_exchange sm 4(hash2):4(hash2) 1.00E+001
3 . 4 hash_partial_groupby 1.00E+001
2 . 3 esp_exchange sm 4(hash2):1 1.00E+006
1 . 2 partition_access 1.00E+006
. . 1 file_scan fs fr CUBE2 1.00E+006
Plan for Traf:
LC RC OP OPERATOR OPT DESCRIPTION CARD
6 . 7 root 1.00E+001
5 . 6 sort_partial_groupby 1.00E+001
4 . 5 sort 1.00E+001
3 . 4 esp_exchange 1:3(hash2) 1.00E+001
2 . 3 hash_partial_groupby 1.00E+001
1 . 2 esp_exchange 3(hash2):2(range) 1.00E+006
. . 1 trafodion_scan CUBE2 1.00E+006
Taoufik tinkered with the degree of esp parallelism, but the original plan above was the fastest. Stacey Johnson (sjohnson-w) on 2014-06-10 information type: Proprietary → Public Barry Fritchman (barry-fritchman) wrote on 2014-07-07: #3 In addition to the improvements listed above, a change was made to perform sampling in the hbase layer instead of in Trafodion, which greatly reduces the number of rows returned from hbase. For the default 1% sampling rate, performance improvements of 2-4x were observed. Although performance improvement is an ongoing task for Trafodion, with these changes I believe the situation has improved sufficiently that this defect can be closed. Changed in trafodion: status: In Progress → Fix Committed Julie Thai (julie-y-thai) on 2014-08-06 Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1319524 - Update Statistics performance is poor. Key: TRAFODION-269 URL: https://issues.apache.org/jira/browse/TRAFODION-269 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Barry Fritchman Assignee: Barry Fritchman Priority: Critical Update Statistics currently takes several times longer to complete than for Seaquest. The performance was improved somewhat by the partial fix for LP1301023, which caused the correct plan to be used for internally generated queries, but the performance is still not adequate. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-229) LP Bug: 1308695 - Indexes caused insert to assert in ../optimizer/BindRelExpr.cpp
[ https://issues.apache.org/jira/browse/TRAFODION-229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-229. -- Resolution: Fixed Assignee: (was: Anoop Sharma) Fix Version/s: 0.8 (pre-incubation) LP Bug: 1308695 - Indexes caused insert to assert in ../optimizer/BindRelExpr.cpp - Key: TRAFODION-229 URL: https://issues.apache.org/jira/browse/TRAFODION-229 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Weishiun Tsai Priority: Critical Fix For: 0.8 (pre-incubation) When a table is dropped and recreated with the same name, if indexes were created for the tables, the insert on the second table returns an internal error: *** ERROR[2006] Internal error: assertion failure (tgtcols.entries() == baseColRefs().entries()) in file ../optimizer/BindRelExpr.cpp at line 11964. [2014-04-16 11:27:30] This is seen on the beta v39140 build, both on a workstation, and on the cluster installation. Here is the script to reproduce this problem. Following it are 2 execution outputs. The 1st output shows the errors. The 2nd output shows that the insert runs fine if the indexes were not created. == Create table T3(F int default null, G smallint default null, H largeint not null not droppable primary key, I numeric(9,3) default null); create index num_idx on T3(I); insert into T3 values(2, 1,1,1); drop table T3 cascade; Create table T3(F int not null not droppable , G smallint not null not droppable , H largeint not null not droppable , I numeric(9,3) default null, primary key (F,G,H) ); create index num_idx on T3(I); insert into T3 values(2, 1,1,1); insert into T3 values(4, 2,2,1); == SQLCreate table T3(F int default null, +G smallint default null, +H largeint not null not droppable primary key, +I numeric(9,3) default null); --- SQL operation complete. SQLcreate index num_idx on T3(I); --- SQL operation complete. SQLinsert into T3 values(2, 1,1,1); --- 1 row(s) inserted. 
SQLdrop table T3 cascade; --- SQL operation complete. SQLCreate table T3(F int not null not droppable , +G smallint not null not droppable , +H largeint not null not droppable , +I numeric(9,3) default null, +primary key (F,G,H) ); --- SQL operation complete. SQLcreate index num_idx on T3(I); --- SQL operation complete. SQLinsert into T3 values(2, 1,1,1); *** ERROR[2006] Internal error: assertion failure (tgtcols.entries() == baseColRefs().entries()) in file ../optimizer/BindRelExpr.cpp at line 11964. [2014-04-16 11:27:30] SQLinsert into T3 values(4, 2,2,1); *** ERROR[2006] Internal error: assertion failure (tgtcols.entries() == baseColRefs().entries()) in file ../optimizer/BindRelExpr.cpp at line 11964. [2014-04-16 11:27:30] == SQLCreate table T3(F int default null, +G smallint default null, +H largeint not null not droppable primary key, +I numeric(9,3) default null); --- SQL operation complete. SQL-- create index num_idx on T3(I); SQLinsert into T3 values(2, 1,1,1); --- 1 row(s) inserted. SQLdrop table T3 cascade; --- SQL operation complete. SQLCreate table T3(F int not null not droppable , +G smallint not null not droppable , +H largeint not null not droppable , +I numeric(9,3) default null, +primary key (F,G,H) ); --- SQL operation complete. SQL-- create index num_idx on T3(I); SQLinsert into T3 values(2, 1,1,1); --- 1 row(s) inserted. SQLinsert into T3 values(4, 2,2,1); --- 1 row(s) inserted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
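The failing assertion above compares the number of insert target columns with the number of base-table column references; when stale cached metadata from the dropped table (here, its index) survives the drop, the two counts disagree. A minimal sketch of that invariant with hypothetical names — not the binder's actual code:

```java
// Illustrates the invariant behind the internal error
// "assertion failure (tgtcols.entries() == baseColRefs().entries())":
// the binder expects one base-column reference per target column.
// Names are hypothetical stand-ins for the BindRelExpr.cpp logic.
import java.util.Arrays;
import java.util.List;

public class BindCheck {
    public static void checkColumnCounts(List<String> tgtCols, List<String> baseColRefs) {
        if (tgtCols.size() != baseColRefs.size()) {
            throw new IllegalStateException(
                "assertion failure (tgtcols.entries() == baseColRefs().entries())");
        }
    }

    public static void main(String[] args) {
        List<String> table = Arrays.asList("F", "G", "H", "I");
        // Consistent metadata: counts match, insert binds cleanly.
        checkColumnCounts(table, Arrays.asList("F", "G", "H", "I"));
        // Stale metadata left over from the dropped table contributes an
        // extra reference, so the counts diverge and the error fires.
        try {
            checkColumnCounts(table, Arrays.asList("F", "G", "H", "I", "I_STALE"));
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```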
[jira] [Closed] (TRAFODION-230) LP Bug: 1308749 - Update statistics sees errors 9214/9215/20123/4222
[ https://issues.apache.org/jira/browse/TRAFODION-230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-230. -- Resolution: Fixed Assignee: (was: Barry Fritchman) Fix Version/s: 0.8 (pre-incubation) LP Bug: 1308749 - Update statistics sees errors 9214/9215/20123/4222 Key: TRAFODION-230 URL: https://issues.apache.org/jira/browse/TRAFODION-230 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Weishiun Tsai Priority: Critical Fix For: 0.8 (pre-incubation) When running update statistics on the following table, it returns errors 9214/9215/20123/4222. This is pretty reproducible right now on the beta build v39140 installed on the cluster centos-mapr1. To produce this problem will require the QA g_tpcds1x global tables to be created first. create table sb_customer +( +c_customer_sk integer not null, +c_customer_id char(16) not null, +c_current_cdemo_sk integer , +c_current_hdemo_sk integer , +c_current_addr_sk integer , +c_first_shipto_date_sk integer , +c_first_sales_date_sk integer , +c_salutation char(10) , +c_first_name char(20) not null, +c_last_name char(30) not null, +c_preferred_cust_flag char(1) , +c_birth_day integer , +c_birth_month integer , +c_birth_year integer , +c_birth_country varchar(20) , +c_login char(13) , +c_email_address char(50) , +c_last_review_date char(10) , +primary key (c_customer_sk, c_last_name, c_first_name) +); --- SQL operation complete. upsert using load into sb_customer (select * from trafodion.g_tpcds1x.customer where c_customer_sk is not null and c_last_name is not null and c_first_name is not null); --- 94760 row(s) inserted. select count(*) from trafodion.g_tpcds1x.customer; (EXPR) 10 --- 1 row(s) selected. select count(*) from sb_customer; (EXPR) 94760 --- 1 row(s) selected. 
update statistics for table sb_customer on every column, (C_CUSTOMER_SK, C_LAST_NAME, C_BIRTH_DAY) sample; *** ERROR[9214] Object TRAFODION.PUBLIC_ACCESS_SCHEMA.SQLMX_37187003981691445_1397679531_607967 could not be created. *** ERROR[20123] A user-defined transaction has been started. This DDL operation cannot be performed. *** ERROR[9215] UPDATE STATISTICS encountered an internal error (from TRAFODION.PUBLIC_ACCESS_SCHEMA.SQLMX_37187003981691445_1397679531_607967, with return value=). Details: . *** ERROR[4222] The ALTER feature is not supported in this software version. *** ERROR[8822] The statement was not prepared. --- SQL operation failed with errors. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-230) LP Bug: 1308749 - Update statistics sees errors 9214/9215/20123/4222
[ https://issues.apache.org/jira/browse/TRAFODION-230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706307#comment-14706307 ] Atanu Mishra commented on TRAFODION-230: Pavani Puppala (pavani-puppala) on 2014-05-01 Changed in trafodion: status: In Progress → Fix Committed Weishiun Tsai (wei-shiun-tsai) wrote on 2014-05-09: #1 Verified on the datalake v40158 build. This problem has been fixed: [...] --- SQL operation complete. upsert using load into sb_customer (select * from trafodion.g_tpcds1x.customer where c_customer_sk is not null and c_last_name is not null and c_first_name is not null); --- 94760 row(s) inserted. select count(*) from trafodion.g_tpcds1x.customer; (EXPR) 10 --- 1 row(s) selected. select count(*) from sb_customer; (EXPR) 94760 --- 1 row(s) selected. update statistics for table sb_customer on every column, (C_CUSTOMER_SK, C_LAST_NAME, C_BIRTH_DAY) sample; --- SQL operation complete. Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1308749 - Update statistics sees errors 9214/9215/20123/4222 Key: TRAFODION-230 URL: https://issues.apache.org/jira/browse/TRAFODION-230 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Weishiun Tsai Assignee: Barry Fritchman Priority: Critical Fix For: 0.8 (pre-incubation) When running update statistics on the following table, it returns errors 9214/9215/20123/4222. This is pretty reproducible right now on the beta build v39140 installed on the cluster centos-mapr1. To produce this problem will require the QA g_tpcds1x global tables to be created first. 
create table sb_customer +( +c_customer_sk integer not null, +c_customer_id char(16) not null, +c_current_cdemo_sk integer , +c_current_hdemo_sk integer , +c_current_addr_sk integer , +c_first_shipto_date_sk integer , +c_first_sales_date_sk integer , +c_salutation char(10) , +c_first_name char(20) not null, +c_last_name char(30) not null, +c_preferred_cust_flag char(1) , +c_birth_day integer , +c_birth_month integer , +c_birth_year integer , +c_birth_country varchar(20) , +c_login char(13) , +c_email_address char(50) , +c_last_review_date char(10) , +primary key (c_customer_sk, c_last_name, c_first_name) +); --- SQL operation complete. upsert using load into sb_customer (select * from trafodion.g_tpcds1x.customer where c_customer_sk is not null and c_last_name is not null and c_first_name is not null); --- 94760 row(s) inserted. select count(*) from trafodion.g_tpcds1x.customer; (EXPR) 10 --- 1 row(s) selected. select count(*) from sb_customer; (EXPR) 94760 --- 1 row(s) selected. update statistics for table sb_customer on every column, (C_CUSTOMER_SK, C_LAST_NAME, C_BIRTH_DAY) sample; *** ERROR[9214] Object TRAFODION.PUBLIC_ACCESS_SCHEMA.SQLMX_37187003981691445_1397679531_607967 could not be created. *** ERROR[20123] A user-defined transaction has been started. This DDL operation cannot be performed. *** ERROR[9215] UPDATE STATISTICS encountered an internal error (from TRAFODION.PUBLIC_ACCESS_SCHEMA.SQLMX_37187003981691445_1397679531_607967, with return value=). Details: . *** ERROR[4222] The ALTER feature is not supported in this software version. *** ERROR[8822] The statement was not prepared. --- SQL operation failed with errors. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-240) LP Bug: 1312922 - merge command should not allow to update primary key values
[ https://issues.apache.org/jira/browse/TRAFODION-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-240. -- Resolution: Fixed Assignee: (was: Anoop Sharma) Fix Version/s: 0.8 (pre-incubation) LP Bug: 1312922 - merge command should not allow to update primary key values - Key: TRAFODION-240 URL: https://issues.apache.org/jira/browse/TRAFODION-240 Project: Apache Trafodion Issue Type: Bug Reporter: Apache Trafodion Priority: Critical Fix For: 0.8 (pre-incubation) create table mychar (col1 largeint not null, + col2 char(10), + col3 char(5), + col4 char(20), + primary key(col1)); --- SQL operation complete. insert into mychar values +(1,'AA','B','my longish string'), +(2,'bb','c','my second string'); merge into mychar on col1=2 when matched then +update set (col1,col2)=(20,'a'); --- 1 row(s) updated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
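The bug above is that MERGE silently accepted col1 — a primary-key column — in its UPDATE SET list. The check the fix needs is simple to sketch: reject any SET column that belongs to the key. Names here are hypothetical, not Trafodion's parser/binder code:

```java
// Sketch of the missing validation: a MERGE ... WHEN MATCHED THEN UPDATE
// must not assign to primary-key columns, since that would relocate the row.
// Hypothetical names; illustrative only.
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MergeSetCheck {
    public static void validateSetColumns(Set<String> primaryKey, List<String> setCols) {
        for (String col : setCols) {
            if (primaryKey.contains(col)) {
                throw new IllegalArgumentException(
                    "MERGE may not update primary key column " + col);
            }
        }
    }

    public static void main(String[] args) {
        Set<String> pk = new HashSet<>(Arrays.asList("COL1"));
        // Legal: only non-key columns updated.
        validateSetColumns(pk, Arrays.asList("COL2", "COL3"));
        // Illegal: the statement from the bug report, SET (col1, col2) = (20, 'a').
        try {
            validateSetColumns(pk, Arrays.asList("COL1", "COL2"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```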
[jira] [Commented] (TRAFODION-237) LP Bug: 1311871 - compiler doesn't set begin and end key for salted tables
[ https://issues.apache.org/jira/browse/TRAFODION-237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706320#comment-14706320 ] Atanu Mishra commented on TRAFODION-237: Ravisha Neelakanthappa (ravisha-neelakanthappa) wrote on 2014-05-05: #1 There are three problems:
1. The optimizer doesn't choose an MDAM plan for range predicates on a key column. For a range predicate, the default plan for a salted table should be an MDAM plan. This plan was not being considered because the checkMDAMadditionalRestriction() method was deciding the MDAM plan didn't make sense. I have added logic so that if the leading key column is the salt column, the missing predicate on _SALT_ is ignored and the MDAM plan is considered.
2. Explain info of an MDAM plan doesn't display disjuncts information. The explain code, FileScan::addLocalExpr() in RelExpr.cpp, thinks the plan is a subset scan even though the MDAM key is not NULL. The reason is that both the subset scan key and the mdam key are set up in the HbaseAccess object. If the partitioning function is Range, HbaseAccess::preCodeGen() creates a searchKey; since the first check is if (getSearchKeyPtr() != NULL), we think it's a subset scan. The correct way is to check getMdamKeyPtr() first, as is done in many places in the generator code – this is what I have done.
3. A parallel MDAM plan with a range partitioning function returns every row twice. The user should get 50 rows, but we get 100 rows, 50 from each range partition. If the DoP is 4 range partitions, then we would get 200 rows.
prepare xx from select count(*) from shb1 where shb1.uniq 50 for read uncommitted access; --- SQL command prepared. explain options 'f' xx;
LC RC OP OPERATOR OPT DESCRIPTION CARD
4 . 5 root 1.00E+000
3 . 4 sort_partial_aggr_ro 1.00E+000
2 . 3 esp_exchange 1:2(range) 1.00E+000
1 . 2 sort_partial_aggr_le 1.00E+000
. . 1 trafodion_scan SHB1 5.00E+001
--- SQL operation complete. set statistics on; execute xx; (EXPR) 100 --- 1 row(s) selected. 
Start Time 2014/05/05 14:29:01.384941
End Time 2014/05/05 14:29:02.768792
Elapsed Time 00:00:01.383851
Compile Time 00:00:05.954896
Execution Time 00:00:01.383851
Table Name            Records Accessed  Records Used  Hdfs I/Os  Hdfs I/O Bytes  Hdfs Access Time(usec)
TRAFODION.HBASE.SHB1  100               100           0          0               215341
--- SQL operation complete. Hans is going to fix the third problem. Ravisha Neelakanthappa (ravisha-neelakanthappa) wrote on 2014-05-15: #2 I have fixed the incorrect results issue by adding partition key predicates as additional mdam disjuncts. This happens at the preCodeGen phase. With this change, I get correct results, i.e., 50 rows: execute xx; (EXPR) 50 --- 1 row(s) selected.
Start Time 2014/05/06 23:09:07.880913
End Time 2014/05/06 23:09:09.365614
Elapsed Time 00:00:01.484701
Compile Time 00:00:06.210960
Execution Time 00:00:01.484701
Table Name            Records Accessed  Records Used  Hdfs I/Os  Hdfs I/O Bytes  Hdfs Access Time(usec)
TRAFODION.HBASE.SHB1  50                50            0          0               190726
--- SQL operation complete. Changed in trafodion: status: In Progress → Fix Committed Stacey Johnson (sjohnson-w) on 2014-06-10 information type: Proprietary → Public Julie Thai (julie-y-thai) wrote on 2014-08-12: #3 Verified on cluster, build 20140730_0830. Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1311871 - compiler doesn't set begin and end key for salted tables -- Key: TRAFODION-237 URL: https://issues.apache.org/jira/browse/TRAFODION-237 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Ravisha Neelakanthappa Assignee: Ravisha Neelakanthappa Priority: Critical The compiler doesn't set the begin and end key on salted tables. Because of this, a scan on HBase tables with range predicates on key columns reads all the rows. The same problem exists for both the serial plan (with a single partition function) and the parallel plan with a range partition function. 
To reproduce: set schema trafodion.hbase; drop table shb2; create table shb2 (uniq int not null, c10K int , c1K int, c100 int, c10 int, c1 int, primary key (uniq) ) salt using 4 partitions ; upsert with no rollback into shb2 select 0 + (1 * x1) + (1000 * x1000) + (100 * x100) + (10 * x10) +( 1 * x1), 0 + (1000 * x1000) + (100 * x100) + (10 * x10) + (1 * x1), 0 + (100 * x100) + (10 * x10) + (1 * x1), 0 + (10 * x10) + (1 * x1), 0 + (1 * x1), 0 from (values(0))t transpose 0,1,2,3,4,5,6,7,8,9 as x1 transpose 0,1,2,3,4,5,6,7,8,9 as x1000 transpose 0,1,2,3,4,5,6,7,8,9 as x100 transpose
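The first fix described in comment #1 — consider an MDAM plan even though there is no predicate on the leading _SALT_ column — can be sketched as a small eligibility check. This is a hypothetical stand-in for checkMDAMadditionalRestriction(), not the optimizer's actual logic:

```java
// Sketch: MDAM normally wants a predicate on the leading key column. For a
// salted table the leading column is the synthetic _SALT_ column, which user
// predicates never mention, so a missing predicate on it is forgiven and
// MDAM enumerates all salt values instead. Hypothetical stand-in code.
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MdamCheck {
    public static boolean considerMdam(List<String> keyCols, Set<String> predCols) {
        for (int i = 0; i < keyCols.size(); i++) {
            String col = keyCols.get(i);
            if (i == 0 && col.equals("_SALT_")) {
                continue;  // forgive the salt column: MDAM probes every partition
            }
            // MDAM is worthwhile only if this key column carries a predicate.
            return predCols.contains(col);
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> keyCols = Arrays.asList("_SALT_", "UNIQ");
        // Range predicate on UNIQ only, as in the repro above:
        System.out.println(considerMdam(keyCols, new HashSet<>(Arrays.asList("UNIQ"))));
        // No key predicate at all: MDAM is not considered.
        System.out.println(considerMdam(keyCols, new HashSet<String>()));
    }
}
```

Before the fix, the missing _SALT_ predicate made the whole key prefix look unusable, so the optimizer fell back to a full subset scan.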
[jira] [Closed] (TRAFODION-237) LP Bug: 1311871 - compiler doesn't set begin and end key for salted tables
[ https://issues.apache.org/jira/browse/TRAFODION-237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-237. -- Resolution: Fixed Assignee: (was: Ravisha Neelakanthappa) Fix Version/s: 0.8 (pre-incubation) LP Bug: 1311871 - compiler doesn't set begin and end key for salted tables -- Key: TRAFODION-237 URL: https://issues.apache.org/jira/browse/TRAFODION-237 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Ravisha Neelakanthappa Priority: Critical Fix For: 0.8 (pre-incubation) The compiler doesn't set the begin and end key on salted tables. Because of this, a scan on HBase tables with range predicates on key columns reads all the rows. The same problem exists for both the serial plan (with a single partition function) and the parallel plan with a range partition function. To reproduce: set schema trafodion.hbase; drop table shb2; create table shb2 (uniq int not null, c10K int , c1K int, c100 int, c10 int, c1 int, primary key (uniq) ) salt using 4 partitions ; upsert with no rollback into shb2 select 0 + (1 * x1) + (1000 * x1000) + (100 * x100) + (10 * x10) +( 1 * x1), 0 + (1000 * x1000) + (100 * x100) + (10 * x10) + (1 * x1), 0 + (100 * x100) + (10 * x10) + (1 * x1), 0 + (10 * x10) + (1 * x1), 0 + (1 * x1), 0 from (values(0))t transpose 0,1,2,3,4,5,6,7,8,9 as x1 transpose 0,1,2,3,4,5,6,7,8,9 as x1000 transpose 0,1,2,3,4,5,6,7,8,9 as x100 transpose 0,1,2,3,4,5,6,7,8,9 as x10 transpose 0,1,2,3,4,5,6,7,8,9 as x1 ; update statistics for table shb2 on every column; prepare xx from select count(*), uniq, c10K, c1K, c100, c10, c1 from shb1 where shb1.uniq 50 group by uniq, c10K, c1K, C100, c10, c1 for read uncommitted access; explain options 'f' xx;
LC RC OP OPERATOR OPT DESCRIPTION CARD
3 . 4 root 1.00E+000
2 . 3 firstn 1.00E+000
1 . 2 hash_groupby 5.00E+001
. . 1 trafodion_scan SHB1 5.00E+001
--- SQL operation complete.
TRAFODION_SCAN SEQ_NO 1 NO CHILDREN
TABLE_NAME ............. SHB1
REQUESTS_IN ............ 1
ROWS_OUT ............... 50
EST_OPER_COST .......... 0.03
EST_TOTAL_COST ......... 0.03
DESCRIPTION
max_card_est ........... 50
fragment_id ............ 0
parent_frag ............ (none)
fragment_type .......... master
scan_type .............. subset scan of table TRAFODION.HBASE.SHB1
columns ................ all
begin_keys(incl)
end_keys(incl)
key_columns ............ _SALT_, UNIQ
executor_predicates .... (UNIQ 50)
part_key_predicates .... (UNIQ 50)
execute xx; --- 50 row(s) selected.
Start Time 2014/04/16 15:32:38.308978
End Time 2014/04/16 15:32:45.236699
Elapsed Time 00:00:06.927721
Compile Time 00:00:00.093071
Execution Time 00:00:06.927721
Table Name            Records Accessed  Records Used  Hdfs I/Os  Hdfs I/O Bytes  Hdfs Access Time(usec)
TRAFODION.HBASE.SHB1  10                50            0          0               6923951
--- SQL operation complete. -- with Range partition plan explain options 'f' xx;
LC RC OP OPERATOR OPT DESCRIPTION CARD
4 . 5 root 5.00E+001
3 . 4 esp_exchange 1:12(hash2) 5.00E+001
2 . 3 hash_groupby 5.00E+001
1 . 2 esp_exchange 12(hash2):2(range) 5.00E+001
. . 1 trafodion_scan SHB1 5.00E+001
--- SQL operation complete.
TRAFODION_SCAN SEQ_NO 1 NO CHILDREN
TABLE_NAME ............. SHB1
REQUESTS_IN ............ 1
ROWS_OUT ............... 50
EST_OPER_COST .......... 0.03
EST_TOTAL_COST ......... 0.03
DESCRIPTION
max_card_est ........... 50
fragment_id ............ 3
parent_frag ............ 2
fragment_type .......... esp
scan_type .............. subset scan of table TRAFODION.HBASE.SHB1
key_columns ............ _SALT_, UNIQ
executor_predicates .... (UNIQ 50)
part_key_predicates .... (UNIQ
[jira] [Closed] (TRAFODION-227) LP Bug: 1308306 - Tdm_arkcmp cores when running Opencart queries
[ https://issues.apache.org/jira/browse/TRAFODION-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-227. -- Resolution: Fixed Assignee: (was: Qifan Chen) Fix Version/s: 0.8 (pre-incubation) LP Bug: 1308306 - Tdm_arkcmp cores when running Opencart queries Key: TRAFODION-227 URL: https://issues.apache.org/jira/browse/TRAFODION-227 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Weishiun Tsai Priority: Critical Fix For: 0.6 (pre-incubation), 0.8 (pre-incubation) With the set of Opencart queries that Venkat ported over for Trafodion, they generated lots of tdm_arkcmp cores. Opencart DDLs and DMLs have been ported as part of the QA regression test. The select queries that caused the cores did not return error messages, but running the entire set of Opencart queries left about 166 of cores on the node. The core files can be found at the $SQ_HOME directory of the node where mxosrvr runs. After the test is started, one can check the DCS master-status page when the DMLs are running to find out the connection node where the cores could be found. This is seen on the beta v39140 build installed on the cluster centos-mapr1. A typical stack trace of such core looks like the following. (gdb) bt #0 0x003db38328e5 in raise () from /lib64/libc.so.6 #1 0x003db38340c5 in abort () from /lib64/libc.so.6 #2 0x7f0446b0a8b5 in ?? () from /usr/lib/jvm/java/jre/lib/amd64/server/libjvm.so #3 0x7f0446c7878f in ?? 
() from /usr/lib/jvm/java/jre/lib/amd64/server/libjvm.so #4 0x7f0446b0fa82 in JVM_handle_linux_signal () from /usr/lib/jvm/java/jre/lib/amd64/server/libjvm.so #5 signal handler called #6 0x7f044a417954 in RangePartitionBoundaries::getOptimizedNumberOfPartKeys (this=value optimized out) at ../optimizer/PartFunc.cpp:3158 #7 0x7f044a4293eb in RangePartitioningFunction::createPartitioningFunctionForIndexDesc (this=0x7f042ae87ec0, idesc=0x7f042ae3d4e0) at ../optimizer/PartFunc.cpp:4152 #8 0x7f044a1b8e48 in IndexDesc::IndexDesc (this=0x7f042ae3d4e0, tdesc=0x7f042ae36f50, fileSet=0x7f042ae88138, cmpContext=0x7f042ae88138) at ../optimizer/IndexDesc.cpp:225 #9 0x7f044a09070e in createTableDesc2 (bindWA=value optimized out, naTable=0x7f042ae7f0f8, corrName=..., hint=0x0) at ../optimizer/BindRelExpr.cpp:1568 #10 0x7f044a091e7f in BindWA::createTableDesc (this=0x7fffd8cc4f80, naTable=0x7f042ae7f0f8, corrName=..., catmanCollectUsages=0, hint=value optimized out) at ../optimizer/BindRelExpr.cpp:1646 #11 0x7f044a0a5f87 in Scan::bindNode (this=0x7f042bcc2fa8, bindWA=0x7fffd8cc4f80) at ../optimizer/BindRelExpr.cpp:6723 #12 0x7f044a07ce57 in RelExpr::bindChildren (this=0x7f042bcc3a20, bindWA=0x7fffd8cc4f80) at ../optimizer/BindRelExpr.cpp:2164 #13 0x7f044a0b9ea1 in Join::bindNode (this=0x7f042bcc3a20, bindWA=0x7fffd8cc4f80) at ../optimizer/BindRelExpr.cpp:2498 #14 0x7f044a07ce57 in RelExpr::bindChildren (this=0x7f042bccf1a8, bindWA=0x7fffd8cc4f80) at ../optimizer/BindRelExpr.cpp:2164 #15 0x7f044a0b9ea1 in Join::bindNode (this=0x7f042bccf1a8, bindWA=0x7fffd8cc4f80) at ../optimizer/BindRelExpr.cpp:2498 #16 0x7f044a07ce57 in RelExpr::bindChildren (this=0x7f042bcd3bf0, bindWA=0x7fffd8cc4f80) at ../optimizer/BindRelExpr.cpp:2164 #17 0x7f044a0b7296 in RelRoot::bindNode (this=0x7f042bcd3bf0, bindWA=0x7fffd8cc4f80) at ../optimizer/BindRelExpr.cpp:5001 #18 0x7f044ce6b524 in CmpMain::compile (this=0x7fffd8cc71b0, input_str=0x7f042bcc7c28 SELECT COUNT(DISTINCT p.product_id) AS total FROM 
oc_category_path cp LEFT JOIN oc_product_to_category p2c ON (cp.category_id = p2c.category_id) LEFT JOIN oc_product p ON (p2c.product_id = p.product_i..., charset=15, queryExpr=@0x7fffd8cc70e8, gen_code=0x7f042ae96b88, gen_code_len=0x7f042ae96b80, heap=0x7f043e40adb8, phase=CmpMain::END, fragmentDir=0x7fffd8cc7308, op=3004, useQueryCache=value optimized out, cacheable=0x7fffd8cc70f8, begTime=0x7fffd8cc70d0, shouldLog=0) at ../sqlcomp/CmpMain.cpp:1755 #19 0x7f044ce6de5e in CmpMain::sqlcomp (this=0x7fffd8cc71b0, input_str=0x7f042bcc7c28 SELECT COUNT(DISTINCT p.product_id) AS total FROM oc_category_path cp LEFT JOIN oc_product_to_category p2c ON (cp.category_id = p2c.category_id) LEFT JOIN oc_product p ON (p2c.product_id = p.product_i..., charset=15, queryExpr=@0x7fffd8cc70e8, gen_code=0x7f042ae96b88, gen_code_len=0x7f042ae96b80, heap=0x7f043e40adb8, phase=CmpMain::END, fragmentDir=0x7fffd8cc7308, op=3004, useQueryCache=1, cacheable=0x7fffd8cc70f8, begTime=0x7fffd8cc70d0,
[jira] [Commented] (TRAFODION-229) LP Bug: 1308695 - Indexes caused insert to assert in ../optimizer/BindRelExpr.cpp
[ https://issues.apache.org/jira/browse/TRAFODION-229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706305#comment-14706305 ] Atanu Mishra commented on TRAFODION-229: Anoop Sharma (anoop-sharma) on 2014-05-05 Changed in trafodion: status: In Progress → Fix Committed Weishiun Tsai (wei-shiun-tsai) wrote on 2014-05-09: #1 Verified on the datalake v40174 build. This problem has been fixed: Create table T3(F int default null, + G smallint default null, + H largeint not null not droppable primary key, + I numeric(9,3) default null); --- SQL operation complete. create index num_idx on T3(I); --- SQL operation complete. insert into T3 values(2, 1,1,1); --- 1 row(s) inserted. drop table T3 cascade; --- SQL operation complete. Create table T3(F int not null not droppable , + G smallint not null not droppable , + H largeint not null not droppable , + I numeric(9,3) default null, + primary key (F,G,H) ); --- SQL operation complete. create index num_idx on T3(I); --- SQL operation complete. insert into T3 values(2, 1,1,1); --- 1 row(s) inserted. insert into T3 values(4, 2,2,1); --- 1 row(s) inserted. Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1308695 - Indexes caused insert to assert in ../optimizer/BindRelExpr.cpp - Key: TRAFODION-229 URL: https://issues.apache.org/jira/browse/TRAFODION-229 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Weishiun Tsai Assignee: Anoop Sharma Priority: Critical When a table is dropped and recreated with the same name, if indexes were created for the tables, the insert on the second table returns an internal error: *** ERROR[2006] Internal error: assertion failure (tgtcols.entries() == baseColRefs().entries()) in file ../optimizer/BindRelExpr.cpp at line 11964. [2014-04-16 11:27:30] This is seen on the beta v39140 build, both on a workstation, and on the cluster installation. Here is the script to reproduce this problem. 
Following it are 2 execution outputs. The 1st output shows the errors. The 2nd output shows that the insert runs fine if the indexes were not created.
==
Create table T3(F int default null, G smallint default null, H largeint not null not droppable primary key, I numeric(9,3) default null);
create index num_idx on T3(I);
insert into T3 values(2, 1,1,1);
drop table T3 cascade;
Create table T3(F int not null not droppable , G smallint not null not droppable , H largeint not null not droppable , I numeric(9,3) default null, primary key (F,G,H) );
create index num_idx on T3(I);
insert into T3 values(2, 1,1,1);
insert into T3 values(4, 2,2,1);
==
SQL>Create table T3(F int default null,
+>G smallint default null,
+>H largeint not null not droppable primary key,
+>I numeric(9,3) default null);
--- SQL operation complete.
SQL>create index num_idx on T3(I);
--- SQL operation complete.
SQL>insert into T3 values(2, 1,1,1);
--- 1 row(s) inserted.
SQL>drop table T3 cascade;
--- SQL operation complete.
SQL>Create table T3(F int not null not droppable ,
+>G smallint not null not droppable ,
+>H largeint not null not droppable ,
+>I numeric(9,3) default null,
+>primary key (F,G,H) );
--- SQL operation complete.
SQL>create index num_idx on T3(I);
--- SQL operation complete.
SQL>insert into T3 values(2, 1,1,1);
*** ERROR[2006] Internal error: assertion failure (tgtcols.entries() == baseColRefs().entries()) in file ../optimizer/BindRelExpr.cpp at line 11964. [2014-04-16 11:27:30]
SQL>insert into T3 values(4, 2,2,1);
*** ERROR[2006] Internal error: assertion failure (tgtcols.entries() == baseColRefs().entries()) in file ../optimizer/BindRelExpr.cpp at line 11964. [2014-04-16 11:27:30]
==
SQL>Create table T3(F int default null,
+>G smallint default null,
+>H largeint not null not droppable primary key,
+>I numeric(9,3) default null);
--- SQL operation complete.
SQL>-- create index num_idx on T3(I);
SQL>insert into T3 values(2, 1,1,1);
--- 1 row(s) inserted.
SQL>drop table T3 cascade;
--- SQL operation complete.
SQL>Create table T3(F int not null not droppable ,
+>G smallint not null not droppable ,
+>H largeint not null not droppable ,
+>I numeric(9,3) default null,
+>primary key (F,G,H) );
--- SQL operation complete.
SQL>-- create index num_idx on T3(I);
SQL>insert into T3 values(2, 1,1,1);
--- 1 row(s) inserted.
SQL>insert into T3 values(4, 2,2,1);
--- 1 row(s) inserted.
-- This message was sent
[jira] [Commented] (TRAFODION-246) LP Bug: 1315537 - ODBC: Clients fail to connect with error 'Dialogue Id does not match'
[ https://issues.apache.org/jira/browse/TRAFODION-246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706310#comment-14706310 ] Atanu Mishra commented on TRAFODION-246: Arvind Narain (arvind-narain) wrote on 2014-06-20: #2 For this bug the defect is in DCS and Zbig has a fix for the same - assigning to him. Other Zookeeper bug mentioned is 1252790 (Zookeeper entry not in connecting state. Current state is AVAILABLE) Changed in trafodion: assignee: Tharak Capirala (capirala-tharaknath) → Zbigniew Omanski (zbigniew-omanski) Daniel Lu (ping-lu) on 2014-07-09 tags: added: connectivity-dcs removed: client-odbc-windows Zbigniew Omanski (zbigniew-omanski) on 2014-07-09 Changed in trafodion: assignee: Zbigniew Omanski (zbigniew-omanski) → Tharak Capirala (capirala-tharaknath) Tharak Capirala (capirala-tharaknath) on 2014-09-03 Changed in trafodion: status: In Progress → Fix Committed Stacey Johnson (sjohnson-w) on 2014-10-16 Changed in trafodion: milestone: none → r0.8 status: Fix Committed → Fix Released LP Bug: 1315537 - ODBC: Clients fail to connect with error 'Dialogue Id does not match' --- Key: TRAFODION-246 URL: https://issues.apache.org/jira/browse/TRAFODION-246 Project: Apache Trafodion Issue Type: Bug Components: connectivity-dcs Reporter: Aruna Sadashiva Assignee: Tharak Capirala Priority: Critical Fix For: 0.9 (pre-incubation) ODBC Endurance and some coast tests fail with this error: [Trafodion ODBC Driver][Trafodion Database] SQL error:Trafodian: Dialogue ID does not match There is a different bug to track Zookeeper error. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-283) LP Bug: 1321057 - sql-security unable to log in after failed authentication
[ https://issues.apache.org/jira/browse/TRAFODION-283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708659#comment-14708659 ] Atanu Mishra commented on TRAFODION-283: Paul Low (paul-low-x) wrote on 2014-07-08: #1 update: delay is now only 10 seconds Trafodion-Gerrit (neo-devtools) wrote on 2014-09-22: Fix proposed to core (master) #2 Fix proposed to branch: master Review: https://review.trafodion.org/443 Trafodion-Gerrit (neo-devtools) wrote on 2014-09-24: Fix merged to core (master)#3 Reviewed: https://review.trafodion.org/443 Committed: https://github.com/trafodion/core/commit/71acf22f86a2ac8d6979e9997e468190e78ddd1a Submitter: Trafodion Jenkins Branch: master commit 71acf22f86a2ac8d6979e9997e468190e78ddd1a Author: Arvind Narain email address hidden Date: Mon Sep 22 17:47:44 2014 + 1.DCS child server state remains as CONNECT_REJECT DCS server STATE is listed as CONNECT_REJECT after a failed authentication attempt. It remains stuck in that state for 10 seconds. Fixes Bug #1321057 Now mxosrvr state is immediately set to AVAILABLE. 2.DCS child server state is set to CONNECTED even after user gets *** ERROR[8837] Internal error occurred during authentication (e.g. due to wrongly configured .traf_authentication_config). Now mxosrvr state is set to AVAILABLE after the error is returned to the user. 
Change-Id: I55131a2ea6dfb362e6560c86b0398e7ba9e1cc2b Changed in trafodion: status: In Progress → Fix Committed Paul Low (paul-low-x) on 2014-10-03 Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1321057 - sql-security unable to log in after failed authentication --- Key: TRAFODION-283 URL: https://issues.apache.org/jira/browse/TRAFODION-283 Project: Apache Trafodion Issue Type: Bug Components: sql-security Reporter: Paul Low Assignee: Arvind Narain Priority: Critical Build ID: UTT version 40607 Authentication on: $MY_SQROOT/sqenvcom.sh set TRAFODION_ENABLE_AUTHENTICATION=YES DCS server STATE is listed as CONNECT_REJECT after a failed authentication attempt. It remains stuck in that state until the automatic reset time is expired and becomes AVAILABLE again. When the system is stuck in CONNECT_REJECT state, no authentication is allowed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
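The state-reset pattern described in the fix above can be sketched in miniature. This is a self-contained illustration, not the actual DCS code; the `ServerState` enum and `onAuthFailed` function are invented for the example. The point of the fix is that a failed authentication attempt sends the child server straight back to AVAILABLE instead of parking it in CONNECT_REJECT for a timed window during which nobody can log in.

```cpp
#include <cassert>

// Hypothetical states mirroring those named in the bug report.
enum class ServerState { AVAILABLE, CONNECTED, CONNECT_REJECT };

// Before the fix: a failed login left the server in CONNECT_REJECT until an
// automatic reset timer expired, blocking all authentication in the meantime.
// After the fix: once the error has been returned to the user, the state is
// immediately set back to AVAILABLE.
ServerState onAuthFailed(ServerState current) {
    (void)current;  // the previous state no longer matters
    return ServerState::AVAILABLE;
}
```

The same immediate reset is applied on the ERROR[8837] path, where the server was previously left marked CONNECTED.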
[jira] [Closed] (TRAFODION-283) LP Bug: 1321057 - sql-security unable to log in after failed authentication
[ https://issues.apache.org/jira/browse/TRAFODION-283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-283. -- Resolution: Fixed Assignee: (was: Arvind Narain) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1321057 - sql-security unable to log in after failed authentication --- Key: TRAFODION-283 URL: https://issues.apache.org/jira/browse/TRAFODION-283 Project: Apache Trafodion Issue Type: Bug Components: sql-security Reporter: Paul Low Priority: Critical Fix For: 1.0 (pre-incubation) Build ID: UTT version 40607 Authentication on: $MY_SQROOT/sqenvcom.sh set TRAFODION_ENABLE_AUTHENTICATION=YES DCS server STATE is listed as CONNECT_REJECT after a failed authentication attempt. It remains stuck in that state until the automatic reset time is expired and becomes AVAILABLE again. When the system is stuck in CONNECT_REJECT state, no authentication is allowed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-319) LP Bug: 1324247 - SPJs with resultsets not working, getting empty resultset
[ https://issues.apache.org/jira/browse/TRAFODION-319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708683#comment-14708683 ] Atanu Mishra commented on TRAFODION-319: Pavani Puppala (pavani-puppala) wrote on 2014-06-11: #1 There are two different problems here: one in the executor and one in the compiler. We do not yet know whether there are additional problems in the T2 driver; that will not be known until the executor and compiler problems are fixed. The executor problem is that the parent of the internal select statement is not recognized as a CALL stmt; once that is fixed and it gets past that stage, it fails in the compiler because the Trafodion compiler code is not handling the virtual table. Working on the compiler fix now. Changed in trafodion: status: New → In Progress Pavani Puppala (pavani-puppala) on 2014-06-25 Changed in trafodion: status: In Progress → Fix Committed Aruna Sadashiva (aruna-sadashiva) wrote on 2014-07-31: #2 SPJs are returning resultsets now. Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1324247 - SPJs with resultsets not working, getting empty resultset --- Key: TRAFODION-319 URL: https://issues.apache.org/jira/browse/TRAFODION-319 Project: Apache Trafodion Issue Type: Bug Components: client-jdbc-t2, sql-cmp, sql-exe Reporter: Aruna Sadashiva Assignee: Pavani Puppala Priority: Critical Fix For: 0.9 (pre-incubation) SPJs with resultsets are not returning any data. Please email me or Chong if you want sample SPJ code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-319) LP Bug: 1324247 - SPJs with resultsets not working, getting empty resultset
[ https://issues.apache.org/jira/browse/TRAFODION-319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-319. -- Resolution: Fixed Assignee: (was: Pavani Puppala) LP Bug: 1324247 - SPJs with resultsets not working, getting empty resultset --- Key: TRAFODION-319 URL: https://issues.apache.org/jira/browse/TRAFODION-319 Project: Apache Trafodion Issue Type: Bug Components: client-jdbc-t2, sql-cmp, sql-exe Reporter: Aruna Sadashiva Priority: Critical Fix For: 0.9 (pre-incubation) SPJs with resultsets are not returning any data. Please email me or Chong if you want sample SPJ code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-309) LP Bug: 1323878 - can't find table after add contraint primary key
[ https://issues.apache.org/jira/browse/TRAFODION-309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-309. -- Resolution: Fixed Assignee: (was: Anoop Sharma) LP Bug: 1323878 - can't find table after add contraint primary key -- Key: TRAFODION-309 URL: https://issues.apache.org/jira/browse/TRAFODION-309 Project: Apache Trafodion Issue Type: Bug Reporter: Apache Trafodion Priority: Critical Fix For: 0.8 (pre-incubation)
1) add constraint reportedly completed without error 2) showddl didn't display the added constraint 3) can't find table afterward; can't drop table because of that.
SQL>get tables;
Tables in Schema TRAFODION.DEBUG_DDL04
==
T1A011
--- SQL operation complete.
SQL>alter table t1a011 add constraint ca011 primary key (ubin0_uniq ) droppable;
--- SQL operation complete.
SQL>get tables;
--- SQL operation complete.
SQL>showddl t1a011;
CREATE TABLE TRAFODION.DEBUG_DDL04.T1A011
(
SBIN0_10 NUMERIC(18, 0) NO DEFAULT NOT NULL NOT DROPPABLE ,
CHAR0_2 CHAR(8) CHARACTER SET ISO88591 COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE ,
UDEC0_UNIQ DECIMAL(9, 0) UNSIGNED NO DEFAULT NOT NULL NOT DROPPABLE ,
UBIN0_UNIQ NUMERIC(9, 0) UNSIGNED NO DEFAULT NOT NULL NOT DROPPABLE ,
SDEC0_500 DECIMAL(9, 0) NO DEFAULT NOT NULL NOT DROPPABLE ,
VARCHAR0_10 VARCHAR(16) CHARACTER SET ISO88591 COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE ,
VARCHAR1_20 VARCHAR(8) CHARACTER SET ISO88591 COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE ,
SBIN1_5000 NUMERIC(4, 0) NO DEFAULT NOT NULL NOT DROPPABLE ,
SDEC1_4 DECIMAL(18, 0) NO DEFAULT NOT NULL NOT DROPPABLE ,
CHAR1_4 CHAR(8) CHARACTER SET ISO88591 COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE
)
STORE BY (UDEC0_UNIQ ASC) ;
--- SQL operation complete.
SQL>-- fail to drop table
SQL>drop table t1a011;
*** ERROR[1389] Object TRAFODION.DEBUG_DDL04.T1A011 does not exist in Trafodion. [2014-05-27 16:15:31]
SQL>drop table t1a011 cascade;
*** ERROR[1389] Object TRAFODION.DEBUG_DDL04.T1A011 does not exist in Trafodion. [2014-05-27 16:15:31]
-- test script:
log a02log clear;
-- #testcase a01 altered table - add constraint
create schema TRAFODION.debug_ddl04;
set schema TRAFODION.debug_ddl04;
Create Table t1a011
( sbin0_10 Numeric(18) signed not null,
char0_2 Character(8) not null,
udec0_uniq Decimal(9) unsigned not null,
ubin0_uniq PIC 9(9) COMP not null,
sdec0_500 PIC S9(9) not null,
varchar0_10 varchar(16) not null,
varchar1_20 varchar(8) not null,
sbin1_5000 Numeric(4) signed not null,
sdec1_4 Decimal(18) signed not null,
char1_4 Character(8) not null )
store by (udec0_uniq) ;
get tables;
alter table t1a011 add constraint ca011 primary key (ubin0_uniq ) droppable;
get tables;
showddl t1a011;
-- can't find table
drop table t1a011 cascade;
get tables;
alter table t1a011 drop constraint ca011;
select * from t1a011;
showddl t1a011;
drop table t1a011;
drop table t1a011 cascade;
log off;
exit;
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-309) LP Bug: 1323878 - can't find table after add contraint primary key
[ https://issues.apache.org/jira/browse/TRAFODION-309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708679#comment-14708679 ] Atanu Mishra commented on TRAFODION-309: Anoop Sharma (anoop-sharma) wrote on 2014-07-11: #2 fixed in July RC1 bld Changed in trafodion: status: In Progress → Fix Committed Alice Chen (alchen) on 2014-10-15 Changed in trafodion: milestone: none → r0.8 status: Fix Committed → Fix Released LP Bug: 1323878 - can't find table after add contraint primary key -- Key: TRAFODION-309 URL: https://issues.apache.org/jira/browse/TRAFODION-309 Project: Apache Trafodion Issue Type: Bug Reporter: Apache Trafodion Assignee: Anoop Sharma Priority: Critical Fix For: 0.8 (pre-incubation)
1) add constraint reportedly completed without error 2) showddl didn't display the added constraint 3) can't find table afterward; can't drop table because of that.
SQL>get tables;
Tables in Schema TRAFODION.DEBUG_DDL04
==
T1A011
--- SQL operation complete.
SQL>alter table t1a011 add constraint ca011 primary key (ubin0_uniq ) droppable;
--- SQL operation complete.
SQL>get tables;
--- SQL operation complete.
SQL>showddl t1a011;
CREATE TABLE TRAFODION.DEBUG_DDL04.T1A011
(
SBIN0_10 NUMERIC(18, 0) NO DEFAULT NOT NULL NOT DROPPABLE ,
CHAR0_2 CHAR(8) CHARACTER SET ISO88591 COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE ,
UDEC0_UNIQ DECIMAL(9, 0) UNSIGNED NO DEFAULT NOT NULL NOT DROPPABLE ,
UBIN0_UNIQ NUMERIC(9, 0) UNSIGNED NO DEFAULT NOT NULL NOT DROPPABLE ,
SDEC0_500 DECIMAL(9, 0) NO DEFAULT NOT NULL NOT DROPPABLE ,
VARCHAR0_10 VARCHAR(16) CHARACTER SET ISO88591 COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE ,
VARCHAR1_20 VARCHAR(8) CHARACTER SET ISO88591 COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE ,
SBIN1_5000 NUMERIC(4, 0) NO DEFAULT NOT NULL NOT DROPPABLE ,
SDEC1_4 DECIMAL(18, 0) NO DEFAULT NOT NULL NOT DROPPABLE ,
CHAR1_4 CHAR(8) CHARACTER SET ISO88591 COLLATE DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE
)
STORE BY (UDEC0_UNIQ ASC) ;
--- SQL operation complete.
SQL>-- fail to drop table
SQL>drop table t1a011;
*** ERROR[1389] Object TRAFODION.DEBUG_DDL04.T1A011 does not exist in Trafodion. [2014-05-27 16:15:31]
SQL>drop table t1a011 cascade;
*** ERROR[1389] Object TRAFODION.DEBUG_DDL04.T1A011 does not exist in Trafodion. [2014-05-27 16:15:31]
-- test script:
log a02log clear;
-- #testcase a01 altered table - add constraint
create schema TRAFODION.debug_ddl04;
set schema TRAFODION.debug_ddl04;
Create Table t1a011
( sbin0_10 Numeric(18) signed not null,
char0_2 Character(8) not null,
udec0_uniq Decimal(9) unsigned not null,
ubin0_uniq PIC 9(9) COMP not null,
sdec0_500 PIC S9(9) not null,
varchar0_10 varchar(16) not null,
varchar1_20 varchar(8) not null,
sbin1_5000 Numeric(4) signed not null,
sdec1_4 Decimal(18) signed not null,
char1_4 Character(8) not null )
store by (udec0_uniq) ;
get tables;
alter table t1a011 add constraint ca011 primary key (ubin0_uniq ) droppable;
get tables;
showddl t1a011;
-- can't find table
drop table t1a011 cascade;
get tables;
alter table t1a011 drop constraint ca011;
select * from t1a011;
showddl t1a011;
drop table t1a011;
drop table t1a011 cascade;
log off;
exit;
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-304) LP Bug: 1323864 - create table as 106 column table return error 4023
[ https://issues.apache.org/jira/browse/TRAFODION-304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708675#comment-14708675 ] Atanu Mishra commented on TRAFODION-304: Anoop Sharma (anoop-sharma) wrote on 2014-05-28: #2 Did not see this error. In the test run, did the target table myb2ul04 exist before the create...as...select stmt was issued? [...] ying-wen ku (ying-wen-ku) wrote on 2014-05-28: RE: [Bug 1323864] Re: create table as 106 column table return error 4023 #3 I don't recall, but most likely it did exist. The table was dropped before the create table as command. Thanks, ying [...] Stacey Johnson (sjohnson-w) on 2014-06-10 information type: Proprietary → Public Anoop Sharma (anoop-sharma) on 2014-06-27 Changed in trafodion: status: New → In Progress Anoop Sharma (anoop-sharma) on 2014-07-11 Changed in trafodion: status: In Progress → Fix Committed Alice Chen (alchen) on 2014-10-15 Changed in trafodion: milestone: none → r0.8 status: Fix Committed → Fix Released LP Bug: 1323864 - create table as 106 column table return error 4023 Key: TRAFODION-304 URL: https://issues.apache.org/jira/browse/TRAFODION-304 Project: Apache Trafodion Issue Type: Bug Reporter: Apache Trafodion Assignee: Anoop Sharma Priority: Critical Fix For: 0.8 (pre-incubation) 1. table g_sqldopt.b2uwl04 has 106 columns 2. create table as ...
return error 4023
SQL>create table myb2ul04 store by (SDEC9_UNIQ) as (select * from g_sqldopt.b2uwl04);
*** ERROR[4023] The degree of each row value constructor (106) must equal the degree of the target table column list (80). [2014-05-27 15:06:37]
SQL>select * from myb2ul04;
*** ERROR[4082] Object TRAFODION.USR.MYB2UL04 does not exist or is inaccessible. [2014-05-27 15:06:37]
create schema g_sqldopt;
set schema g_sqldopt;
Create Table b2uwl04
( sbin0_4 Integer default 3 not null,
time0_uniq Time not null,
varchar0_uniq VarChar(8) no default not null,
sdec0_100 Decimal(9) no default not null,
int0_dTOf6_4 Interval day to second(6) not null,
ts1_n100 Timestamp heading 'ts1_n100 allowing nulls',
sdec1_20 Decimal(5) no default not null,
int1_yTOm_n100 Interval year(1) to month no default,
double1_2 Double Precision not null,
udec1_nuniq Decimal(4) unsigned,
char2_2 Character(2) not null,
sbin2_nuniq Largeint ,
sdec2_500 Decimal(9) signed no default not null,
date2_uniq Date not null,
int2_dTOf6_n2 Interval day to second(6) no default,
real2_500 Real not null,
real3_n1000 Real ,
int3_yTOm_4 Interval year(1) to month no default not null,
date3_n2000 Date no default,
udec3_n100 Decimal(9) unsigned,
ubin3_n2000 Numeric(4) unsigned,
char3_4 Character(8) no default not null,
sdec4_n20 Decimal(4) no default,
int4_yTOm_uniq Interval year(5) to month not null,
sbin4_n1000 Smallint ,
time4_1000 Time no default not null,
char4_n10 Character(8) no default,
real4_2000 Real not null,
char5_n20 Character(8) ,
sdec5_10 Decimal(9) signed no default not null,
ubin5_n500 Numeric(9) unsigned no default,
real5_uniq Real not null,
dt5_yTOmin_n500 Timestamp(0) ,
int5_hTOs_500 Interval hour to second(0) no default not null,
int6_dTOf6_nuniq Interval day to second(6) no default,
[jira] [Closed] (TRAFODION-304) LP Bug: 1323864 - create table as 106 column table return error 4023
[ https://issues.apache.org/jira/browse/TRAFODION-304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-304. -- Resolution: Fixed Assignee: (was: Anoop Sharma) LP Bug: 1323864 - create table as 106 column table return error 4023 Key: TRAFODION-304 URL: https://issues.apache.org/jira/browse/TRAFODION-304 Project: Apache Trafodion Issue Type: Bug Reporter: Apache Trafodion Priority: Critical Fix For: 0.8 (pre-incubation) 1. table g_sqldopt.b2uwl04 has 106 columns 2. create table as ... return error 4023 SQLcreate table myb2ul04 store by (SDEC9_UNIQ) as (select * from g_sqldopt.b2uwl04); *** ERROR[4023] The degree of each row value constructor (106) must equal the degree of the target table column list (80). [2014-05-27 15:06:37] SQLselect * from myb2ul04; *** ERROR[4082] Object TRAFODION.USR.MYB2UL04 does not exist or is inaccessible. [2014-05-27 15:06:37] create schema g_sqldopt; set schema g_sqldopt; Create Table b2uwl04 ( sbin0_4 Integerdefault 3 not null, time0_uniq Time not null, varchar0_uniq VarChar(8) no default not null, sdec0_100 Decimal(9) no default not null, int0_dTOf6_4Interval day to second(6) not null, ts1_n100Timestamp heading 'ts1_n100 allowing nulls', sdec1_20Decimal(5) no default not null, int1_yTOm_n100 Interval year(1) to month no default, double1_2 Double Precision not null, udec1_nuniq Decimal(4) unsigned, char2_2 Character(2) not null, sbin2_nuniq Largeint , sdec2_500 Decimal(9) signed no default not null, date2_uniq Date not null, int2_dTOf6_n2 Interval day to second(6) no default, real2_500 Real not null, real3_n1000 Real , int3_yTOm_4 Interval year(1) to month no default not null, date3_n2000 Date no default, udec3_n100 Decimal(9) unsigned, ubin3_n2000 Numeric(4) unsigned, char3_4 Character(8) no default not null, sdec4_n20 Decimal(4) no default, int4_yTOm_uniq Interval year(5) to month not null, sbin4_n1000 Smallint , time4_1000 Time no default not null, char4_n10 Character(8) no default, 
real4_2000 Real not null, char5_n20 Character(8) , sdec5_10Decimal(9) signed no default not null, ubin5_n500 Numeric(9) unsignedno default, real5_uniq Real not null, dt5_yTOmin_n500 Timestamp(0) , int5_hTOs_500 Interval hour to second(0) no default not null, int6_dTOf6_nuniqInterval day to second(6) no default, sbin6_nuniq Largeint no default, double6_n2 Float(23) , sdec6_4 Decimal(4) signed no default not null, char6_n100 Character(8) no default, date6_100 Date not null, time7_uniq Time not null, sbin7_n20 Smallint no default, char7_500 Character(8) no default not null, int7_hTOs_nuniq Interval hour(2) to second(0) , udec7_n10 Decimal(4) unsigned, real7_n4Real , ubin8_10Numeric(4) unsignednot null, int8_y_n1000Interval year(3) , date8_10Date no default not null, char8_n1000 Character(8) no default, double8_n10 Double Precision no default, sdec8_4 Decimal(9) unsignednot null, sdec9_uniq Decimal(18) signed no default not null, real9_n20 Real , time9_n4Time
[jira] [Commented] (TRAFODION-305) LP Bug: 1323865 - Some ODBC api tests fail with sql error Unknown PCode instruction
[ https://issues.apache.org/jira/browse/TRAFODION-305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708677#comment-14708677 ] Atanu Mishra commented on TRAFODION-305: James Capps (james-capps) wrote on 2015-02-10: #3 During PCODE generation, we were attempting to generate a PCODE instruction to compare two operands for equality. The two operands had a data type of REC_BYTE_V_ASCII_LONG, which is used only by ODBC. PCIT::getMemoryAddressingMode() does not currently know how to handle that datatype, so it returned AM_NONE for the operand type. That resulted in a failure later. The fix was to detect operand(s) of that datatype and call ex_clause::pCodeGenerate(...) rather than doing PCODE generation of the current expression. Note: also found that a line saying return ex_clause::pCodeGenerate(space, f); had been missing for a long time. We got away with it because the preceding 'if' was always false for Trafodion. Files changed: .../exp/ExpPCodeClauseGen.cpp Changed in trafodion: status: In Progress → Fix Committed Aruna Sadashiva (aruna-sadashiva) wrote on 2015-02-23: #4 Jieping tested this and it works OK now. Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1323865 - Some ODBC api tests fail with sql error Unknown PCode instruction --- Key: TRAFODION-305 URL: https://issues.apache.org/jira/browse/TRAFODION-305 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Aruna Sadashiva Assignee: Apache Trafodion Priority: Critical Fix For: 1.1 (pre-incubation) Some ODBC API tests fail with this error: [Trafodion ODBC Driver][Trafodion Database] SQL ERROR:*** ERROR[2006] Internal error: assertion failure (Unknown PCode Instruction) in file ../exp/ExpPCode.cpp at line 1199. It worked when pcode was turned off with cqd pcode_opt_level 'OFF'. Relevant SQL stmts used by this test are below; it fails during prepare of the insert stmt.
drop table UJZ6R0EHYY; create table UJZ6R0EHYY(RLUJZ6R0EH CHAR(10) CHARACTER SET ISO88591,WP4VDZAWNV VARCHAR(10) CHARACTER SET ISO88591,VT2DEURLUJ DECIMAL(10,5),Z6R0EHYYFC NUMERIC(10,5),TCBMMOHJ7F SMALLINT,LPIOBAI9_S INTEGER,NQ3KXGK5QS REAL,X1GWP4VDZA FLOAT,WNVT2DEURL DOUBLE PRECISION,YYFCTCBMMO DATE,HJ7FLPIOBA TIME,I9_S8NQ3KX TIMESTAMP,GK5QSX1GWP bigint,VDZAWNVT2D LONG VARCHAR CHARACTER SET ISO88591,EURLUJZ6R0 CHAR(10) CHARACTER SET UCS2,EHYYFCTCBM VARCHAR(10) CHARACTER SET UCS2,MOHJ7FLPIO LONG VARCHAR CHARACTER SET UCS2,BAI9_S8NQ3 NUMERIC(19,0),KXGK5QSX1G NUMERIC(19,6),T2DEURLUJZ NUMERIC(128,0),R0EHYYFCTC NUMERIC(128,128),BMMOHJ7FLP NUMERIC(128,64),IOBAI9_S8N NUMERIC(10,5) UNSIGNED,Q3KXGK5QSX NUMERIC(18,5) UNSIGNED,GWP4VDZAWN NUMERIC(30,10) UNSIGNED) NO PARTITION; insert into UJZ6R0EHYY (RLUJZ6R0EH,WP4VDZAWNV,VT2DEURLUJ,Z6R0EHYYFC,TCBMMOHJ7F,LPIOBAI9_S,NQ3KXGK5QS,X1GWP4VDZA,WNVT2DEURL,YYFCTCBMMO,HJ7FLPIOBA,I9_S8NQ3KX,GK5QSX1GWP,VDZAWNVT2D,EURLUJZ6R0,EHYYFCTCBM,MOHJ7FLPIO,BAI9_S8NQ3,KXGK5QSX1G,T2DEURLUJZ,R0EHYYFCTC,BMMOHJ7FLP,IOBAI9_S8N,Q3KXGK5QSX,GWP4VDZAWN) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?); Assigned to LaunchPad User James Capps -- This message was sent by Atlassian JIRA (v6.3.4#6332)
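The fallback described in the comment above can be sketched in miniature. The real logic lives in Trafodion's .../exp/ExpPCodeClauseGen.cpp; the enums and helper below are simplified stand-ins invented for this example, assuming only what the comment states: when an operand's datatype maps to AM_NONE, the expression must be routed to clause-level evaluation instead of PCODE generation.

```cpp
#include <cassert>

// Simplified stand-ins for the real PCIT addressing modes and datatypes.
enum AddressingMode { AM_NONE, AM_DIRECT };
enum DataType { REC_BYTE, REC_BYTE_V_ASCII_LONG };

// Mirrors the behavior described: the ODBC-only long-varchar type has no
// memory addressing mode, which previously surfaced later as the
// "Unknown PCode Instruction" assertion.
AddressingMode getMemoryAddressingMode(DataType t) {
    return t == REC_BYTE_V_ASCII_LONG ? AM_NONE : AM_DIRECT;
}

// Sketch of the fix: detect such operands up front and take the slower
// clause-level evaluation path instead of emitting a PCODE compare.
bool shouldFallBackToClauseEval(DataType lhs, DataType rhs) {
    return getMemoryAddressingMode(lhs) == AM_NONE ||
           getMemoryAddressingMode(rhs) == AM_NONE;
}
```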
[jira] [Commented] (TRAFODION-290) LP Bug: 1321498 - JDBC T4 tests start getting TMF error 73 after a while
[ https://issues.apache.org/jira/browse/TRAFODION-290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708665#comment-14708665 ] Atanu Mishra commented on TRAFODION-290: Oliver Bucaojit (oliver-bucaojit) wrote on 2014-06-13: #1 Made a change to disable early commit reply and added a normal commit reply at the end of the transaction processing after commit. These changes resolved our error 73 problems and restored data consistency for an insert/update/delete followed by a select. Changed in trafodion: status: New → Fix Committed Aruna Sadashiva (aruna-sadashiva) wrote on 2014-07-16: #2 Have not seen error 73s in a while. Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1321498 - JDBC T4 tests start getting TMF error 73 after a while Key: TRAFODION-290 URL: https://issues.apache.org/jira/browse/TRAFODION-290 Project: Apache Trafodion Issue Type: Bug Components: dtm Reporter: Aruna Sadashiva Assignee: Oliver Bucaojit Priority: Critical Fix For: 1.0 (pre-incubation) JDBC T4 tests start failing with this error after a while; one run usually runs fine, but multiple runs (say 10) of the tests result in failed tests due to these errors. Also, no other program is accessing these tables at the time. Exception in test JDBCDelete..*** ERROR[8606] Transaction subsystem TMF returned error 73 on a commit transaction. [2014-05-20 17:14:02] The tests don't use any transactions explicitly. Have been noticing this for a while, but just saw this on v40646. Please contact me (aruna.sadash...@hp.com) for instructions on how to run the tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
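The ordering change described in the comment above can be illustrated with a small sketch. The function and event names here are invented, not the actual DTM code: with "early commit reply" the client was acknowledged before commit processing finished, so a select issued immediately after the ack could observe a half-committed state (surfacing as TMF error 73); the fix moves the reply to the end.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Returns the order of events during commit for the two strategies.
std::vector<std::string> commitSequence(bool earlyReply) {
    std::vector<std::string> events;
    if (earlyReply) events.push_back("reply-to-client");   // old behavior
    events.push_back("commit-processing-done");
    if (!earlyReply) events.push_back("reply-to-client");  // fixed behavior
    return events;
}
```

With `earlyReply == false`, the client sees the acknowledgment only after commit processing is complete, which is what made the insert/update/delete-then-select sequence consistent.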
[jira] [Closed] (TRAFODION-285) LP Bug: 1321059 - sql-security authentication error message text unclear
[ https://issues.apache.org/jira/browse/TRAFODION-285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-285. -- Resolution: Fixed Assignee: (was: Cliff Gray) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1321059 - sql-security authentication error message text unclear Key: TRAFODION-285 URL: https://issues.apache.org/jira/browse/TRAFODION-285 Project: Apache Trafodion Issue Type: Bug Components: sql-security Reporter: Paul Low Priority: Critical Fix For: 1.0 (pre-incubation) Build ID: UTT version 40607 Authentication on: $MY_SQROOT/sqenvcom.sh set TRAFODION_ENABLE_AUTHENTICATION=YES Error message text is unclear when authentication fails. The following error messages are currently displayed: *** ERROR[1] The message id: socket_write_error With parameters: Broken pipe *** ERROR[1] The message id: ids_dcs_srvr_not_available With parameters: -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-285) LP Bug: 1321059 - sql-security authentication error message text unclear
[ https://issues.apache.org/jira/browse/TRAFODION-285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708661#comment-14708661 ] Atanu Mishra commented on TRAFODION-285: Cliff Gray (cliff-gray) wrote on 2014-06-04:#1 Fix is known. Will be delivered after the initial release. Stacey Johnson (sjohnson-w) on 2014-06-10 information type: Proprietary → Public Paul Low (paul-low-x) on 2014-07-08 Changed in trafodion: status: In Progress → Fix Committed status: Fix Committed → Fix Released LP Bug: 1321059 - sql-security authentication error message text unclear Key: TRAFODION-285 URL: https://issues.apache.org/jira/browse/TRAFODION-285 Project: Apache Trafodion Issue Type: Bug Components: sql-security Reporter: Paul Low Assignee: Cliff Gray Priority: Critical Fix For: 1.0 (pre-incubation) Build ID: UTT version 40607 Authentication on: $MY_SQROOT/sqenvcom.sh set TRAFODION_ENABLE_AUTHENTICATION=YES Error message text is unclear when authentication fails. The following error messages are currently displayed: *** ERROR[1] The message id: socket_write_error With parameters: Broken pipe *** ERROR[1] The message id: ids_dcs_srvr_not_available With parameters: -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-290) LP Bug: 1321498 - JDBC T4 tests start getting TMF error 73 after a while
[ https://issues.apache.org/jira/browse/TRAFODION-290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-290. -- Resolution: Fixed Assignee: (was: Oliver Bucaojit) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1321498 - JDBC T4 tests start getting TMF error 73 after a while Key: TRAFODION-290 URL: https://issues.apache.org/jira/browse/TRAFODION-290 Project: Apache Trafodion Issue Type: Bug Components: dtm Reporter: Aruna Sadashiva Priority: Critical Fix For: 1.0 (pre-incubation) JDBC T4 tests start failing with this error after a while; one run usually passes, but multiple runs (say 10) of the tests result in failed tests due to these errors. Also, no other program is accessing these tables at the time. Exception in test JDBCDelete..*** ERROR[8606] Transaction subsystem TMF returned error 73 on a commit transaction. [2014-05-20 17:14:02] The tests don't use any transactions explicitly. Have been noticing this for a while, but just saw this on v40646. Please contact me (aruna.sadash...@hp.com) for instructions on how to run the tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-288) LP Bug: 1321479 - Updating char column violates a check constraint
[ https://issues.apache.org/jira/browse/TRAFODION-288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-288. -- Resolution: Fixed Assignee: (was: Anoop Sharma) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1321479 - Updating char column violates a check constraint -- Key: TRAFODION-288 URL: https://issues.apache.org/jira/browse/TRAFODION-288 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Weishiun Tsai Priority: Critical Fix For: 1.0 (pre-incubation)

In the following example, column b is a char column with a constraint checking that its value is either 'AB' or 'CD', yet an update statement is allowed to change it to 'EF', as shown in the output. This is seen on the datalake v40671 build. Here is the script to reproduce the problem:

create table t (a int not null not droppable primary key, b char(10) check (b in ('AB', 'CD')));
insert into t values (1, 'AB');
update t set b = 'EF' where b = 'AB';
select * from t;

Here is the output of the execution:

create table t (a int not null not droppable primary key, b char(10) check (b in ('AB', 'CD')));
--- SQL operation complete.
insert into t values (1, 'AB');
--- 1 row(s) inserted.
update t set b = 'EF' where b = 'AB';
--- 1 row(s) updated.
select * from t;
A B
--- --
1 EF
--- 1 row(s) selected.

For reference, here is the behavior on SQ:

create table t (a int not null not droppable primary key, b char(10) check (b in ('AB', 'CD')));
--- SQL operation complete.
insert into t values (1, 'AB');
--- 1 row(s) inserted.
update t set b = 'EF' where b = 'AB';
*** ERROR[8101] The operation is prevented by check constraint NEO.USR.T_522696913_6872 on table NEO.USR.T.
--- 0 row(s) updated.
select * from t;
A B
--- --
1 AB
--- 1 row(s) selected.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
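The bug above amounts to the UPDATE path skipping constraint evaluation. As an illustration of the semantics the fix restores, here is a minimal Python model of CHECK enforcement on update; the table shape, function names, and predicate are hypothetical stand-ins, not Trafodion code:

```python
# Minimal model of CHECK-constraint enforcement on UPDATE: a new value
# must satisfy the predicate, otherwise the row is left unchanged and an
# error is raised (mirroring the expected ERROR[8101] behavior).
def update_with_check(rows, set_value, where, check):
    updated = 0
    for row in rows:
        if where(row):
            if not check(set_value):
                raise ValueError("operation prevented by check constraint")
            row["b"] = set_value
            updated += 1
    return updated

rows = [{"a": 1, "b": "AB"}]
check = lambda v: v in ("AB", "CD")

# 'EF' violates CHECK (b IN ('AB','CD')), so the update must fail
try:
    update_with_check(rows, "EF", lambda r: r["b"] == "AB", check)
except ValueError:
    pass
assert rows[0]["b"] == "AB"   # row unchanged, matching the SQ reference behavior
```

The broken build behaved as if the `check(set_value)` test were missing, letting 'EF' through.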
[jira] [Commented] (TRAFODION-288) LP Bug: 1321479 - Updating char column violates a check constraint
[ https://issues.apache.org/jira/browse/TRAFODION-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708663#comment-14708663 ] Atanu Mishra commented on TRAFODION-288: Anoop Sharma (anoop-sharma) on 2014-08-20 Changed in trafodion: status: In Progress → Fix Committed

Weishiun Tsai (wei-shiun-tsai) wrote on 2014-08-20: #2 Verified on the 0819_0830 build installed on a workstation. This problem has been fixed:

create table t (a int not null not droppable primary key, b char(10) check (b in ('AB', 'CD')));
--- SQL operation complete.
insert into t values (1, 'AB');
--- 1 row(s) inserted.
update t set b = 'EF' where b = 'AB';
*** ERROR[8101] The operation is prevented by check constraint TRAFODION.MYTEST.T_118787894_2398 on table TRAFODION.MYTEST.T.
--- 0 row(s) updated.
select * from t;
A B
--- --
1 AB
--- 1 row(s) selected.

Changed in trafodion: status: Fix Committed → Fix Released

LP Bug: 1321479 - Updating char column violates a check constraint -- Key: TRAFODION-288 URL: https://issues.apache.org/jira/browse/TRAFODION-288 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Weishiun Tsai Assignee: Anoop Sharma Priority: Critical Fix For: 1.0 (pre-incubation)

In the following example, column b is a char column with a constraint checking that its value is either 'AB' or 'CD', yet an update statement is allowed to change it to 'EF', as shown in the output. This is seen on the datalake v40671 build. Here is the script to reproduce the problem:

create table t (a int not null not droppable primary key, b char(10) check (b in ('AB', 'CD')));
insert into t values (1, 'AB');
update t set b = 'EF' where b = 'AB';
select * from t;

Here is the output of the execution:

create table t (a int not null not droppable primary key, b char(10) check (b in ('AB', 'CD')));
--- SQL operation complete.
insert into t values (1, 'AB');
--- 1 row(s) inserted.
update t set b = 'EF' where b = 'AB';
--- 1 row(s) updated.
select * from t;
A B
--- --
1 EF
--- 1 row(s) selected.

For reference, here is the behavior on SQ:

create table t (a int not null not droppable primary key, b char(10) check (b in ('AB', 'CD')));
--- SQL operation complete.
insert into t values (1, 'AB');
--- 1 row(s) inserted.
update t set b = 'EF' where b = 'AB';
*** ERROR[8101] The operation is prevented by check constraint NEO.USR.T_522696913_6872 on table NEO.USR.T.
--- 0 row(s) updated.
select * from t;
A B
--- --
1 AB
--- 1 row(s) selected.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-322) LP Bug: 1324326 - left join BMO return ERROR[1] problem_with_server_read
[ https://issues.apache.org/jira/browse/TRAFODION-322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-322. -- Resolution: Fixed Fix Version/s: 1.0 (pre-incubation) LP Bug: 1324326 - left join BMO return ERROR[1] problem_with_server_read Key: TRAFODION-322 URL: https://issues.apache.org/jira/browse/TRAFODION-322 Project: Apache Trafodion Issue Type: Bug Reporter: Apache Trafodion Priority: Critical Fix For: 1.0 (pre-incubation)

Spooling started at May 29, 2014 12:28:20 AM

SQL>env;
COLSEP
HISTOPT DEFAULT [No expansion of script files]
IDLETIMEOUT 30 min(s)
LIST_COUNT 0 [All Rows]
LOG FILE a03log
LOG OPTIONS CLEAR,CMDTEXT ON
MARKUP RAW
PROMPT SQL>
SCHEMA SEABASE
SERVER rhel-cdh1.hpl.hp.com:37800
SQLTERMINATOR ;
STATISTICS OFF
TIME OFF
TIMING OFF
USER trafodion

SQL>set schema g_hpit;
--- SQL operation complete.
SQL>prepare xx from select [last 1] * from PERF_SUM_F fact left outer join CUST_ACCT_HIER_D cust on fact.SLDT_CUST_ACCT_HIER_KY = cust.CUST_ACCT_HIER_KY order by cust.cust_acct_hier_ky;
--- SQL command prepared.
SQL>explain options 'f' xx;

LC RC OP OPERATOR OPT DESCRIPTION CARD
5 . 6 root 5.00E+003
4 . 5 firstn 5.00E+003
3 . 4 sort 5.00E+003
2 1 3 left_hybrid_hash_joi 5.00E+003
. . 2 trafodion_scan PERF_SUM_F 1.00E+002
. . 1 trafodion_scan CUST_ACCT_HIER_D 1.00E+002

--- SQL operation complete.
SQL>explain xx;

-- PLAN SUMMARY
MODULE_NAME .. DYNAMICALLY COMPILED
STATEMENT_NAME ... XX
PLAN_ID .. 212268083306579339
ROWS_OUT . 5,000
EST_TOTAL_COST ... 0.01
STATEMENT select [last 1] * from PERF_SUM_F fact left outer join CUST_ACCT_HIER_D cust on fact.SLDT_CUST_ACCT_HIER_KY = cust.CUST_ACCT_HIER_KY order by cust.cust_acct_hier_ky

-- NODE LISTING
ROOT == SEQ_NO 6 ONLY CHILD 5
REQUESTS_IN .. 1
ROWS_OUT . 5,000
EST_OPER_COST 0
EST_TOTAL_COST ... 0.01
DESCRIPTION
max_card_est ... 5,000
fragment_id 0
parent_frag (none)
fragment_type .. master
statement_index 0
affinity_value . 0
est_memory_per_cpu . 16906 KB
max_max_cardinality 0
total_overflow_size 0.00 KB
xn_access_mode . read_only
xn_autoabort_interval 0
auto_query_retry ... enabled
plan_version ... 2,600
embedded_arkcmp used
LDAP_USERNAME .. TRAFODION
SCHEMA . G_HPIT
select_list TRAFODION.G_HPIT.PERF_SUM_F.DT_MTH_KY, TRAFODION.G_HPIT.PERF_SUM_F.CTRY_KY, TRAFODION.G_HPIT.PERF_SUM_F.QTA_PROD_LN_KY, TRAFODION.G_HPIT.PERF_SUM_F.RTE_TO_MKT_KY, TRAFODION.G_HPIT.PERF_SUM_F.SLDT_CUST_ACCT_HIER_KY, TRAFODION.G_HPIT.PERF_SUM_F.SO_GRS_EXT_US_DLR_AM, TRAFODION.G_HPIT.PERF_SUM_F.SO_NET_EXT_US_DLR_AM, TRAFODION.G_HPIT.PERF_SUM_F.SHIP_GRS_US_DLR_AM, TRAFODION.G_HPIT.PERF_SUM_F.SHIP_NET_US_DLR_AM, TRAFODION.G_HPIT.PERF_SUM_F.REV_GRS_REV_US_DLR_AM, TRAFODION.G_HPIT.PERF_SUM_F.SO_DTL_QT, TRAFODION.G_HPIT.PERF_SUM_F.REV_DISC_US_DLR_AM, TRAFODION.G_HPIT.PERF_SUM_F.REV_TRD_DISC_FEE_US_DLR_AM, TRAFODION.G_HPIT.PERF_SUM_F.SHIP_QT, TRAFODION.G_HPIT.PERF_SUM_F.REV_PRC_PROT_US_DLR_AM
[jira] [Commented] (TRAFODION-322) LP Bug: 1324326 - left join BMO return ERROR[1] problem_with_server_read
[ https://issues.apache.org/jira/browse/TRAFODION-322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708689#comment-14708689 ] Atanu Mishra commented on TRAFODION-322: Anoop Sharma (anoop-sharma) wrote on 2014-07-16: #2 The attached DDL contains the statements used to create the needed tables, but it doesn't contain information on how to populate the data. It will help if that information is available, or if a link to a system where this problem showed up could be attached so the issue can be debugged. Based on the error message, it seems the master executor process crashed. If a core file was created, we would like to see it. Changed in trafodion: status: New → Incomplete

Chong Hsu (chong-hsu) wrote on 2014-10-28: #3 Verified again with the 20141023 build on cluster rhel-qa1. The left join statement completed in about 40 minutes and did not fail. Changed in trafodion: status: Incomplete → Fix Released

LP Bug: 1324326 - left join BMO return ERROR[1] problem_with_server_read Key: TRAFODION-322 URL: https://issues.apache.org/jira/browse/TRAFODION-322 Project: Apache Trafodion Issue Type: Bug Reporter: Apache Trafodion Priority: Critical Fix For: 1.0 (pre-incubation)

Spooling started at May 29, 2014 12:28:20 AM

SQL>env;
COLSEP
HISTOPT DEFAULT [No expansion of script files]
IDLETIMEOUT 30 min(s)
LIST_COUNT 0 [All Rows]
LOG FILE a03log
LOG OPTIONS CLEAR,CMDTEXT ON
MARKUP RAW
PROMPT SQL>
SCHEMA SEABASE
SERVER rhel-cdh1.hpl.hp.com:37800
SQLTERMINATOR ;
STATISTICS OFF
TIME OFF
TIMING OFF
USER trafodion

SQL>set schema g_hpit;
--- SQL operation complete.
SQL>prepare xx from select [last 1] * from PERF_SUM_F fact left outer join CUST_ACCT_HIER_D cust on fact.SLDT_CUST_ACCT_HIER_KY = cust.CUST_ACCT_HIER_KY order by cust.cust_acct_hier_ky;
--- SQL command prepared.
SQL>explain options 'f' xx;

LC RC OP OPERATOR OPT DESCRIPTION CARD
5 . 6 root 5.00E+003
4 . 5 firstn 5.00E+003
3 . 4 sort 5.00E+003
2 1 3 left_hybrid_hash_joi 5.00E+003
. . 2 trafodion_scan PERF_SUM_F 1.00E+002
. . 1 trafodion_scan CUST_ACCT_HIER_D 1.00E+002

--- SQL operation complete.
SQL>explain xx;

-- PLAN SUMMARY
MODULE_NAME .. DYNAMICALLY COMPILED
STATEMENT_NAME ... XX
PLAN_ID .. 212268083306579339
ROWS_OUT . 5,000
EST_TOTAL_COST ... 0.01
STATEMENT select [last 1] * from PERF_SUM_F fact left outer join CUST_ACCT_HIER_D cust on fact.SLDT_CUST_ACCT_HIER_KY = cust.CUST_ACCT_HIER_KY order by cust.cust_acct_hier_ky

-- NODE LISTING
ROOT == SEQ_NO 6 ONLY CHILD 5
REQUESTS_IN .. 1
ROWS_OUT . 5,000
EST_OPER_COST 0
EST_TOTAL_COST ... 0.01
DESCRIPTION
max_card_est ... 5,000
fragment_id 0
parent_frag (none)
fragment_type .. master
statement_index 0
affinity_value . 0
est_memory_per_cpu . 16906 KB
max_max_cardinality 0
total_overflow_size 0.00 KB
xn_access_mode . read_only
xn_autoabort_interval 0
auto_query_retry ... enabled
plan_version ... 2,600
embedded_arkcmp used
LDAP_USERNAME .. TRAFODION
SCHEMA . G_HPIT
select_list TRAFODION.G_HPIT.PERF_SUM_F.DT_MTH_KY, TRAFODION.G_HPIT.PERF_SUM_F.CTRY_KY, TRAFODION.G_HPIT.PERF_SUM_F.QTA_PROD_LN_KY, TRAFODION.G_HPIT.PERF_SUM_F.RTE_TO_MKT_KY, TRAFODION.G_HPIT.PERF_SUM_F.SLDT_CUST_ACCT_HIER_KY,
[jira] [Commented] (TRAFODION-323) LP Bug: 1324370 - dcs-stop.sh script frequently hangs and does not stop any dcs processes
[ https://issues.apache.org/jira/browse/TRAFODION-323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708690#comment-14708690 ] Atanu Mishra commented on TRAFODION-323: Steve Varnau (steve-varnau) wrote on 2014-05-30: #2 This seems to happen in the servers.sh script. There are 2 of them running from the stop-dcs.sh script; I kill both, and then the shutdown continues. It may be that dcs-daemon.sh is called from servers.sh, and dcs-daemon.sh is in a loop. Here is the output, where it was stuck for 15 minutes until I killed the processes:

2014-05-30 13:15:45 localhost: stopping server.
2014-05-30 13:15:45 localhost: stopping server.
2014-05-30 13:15:45 localhost: stopping server.
2014-05-30 13:15:45 localhost: stopping server.
2014-05-30 13:15:45 localhost: stopping server.
2014-05-30 13:30:17 /home/jenkins/workspace/phoenix_test/dcs/dcs-0.8.0/bin/servers.sh: line 74: 4262 Terminated eval ${@// /\\ } $instance 2>&1
2014-05-30 13:30:17 4264 | sed s/^/$server: /
2014-05-30 13:30:18 stopping master.
2014-05-30 13:30:19 Shutting down (normal) the SQ environment!

Stacey Johnson (sjohnson-w) on 2014-06-10 information type: Proprietary → Public
Aruna Sadashiva (aruna-sadashiva) wrote on 2014-07-22: #3 Have not seen this in a while, so will close it; we can reopen if we see this issue again. Changed in trafodion: status: New → Fix Released

LP Bug: 1324370 - dcs-stop.sh script frequently hangs and does not stop any dcs processes - Key: TRAFODION-323 URL: https://issues.apache.org/jira/browse/TRAFODION-323 Project: Apache Trafodion Issue Type: Bug Components: connectivity-dcs Reporter: Aruna Sadashiva Assignee: Anuradha Hegde Priority: Critical

dcs-stop.sh frequently hangs and does not stop any of the dcs processes. It displays "Stopping server" for all the servers, but just hangs at that point. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
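A generic way to avoid this class of hang in a stop script is to bound the wait for each daemon and escalate instead of looping forever. This is only an illustrative Python sketch of that pattern, not what the dcs scripts actually do; the `sleep` process is a stand-in for a server:

```python
import subprocess

def stop_with_timeout(proc, grace_seconds=10):
    # Ask politely first (SIGTERM), then escalate to SIGKILL rather than
    # waiting indefinitely the way the hung stop script did.
    proc.terminate()
    try:
        proc.wait(timeout=grace_seconds)
        return "stopped cleanly"
    except subprocess.TimeoutExpired:
        proc.kill()
        proc.wait()
        return "killed after timeout"

# 'sleep' exits promptly on SIGTERM, so this path stops cleanly.
server = subprocess.Popen(["sleep", "300"])
print(stop_with_timeout(server))   # → stopped cleanly
```

The key design choice is that every wait has a deadline, so a stuck child can delay shutdown by at most `grace_seconds`.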
[jira] [Commented] (TRAFODION-281) LP Bug: 1321052 - get tables for schema hive.hive generates a sqlci core
[ https://issues.apache.org/jira/browse/TRAFODION-281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708657#comment-14708657 ] Atanu Mishra commented on TRAFODION-281: Anoop Sharma (anoop-sharma) on 2014-05-30 Changed in trafodion: status: In Progress → Fix Committed Julie Thai (julie-y-thai) wrote on 2014-06-06: #1 Verified on RC2 (traf_20140606_0930): Trafodion Conversational Interface 0.8.0 (c) Copyright 2014 Hewlett-Packard Development Company, LP. set schema hive.hive; --- SQL operation complete. get tables; --- SQL operation complete. Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1321052 - get tables for schema hive.hive generates a sqlci core Key: TRAFODION-281 URL: https://issues.apache.org/jira/browse/TRAFODION-281 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Julie Thai Assignee: Anoop Sharma Priority: Critical Fix For: 1.0 (pre-incubation) On workstation, datalake_64_1 v40646, get tables in schema hive.hive (where schema does not contain any tables) generates core. To reproduce in sqlci or trafci, issue: set schema hive.hive; [cqd mode_seahive 'on';] get tables; MY_SQROOT=/opt/home/thaiju/datalake_64_1 who@host=tha...@g4t3029.houston.hp.com JAVA_HOME=/opt/home/tools/jdk1.7.0_09_64 linux=2.6.32-279.el6.x86_64 redhat=6.3 Release 0.7.0 (Build release [40646], branch 40646-project/datalake_64_1, date 19May14) From sqlci: /opt/home/thaiju/datalake_64_1: sqlci Trafodion Conversational Interface 0.7.0 (c) Copyright 2014 Hewlett-Packard Development Company, LP. set schema hive.hive; --- SQL operation complete. 
get tables; *** glibc detected *** sqlci: munmap_chunk(): invalid pointer: 0x7fffe902ce20 *** === Backtrace: = /lib64/libc.so.6[0x33088760e6] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN24ExExeUtilHiveMDaccessTcb4workEv+0xb8a)[0x74d589ea] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN11ExScheduler4workEl+0x223)[0x74d97833] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN11ex_root_tcb7executeEP10CliGlobalsP16ExExeStmtGlobalsP10DescriptorRP12ComDiagsAreai+0x662)[0x74d0d9f2] /opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(_ZN12CliStatement7executeEP10CliGlobalsP10DescriptorR12ComDiagsAreaNS_9ExecStateEij+0x1104)[0x76115e84] /opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(SQLCLI_PerformTasks+0x3fa)[0x760d8aaa] /opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(SQLCLI_Exec+0x52)[0x760d97d2] /opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(SQL_EXEC_Exec+0x115)[0x76121f65] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN15ExeCliInterface4execEPci+0x55)[0x74d4aa05] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN15ExeCliInterface17fetchRowsPrologueEPKciiPc+0x10c)[0x74d4c7ac] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN15ExeCliInterface12fetchAllRowsERP5QueuePc+0x66)[0x74d4d786] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN12ExExeUtilTcb12fetchAllRowsERP5QueuePciiRsi+0x28)[0x74d52e48] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN31ExExeUtilGetHiveMetadataInfoTcb16fetchAllHiveRowsERP5QueueiRs+0xa1)[0x74d53171] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN31ExExeUtilGetHiveMetadataInfoTcb4workEv+0x1e9)[0x74d56289] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN11ExScheduler4workEl+0x223)[0x74d97833] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN11ex_root_tcb7executeEP10CliGlobalsP16ExExeStmtGlobalsP10DescriptorRP12ComDiagsAreai+0x662)[0x74d0d9f2] 
/opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(_ZN12CliStatement7executeEP10CliGlobalsP10DescriptorR12ComDiagsAreaNS_9ExecStateEij+0x1104)[0x76115e84] /opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(SQLCLI_PerformTasks+0x3fa)[0x760d8aaa] /opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(SQLCLI_Exec+0x52)[0x760d97d2] /opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(SQL_EXEC_Exec+0x115)[0x76121f65] /opt/home/thaiju/datalake_64_1/export/lib64/libsqlcilib.so(_ZN6SqlCmd6doExecEP8SqlciEnvP13SQLCLI_OBJ_IDP8PrepStmtiPPcPN8CharInfo7CharSetEi+0x14b)[0x77bd6f7b] /opt/home/thaiju/datalake_64_1/export/lib64/libsqlcilib.so(_ZN6SqlCmd10do_executeEP8SqlciEnvP8PrepStmtiPPcPN8CharInfo7CharSetEi+0x909)[0x77bd78e9] /opt/home/thaiju/datalake_64_1/export/lib64/libsqlcilib.so(_ZN3DML7processEP8SqlciEnv+0x3dd)[0x77bd801d] /opt/home/thaiju/datalake_64_1/export/lib64/libsqlcilib.so(_ZN8SqlciEnv15executeCommandsERP9InputStmt+0x4b6)[0x77bc4c76]
[jira] [Commented] (TRAFODION-291) LP Bug: 1321857 - Update statistics fails with error 8448.
[ https://issues.apache.org/jira/browse/TRAFODION-291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708666#comment-14708666 ] Atanu Mishra commented on TRAFODION-291: Trafodion-Gerrit (neo-devtools) wrote on 2014-09-12: Fix proposed to core (master) #3 Fix proposed to branch: master Review: https://review.trafodion.org/422

Trafodion-Gerrit (neo-devtools) wrote on 2014-09-15: Fix merged to core (master) #4 Reviewed: https://review.trafodion.org/422 Committed: https://github.com/trafodion/core/commit/3deb3697b83a1235f6eb8514443fc0d9855e04fe Submitter: Trafodion Jenkins Branch: master

commit 3deb3697b83a1235f6eb8514443fc0d9855e04fe
Author: Barry Fritchman email address hidden
Date: Thu Sep 11 17:57:03 2014 +

Provide quick row count estimation for Ustat

Update Statistics needs an estimate of the cardinality of an HBase table, which to this point has been provided by the result of selecting count(*) from the table with an internal query. This incurred a significant overhead for large files, and also occasionally resulted in an 8448 error due to a known coprocessor problem.

The approach implemented by this fix is to access the HFiles through the FileSystem interface and read the EntryCount field in the trailer block of each file. Some sampling of initial data blocks is done to determine the expected number of missing KeyValues due to nulls and the number of non-PUT KeyValues. The number of rows is estimated by dividing the adjusted count by the number of columns in the table. The MemStore of each of the table's regions is checked to get the total storage for the table outside of HFiles, and the number of rows in memory is estimated using the total MemStore size and the size-to-rowcount ratio for the HFiles.
Change-Id: I7435ec3c765992084947b9dc7f8540c779f1f5d3 Closes-Bug: #1321857 Changed in trafodion: status: In Progress → Fix Committed Stacey Johnson (sjohnson-w) on 2014-10-16 Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1321857 - Update statistics fails with error 8448. -- Key: TRAFODION-291 URL: https://issues.apache.org/jira/browse/TRAFODION-291 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Guy Groulx Assignee: Barry Fritchman Priority: Critical Fix For: 0.9 (pre-incubation) After loading tables, we do update statistics on our debitcredit tables. Here's the output of the upd stats on ACCOUNT_2048. SQLEXCEPTION on Statement, Error Code = -9200 update statistics for table trafodion.debitcredit.account_2048 on every key sample *** ERROR[9200] UPDATE STATISTICS for table TRAFODION.DEBITCREDIT.ACCOUNT_2048 encountered an error (8448) from statement getRow(). [2014-05-21 06:52:30] *** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::coProcAggr returned error HBASE_ACCESS_ERROR(-705). Cause: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=10, exceptions: Wed May 21 06:42:18 PDT 2014, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@1c387843, java.net.SocketTimeoutException: Call to lava-31.hpl.hp.com/15.25.115.188:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 6 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/15.25.115.185:53508 remote=lava-31.hpl.hp.com/15.25.115.188:60020] Wed May 21 06:43:19 PDT 2014, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@1c387843, java.net.SocketTimeoutException: Call to lava-31.hpl.hp.com/15.25.115.188:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 6 millis timeout while waiting for channel to be ready for read. 
ch : java.nio.channels.SocketChannel[connected local=/15.25.115.185:53610 remote=lava-31.hpl.hp.com/15.25.115.188:60020] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
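The estimation arithmetic described in the commit message above can be sketched as follows. This is a hedged paraphrase with made-up numbers, not the actual fix: it assumes the per-HFile EntryCount values, the sampled null and non-PUT counts, and the MemStore size have already been collected.

```python
def estimate_row_count(entry_counts, num_cols, null_kvs_per_row,
                       non_put_kvs, memstore_bytes, hfile_bytes):
    # Sum the EntryCount field from each HFile trailer, drop non-PUT
    # KeyValues (deletes etc.), and divide by the KeyValues expected per
    # row: the column count minus the columns that sampling says are
    # null on average, since a null column stores no KeyValue.
    put_kvs = sum(entry_counts) - non_put_kvs
    kvs_per_row = num_cols - null_kvs_per_row
    hfile_rows = put_kvs / kvs_per_row
    # Rows still in MemStore are extrapolated from the HFile
    # size-to-rowcount ratio.
    memstore_rows = memstore_bytes * (hfile_rows / hfile_bytes)
    return int(hfile_rows + memstore_rows)

# Two HFiles with 9,500 and 10,300 entries; a 4-column table where one
# column is null on average; 300 non-PUT KeyValues; a MemStore holding
# one tenth as many bytes as the HFiles.
est = estimate_row_count([9500, 10300], 4, 1.0, 300,
                         memstore_bytes=10 << 20, hfile_bytes=100 << 20)
# est == 7150: 19,500 PUT KeyValues / 3 per row = 6,500 rows on disk,
# plus 650 estimated MemStore rows.
```

Because only trailer fields and a few sampled blocks are read, this avoids both the full-table scan cost and the coprocessor path that produced the 8448 errors.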
[jira] [Commented] (TRAFODION-295) LP Bug: 1322400 - purgedata does not maintain number of salted partition information
[ https://issues.apache.org/jira/browse/TRAFODION-295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708668#comment-14708668 ] Atanu Mishra commented on TRAFODION-295: Anoop Sharma (anoop-sharma) wrote on 2014-08-20: #2 fixed and committed:

create table t (a int not null primary key) salt using 4 partitions;
--- SQL operation complete.
showddl t;
CREATE TABLE TRAFODION.SCH.T
  (
    A INT NO DEFAULT NOT NULL NOT DROPPABLE
  , PRIMARY KEY (A ASC)
  )
  SALT USING 4 PARTITIONS
;
--- SQL operation complete.
insert into t values (1);
--- 1 row(s) inserted.
select * from t;
A
---
1
--- 1 row(s) selected.
purgedata t;
--- SQL operation complete.
showddl t;
CREATE TABLE TRAFODION.SCH.T
  (
    A INT NO DEFAULT NOT NULL NOT DROPPABLE
  , PRIMARY KEY (A ASC)
  )
  SALT USING 4 PARTITIONS
;
--- SQL operation complete.

Changed in trafodion: status: In Progress → Fix Committed

LP Bug: 1322400 - purgedata does not maintain number of salted partition information Key: TRAFODION-295 URL: https://issues.apache.org/jira/browse/TRAFODION-295 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Anoop Sharma Assignee: Anoop Sharma Priority: Critical Fix For: 0.9 (pre-incubation) purgedata does not maintain the number of salted partitions; it recreates the table with 1 partition. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-291) LP Bug: 1321857 - Update statistics fails with error 8448.
[ https://issues.apache.org/jira/browse/TRAFODION-291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-291. -- Resolution: Fixed Assignee: (was: Barry Fritchman) LP Bug: 1321857 - Update statistics fails with error 8448. -- Key: TRAFODION-291 URL: https://issues.apache.org/jira/browse/TRAFODION-291 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Guy Groulx Priority: Critical Fix For: 0.9 (pre-incubation) After loading tables, we do update statistics on our debitcredit tables. Here's the output of the upd stats on ACCOUNT_2048. SQLEXCEPTION on Statement, Error Code = -9200 update statistics for table trafodion.debitcredit.account_2048 on every key sample *** ERROR[9200] UPDATE STATISTICS for table TRAFODION.DEBITCREDIT.ACCOUNT_2048 encountered an error (8448) from statement getRow(). [2014-05-21 06:52:30] *** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::coProcAggr returned error HBASE_ACCESS_ERROR(-705). Cause: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=10, exceptions: Wed May 21 06:42:18 PDT 2014, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@1c387843, java.net.SocketTimeoutException: Call to lava-31.hpl.hp.com/15.25.115.188:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 6 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/15.25.115.185:53508 remote=lava-31.hpl.hp.com/15.25.115.188:60020] Wed May 21 06:43:19 PDT 2014, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@1c387843, java.net.SocketTimeoutException: Call to lava-31.hpl.hp.com/15.25.115.188:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 6 millis timeout while waiting for channel to be ready for read. 
ch : java.nio.channels.SocketChannel[connected local=/15.25.115.185:53610 remote=lava-31.hpl.hp.com/15.25.115.188:60020] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-321) LP Bug: 1324303 - bad cardinality estimates for metadata queries
[ https://issues.apache.org/jira/browse/TRAFODION-321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-321. -- Resolution: Fixed Assignee: (was: Apache Trafodion) Fix Version/s: 0.8 (pre-incubation) LP Bug: 1324303 - bad cardinality estimates for metadata queries Key: TRAFODION-321 URL: https://issues.apache.org/jira/browse/TRAFODION-321 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Apache Trafodion Priority: Critical Fix For: 0.8 (pre-incubation)

Cardinality estimation for metadata queries is not correct, specifically for the case of an index join where the join is on the index column. The cardinality of the join should reflect that of the left child of the join. Example of a metadata table that has the issue:

prepare st1 from select T.CONSTRAINT_TYPE, o.OBJECT_NAME from trafodion._MD_.table_constraints T, trafodion._MD_.objects O where T.table_uid = o.object_uid;
explain options 'f' st1;

LC RC OP OPERATOR OPT DESCRIPTION CARD
3 . 4 root 5.00E+003
2 1 3 hybrid_hash_join 5.00E+003
. . 2 trafodion_index_scan OBJECTS 1.00E+002
. . 1 trafodion_scan TABLE_CONSTRAINTS 1.00E+002

--- SQL operation complete.

Assigned to LaunchPad User taoufik ben abdellatif -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-321) LP Bug: 1324303 - bad cardinality estimates for metadata queries
[ https://issues.apache.org/jira/browse/TRAFODION-321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708685#comment-14708685 ] Atanu Mishra commented on TRAFODION-321: taoufik ben abdellatif (taoufik-abdellatif) wrote on 2014-06-13: #1 Changes were made to method HistogramCache::createColStatsList in optimizer/NATable.cpp to look for columns that have unique indices specified on them and flag them as unique. The FetchHistograms logic in hs_read.cpp uses this flag to set the UEC of the column to be the same as the table rowcount. Changed in trafodion: status: In Progress → Fix Committed

Julie Thai (julie-y-thai) wrote on 2014-08-12: #2 Verified on workstation, daily build 20140807_0830. Verified that the cardinality of hybrid_hash_join reflects the left child of the join.

control query shape hybrid_hash_join(scan(TABLE 'T', path 'TRAFODION._MD_.TABLE_CONSTRAINTS', forward , blocks_per_access 1 , mdam off), scan(TABLE 'O', path 'TRAFODION._MD_.OBJECTS_UNIQ_IDX', forward , blocks_per_access 1 , mdam off));
--- SQL operation complete.
prepare st1 from select T.CONSTRAINT_TYPE, o.OBJECT_NAME from trafodion._MD_.table_constraints T, trafodion._MD_.objects O where T.table_uid = o.object_uid;
--- SQL command prepared.
explain options 'f' st1;

LC RC OP OPERATOR OPT DESCRIPTION CARD
3 . 4 root 1.00E+002
2 1 3 hybrid_hash_join u 1.00E+002
. . 2 trafodion_scan TABLE_CONSTRAINTS 1.00E+002
. . 1 trafodion_index_scan OBJECTS 1.00E+002

--- SQL operation complete.

Changed in trafodion: status: Fix Committed → Fix Released

LP Bug: 1324303 - bad cardinality estimates for metadata queries Key: TRAFODION-321 URL: https://issues.apache.org/jira/browse/TRAFODION-321 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Apache Trafodion Assignee: Apache Trafodion Priority: Critical

Cardinality estimation for metadata queries is not correct, specifically for the case of an index join where the join is on the index column. The cardinality of the join should reflect that of the left child of the join. Example of a metadata table that has the issue:

prepare st1 from select T.CONSTRAINT_TYPE, o.OBJECT_NAME from trafodion._MD_.table_constraints T, trafodion._MD_.objects O where T.table_uid = o.object_uid;
explain options 'f' st1;

LC RC OP OPERATOR OPT DESCRIPTION CARD
3 . 4 root 5.00E+003
2 1 3 hybrid_hash_join 5.00E+003
. . 2 trafodion_index_scan OBJECTS 1.00E+002
. . 1 trafodion_scan TABLE_CONSTRAINTS 1.00E+002

--- SQL operation complete.

Assigned to LaunchPad User taoufik ben abdellatif -- This message was sent by Atlassian JIRA (v6.3.4#6332)
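Why the fix makes the join estimate collapse to the left child can be seen from the classic equi-join cardinality formula, |R join S| = |R| * |S| / max(UEC(R.a), UEC(S.a)): once the unique-index column's UEC is set equal to the table rowcount, the right-side factor cancels. A small worked sketch (the formula is the textbook System R style estimate and the numbers are illustrative, not Trafodion's optimizer code):

```python
def equijoin_card(left_rows, right_rows, left_uec, right_uec):
    # Textbook equi-join estimate: |R join S| = |R| * |S| / max(UEC(R.a), UEC(S.a))
    return left_rows * right_rows / max(left_uec, right_uec)

# Joining on a column with a unique index on the right side means
# right_uec equals the right-side row count, so the estimate collapses
# to the left child's cardinality, as the verified plan shows.
left_rows, right_rows = 100, 5000
card = equijoin_card(left_rows, right_rows, left_uec=100, right_uec=right_rows)
assert card == left_rows
```

Before the fix, the unique index column's UEC was underestimated, so `max(...)` was too small and the join cardinality was inflated (5.00E+003 instead of 1.00E+002 in the plans above).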
[jira] [Closed] (TRAFODION-281) LP Bug: 1321052 - get tables for schema hive.hive generates a sqlci core
[ https://issues.apache.org/jira/browse/TRAFODION-281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-281. -- Resolution: Fixed Assignee: (was: Anoop Sharma) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1321052 - get tables for schema hive.hive generates a sqlci core Key: TRAFODION-281 URL: https://issues.apache.org/jira/browse/TRAFODION-281 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Julie Thai Priority: Critical Fix For: 1.0 (pre-incubation) On workstation, datalake_64_1 v40646, get tables in schema hive.hive (where schema does not contain any tables) generates core. To reproduce in sqlci or trafci, issue: set schema hive.hive; [cqd mode_seahive 'on';] get tables; MY_SQROOT=/opt/home/thaiju/datalake_64_1 who@host=tha...@g4t3029.houston.hp.com JAVA_HOME=/opt/home/tools/jdk1.7.0_09_64 linux=2.6.32-279.el6.x86_64 redhat=6.3 Release 0.7.0 (Build release [40646], branch 40646-project/datalake_64_1, date 19May14) From sqlci: /opt/home/thaiju/datalake_64_1: sqlci Trafodion Conversational Interface 0.7.0 (c) Copyright 2014 Hewlett-Packard Development Company, LP. set schema hive.hive; --- SQL operation complete. 
get tables; *** glibc detected *** sqlci: munmap_chunk(): invalid pointer: 0x7fffe902ce20 *** === Backtrace: = /lib64/libc.so.6[0x33088760e6] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN24ExExeUtilHiveMDaccessTcb4workEv+0xb8a)[0x74d589ea] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN11ExScheduler4workEl+0x223)[0x74d97833] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN11ex_root_tcb7executeEP10CliGlobalsP16ExExeStmtGlobalsP10DescriptorRP12ComDiagsAreai+0x662)[0x74d0d9f2] /opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(_ZN12CliStatement7executeEP10CliGlobalsP10DescriptorR12ComDiagsAreaNS_9ExecStateEij+0x1104)[0x76115e84] /opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(SQLCLI_PerformTasks+0x3fa)[0x760d8aaa] /opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(SQLCLI_Exec+0x52)[0x760d97d2] /opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(SQL_EXEC_Exec+0x115)[0x76121f65] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN15ExeCliInterface4execEPci+0x55)[0x74d4aa05] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN15ExeCliInterface17fetchRowsPrologueEPKciiPc+0x10c)[0x74d4c7ac] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN15ExeCliInterface12fetchAllRowsERP5QueuePc+0x66)[0x74d4d786] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN12ExExeUtilTcb12fetchAllRowsERP5QueuePciiRsi+0x28)[0x74d52e48] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN31ExExeUtilGetHiveMetadataInfoTcb16fetchAllHiveRowsERP5QueueiRs+0xa1)[0x74d53171] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN31ExExeUtilGetHiveMetadataInfoTcb4workEv+0x1e9)[0x74d56289] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN11ExScheduler4workEl+0x223)[0x74d97833] /opt/home/thaiju/datalake_64_1/export/lib64/libexecutor.so(_ZN11ex_root_tcb7executeEP10CliGlobalsP16ExExeStmtGlobalsP10DescriptorRP12ComDiagsAreai+0x662)[0x74d0d9f2] 
/opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(_ZN12CliStatement7executeEP10CliGlobalsP10DescriptorR12ComDiagsAreaNS_9ExecStateEij+0x1104)[0x76115e84] /opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(SQLCLI_PerformTasks+0x3fa)[0x760d8aaa] /opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(SQLCLI_Exec+0x52)[0x760d97d2] /opt/home/thaiju/datalake_64_1/export/lib64/libcli.so(SQL_EXEC_Exec+0x115)[0x76121f65] /opt/home/thaiju/datalake_64_1/export/lib64/libsqlcilib.so(_ZN6SqlCmd6doExecEP8SqlciEnvP13SQLCLI_OBJ_IDP8PrepStmtiPPcPN8CharInfo7CharSetEi+0x14b)[0x77bd6f7b] /opt/home/thaiju/datalake_64_1/export/lib64/libsqlcilib.so(_ZN6SqlCmd10do_executeEP8SqlciEnvP8PrepStmtiPPcPN8CharInfo7CharSetEi+0x909)[0x77bd78e9] /opt/home/thaiju/datalake_64_1/export/lib64/libsqlcilib.so(_ZN3DML7processEP8SqlciEnv+0x3dd)[0x77bd801d] /opt/home/thaiju/datalake_64_1/export/lib64/libsqlcilib.so(_ZN8SqlciEnv15executeCommandsERP9InputStmt+0x4b6)[0x77bc4c76] /opt/home/thaiju/datalake_64_1/export/lib64/libsqlcilib.so(_ZN8SqlciEnv3runEv+0x3b)[0x77bc6d1b] sqlci[0x401af1] /lib64/libc.so.6(__libc_start_main+0xfd)[0x330881ecdd] sqlci[0x401479] === Memory map: 0040-00403000 r-xp fd:00 15859734 /opt/home/thaiju/datalake_64_1/export/bin64/sqlci 00602000-00603000 rw-p 2000 fd:00 15859734
[jira] [Commented] (TRAFODION-298) LP Bug: 1323159 - create-table-as fails, trafci reports ERROR[1] whereas sqlci cores
[ https://issues.apache.org/jira/browse/TRAFODION-298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708673#comment-14708673 ] Atanu Mishra commented on TRAFODION-298: Anoop Sharma (anoop-sharma) on 2014-05-30 Changed in trafodion: status: In Progress → Fix Committed Julie Thai (julie-y-thai) wrote on 2014-06-02: #1 Verified on traf_0601: create table aaa ( cola int not null, colb int not null, colc int not null, primary key (cola, colb)); --- SQL operation complete. create table bbb primary key (colc) no load as select * from aaa; --- 0 row(s) inserted. Julie Thai (julie-y-thai) on 2014-06-02 Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1323159 - create-table-as fails, trafci reports ERROR[1] whereas sqlci cores Key: TRAFODION-298 URL: https://issues.apache.org/jira/browse/TRAFODION-298 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Julie Thai Assignee: Anoop Sharma Priority: Critical Fix For: 0.8 (pre-incubation) Encountered this issue on workstation, datalake_64_1 v40963. In trafci: Welcome to Trafodion Command Interface Copyright(C) 2013-2014 Hewlett-Packard Development Company, L.P. Host Name/IP Address: localhost:27700 User Name: zz Connected to Trafodion SQLcreate table aaa ( cola int not null, colb int not null, colc int not null, primary key (cola, colb)); --- SQL operation complete. SQLcreate table bbb primary key (colc) no load as select * from aaa; *** ERROR[1] The message id: problem_with_server_read *** ERROR[1] The message id: header_not_long_enough *** ERROR[1] The message id: problem_with_server_read *** ERROR[1] The message id: header_not_long_enough SQL From sqlci: Trafodion Conversational Interface 0.7.0 (c) Copyright 2014 Hewlett-Packard Development Company, LP. create table aaa ( cola int not null, colb int not null, colc int not null, primary key (cola, colb)); --- SQL operation complete. 
create table bbb primary key (colc) no load as select * from aaa; # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x70ac84d0, pid=6318, tid=140737182505856 # # JRE version: 7.0_09-b05 # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.5-b02 mixed mode linux-amd64 compressed oops) # Problematic frame: # C [libparser.so+0x3ab4d0] StmtDDLCreateTable::synthesize()+0x6d0 # # Core dump written. Default location: /opt/home/thaiju/dcs-0.7.0-beta/core or core.6318 # # An error report file with more information is saved as: # /opt/home/thaiju/dcs-0.7.0-beta/hs_err_pid6318.log # # If you would like to submit a bug report, please visit: # http://bugreport.sun.com/bugreport/crash.jsp # The crash happened outside the Java Virtual Machine in native code. # See problematic frame for where to report the bug. # Aborted (core dumped) Callstack from sqlci core: #0 0x0033088328a5 in raise () from /lib64/libc.so.6 #1 0x003308834085 in abort () from /lib64/libc.so.6 #2 0x76d46455 in os::abort(bool) () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #3 0x76ea6717 in VMError::report_and_die() () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #4 0x76d49f60 in JVM_handle_linux_signal () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #5 signal handler called #6 0x70ac84d0 in StmtDDLCreateTable::synthesize ( this=value optimized out) at ../parser/StmtDDLCreate.cpp:4401 #7 0x70a278f1 in arkcmpparse () at parser/linux/64bit/release/sqlparser.cpp:72458 #8 0x716cc854 in Parser::parseSQL (this=0x7fff3d80, node=0x7fff3e90, internalExpr=value optimized out, paramItemList= 0x41) at ../sqlcomp/parser.cpp:736 #9 0x716cdf34 in Parser::parseDML (this=0x7fff3d80, txt=value optimized out, node=0x7fff3e90, internalExpr=0, paramItemList=0x0) at ../sqlcomp/parser.cpp:1032 #10 0x7165bdea in CmpMain::sqlcomp (this=0x7fff3f60, input=..., gen_code=0x7fffd4b30460, gen_code_len=0x7fffd4b30458, heap=0x7fffe91ceaa0, 
phase=CmpMain::END, fragmentDir=0x7fff40b8, op=3004) at ../sqlcomp/CmpMain.cpp:753 #11 0x7576715f in CmpStatement::process (this=0x7fffd4b0eae8, sqltext=value optimized out) at ../arkcmp/CmpStatement.cpp:474 #12 0x7575bd6e in CmpContext::compileDirect (this=0x7fffe8639090, data=0x7fffe91f1880 \200, data_len=200, outHeap=0x7fffe9c40660, charset=15, op=CmpMessageObj::SQLTEXT_COMPILE, gen_code=@0x7fff4680, gen_code_len=@0x7fff4688, parserFlags=0, diagsArea=0x7fffe91f1950) at ../arkcmp/CmpContext.cpp:663 #13
[jira] [Closed] (TRAFODION-298) LP Bug: 1323159 - create-table-as fails, trafci reports ERROR[1] whereas sqlci cores
[ https://issues.apache.org/jira/browse/TRAFODION-298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-298. -- Resolution: Fixed Assignee: (was: Anoop Sharma) Fix Version/s: 0.8 (pre-incubation) LP Bug: 1323159 - create-table-as fails, trafci reports ERROR[1] whereas sqlci cores Key: TRAFODION-298 URL: https://issues.apache.org/jira/browse/TRAFODION-298 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Julie Thai Priority: Critical Fix For: 0.8 (pre-incubation) Encountered this issue on workstation, datalake_64_1 v40963. In trafci: Welcome to Trafodion Command Interface Copyright(C) 2013-2014 Hewlett-Packard Development Company, L.P. Host Name/IP Address: localhost:27700 User Name: zz Connected to Trafodion SQLcreate table aaa ( cola int not null, colb int not null, colc int not null, primary key (cola, colb)); --- SQL operation complete. SQLcreate table bbb primary key (colc) no load as select * from aaa; *** ERROR[1] The message id: problem_with_server_read *** ERROR[1] The message id: header_not_long_enough *** ERROR[1] The message id: problem_with_server_read *** ERROR[1] The message id: header_not_long_enough SQL From sqlci: Trafodion Conversational Interface 0.7.0 (c) Copyright 2014 Hewlett-Packard Development Company, LP. create table aaa ( cola int not null, colb int not null, colc int not null, primary key (cola, colb)); --- SQL operation complete. create table bbb primary key (colc) no load as select * from aaa; # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x70ac84d0, pid=6318, tid=140737182505856 # # JRE version: 7.0_09-b05 # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.5-b02 mixed mode linux-amd64 compressed oops) # Problematic frame: # C [libparser.so+0x3ab4d0] StmtDDLCreateTable::synthesize()+0x6d0 # # Core dump written. 
Default location: /opt/home/thaiju/dcs-0.7.0-beta/core or core.6318 # # An error report file with more information is saved as: # /opt/home/thaiju/dcs-0.7.0-beta/hs_err_pid6318.log # # If you would like to submit a bug report, please visit: # http://bugreport.sun.com/bugreport/crash.jsp # The crash happened outside the Java Virtual Machine in native code. # See problematic frame for where to report the bug. # Aborted (core dumped) Callstack from sqlci core: #0 0x0033088328a5 in raise () from /lib64/libc.so.6 #1 0x003308834085 in abort () from /lib64/libc.so.6 #2 0x76d46455 in os::abort(bool) () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #3 0x76ea6717 in VMError::report_and_die() () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #4 0x76d49f60 in JVM_handle_linux_signal () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #5 signal handler called #6 0x70ac84d0 in StmtDDLCreateTable::synthesize ( this=value optimized out) at ../parser/StmtDDLCreate.cpp:4401 #7 0x70a278f1 in arkcmpparse () at parser/linux/64bit/release/sqlparser.cpp:72458 #8 0x716cc854 in Parser::parseSQL (this=0x7fff3d80, node=0x7fff3e90, internalExpr=value optimized out, paramItemList= 0x41) at ../sqlcomp/parser.cpp:736 #9 0x716cdf34 in Parser::parseDML (this=0x7fff3d80, txt=value optimized out, node=0x7fff3e90, internalExpr=0, paramItemList=0x0) at ../sqlcomp/parser.cpp:1032 #10 0x7165bdea in CmpMain::sqlcomp (this=0x7fff3f60, input=..., gen_code=0x7fffd4b30460, gen_code_len=0x7fffd4b30458, heap=0x7fffe91ceaa0, phase=CmpMain::END, fragmentDir=0x7fff40b8, op=3004) at ../sqlcomp/CmpMain.cpp:753 #11 0x7576715f in CmpStatement::process (this=0x7fffd4b0eae8, sqltext=value optimized out) at ../arkcmp/CmpStatement.cpp:474 #12 0x7575bd6e in CmpContext::compileDirect (this=0x7fffe8639090, data=0x7fffe91f1880 \200, data_len=200, outHeap=0x7fffe9c40660, charset=15, op=CmpMessageObj::SQLTEXT_COMPILE, gen_code=@0x7fff4680, gen_code_len=@0x7fff4688, 
parserFlags=0, diagsArea=0x7fffe91f1950) at ../arkcmp/CmpContext.cpp:663 #13 0x76114c57 in CliStatement::prepare2 (this=0x7fffe91cf010, source=0x7fffe9211a10 create table bbb primary key (colc) no load as select * from aaa;, diagsArea=..., passed_gen_code=value optimized out, passed_gen_code_len=value optimized out, charset=15, unpackTdbs=1, cliFlags=129) at ../cli/Statement.cpp:1786 #14 0x76115106 in CliStatement::prepare (this=0x7fffe91cf010, source=0x7fffe9211a10 create table bbb primary key (colc) no load as
[jira] [Commented] (TRAFODION-297) LP Bug: 1322691 - GET PROCEDURES command should be supported now as SPJs/UDRs are implemented in Traf
[ https://issues.apache.org/jira/browse/TRAFODION-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14708672#comment-14708672 ] Atanu Mishra commented on TRAFODION-297: Suresh Subbiah (suresh-subbiah) wrote on 2014-06-24: #1 These 7 GET statements are now supported. get procedures [in schema schema-name]; get libraries [in schema schema-name]; get functions [in schema schema-name]; get table_mapping functions [in schema schema-name]; get procedures for library library-name; get functions for library library-name; get table_mapping functions for library library-name; Changed in trafodion: status: In Progress → Fix Committed Aruna Sadashiva (aruna-sadashiva) wrote on 2014-07-17: #2 Get procedures worked as expected. But of the commands that Suresh listed, these two do not seem to be supported yet: SQL>get functions in schema t4qa; *** ERROR[4218] The options specified in the GET queryType command are incorrect, inconsistent or not supported. [2014-07-17 15:01:58] SQL>get table_mapping functions in schema t4qa; *** ERROR[4218] The options specified in the GET queryType command are incorrect, inconsistent or not supported. [2014-07-17 15:02:31] Aruna Sadashiva (aruna-sadashiva) wrote on 2014-07-18: #3 Filed a new bug for the commands with IN SCHEMA that failed. The problem this bug was filed for has been fixed. Changed in trafodion: status: Fix Committed → Confirmed status: Confirmed → Fix Released LP Bug: 1322691 - GET PROCEDURES command should be supported now as SPJs/UDRs are implemented in Traf - Key: TRAFODION-297 URL: https://issues.apache.org/jira/browse/TRAFODION-297 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Aruna Sadashiva Assignee: Suresh Subbiah Priority: Critical Fix For: 1.0 (pre-incubation) GET PROCEDURES fails with an unsupported-command error; it should be supported now that we support SPJs and UDRs in Trafodion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-296) LP Bug: 1322451 - SPJ calls does not work in Trafci
[ https://issues.apache.org/jira/browse/TRAFODION-296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-296. -- Resolution: Fixed Assignee: (was: justin...@hp.com) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1322451 - SPJ calls does not work in Trafci Key: TRAFODION-296 URL: https://issues.apache.org/jira/browse/TRAFODION-296 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Chong Hsu Priority: Critical Fix For: 1.0 (pre-incubation) Testing Trafodion - trafodion-ci-project-datalake_64_1-20140516-v40596_release Calling a simple SPJ, that returns the input value, worked ok from sqlci. But failed from Trafci. A testing SPJ is attached with Java procedures: public static void NA001(int paramInt, Integer[] paramArrayOfInteger) { paramArrayOfInteger[0] = new Integer(paramInt); } public static void ST001(String paramString, String[] paramArrayOfString) { paramArrayOfString[0] = paramString; } - Running SPJ from sqlci: set schema spj_test; create library testlib file '/opt/home/SQFQA/SPJRoot/TestSPJ.jar'; Create procedure NA001 (in in1 int, out out1 integer) external name 'TestSPJ.NA001(int,java.lang.Integer[])' library testlib parameter style java language java ; call NA001(100,?); OUT1 --- 100 --- SQL operation complete. Create procedure ST001 (in in1 varchar(50), out out1 varchar(50)) external name 'TestSPJ.ST001' library testlib language java parameter style java; --- SQL operation complete. call st001('aaa', ?); OUT1 -- aaa --- SQL operation complete. - Running SPJ from trafci: set schema spj_test; SQLcall na001(100,?); OUT1 --- 60 --- SQL operation complete. SQLcall na001(32656148, ?); OUT1 --- 60 --- SQL operation complete. SQLcall st001('',?); *** ERROR[29451] Internal error processing command. Details=null SQLcall st001('aaa bbb ccc ddd', ?); *** ERROR[29451] Internal error processing command. Details=null -- This message was sent by Atlassian JIRA (v6.3.4#6332)
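The Java procedure bodies quoted in the report are self-contained and can be run outside Trafodion. This standalone sketch (reusing the report's TestSPJ class and methods verbatim) shows that the procedure code itself echoes its input via the OUT-parameter array, which is consistent with the wrong value (60) appearing only through trafci, not in sqlci:

```java
// Standalone copy of the SPJ bodies from the bug report. Running them
// directly confirms the Java side echoes its input, so the wrong OUT
// value seen only via trafci pointed away from the procedure code itself.
public class TestSPJ {
    public static void NA001(int paramInt, Integer[] paramArrayOfInteger) {
        // SPJ OUT parameters use a one-element array per parameter style java
        paramArrayOfInteger[0] = new Integer(paramInt);
    }

    public static void ST001(String paramString, String[] paramArrayOfString) {
        paramArrayOfString[0] = paramString;
    }

    public static void main(String[] args) {
        Integer[] out = new Integer[1];
        NA001(100, out);
        System.out.println(out[0]);   // 100, matching the sqlci result

        String[] sout = new String[1];
        ST001("aaa", sout);
        System.out.println(sout[0]);  // aaa
    }
}
```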
[jira] [Closed] (TRAFODION-323) LP Bug: 1324370 - dcs-stop.sh script frequently hangs and does not stop any dcs processes
[ https://issues.apache.org/jira/browse/TRAFODION-323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-323. -- Resolution: Fixed Assignee: (was: Anuradha Hegde) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1324370 - dcs-stop.sh script frequently hangs and does not stop any dcs processes - Key: TRAFODION-323 URL: https://issues.apache.org/jira/browse/TRAFODION-323 Project: Apache Trafodion Issue Type: Bug Components: connectivity-dcs Reporter: Aruna Sadashiva Priority: Critical Fix For: 1.0 (pre-incubation) dcs-stop.sh frequently hangs and does not stop any of the dcs processes. It displays "Stopping server" for all the servers, but then just hangs at that point. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (TRAFODION-1265) LP Bug: 1463179 - VSBB update causes TM heap leak
[ https://issues.apache.org/jira/browse/TRAFODION-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra updated TRAFODION-1265: Fix Version/s: 2.0-incubating LP Bug: 1463179 - VSBB update causes TM heap leak - Key: TRAFODION-1265 URL: https://issues.apache.org/jira/browse/TRAFODION-1265 Project: Apache Trafodion Issue Type: Bug Components: dtm Reporter: Buddy Wilbanks Assignee: Joanie Cooper Priority: Blocker Labels: heap, leak, vsbb Fix For: 2.0-incubating Since the May 1 executor checkin that turned on VSBB update, the TM will leak memory to the point of an OOM condition after only 26 hours of the longevity test. Pretty sure it will be much quicker if we limit longevity to only perform the delivery transaction, which is the only VSBB invoker. We've collected jmaps for some of the TMs on zircon4, and core files for analysis. Located on /home/squser4/fqw/TMdumps. You can see the corefile size increasing as well as the old space usage. Here's the first and last corefiles taken for pid 9915 -rw--- 1 squser4 seaquest 113142116 Jun 8 18:51 jmapdump-9915.0 -rw--- 1 squser4 seaquest 341615426 Jun 8 19:58 jmapdump-9915.13 and here is the initial jmap-9915.0 old space: PS Old Generation capacity = 1407713280 (1342.5MB) used = 100979864 (96.3019027709961MB) free = 1306733416 (1246.198097229004MB) 7.17332609094943% used and then jmap-9915.13 PS Old Generation capacity = 571998208 (545.5MB) used = 250169256 (238.57999420166016MB) free = 321828952 (306.92000579833984MB) it grows about 12Meg every 5 minutes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
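The growth estimate at the end of the report can be checked against the two quoted jmap samples: old-generation "used" went from 96.30 MB (jmapdump-9915.0, 18:51) to 238.58 MB (jmapdump-9915.13, 19:58), a 67-minute span. A quick calculation using only those reported figures:

```java
// Sanity check of the leak rate implied by the two jmap samples quoted
// in the report (values and timestamps copied from the report text).
public class LeakRate {
    public static double mbPerFiveMinutes() {
        double usedStartMb = 96.30;   // jmapdump-9915.0, Jun 8 18:51
        double usedEndMb   = 238.58;  // jmapdump-9915.13, Jun 8 19:58
        double minutes     = 67.0;    // 18:51 -> 19:58
        return (usedEndMb - usedStartMb) / minutes * 5.0;
    }

    public static void main(String[] args) {
        // ~10.6 MB per 5 minutes, consistent with the reporter's
        // rougher "about 12Meg every 5 minutes" estimate
        System.out.println(mbPerFiveMinutes());
    }
}
```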
[jira] [Closed] (TRAFODION-325) LP Bug: 1324573 - DCS master died after many connections
[ https://issues.apache.org/jira/browse/TRAFODION-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-325. -- Resolution: Fixed Fix Version/s: 1.0 (pre-incubation) LP Bug: 1324573 - DCS master died after many connections Key: TRAFODION-325 URL: https://issues.apache.org/jira/browse/TRAFODION-325 Project: Apache Trafodion Issue Type: Bug Components: connectivity-dcs Reporter: Guy Groulx Assignee: Matt Brown Priority: Blocker Fix For: 1.0 (pre-incubation) On spinel, we've been pushing dcs connectivity.We're now running up to 1024 connections and may go higher. In a series of tests, each doing 1,2,4,8,16,32,64,128,256,512,1024 connections, the dcs master disappeared on us. core: 2014-05-29 07:11:33 /local/cores/1008/core.1401347493.n001.28720.java hs_err_pid: /opt/hp/squser2/gselva140525/dcs-0.7.0-beta ls bin conf dcs-0.7.0-beta.jar dcs-webapps docs hs_err_pid28720.log lib LICENSE.txt logs NOTICE.txt /opt/hp/squser2/gselva140525/dcs-0.7.0-beta logs: /opt/hp/squser2/gselva140525/dcs-0.7.0-beta/logs cat dcs-squser2-1-master-n001.out # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x76d875cd, pid=28720, tid=140737068099328 # # JRE version: 7.0_09-b05 # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.5-b02 mixed mode linux-amd64 compressed oops) # Problematic frame: # V [libjvm.so+0x6995cd] MachNode::in_RegMask(unsigned int) const+0x3d # # Core dump written. 
Default location: /opt/hp/squser2/gselva140525/dcs-0.7.0-beta/core or core.28720 # # An error report file with more information is saved as: # /opt/hp/squser2/gselva140525/dcs-0.7.0-beta/hs_err_pid28720.log # # If you would like to submit a bug report, please visit: # http://bugreport.sun.com/bugreport/crash.jsp # /opt/hp/squser2/gselva140525/dcs-0.7.0-beta/logs /opt/hp/squser2/gselva140525/dcs-0.7.0-beta/logs ls -lt | head total 72252 -rw-r- 1 squser2 seaquest38961 May 29 11:37 dcs-squser2-815-server-n001.log -rw-r- 1 squser2 seaquest 703 May 29 07:11 dcs-squser2-1-master-n001.out -rw-r- 1 squser2 seaquest 4000199 May 29 07:11 dcs-squser2-1-master-n001.log == Does not have anything interesting. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-325) LP Bug: 1324573 - DCS master died after many connections
[ https://issues.apache.org/jira/browse/TRAFODION-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696091#comment-14696091 ] Atanu Mishra commented on TRAFODION-325: Matt Brown (mattbrown-2) wrote on 2014-05-29: #1 Download full text (58.9 KiB) Looks like we’re hitting the bug below. Affected Java versions are 7u10 and is fixed in 8. http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8009460 See the “problematic frame” and stack trace below in bold /opt/hp/squser2/gselva140525/dcs-0.7.0-beta cat hs_err_pid28720.log # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x76d875cd, pid=28720, tid=140737068099328 # # JRE version: 7.0_09-b05 # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.5-b02 mixed mode linux-amd64 compressed oops) # Problematic frame: # V [libjvm.so+0x6995cd] MachNode::in_RegMask(unsigned int) const+0x3d # # Core dump written. Default location: /opt/hp/squser2/gselva140525/dcs-0.7.0-beta/core or core.28720 # # If you would like to submit a bug report, please visit: # http://bugreport.sun.com/bugreport/crash.jsp # --- T H R E A D --- Current thread (0x00896800): JavaThread C2 CompilerThread0 daemon [_thread_in_native, id=28758, stack(0x7fffe6e36000,0x7fffe6f37000)] siginfo:si_signo=SIGSEGV: si_errno=0, si_code=128 (), si_addr=0x Registers: RAX=0x768d0db0, RBX=0x0001, RCX=0x7fffe6f32980, RDX=0x0e4f RSP=0x7fffe6f32780, RBP=0x7fffe6f327b0, RSI=0x0005, RDI=0x7733ce70 R8 =0x0060, R9 =0x00ff, R10=0x00ff, R11=0x76a2c620 R12=0x01c11740, R13=0x0001, R14=0x0005, R15=0x0005 RIP=0x76d875cd, EFLAGS=0x00010297, CSGSFS=0x0033, ERR=0x TRAPNO=0x000d Top of Stack: (sp=0x7fffe6f32780) 0x7fffe6f32780: 7fffe6f327b0 018a7538 0x7fffe6f32790: 00fa7948 0005 0x7fffe6f327a0: 0005 01c11740 0x7fffe6f327b0: 7fffe6f32820 76a33325 0x7fffe6f327c0: 9850 7fffe6f33f90 0x7fffe6f327d0: 008697f0 010dd880 0x7fffe6f327e0: 0008e6f32820 0006 0x7fffe6f327f0: 01a9c8f0 7fffe6f329e0 0x7fffe6f32800: 0008 0040 
0x7fffe6f32810: 773e08c0 01bb 0x7fffe6f32820: 7fffe6f32a30 76a35b9f 0x7fffe6f32830: 02bfaf80 7fffe6f32920 0x7fffe6f32840: 7fffe6f329e0 7fffe6f32980 0x7fffe6f32850: 7fffe6f33f90 000800d1 0x7fffe6f32860: 0111e748 7fffe6f329a0 0x7fffe6f32870: 7fffe6f329e0 7fffe6f32a40 0x7fffe6f32880: 7fffe6f32930 76d3f5e3 0x7fffe6f32890: 0048 00070004 0x7fffe6f328a0: 7fffe6f340f8 0111e738 0x7fffe6f328b0: 00070102b170 7fff0007 0x7fffe6f328c0: 00897270 76f56496 0x7fffe6f328d0: 0102b150 76f563e7 0x7fffe6f328e0: 7fffe6f32930 76918c61 0x7fffe6f328f0: 01d687f0 0x7fffe6f32900: 000c 76f56496 0x7fffe6f32910: 7fffe6f3... Changed in trafodion: status: New → Confirmed assignee: nobody → Matt Brown (mattbrown-2) Matt Brown (mattbrown-2) wrote on 2014-05-29: #2 Looks like fix was backported to Java 7u40 that's Java 7 update 40. Spinel appears to be on Java 7 update 9. Guy Groulx (guy-groulx) wrote on 2014-05-30:#3 We've installed JVM 7 update 60. Will see if problem returns. If not, will close. Stacey Johnson (sjohnson-w) on 2014-06-10 information type: Proprietary → Public Guy Groulx (guy-groulx) wrote on 2014-06-10:#4 Problem has not happened since switching to Java 7u40. Changed in trafodion: status: Confirmed → Fix Released LP Bug: 1324573 - DCS master died after many connections Key: TRAFODION-325 URL: https://issues.apache.org/jira/browse/TRAFODION-325 Project: Apache Trafodion Issue Type: Bug Components: connectivity-dcs Reporter: Guy Groulx Assignee: Matt Brown Priority: Blocker Fix For: 1.0 (pre-incubation) On spinel, we've been pushing dcs connectivity.We're now running up to 1024 connections and may go higher. In a series of tests, each doing 1,2,4,8,16,32,64,128,256,512,1024 connections, the dcs master disappeared on us. core: 2014-05-29 07:11:33 /local/cores/1008/core.1401347493.n001.28720.java hs_err_pid: /opt/hp/squser2/gselva140525/dcs-0.7.0-beta ls bin conf dcs-0.7.0-beta.jar dcs-webapps docs hs_err_pid28720.log lib
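Since the fix for the HotSpot bug above (JDK-8009460, crashing in MachNode::in_RegMask) shipped in Java 7 update 40, the practical check on a node is its runtime's update level. A minimal sketch for 1.7-style version strings; the updateOf helper is illustrative, not part of any Trafodion or DCS tooling:

```java
// Parses the update number out of a 1.7-style java.version string so the
// runtime can be compared against the 7u40 fix level. Illustrative only.
public class JavaUpdateCheck {
    // "1.7.0_09" -> 9, "1.7.0_60" -> 60; returns 0 if no "_NN" suffix
    // (e.g. "1.8.0" or modern "17.0.2"-style strings).
    public static int updateOf(String version) {
        int idx = version.indexOf('_');
        if (idx < 0) return 0;
        // drop any build suffix such as "-b05" after the digits
        String tail = version.substring(idx + 1).replaceAll("[^0-9].*$", "");
        return tail.isEmpty() ? 0 : Integer.parseInt(tail);
    }

    public static void main(String[] args) {
        String v = System.getProperty("java.version");
        System.out.println(v + " -> update " + updateOf(v));
    }
}
```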
[jira] [Closed] (TRAFODION-125) LP Bug: 1233404 - UPSERT/UPDATE cannot update a column to NULL
[ https://issues.apache.org/jira/browse/TRAFODION-125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-125. -- Resolution: Fixed Fix Version/s: 1.0 (pre-incubation) LP Bug: 1233404 - UPSERT/UPDATE cannot update a column to NULL -- Key: TRAFODION-125 URL: https://issues.apache.org/jira/browse/TRAFODION-125 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Assignee: Anoop Sharma Priority: Critical Fix For: 1.0 (pre-incubation) Updating a column to NULL using UPSERT or UPDATE in TOPL has no effect, even though the statement execution says that the row is inserted/updated, as shown in the following example: cqd mode_seabase 'on'; --- SQL operation complete. set schema seabase.phoenix; --- SQL operation complete. create table t (a int not null not droppable primary key, b char(10)); --- SQL operation complete. insert into t values (1, 'a'),(2, 'b'),(3, 'c'); --- 3 row(s) inserted. upsert into t (a, b) values(1, null); --- 1 row(s) inserted. select * from t; AB --- -- 1 a 2 b 3 c --- 3 row(s) selected. update t set b=null where a=1; --- 1 row(s) updated. select * from t; AB --- -- 1 a 2 b 3 c --- 3 row(s) selected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-125) LP Bug: 1233404 - UPSERT/UPDATE cannot update a column to NULL
[ https://issues.apache.org/jira/browse/TRAFODION-125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696119#comment-14696119 ] Atanu Mishra commented on TRAFODION-125: Weishiun Tsai (wei-shiun-tsai) wrote on 2013-10-24: #1 Verified with the build from 10/23/2013, this bug has been fixed: set schema seabase.phoenix; --- SQL operation complete. create table t (a int not null not droppable primary key, b char(10)); --- SQL operation complete. insert into t values (1, 'a'),(2, 'b'),(3, 'c'); --- 3 row(s) inserted. upsert into t (a, b) values(1, null); --- 1 row(s) inserted. select * from t; A B --- -- 1 ? 2 b 3 c --- 3 row(s) selected. Anoop Sharma (anoop-sharma) on 2013-10-31 Changed in trafodion: assignee: nobody → Anoop Sharma (anoop-sharma) Weishiun Tsai (wei-shiun-tsai) on 2013-11-05 Changed in trafodion: status: New → Fix Released LP Bug: 1233404 - UPSERT/UPDATE cannot update a column to NULL -- Key: TRAFODION-125 URL: https://issues.apache.org/jira/browse/TRAFODION-125 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Assignee: Anoop Sharma Priority: Critical Fix For: 1.0 (pre-incubation) Updating a column to NULL using UPSERT or UPDATE in TOPL has no effect, even though the statement execution says that the row is inserted/updated, as shown in the following example: cqd mode_seabase 'on'; --- SQL operation complete. set schema seabase.phoenix; --- SQL operation complete. create table t (a int not null not droppable primary key, b char(10)); --- SQL operation complete. insert into t values (1, 'a'),(2, 'b'),(3, 'c'); --- 3 row(s) inserted. upsert into t (a, b) values(1, null); --- 1 row(s) inserted. select * from t; AB --- -- 1 a 2 b 3 c --- 3 row(s) selected. update t set b=null where a=1; --- 1 row(s) updated. select * from t; AB --- -- 1 a 2 b 3 c --- 3 row(s) selected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
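The verified behavior above can be summarized as: an UPSERT or UPDATE that supplies NULL for a column must store that NULL (shown as "?" in the sqlci output) rather than silently leaving the old value. A toy model of those semantics, with a plain map standing in for table t; this is an illustration of the expected semantics, not Trafodion's implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the fixed semantics from this bug: writing NULL for a
// column overwrites the stored value instead of being skipped.
public class UpsertNullDemo {
    // table t(a primary key, b); a null value for b is stored as null
    static Map<Integer, String> t = new HashMap<>();

    static void upsert(int a, String b) {
        t.put(a, b);   // an explicit null replaces the old value
    }

    public static void main(String[] args) {
        upsert(1, "a"); upsert(2, "b"); upsert(3, "c");
        upsert(1, null);               // upsert into t (a, b) values(1, null)
        System.out.println(t.get(1));  // null, i.e. "?" in the sqlci output
        System.out.println(t.get(2));  // b
    }
}
```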
[jira] [Commented] (TRAFODION-130) LP Bug: 1244027 - UPSERT with specified column list changes unspecified column to NULL
[ https://issues.apache.org/jira/browse/TRAFODION-130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696147#comment-14696147 ] Atanu Mishra commented on TRAFODION-130: Weishiun Tsai (wei-shiun-tsai) wrote on 2013-11-02: #1 Verified on 11/2/2013, this bug has been fixed: set schema seabase.mytest; --- SQL operation complete. create table t (a char(15) no default not null not droppable, b char(12) no default not null not droppable, c varchar(5) default null, d varchar(5) default null, constraint pk_t primary key (a, b)); --- SQL operation complete. insert into t values ('TOPL', '', null, 'a'),('TOPL', '', null, 'a'),('TOPL', '', null, 'a'); --- 3 row(s) inserted. select * from t; A B C D --- - - TOPL ? a TOPL ? a TOPL ? a --- 3 row(s) selected. upsert into t (b, a, c) select b, a, d from t where a='TOPL'; --- 3 row(s) inserted. select * from t; A B C D --- - - TOPL a a TOPL a a TOPL a a --- 3 row(s) selected. Weishiun Tsai (wei-shiun-tsai) on 2013-11-05 Changed in trafodion: status: New → Fix Released LP Bug: 1244027 - UPSERT with specified column list changes unspecified column to NULL -- Key: TRAFODION-130 URL: https://issues.apache.org/jira/browse/TRAFODION-130 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Assignee: Anoop Sharma Priority: Critical Fix For: 1.0 (pre-incubation) When specifying a list of columns to UPSERT, and when the statement updates records instead of inserting records, it somehow changes a column that is not on the list to NULL. In the following example, the table has 4 columns: a, b, c, d. The UPSERT statement is given the column list (b, a, c). The statement essentially updates column c in every row to the value of column d from the same row. But it somehow also changes all values in column d back to its default value of NULL: set schema seabase.phoenix; --- SQL operation complete.
create table t (a char(15) no default not null not droppable, b char(12) no default not null not droppable, c varchar(5) default null, d varchar(5) default null, constraint pk_t primary key (a, b)); --- SQL operation complete. insert into t values ('TOPL', '', null, 'a'),('TOPL', '', null, 'a'),('TOPL', '', null, 'a'); --- 3 row(s) inserted. select * from t; A B C D --- - - TOPL ? a TOPL ? a TOPL ? a --- 3 row(s) selected. upsert into t (b, a, c) select b, a, d from t where a='TOPL'; --- 3 row(s) inserted. select * from t; A B C D --- - - TOPL a ? TOPL a ? TOPL a ? --- 3 row(s) selected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
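The before/after outputs show what the fix changed: when an UPSERT with an explicit column list hits an existing row, columns missing from the list keep their stored values instead of being reset to their default (NULL). A toy model of the fixed semantics, with a map standing in for one row of t; illustrative only, not Trafodion's implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the fixed UPSERT-with-column-list semantics: supplied
// columns are merged into the existing row; unlisted columns are kept.
public class UpsertColumnListDemo {
    // one row of t(a, b, c, d); values may be null
    static Map<String, String> row = new HashMap<>();

    // Fixed behavior: merge only the supplied columns into the row.
    // (The buggy behavior effectively rebuilt the whole row, resetting
    // unlisted columns such as d to their NULL default.)
    static void upsert(Map<String, String> suppliedColumns) {
        row.putAll(suppliedColumns);
    }

    public static void main(String[] args) {
        row.put("a", "TOPL"); row.put("b", "");
        row.put("c", null);   row.put("d", "a");

        Map<String, String> supplied = new HashMap<>();
        supplied.put("a", "TOPL"); supplied.put("b", "");
        supplied.put("c", "a");            // c takes d's old value
        upsert(supplied);                  // d is not in the column list

        System.out.println(row.get("c")); // a
        System.out.println(row.get("d")); // a  (kept, not reset to null)
    }
}
```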
[jira] [Closed] (TRAFODION-130) LP Bug: 1244027 - UPSERT with specified column list changes unspecified column to NULL
[ https://issues.apache.org/jira/browse/TRAFODION-130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-130. -- Resolution: Fixed Assignee: (was: Anoop Sharma) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1244027 - UPSERT with specified column list changes unspecified column to NULL -- Key: TRAFODION-130 URL: https://issues.apache.org/jira/browse/TRAFODION-130 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Priority: Critical Fix For: 1.0 (pre-incubation) When specifying a list of columns to UPSERT, and when the statement updates records instead of inserting records, it somehow changes a column that is not on the list to NULL. In the following example, the table has 4 columns: a, b, c, d. The UPSERT statement is given the column list (b, a, c). The statement essentially updates column c in every row to the value of column d from the same row. But it somehow also changes all values in column d back to its default value of NULL: set schema seabase.phoenix; --- SQL operation complete. create table t (a char(15) no default not null not droppable, b char(12) no default not null not droppable, c varchar(5) default null, d varchar(5) default null, constraint pk_t primary key (a, b)); --- SQL operation complete. insert into t values ('TOPL', '', null, 'a'),('TOPL', '', null, 'a'),('TOPL', '', null, 'a'); --- 3 row(s) inserted. select * from t; A B C D --- - - TOPL ? a TOPL ? a TOPL ? a --- 3 row(s) selected. upsert into t (b, a, c) select b, a, d from t where a='TOPL'; --- 3 row(s) inserted. select * from t; A B C D --- - - TOPL a ? TOPL a ? TOPL a ? --- 3 row(s) selected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-152) LP Bug: 1251079 - SELECT ... UPDATE crashes sqlci with a core file
[ https://issues.apache.org/jira/browse/TRAFODION-152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-152. -- Resolution: Fixed Assignee: (was: Anoop Sharma) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1251079 - SELECT ... UPDATE crashes sqlci with a core file -- Key: TRAFODION-152 URL: https://issues.apache.org/jira/browse/TRAFODION-152 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Priority: Critical Fix For: 1.0 (pre-incubation) The following sequence of statements crashes sqlci. The last SELECT … UPDATE statement returns a bunch of warnings before the executor asserts with a core file, as shown in the following example: set schema seabase.mytest; --- SQL operation complete. create table t (a int, b char(9), c int); --- SQL operation complete. create index t_idx on t(a, b); --- SQL operation complete. insert into t values (3, 'a', 2), (1, 'b', 3), (4, 'c', 1), (2, 'd', 4), (2, 'b', 5), (2, 'c', 3), (1, 'c', 2), (1, 'a', 4); --- 8 row(s) inserted. select * from (update t set c = 100 where a < 3) as x order by x.a; *** WARNING[8402] A string overflow occurred during the evaluation of a character expression. A B C --- - --- 1 a 100 *** WARNING[8402] A string overflow occurred during the evaluation of a character expression. 1 b 100 *** WARNING[8402] A string overflow occurred during the evaluation of a character expression.
1 c 100 *** EXECUTOR ASSERTION FAILURE *** Time: Thu Nov 14 00:17:56 2013 *** Process: 10091 *** File: ../executor/ex_queue.h *** Line: 940 *** Message: ex_queue::getHeadEntry() get head on an empty queue Aborted (core dumped) The sqlci core file has a stack like this: (gdb) bt #0 0x751778a5 in raise () from /lib64/libc.so.6 #1 0x75179085 in abort () from /lib64/libc.so.6 #2 0x747e9be6 in assert_botch_abend ( f=0x73c0c474 ../executor/ex_queue.h, l=940, m=0x73c0c440 ex_queue::getHeadEntry() get head on an empty queue, c=0x0) at ../export/NAAbort.cpp:243 #3 0x73a07897 in ex_queue::getHeadEntry (this=0x7fffe4d78af8) at ../executor/ex_queue.h:939 #4 0x73a53c5f in ExOnljTcb::work_phase1 (this=0x7fffe4d799e0) at ../executor/ex_onlj.cpp:431 #5 0x73a57bfe in ExOnljTcb::sWorkPhase1 (tcb=0x7fffe4d799e0) at ../executor/ex_onlj.h:174 #6 0x73b7dc1f in ExSubtask::work (this=0x7fffe4d7a028) at ../executor/ExScheduler.cpp:771 #7 0x73b7cfa0 in ExScheduler::work (this=0x7fffe4d78228, prevWaitTime=0) at ../executor/ExScheduler.cpp:336 #8 0x73a5e427 in ex_root_tcb::fetch (this=0x7fffe4d7a148, cliGlobals=0xbf5100, glob=0x7fffe4d9bcb8, output_desc=0x7fffe4d83430, diagsArea=@0x7fff53e8, timeLimit=-1, newOperation=1, closeCursorOnError=@0x7fff53e4) at ../executor/ex_root.cpp:1848 #9 0x74eeca29 in CliStatement::fetch (this=0x7fffe4d85100, cliGlobals=0xbf5100, output_desc=0x7fffe4d83430, diagsArea=..., newOperation=1) at ../cli/Statement.cpp:5275 #10 0x74e94b63 in SQLCLI_PerformTasks(CliGlobals *, ULng32, SQLSTMT_ID * , SQLDESC_ID *, SQLDESC_ID *, Lng32, Lng32, typedef __va_list_tag __va_list_tag *, SQLCLI_PTR_PAIRS *, SQLCLI_PTR_PAIRS *) (cliGlobals=0xbf5100, tasks=4900, statement_id=0x1329fa0, input_descriptor=0x0, output_descriptor=0x1561bf0, num_input_ptr_pairs=0, num_output_ptr_pairs=0, ap=0x7fff56a0, input_ptr_pairs=0x0, output_ptr_pairs=0x0) at ../cli/Cli.cpp:3518 #11 0x74e95b2f in SQLCLI_Fetch(CliGlobals *, SQLSTMT_ID *, SQLDESC_ID *, Lng32, typedef __va_list_tag __va_list_tag *, 
SQLCLI_PTR_PAIRS *) ( cliGlobals=0xbf5100, statement_id=0x1329fa0, output_descriptor=0x1561bf0, num_ptr_pairs=0, ap=0x7fff56a0, ptr_pairs=0x0) at ../cli/Cli.cpp:3956 #12 0x74f005f6 in SQL_EXEC_Fetch (statement_id=0x1329fa0, output_descriptor=0x1561bf0, num_ptr_pairs=0) at ../cli/CliExtern.cpp:2714 #13 0x7799e6d7 in SqlCmd::doFetch (sqlci_env=0xbf3910, stmt=0x1329fa0, prep_stmt=0x1a064e0, firstFetch=0, handleError=1, prepcode=0) at ../sqlci/SqlCmd.cpp:1713 #14 0x7799f90d in SqlCmd::do_execute (sqlci_env=0xbf3910, prep_stmt=0x1a064e0, numUnnamedParams=0, unnamedParamArray=0x0, unnamedParamCharSetArray=0x0, prepcode=0) at ../sqlci/SqlCmd.cpp:2177 #15 0x779a159f in DML::process (this=0x14df1b0, sqlci_env=0xbf3910) at ../sqlci/SqlCmd.cpp:2794 #16 0x7798dcf0 in SqlciEnv::executeCommands (this=0xbf3910,
[jira] [Commented] (TRAFODION-349) LP Bug: 1326458 - JDBC T2 driver returns error 8813 accessing the ResultSet of an explain statement
[ https://issues.apache.org/jira/browse/TRAFODION-349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696241#comment-14696241 ] Atanu Mishra commented on TRAFODION-349: Weishiun Tsai (wei-shiun-tsai) wrote on 2014-08-20: #2 Verified on the 0819_0830 build installed on a workstation. This problem has been fixed. Here is the program output: -bash-4.1$ myrun.sh explain options 'f' select * from tb rs.next() rs.getString(1) rs.getString(1) LC RC OP OPERATOR OPT DESCRIPTION CARD rs.getString(1) - rs.getString(1) rs.getString(1) 1 . 2 root 1.00E+002 rs.getString(1) . . 1 trafodion_scan TB 1.00E+002 -bash-4.1$ Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1326458 - JDBC T2 driver returns error 8813 accessing the ResultSet of an explain statement --- Key: TRAFODION-349 URL: https://issues.apache.org/jira/browse/TRAFODION-349 Project: Apache Trafodion Issue Type: Bug Components: client-jdbc-t2 Reporter: Weishiun Tsai Assignee: Pavani Puppala Priority: Blocker Fix For: 1.0 (pre-incubation) With the JDBC T2 driver, fetching the ResultSet of an explain statement returns the following error: *** ERROR[8813] Trying to fetch from a statement that is in the closed state. This is similar to another bug report https://bugs.launchpad.net/trafodion/+bug/1274281 'JDBC T2 driver returns error 8813 accessing the ResultSet of a select count(*) statement', except for the statement type. BUG#1274281 has been fixed. The fix for this one will probably be similar to the one for BUG#1274281. This is seen on the GIT v0603_0930 build installed on a workstation. 
Here is a small JDBC program to reproduce this problem: -bash-4.1$ cat mytest.java import java.sql.*; import java.math.*; import java.util.*; import java.io.*; public class mytest { public static void main(String[] args) //throws java.io.IOException { Properties props = null; Connection conn = null; PreparedStatement stmt = null; ResultSet rs = null; String cat = null; String sch = null; String url = null; String query = null; try { String propFile = System.getProperty("hpjdbc.properties"); if (propFile != null) { FileInputStream fs = new FileInputStream(new File(propFile)); props = new Properties(); props.load(fs); url = props.getProperty("url"); cat = props.getProperty("catalog"); sch = props.getProperty("schema"); } else { System.out.println("ERROR: hpjdbc.properties is not set. Exiting."); System.exit(0); } // Class.forName("com.hp.sqlmx.SQLMXDriver"); Class.forName("org.trafodion.sql.T2Driver"); conn = DriverManager.getConnection(url, props); conn.createStatement().execute("drop table if exists tb"); conn.createStatement().execute("create table tb (c1 int not null)"); conn.createStatement().execute("insert into tb values (1),(2),(3),(4),(5),(6),(7),(8),(9)"); System.out.println("explain options 'f' select * from tb"); rs = conn.createStatement().executeQuery("explain options 'f' select * from tb"); System.out.println(rs.next()); while (rs.next()) { System.out.println(rs.getString(1)); System.out.println(rs.getString(1)); } conn.close(); } catch (SQLException se) { System.out.println("ERROR: SQLException"); se.printStackTrace(); System.out.println(se.getMessage()); System.exit(1); } catch (Exception e) { System.out.println("ERROR: Exception"); e.printStackTrace(); System.out.println(e.getMessage()); System.exit(1); } } } Here is the execution output of this program: -bash-4.1$ myrun.sh explain options 'f' select * from tb rs.next() ERROR: SQLException *** ERROR[8813] Trying to fetch from a statement that is in the closed state. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-349) LP Bug: 1326458 - JDBC T2 driver returns error 8813 accessing the ResultSet of an explain statement
[ https://issues.apache.org/jira/browse/TRAFODION-349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-349. -- Resolution: Fixed Assignee: (was: Pavani Puppala) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1326458 - JDBC T2 driver returns error 8813 accessing the ResultSet of an explain statement --- Key: TRAFODION-349 URL: https://issues.apache.org/jira/browse/TRAFODION-349 Project: Apache Trafodion Issue Type: Bug Components: client-jdbc-t2 Reporter: Weishiun Tsai Priority: Blocker Fix For: 1.0 (pre-incubation) With the JDBC T2 driver, fetching the ResultSet of an explain statement returns the following error: *** ERROR[8813] Trying to fetch from a statement that is in the closed state. This is similar to another bug report https://bugs.launchpad.net/trafodion/+bug/1274281 'JDBC T2 driver returns error 8813 accessing the ResultSet of a select count(*) statement', except for the statement type. BUG#1274281 has been fixed. The fix for this one will probably be similar to the one for BUG#1274281. This is seen on the GIT v0603_0930 build installed on a workstation. Here is a small JDBC program to reproduce this problem: -bash-4.1$ cat mytest.java import java.sql.*; import java.math.*; import java.util.*; import java.io.*; public class mytest { public static void main(String[] args) //throws java.io.IOException { Properties props = null; Connection conn = null; PreparedStatement stmt = null; ResultSet rs = null; String cat = null; String sch = null; String url = null; String query = null; try { String propFile = System.getProperty("hpjdbc.properties"); if (propFile != null) { FileInputStream fs = new FileInputStream(new File(propFile)); props = new Properties(); props.load(fs); url = props.getProperty("url"); cat = props.getProperty("catalog"); sch = props.getProperty("schema"); } else { System.out.println("ERROR: hpjdbc.properties is not set. Exiting."); System.exit(0); } // Class.forName("com.hp.sqlmx.SQLMXDriver"); Class.forName("org.trafodion.sql.T2Driver"); conn = DriverManager.getConnection(url, props); conn.createStatement().execute("drop table if exists tb"); conn.createStatement().execute("create table tb (c1 int not null)"); conn.createStatement().execute("insert into tb values (1),(2),(3),(4),(5),(6),(7),(8),(9)"); System.out.println("explain options 'f' select * from tb"); rs = conn.createStatement().executeQuery("explain options 'f' select * from tb"); System.out.println(rs.next()); while (rs.next()) { System.out.println(rs.getString(1)); System.out.println(rs.getString(1)); } conn.close(); } catch (SQLException se) { System.out.println("ERROR: SQLException"); se.printStackTrace(); System.out.println(se.getMessage()); System.exit(1); } catch (Exception e) { System.out.println("ERROR: Exception"); e.printStackTrace(); System.out.println(e.getMessage()); System.exit(1); } } } Here is the execution output of this program: -bash-4.1$ myrun.sh explain options 'f' select * from tb rs.next() ERROR: SQLException *** ERROR[8813] Trying to fetch from a statement that is in the closed state. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-248) LP Bug: 1315783 - Overall system memory usage is very high for CMPs
[ https://issues.apache.org/jira/browse/TRAFODION-248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-248. -- Resolution: Fixed LP Bug: 1315783 - Overall system memory usage is very high for CMPs --- Key: TRAFODION-248 URL: https://issues.apache.org/jira/browse/TRAFODION-248 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Qifan Chen Assignee: Qifan Chen Priority: Blocker Labels: performance Fix For: 0.6 (pre-incubation) Peter and Guy reported that overall memory usage for CMP is high. On average, it is 140-170MB per CMP. The current max JVM heap size is set at 4GB. So it is critical to reduce the number of CMPs. This fix reduces the CMPs per executor from 3 to 2. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-314) LP Bug: 1324220 - JDBC t2 tests core on connection close in dropContext
[ https://issues.apache.org/jira/browse/TRAFODION-314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-314. -- Resolution: Fixed Assignee: (was: Apache Trafodion) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1324220 - JDBC t2 tests core on connection close in dropContext --- Key: TRAFODION-314 URL: https://issues.apache.org/jira/browse/TRAFODION-314 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Aruna Sadashiva Priority: Blocker Fix For: 1.0 (pre-incubation) t2 tests core on connection close. (gdb) bt #0 0x003ea1432925 in raise () from /lib64/libc.so.6 #1 0x003ea1434105 in abort () from /lib64/libc.so.6 #2 0x003ea1470837 in __libc_message () from /lib64/libc.so.6 #3 0x003ea1476166 in malloc_printerr () from /lib64/libc.so.6 #4 0x7f2f7e190846 in CmpContext::~CmpContext (this=0x7f2f6ed50090, __in_chrg=value optimized out) at ../arkcmp/CmpContext.cpp:318 #5 0x7f2f7e190af4 in CmpContext::deleteInstance (parentHeap= 0x7f2f7045d1b0) at ../arkcmp/CmpContext.cpp:345 #6 0x7f2f7ce2fac9 in ContextCli::deleteMe (this=0x7f2f7045d1a0) at ../cli/Context.cpp:391 #7 0x7f2f7ce4fd40 in CliGlobals::dropContext (this=0x15732a0, context=0x7f2f7045d1a0) at ../cli/Globals.cpp:651 #8 0x7f2f7ce09efa in SQLCLI_DropContext (cliGlobals=0x15732a0, context_handle=2001) at ../cli/Cli.cpp:1819 #9 0x7f2f7ce0936a in SQLCLI_DeleteContext (cliGlobals=0x15732a0, context_handle=2001) at ../cli/Cli.cpp:1408 #10 0x7f2f7ce7a50b in SQL_EXEC_DeleteContext (contextHandle=2001) at ../cli/CliExtern.cpp:1503 #11 0x7f2f7f121514 in DISCONNECT (pSrvrConnect=0x156cb90) at native/SqlInterface.cpp:2852 #12 0x7f2f7f110a12 in SRVR_CONNECT_HDL::sqlClose (this=0x156cb90) at native/CSrvrConnect.cpp:137 #13 0x7f2f7f12e5f3 in Java_org_trafodion_sql_SQLMXConnection_close ( jenv=0x8319d8, jcls=0x7f2f95d702f0, server=0x0, dialogueId=22465424) at native/SQLMXConnection.cpp:239 #14 0x7f2f92424738 in ?? () #15 0x7f2f95d702a0 in ?? () #16 0x7f2f95d702f8 in ?? 
() #17 0x00831800 in ?? () #18 0x7f2f92418350 in ?? () #19 0x7f2f95d702a0 in ?? () ---Type return to continue, or q return to quit--- #20 0x0006fcaec408 in ?? () #21 0x7f2f95d70310 in ?? () #22 0x0006fcb27e18 in ?? () #23 0x in ?? () (gdb) *** glibc detected *** java: munmap_chunk(): invalid pointer: 0x7f2f6e0d1478 *** === Backtrace: = /lib64/libc.so.6[0x3ea1476166] /home/trafodion/trafodion/export/lib64d/libarkcmplib.so(_ZN10CmpContextD1Ev+0xf8)[0x7f2f7e190846] /home/trafodion/trafodion/export/lib64d/libarkcmplib.so(_ZN10CmpContext14deleteInstanceEP6NAHeap+0xac)[0x7f2f7e190af4] /home/trafodion/trafodion/export/lib64d/libcli.so(_ZN10ContextCli8deleteMeEv+0x68d)[0x7f2f7ce2fac9] /home/trafodion/trafodion/export/lib64d/libcli.so(_ZN10CliGlobals11dropContextEP10ContextCli+0x7c)[0x7f2f7ce4fd40] /home/trafodion/trafodion/export/lib64d/libcli.so(SQLCLI_DropContext+0xf2)[0x7f2f7ce09efa] /home/trafodion/trafodion/export/lib64d/libcli.so(SQLCLI_DeleteContext+0x2e)[0x7f2f7ce0936a] /home/trafodion/trafodion/export/lib64d/libcli.so(SQL_EXEC_DeleteContext+0x5c)[0x7f2f7ce7a50b] /home/trafodion/trafodion/export/lib64d/libjdbcT2.so(_Z10DISCONNECTP16SRVR_CONNECT_HDL+0x19)[0x7f2f7f121514] /home/trafodion/trafodion/export/lib64d/libjdbcT2.so(_ZN16SRVR_CONNECT_HDL8sqlCloseEv+0x2fe)[0x7f2f7f110a12] /home/trafodion/trafodion/export/lib64d/libjdbcT2.so(Java_org_trafodion_sql_SQLMXConnection_close+0x2d)[0x7f2f7f12e5f3] [0x7f2f92424738] === Memory map: 0040-00401000 r-xp 68:03 8266823 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.55.x86_64/bin/java 0060-00602000 rw-p 68:03 8266823 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.55.x86_64/bin/java 00824000-02414000 rw-p 00:00 0 [heap] 6fc60-6fdb0 rw-p 00:00 0 6fdb0-706c0 rw-p 00:00 0 706c0-71120 rw-p 00:00 0 71120-7ace8 rw-p 00:00 0 7ace8-7b618 rw-p 00:00 0 7b618-8 rw-p 00:00 0 31d240-31d25b5000 r-xp 68:03 7740571 /usr/lib64/libcrypto.so.1.0.1e 31d25b5000-31d27b5000 ---p 001b5000 68:03 7740571 /usr/lib64/libcrypto.so.1.0.1e 31d27b5000-31d27d r--p 001b5000 
68:03 7740571 /usr/lib64/libcrypto.so.1.0.1e 31d27d-31d27dc000 rw-p 001d 68:03 7740571 /usr/lib64/libcrypto.so.1.0.1e
[jira] [Commented] (TRAFODION-314) LP Bug: 1324220 - JDBC t2 tests core on connection close in dropContext
[ https://issues.apache.org/jira/browse/TRAFODION-314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696084#comment-14696084 ] Atanu Mishra commented on TRAFODION-314: Mike Hanlon (mike-hanlon) wrote on 2014-05-30: #2 I got an email from Zuul last night: Zuul has submitted this change and it was merged. Change subject: Return OptDefaults object to the correct heap .. Return OptDefaults object to the correct heap A recent change had allocated CmpContext::optDefaults_ from an NAHeap but attempted to return it to the system heap. This resulted in core-files when connection.close() is called and caused failures in testing the T2 driver and the phoenix test. The change in this commit properly returns the object to its NAHeap by correctly using the NADELETE macro. Change-Id: I1885313c2b7109f69f051efc53e04ef20d4c09d1 Closes-Bug: #1324220 Changed in trafodion: status: In Progress → Fix Committed Stacey Johnson (sjohnson-w) on 2014-06-10 information type: Proprietary → Public Aruna Sadashiva (aruna-sadashiva) on 2014-06-10 Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1324220 - JDBC t2 tests core on connection close in dropContext --- Key: TRAFODION-314 URL: https://issues.apache.org/jira/browse/TRAFODION-314 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Aruna Sadashiva Assignee: Apache Trafodion Priority: Blocker t2 tests core on connection close. 
(gdb) bt #0 0x003ea1432925 in raise () from /lib64/libc.so.6 #1 0x003ea1434105 in abort () from /lib64/libc.so.6 #2 0x003ea1470837 in __libc_message () from /lib64/libc.so.6 #3 0x003ea1476166 in malloc_printerr () from /lib64/libc.so.6 #4 0x7f2f7e190846 in CmpContext::~CmpContext (this=0x7f2f6ed50090, __in_chrg=value optimized out) at ../arkcmp/CmpContext.cpp:318 #5 0x7f2f7e190af4 in CmpContext::deleteInstance (parentHeap= 0x7f2f7045d1b0) at ../arkcmp/CmpContext.cpp:345 #6 0x7f2f7ce2fac9 in ContextCli::deleteMe (this=0x7f2f7045d1a0) at ../cli/Context.cpp:391 #7 0x7f2f7ce4fd40 in CliGlobals::dropContext (this=0x15732a0, context=0x7f2f7045d1a0) at ../cli/Globals.cpp:651 #8 0x7f2f7ce09efa in SQLCLI_DropContext (cliGlobals=0x15732a0, context_handle=2001) at ../cli/Cli.cpp:1819 #9 0x7f2f7ce0936a in SQLCLI_DeleteContext (cliGlobals=0x15732a0, context_handle=2001) at ../cli/Cli.cpp:1408 #10 0x7f2f7ce7a50b in SQL_EXEC_DeleteContext (contextHandle=2001) at ../cli/CliExtern.cpp:1503 #11 0x7f2f7f121514 in DISCONNECT (pSrvrConnect=0x156cb90) at native/SqlInterface.cpp:2852 #12 0x7f2f7f110a12 in SRVR_CONNECT_HDL::sqlClose (this=0x156cb90) at native/CSrvrConnect.cpp:137 #13 0x7f2f7f12e5f3 in Java_org_trafodion_sql_SQLMXConnection_close ( jenv=0x8319d8, jcls=0x7f2f95d702f0, server=0x0, dialogueId=22465424) at native/SQLMXConnection.cpp:239 #14 0x7f2f92424738 in ?? () #15 0x7f2f95d702a0 in ?? () #16 0x7f2f95d702f8 in ?? () #17 0x00831800 in ?? () #18 0x7f2f92418350 in ?? () #19 0x7f2f95d702a0 in ?? () ---Type return to continue, or q return to quit--- #20 0x0006fcaec408 in ?? () #21 0x7f2f95d70310 in ?? () #22 0x0006fcb27e18 in ?? () #23 0x in ?? 
() (gdb) *** glibc detected *** java: munmap_chunk(): invalid pointer: 0x7f2f6e0d1478 *** === Backtrace: = /lib64/libc.so.6[0x3ea1476166] /home/trafodion/trafodion/export/lib64d/libarkcmplib.so(_ZN10CmpContextD1Ev+0xf8)[0x7f2f7e190846] /home/trafodion/trafodion/export/lib64d/libarkcmplib.so(_ZN10CmpContext14deleteInstanceEP6NAHeap+0xac)[0x7f2f7e190af4] /home/trafodion/trafodion/export/lib64d/libcli.so(_ZN10ContextCli8deleteMeEv+0x68d)[0x7f2f7ce2fac9] /home/trafodion/trafodion/export/lib64d/libcli.so(_ZN10CliGlobals11dropContextEP10ContextCli+0x7c)[0x7f2f7ce4fd40] /home/trafodion/trafodion/export/lib64d/libcli.so(SQLCLI_DropContext+0xf2)[0x7f2f7ce09efa] /home/trafodion/trafodion/export/lib64d/libcli.so(SQLCLI_DeleteContext+0x2e)[0x7f2f7ce0936a] /home/trafodion/trafodion/export/lib64d/libcli.so(SQL_EXEC_DeleteContext+0x5c)[0x7f2f7ce7a50b] /home/trafodion/trafodion/export/lib64d/libjdbcT2.so(_Z10DISCONNECTP16SRVR_CONNECT_HDL+0x19)[0x7f2f7f121514] /home/trafodion/trafodion/export/lib64d/libjdbcT2.so(_ZN16SRVR_CONNECT_HDL8sqlCloseEv+0x2fe)[0x7f2f7f110a12] /home/trafodion/trafodion/export/lib64d/libjdbcT2.so(Java_org_trafodion_sql_SQLMXConnection_close+0x2d)[0x7f2f7f12e5f3] [0x7f2f92424738] === Memory map: 0040-00401000 r-xp 68:03 8266823
[jira] [Commented] (TRAFODION-320) LP Bug: 1324299 - compiler crashes in rangePartitioning
[ https://issues.apache.org/jira/browse/TRAFODION-320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696085#comment-14696085 ] Atanu Mishra commented on TRAFODION-320: taoufik ben abdellatif (taoufik-abdellatif) wrote on 2014-05-29:#1 The issue is specific to the case when one of the key columns is of type varchar. For this case, the internal logic that decodes the boundary values expects the length to be at the beginning of the varchar. The boundary values as read from hbase don't adhere to this condition. Changes are done in NATable.cpp to update the boundary value to include the length so the decoding logic can process it correctly. Changed in trafodion: status: In Progress → Fix Committed Stacey Johnson (sjohnson-w) on 2014-06-10 information type: Proprietary → Public taoufik ben abdellatif (taoufik-abdellatif) on 2014-10-14 Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1324299 - compiler crashes in rangePartitioning --- Key: TRAFODION-320 URL: https://issues.apache.org/jira/browse/TRAFODION-320 Project: Apache Trafodion Issue Type: Bug Components: sql-cmp Reporter: Apache Trafodion Assignee: Apache Trafodion Priority: Blocker Fix For: 1.0 (pre-incubation) A compiler crash was observed in compiler rangePartitioning after an hbase partition split. The problem can be easily recreated on a workstation by forcing a partition split from the hbase master page. 
When a crash occurs the following stack trace is generated: #4 0x7fcbe54e76e2 in JVM_handle_linux_signal () from /usr/lib/jvm/java/jre/lib/amd64/server/libjvm.so #5 signal handler called #6 getValueId (this=0x7fcbe1d2e390, addConvNodes=0, tf=ExpTupleDesc::SQLARK_EXPLODED_FORMAT, resultBuffer=0x7fcbc7814608 , resultBufferLength=360, length=0x77779b28, offset=0x77779b24, diagsArea=0x0) at ../optimizer/ItemExpr.h:188 #7 ValueIdList::evalAtCompileTime (this=0x7fcbe1d2e390, addConvNodes=0, tf=ExpTupleDesc::SQLARK_EXPLODED_FORMAT, resultBuffer=0x7fcbc7814608 , resultBufferLength=360, length=0x77779b28, offset=0x77779b24, diagsArea=0x0) at ../optimizer/ValueDesc.cpp:4486 #8 0x7fcbde7a3fa6 in getRangePartitionBoundaryValuesFromEncodedKeys (part_desc_list=value optimized out, numberOfPartitions=20, SQLMPKeytag=0, partColArray= ..., heap=0x7fcbc786a2a0) at ../optimizer/NATable.cpp:1405 #9 createRangePartitionBoundaries (part_desc_list=value optimized out, numberOfPartitions=20, SQLMPKeytag=0, partColArray=..., heap=0x7fcbc786a2a0) at ../optimizer/NATable.cpp:1667 Assigned to LaunchPad User taoufik ben abdellatif -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-144) LP Bug: 1246923 - MXOSRVR connections gets rejected due to compiler internal error
[ https://issues.apache.org/jira/browse/TRAFODION-144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696107#comment-14696107 ] Atanu Mishra commented on TRAFODION-144: Anoop Sharma (anoop-sharma) wrote on 2013-11-05: #1 Code has been updated to return an error if a Seaquest catalog is used in a query in Open Source mode. Changed in trafodion: assignee: Suresh Subbiah (suresh-subbiah) → Anoop Sharma (anoop-sharma) Anoop Sharma (anoop-sharma) wrote on 2013-11-24: #2 An error will be returned if a Seaquest catalog is used in any stmt (DDL, DML, INVOKE). prepare s from select * from neo.usr.t; *** ERROR[1002] Catalog NEO does not exist or has not been registered on node . *** ERROR[8822] The statement was not prepared. create table neo.usr.tt (a int); *** ERROR[4222] The DDL feature is not supported in this software version. *** ERROR[8822] The statement was not prepared. set schema neo.usr; --- SQL operation complete. invoke t; *** ERROR[4222] The DESCRIBE feature is not supported in this software version. --- SQL operation failed with errors. Changed in trafodion: status: New → Fix Committed Weishiun Tsai (wei-shiun-tsai) on 2014-03-24 Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1246923 - MXOSRVR connections gets rejected due to compiler internal error -- Key: TRAFODION-144 URL: https://issues.apache.org/jira/browse/TRAFODION-144 Project: Apache Trafodion Issue Type: Bug Components: client-jdbc-t4, connectivity-mxosrvr Reporter: Tharak Capirala Assignee: Anoop Sharma Priority: Blocker Labels: connection, reject Fix For: 0.7 (pre-incubation) When a JDBC T4 client connects, MXOSRVR executes certain simple SET commands at connection time; these fail with an internal compiler error, and as a result the client connection is rejected. For example: SET_ODBC_PROCESS ODBC/MX Server4294965290*** ERROR[2006] Internal error: assertion failure (inTableDesc) in file ../optimizer/NATable.cpp at line 4203. 
[2013-10-31 17:36:45 ODBC/MX Server4294958474*** ERROR[8822] The statement was not prepared. [2013-10-31 17:36:45 This issue is currently impacting QA testing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-123) LP Bug: 1233323 - SELECT query returns error 8415 when DATE type is in primary key
[ https://issues.apache.org/jira/browse/TRAFODION-123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696114#comment-14696114 ] Atanu Mishra commented on TRAFODION-123: Weishiun Tsai (wei-shiun-tsai) wrote on 2013-10-23: #1 Verified on a build from 10/23/2013, this problem is fixed: create table t (id int no default not null, date1 date no default not null) primary key (id, date1); --- SQL operation complete. insert into t values (3, date '2013-01-01'), (2, date '2011-01-01'), (1, date '2012-01-01'); --- 3 row(s) inserted. select id, date1 from t where id < 2; ID DATE1 --- -- 1 2012-01-01 --- 1 row(s) selected. Anoop Sharma (anoop-sharma) on 2013-10-31 Changed in trafodion: assignee: nobody → Anoop Sharma (anoop-sharma) Weishiun Tsai (wei-shiun-tsai) on 2013-11-05 Changed in trafodion: status: New → Fix Released LP Bug: 1233323 - SELECT query returns error 8415 when DATE type is in primary key -- Key: TRAFODION-123 URL: https://issues.apache.org/jira/browse/TRAFODION-123 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Assignee: Anoop Sharma Priority: Critical When the DATE type is used as a primary key, the SELECT query returns error 8415, as shown in the following example. Also shown in the example is that the same query runs fine if DATE is not a primary key. A similar query without the WHERE clause also runs fine when DATE is a primary key. cqd mode_seabase 'on'; --- SQL operation complete. set schema seabase.phoenix; --- SQL operation complete. create table t (id int no default not null, date1 date no default not null) primary key (id); --- SQL operation complete. insert into t values (3, date '2013-01-01'), (2, date '2011-01-01'), (1, date '2012-01-01'); --- 3 row(s) inserted. select id, date1 from t where id < 2; ID DATE1 --- -- 1 2012-01-01 --- 1 row(s) selected. drop table t; --- SQL operation complete. 
create table t (id int no default not null, date1 date no default not null) primary key (id, date1); --- SQL operation complete. insert into t values (3, date '2013-01-01'), (2, date '2011-01-01'), (1, date '2012-01-01'); --- 3 row(s) inserted. select id, date1 from t where id < 2; *** ERROR[8415] The provided DATE, TIME, or TIMESTAMP is not valid and cannot be converted. --- 0 row(s) selected. select id, date1 from t; ID DATE1 --- -- 1 2012-01-01 2 2011-01-01 3 2013-01-01 --- 3 row(s) selected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-123) LP Bug: 1233323 - SELECT query returns error 8415 when DATE type is in primary key
[ https://issues.apache.org/jira/browse/TRAFODION-123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-123. -- Resolution: Fixed Fix Version/s: 0.8 (pre-incubation) LP Bug: 1233323 - SELECT query returns error 8415 when DATE type is in primary key -- Key: TRAFODION-123 URL: https://issues.apache.org/jira/browse/TRAFODION-123 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Assignee: Anoop Sharma Priority: Critical Fix For: 0.8 (pre-incubation) When the DATE type is used as a primary key, the SELECT query returns error 8415, as shown in the following example. Also shown in the example is that the same query runs fine if DATE is not a primary key. A similar query without the WHERE clause also runs fine when DATE is a primary key. cqd mode_seabase 'on'; --- SQL operation complete. set schema seabase.phoenix; --- SQL operation complete. create table t (id int no default not null, date1 date no default not null) primary key (id); --- SQL operation complete. insert into t values (3, date '2013-01-01'), (2, date '2011-01-01'), (1, date '2012-01-01'); --- 3 row(s) inserted. select id, date1 from t where id < 2; ID DATE1 --- -- 1 2012-01-01 --- 1 row(s) selected. drop table t; --- SQL operation complete. create table t (id int no default not null, date1 date no default not null) primary key (id, date1); --- SQL operation complete. insert into t values (3, date '2013-01-01'), (2, date '2011-01-01'), (1, date '2012-01-01'); --- 3 row(s) inserted. select id, date1 from t where id < 2; *** ERROR[8415] The provided DATE, TIME, or TIMESTAMP is not valid and cannot be converted. --- 0 row(s) selected. select id, date1 from t; ID DATE1 --- -- 1 2012-01-01 2 2011-01-01 3 2013-01-01 --- 3 row(s) selected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-135) LP Bug: 1244844 - Alter table drop column IF EXISTS syntax returns error 15001
[ https://issues.apache.org/jira/browse/TRAFODION-135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696157#comment-14696157 ] Atanu Mishra commented on TRAFODION-135: Weishiun Tsai (wei-shiun-tsai) wrote on 2013-11-03: #1 Verified on 11/2/2013, this bug has been fixed: set schema seabase.phoenix; --- SQL operation complete. create table t (a int not null not droppable primary key); --- SQL operation complete. alter table t drop column if exists blah; --- SQL operation complete. invoke t; -- Definition of Seabase table SEABASE.PHOENIX.T -- Definition current Sun Nov 3 01:23:30 2013 ( A INT NO DEFAULT NOT NULL NOT DROPPABLE ) PRIMARY KEY (A ASC) --- SQL operation complete. Weishiun Tsai (wei-shiun-tsai) on 2013-11-05 Changed in trafodion: status: New → Fix Released LP Bug: 1244844 - Alter table drop column IF EXISTS syntax returns error 15001 -- Key: TRAFODION-135 URL: https://issues.apache.org/jira/browse/TRAFODION-135 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Assignee: Anoop Sharma Priority: Critical Alter table drop column IF EXISTS syntax is targeted to be supported, but it returns error 15001 right now. set schema seabase.phoenix; --- SQL operation complete. create table t (a int not null not droppable primary key); --- SQL operation complete. alter table t drop column if exists blah; *** ERROR[15001] A syntax error occurred at or before: alter table t drop column if exists blah; ^ (29 characters from start of SQL statement) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-135) LP Bug: 1244844 - Alter table drop column IF EXISTS syntax returns error 15001
[ https://issues.apache.org/jira/browse/TRAFODION-135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-135. -- Resolution: Fixed Assignee: (was: Anoop Sharma) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1244844 - Alter table drop column IF EXISTS syntax returns error 15001 -- Key: TRAFODION-135 URL: https://issues.apache.org/jira/browse/TRAFODION-135 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Priority: Critical Fix For: 1.0 (pre-incubation) Alter table drop column IF EXISTS syntax is targeted to be supported, but it returns error 15001 right now. set schema seabase.phoenix; --- SQL operation complete. create table t (a int not null not droppable primary key); --- SQL operation complete. alter table t drop column if exists blah; *** ERROR[15001] A syntax error occurred at or before: alter table t drop column if exists blah; ^ (29 characters from start of SQL statement) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-134) LP Bug: 1244840 - Alter table drop column does not return error when the column does not exist
[ https://issues.apache.org/jira/browse/TRAFODION-134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-134. -- Resolution: Fixed Assignee: (was: Anoop Sharma) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1244840 - Alter table drop column does not return error when the column does not exist -- Key: TRAFODION-134 URL: https://issues.apache.org/jira/browse/TRAFODION-134 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Priority: Critical Fix For: 1.0 (pre-incubation) Alter table drop column does not return an error when the column does not exist, as shown in the following example: set schema seabase.phoenix; --- SQL operation complete. create table t (a int not null not droppable primary key); --- SQL operation complete. alter table t drop column blah; --- SQL operation complete. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-134) LP Bug: 1244840 - Alter table drop column does not return error when the column does not exist
[ https://issues.apache.org/jira/browse/TRAFODION-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696155#comment-14696155 ] Atanu Mishra commented on TRAFODION-134: Weishiun Tsai (wei-shiun-tsai) wrote on 2013-11-03: #1 Verified on 11/2/2013, this bug has been fixed: set schema seabase.phoenix; --- SQL operation complete. create table t (a int not null not droppable primary key); --- SQL operation complete. alter table t drop column blah; *** ERROR[1009] Column BLAH does not exist in the specified table. --- SQL operation failed with errors. Weishiun Tsai (wei-shiun-tsai) on 2013-11-05 Changed in trafodion: status: New → Fix Released LP Bug: 1244840 - Alter table drop column does not return error when the column does not exist -- Key: TRAFODION-134 URL: https://issues.apache.org/jira/browse/TRAFODION-134 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Assignee: Anoop Sharma Priority: Critical Fix For: 1.0 (pre-incubation) Alter table drop column does not return an error when the column does not exist, as shown in the following example: set schema seabase.phoenix; --- SQL operation complete. create table t (a int not null not droppable primary key); --- SQL operation complete. alter table t drop column blah; --- SQL operation complete. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-141) LP Bug: 1246494 - Insert into a SALTed table sees MXCMP internal errors
[ https://issues.apache.org/jira/browse/TRAFODION-141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-141. -- Resolution: Fixed Assignee: (was: Hans Zeller) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1246494 - Insert into a SALTed table sees MXCMP internal errors --- Key: TRAFODION-141 URL: https://issues.apache.org/jira/browse/TRAFODION-141 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Priority: Critical Fix For: 1.0 (pre-incubation) Insert data into a SALTed table sees MXCMP internal errors 7000 and 2235, as shown in the following examples: set schema seabase.mytest; --- SQL operation complete. create table t (a int not null not droppable) store by (a) salt using 1 partitions; --- SQL operation complete. insert into t values (1),(2),(3); *** ERROR[7000] An internal error occurred in the code generator in file ../generator/Generator.cpp at line 1303: ValueId 55 (SEABASE.MYTEST.T.SYSKEY...) not found in MapTable 0x7fff2760. *** ERROR[2235] MXCMP Internal Error: An unknown error, originated from file ../generator/Generator.cpp at line 1661. *** ERROR[8822] The statement was not prepared. drop table t; --- SQL operation complete. create table t (a int not null not droppable) store by (a) salt using 4 partitions; --- SQL operation complete. insert into t values (1),(2),(3); *** ERROR[7000] An internal error occurred in the code generator in file ../generator/Generator.cpp at line 1303: ValueId 55 (SEABASE.MYTEST.T.SYSKEY...) not found in MapTable 0x7fff2760. *** ERROR[2235] MXCMP Internal Error: An unknown error, originated from file ../generator/Generator.cpp at line 1661. *** ERROR[8822] The statement was not prepared. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-148) LP Bug: 1248732 - check (b < 99) constraint has no effect
[ https://issues.apache.org/jira/browse/TRAFODION-148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-148. -- Resolution: Fixed Assignee: (was: Anoop Sharma) Fix Version/s: 0.8 (pre-incubation) LP Bug: 1248732 - check (b < 99) constraint has no effect - Key: TRAFODION-148 URL: https://issues.apache.org/jira/browse/TRAFODION-148 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Priority: Critical Fix For: 0.8 (pre-incubation) The check constraint has no effect. As shown in the following example, the check (b < 99) constraint is in place for table t, but the insert statement is able to insert a row where b = 200. The insert statement is supposed to see the 8101 error, as in “*** ERROR[8101] The operation is prevented by check constraint SEABASE.MYTEST.MY_CONSTRAINT on table SEABASE.MYTEST.T.” set schema seabase.mytest; --- SQL operation complete. create table t (a char(2) not null not droppable primary key, b int, constraint my_constraint check (b < 99)); --- SQL operation complete. insert into t values ('AZ', 200); --- 1 row(s) inserted. select * from t; A B -- --- AZ 200 --- 1 row(s) selected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
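The comparison operator in the constraint was evidently stripped by the HTML archiving; reading it as `<` (the only comparison under which b = 200 should trigger error 8101), the predicate the engine failed to enforce can be mirrored directly. This is a hedged illustration in plain Java, not Trafodion code:

```java
public class CheckConstraintSketch {
    // Hypothetical mirror of the table's CHECK constraint, assuming the
    // stripped operator was "<", i.e. CHECK (b < 99).
    static boolean satisfiesCheck(int b) {
        return b < 99;
    }

    public static void main(String[] args) {
        // b = 200 violates the predicate, so the INSERT in the bug report
        // should have failed with error 8101 instead of succeeding.
        System.out.println(satisfiesCheck(200)); // false: row must be rejected
        System.out.println(satisfiesCheck(50));  // true: row is allowed
    }
}
```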
[jira] [Closed] (TRAFODION-165) LP Bug: 1270031 - Update statistics sees errors with certain schema name
[ https://issues.apache.org/jira/browse/TRAFODION-165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-165. -- Resolution: Fixed Assignee: (was: Hans Zeller) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1270031 - Update statistics sees errors with certain schema name Key: TRAFODION-165 URL: https://issues.apache.org/jira/browse/TRAFODION-165 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Priority: Critical Fix For: 1.0 (pre-incubation) Update statistics sees errors on a 10-row table with index. The strange thing is that it only happens when the table is created using certain schema name. In the following example, the first part shows that the commands ran fine when the schema name was trafodion.mytest. The second part shows that the same sequence of commands encountered error 9200/8411/8839/8609 at the update statistics statement when the schema name was trafodion.arkcase_arkt0025. The build is the beta build trafodion-ci-release-trafodion_beta-20140115-v36803_release.tar running on a workstation. -bash-4.1$ sqlci Hewlett-Packard NonStop(TM) SQL/MX Conversational Interface 2.5 (c) Copyright 2003-2010 Hewlett-Packard Development Company, LP. set schema trafodion.mytest; --- SQL operation complete. create table t (pic_x_a PIC X(3) not null not droppable, pic_x_b PIC X(1) not null not droppable, pic_x_c PIC X(2) not null not droppable, PRIMARY KEY(pic_x_c, pic_x_b, pic_x_a) not droppable); --- SQL operation complete. create index ta ON t ( pic_x_a ); --- SQL operation complete. insert into t values ('jo','Z','jo'),('al','Q','al'),('P','P','P'),('B','A','ed'),('jo','C','ek'),('JO','D','em'),('al','E','bo'),(' al','F','di'),('al ','F','al'),(' al','F','al'); --- 10 row(s) inserted. update statistics for table t on every column; --- SQL operation complete. drop table t cascade; --- SQL operation complete. set schema trafodion.arkcase_arkt0025; --- SQL operation complete. 
create table t (pic_x_a PIC X(3) not null not droppable, pic_x_b PIC X(1) not null not droppable, pic_x_c PIC X(2) not null not droppable, PRIMARY KEY(pic_x_c, pic_x_b, pic_x_a) not droppable); --- SQL operation complete. create index ta ON t ( pic_x_a ); --- SQL operation complete. insert into t values ('jo','Z','jo'),('al','Q','al'),('P','P','P'),('B','A','ed'),('jo','C','ek'),('JO','D','em'),('al','E','bo'),(' al','F','di'),('al ','F','al'),(' al','F','al'); --- 10 row(s) inserted. update statistics for table t on every column; *** ERROR[9200] UPDATE STATISTICS for table TRAFODION.ARKCASE_ARKT0025.T encountered an error (8411) from statement HSinsertHistint::flush(). *** ERROR[8411] A numeric overflow occurred during an arithmetic computation or data conversion. Conversion of Source Type:LARGEINT(REC_BIN64_SIGNED) Source Value:4599332999182373.267 to Target Type:LARGEINT(REC_BIN64_SIGNED). *** ERROR[8839] Transaction was aborted. *** ERROR[9200] UPDATE STATISTICS for table TRAFODION.ARKCASE_ARKT0025.T encountered an error (8411) from statement FLUSH_STATISTICS. *** ERROR[8411] A numeric overflow occurred during an arithmetic computation or data conversion. Conversion of Source Type:LARGEINT(REC_BIN64_SIGNED) Source Value:4599332999182373.267 to Target Type:LARGEINT(REC_BIN64_SIGNED). *** ERROR[8839] Transaction was aborted. *** ERROR[9200] UPDATE STATISTICS for table TRAFODION.ARKCASE_ARKT0025.T encountered an error (8609) from statement Process_Query. *** ERROR[8609] Waited rollback performed without starting a transaction. --- SQL operation failed with errors. drop table t cascade; --- SQL operation complete. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-165) LP Bug: 1270031 - Update statistics sees errors with certain schema name
[ https://issues.apache.org/jira/browse/TRAFODION-165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696220#comment-14696220 ] Atanu Mishra commented on TRAFODION-165: Hans Zeller (hans-zeller) wrote on 2014-02-19: #1 The value causing the error is the std deviation of frequency, which was defined as a numeric(12,3) in the table and as a double in the code. When converting from static to dynamic SQL, the logic to convert the value was no longer working. I'm going to change the code to store the value as a numeric(12,3) in the update stats code to avoid conversion in the SQL insert statements in the SB_HISTOGRAM_INTERVALS table. Changed in trafodion: status: New → In Progress Hans Zeller (hans-zeller) wrote on 2014-02-19: #2 Fix delivered into project/datalake_64_1 branch, rev. 37671. Changed in trafodion: status: In Progress → Fix Committed Weishiun Tsai (wei-shiun-tsai) wrote on 2014-03-04: #3 Verified on the trafodion-ci-project-datalake_64_1-20140303-v38027_release.tar build. This problem has been fixed: -bash-4.1$ sqlci Trafodion Conversational Interface 0.7.0 (c) Copyright 2014 Hewlett-Packard Development Company, LP. create table t (pic_x_a PIC X(3) not null not droppable, pic_x_b PIC X(1) not null not droppable, pic_x_c PIC X(2) not null not droppable, PRIMARY KEY(pic_x_c, pic_x_b, pic_x_a) not droppable); --- SQL operation complete. create index ta ON t ( pic_x_a ); --- SQL operation complete. insert into t values ('jo','Z','jo'),('al','Q','al'),('P','P','P'),('B','A','ed'),('jo','C','ek'),('JO','D','em'),('al','E','bo'),(' al','F','di'),('al ','F','al'),(' al','F','al'); --- 10 row(s) inserted. update statistics for table t on every column; --- SQL operation complete. drop table t cascade; --- SQL operation complete. set schema trafodion.arkcase_arkt0025; --- SQL operation complete. 
create table t (pic_x_a PIC X(3) not null not droppable, pic_x_b PIC X(1) not null not droppable, pic_x_c PIC X(2) not null not droppable, PRIMARY KEY(pic_x_c, pic_x_b, pic_x_a) not droppable); --- SQL operation complete. create index ta ON t ( pic_x_a ); --- SQL operation complete. insert into t values ('jo','Z','jo'),('al','Q','al'),('P','P','P'),('B','A','ed'),('jo','C','ek'),('JO','D','em'),('al','E','bo'),(' al','F','di'),('al ','F','al'),(' al','F','al'); --- 10 row(s) inserted. update statistics for table t on every column; --- SQL operation complete. Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1270031 - Update statistics sees errors with certain schema name Key: TRAFODION-165 URL: https://issues.apache.org/jira/browse/TRAFODION-165 Project: Apache Trafodion Issue Type: Bug Components: sql-general Reporter: Weishiun Tsai Assignee: Hans Zeller Priority: Critical Fix For: 1.0 (pre-incubation) Update statistics sees errors on a 10-row table with index. The strange thing is that it only happens when the table is created using certain schema name. In the following example, the first part shows that the commands ran fine when the schema name was trafodion.mytest. The second part shows that the same sequence of commands encountered error 9200/8411/8839/8609 at the update statistics statement when the schema name was trafodion.arkcase_arkt0025. The build is the beta build trafodion-ci-release-trafodion_beta-20140115-v36803_release.tar running on a workstation. -bash-4.1$ sqlci Hewlett-Packard NonStop(TM) SQL/MX Conversational Interface 2.5 (c) Copyright 2003-2010 Hewlett-Packard Development Company, LP. set schema trafodion.mytest; --- SQL operation complete. create table t (pic_x_a PIC X(3) not null not droppable, pic_x_b PIC X(1) not null not droppable, pic_x_c PIC X(2) not null not droppable, PRIMARY KEY(pic_x_c, pic_x_b, pic_x_a) not droppable); --- SQL operation complete. create index ta ON t ( pic_x_a ); --- SQL operation complete. 
insert into t values ('jo','Z','jo'),('al','Q','al'),('P','P','P'),('B','A','ed'),('jo','C','ek'),('JO','D','em'),('al','E','bo'),(' al','F','di'),('al ','F','al'),(' al','F','al'); --- 10 row(s) inserted. update statistics for table t on every column; --- SQL operation complete. drop table t cascade; --- SQL operation complete. set schema trafodion.arkcase_arkt0025; --- SQL operation complete. create table t (pic_x_a PIC X(3) not null not droppable, pic_x_b PIC X(1) not null not droppable, pic_x_c PIC X(2) not null not droppable, PRIMARY KEY(pic_x_c, pic_x_b, pic_x_a) not droppable); --- SQL operation complete. create index ta ON t ( pic_x_a ); --- SQL operation complete. insert into t values
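Hans Zeller's diagnosis above is a type mismatch: the standard deviation of frequency was declared numeric(12,3) in the SB_HISTOGRAM_INTERVALS table but held as a double in the update-stats code, and the conversion surfaced as error 8411. A minimal Java sketch of that failure mode, with hypothetical names (not the Trafodion code): converting a double statistic to a scale-3 signed 64-bit integer, with an explicit range check in place of a silent overflow:

```java
public class ScaledConversionSketch {
    // Hedged sketch: represent a numeric(_, 3) value as value * 1000 in a
    // signed long, rejecting inputs whose scaled form does not fit.
    static long toScaledLong(double value) {
        double scaled = value * 1000.0;
        if (Double.isNaN(scaled) || scaled > Long.MAX_VALUE || scaled < Long.MIN_VALUE) {
            // Analogous to Trafodion's 8411 "numeric overflow" error.
            throw new ArithmeticException("numeric overflow converting " + value);
        }
        return (long) scaled;
    }

    public static void main(String[] args) {
        System.out.println(toScaledLong(12.5)); // 12500
        try {
            toScaledLong(1e17); // 1e20 exceeds Long.MAX_VALUE (~9.22e18)
        } catch (ArithmeticException e) {
            System.out.println("overflow detected");
        }
    }
}
```

Storing the value as numeric(12,3) end to end, as the fix does, sidesteps the conversion entirely.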
[jira] [Commented] (TRAFODION-171) LP Bug: 1274281 - JDBC T2 driver returns error 8813 accessing the ResultSet of a select count(*) statement
[ https://issues.apache.org/jira/browse/TRAFODION-171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696239#comment-14696239 ] Atanu Mishra commented on TRAFODION-171: Pavani Puppala (pavani-puppala) wrote on 2014-05-31:#2 SQL is setting the query type as SQL_EXE_UTIL instead of SQL_SELECT_NON_UNIQUE for select count(*) statements. tags: added: sql-exe removed: client-jdbc-t2 Pavani Puppala (pavani-puppala) on 2014-05-31 Changed in trafodion: status: In Progress → Fix Committed Weishiun Tsai (wei-shiun-tsai) wrote on 2014-06-04: #3 Verified on the GIT 0603_0930 build. This problem has been fixed: -bash-4.1$ myrun.sh select count(*) rs.next() rs.getInt() 9 Changed in trafodion: status: Fix Committed → Fix Released LP Bug: 1274281 - JDBC T2 driver returns error 8813 accessing the ResultSet of a select count(*) statement -- Key: TRAFODION-171 URL: https://issues.apache.org/jira/browse/TRAFODION-171 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Weishiun Tsai Assignee: Pavani Puppala Priority: Critical Fix For: 0.7 (pre-incubation), 0.8 (pre-incubation) With the JDBC T2 driver, fetching the ResultSet of a select count(*) statement returns the following error: *** ERROR[8813] Trying to fetch from a statement that is in the closed state. This is seen using the beta build trafodion-ci-release-trafodion_beta-20140128-v37024_release.tar. 
Here is a small JDBC program to reproduce this problem: -bash-4.1$ cat mytest.java import java.sql.*; import java.math.*; import java.util.*; import java.io.*; public class mytest { public static void main(String[] args) //throws java.io.IOException { Properties props = null; Connection conn = null; PreparedStatement stmt = null; ResultSet rs = null; String cat = null; String sch = null; String url = null; String query = null; try { String propFile = System.getProperty("hpjdbc.properties"); if (propFile != null) { FileInputStream fs = new FileInputStream(new File(propFile)); props = new Properties(); props.load(fs); url = props.getProperty("url"); cat = props.getProperty("catalog"); sch = props.getProperty("schema"); } else { System.out.println("ERROR: hpjdbc.properties is not set. Exiting."); System.exit(0); } Class.forName("com.hp.sqlmx.SQLMXDriver"); conn = DriverManager.getConnection(url, props); conn.createStatement().execute("drop table if exists tb"); conn.createStatement().execute("create table tb (c1 int not null)"); conn.createStatement().execute("insert into tb values (1),(2),(3),(4),(5),(6),(7),(8),(9)"); System.out.println("select count(*)"); rs = conn.createStatement().executeQuery("select count(*) from tb"); System.out.println("rs.next()"); if (rs.next() != false) { System.out.println("rs.getInt()"); System.out.println(rs.getInt(1)); } conn.close(); } catch (SQLException se) { System.out.println("ERROR: SQLException"); se.printStackTrace(); System.out.println(se.getMessage()); System.exit(1); } catch (Exception e) { System.out.println("ERROR: Exception"); e.printStackTrace(); System.out.println(e.getMessage()); System.exit(1); } } } Here is the output of the program: -bash-4.1$ myrun.sh select count(*) rs.next() ERROR: SQLException *** ERROR[8813] Trying to fetch from a statement that is in the closed state. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-171) LP Bug: 1274281 - JDBC T2 driver returns error 8813 accessing the ResultSet of a select count(*) statement
[ https://issues.apache.org/jira/browse/TRAFODION-171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-171. -- Resolution: Fixed Assignee: (was: Pavani Puppala) Fix Version/s: 0.8 (pre-incubation) LP Bug: 1274281 - JDBC T2 driver returns error 8813 accessing the ResultSet of a select count(*) statement -- Key: TRAFODION-171 URL: https://issues.apache.org/jira/browse/TRAFODION-171 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Weishiun Tsai Priority: Critical Fix For: 0.7 (pre-incubation), 0.8 (pre-incubation) With the JDBC T2 driver, fetching the ResultSet of a select count(*) statement returns the following error: *** ERROR[8813] Trying to fetch from a statement that is in the closed state. This is seen using the beta build trafodion-ci-release-trafodion_beta-20140128-v37024_release.tar. Here is a small JDBC program to reproduce this problem: -bash-4.1$ cat mytest.java import java.sql.*; import java.math.*; import java.util.*; import java.io.*; public class mytest { public static void main(String[] args) //throws java.io.IOException { Properties props = null; Connection conn = null; PreparedStatement stmt = null; ResultSet rs = null; String cat = null; String sch = null; String url = null; String query = null; try { String propFile = System.getProperty(hpjdbc.properties); if (propFile != null) { FileInputStream fs = new FileInputStream(new File(propFile)); props = new Properties(); props.load(fs); url = props.getProperty(url); cat = props.getProperty(catalog); sch = props.getProperty(schema); } else { System.out.println(ERROR: hpjdbc.properties is not set. 
Exiting.); System.exit(0); } Class.forName(com.hp.sqlmx.SQLMXDriver); conn = DriverManager.getConnection(url, props); conn.createStatement().execute(drop table if exists tb); conn.createStatement().execute(create table tb (c1 int not null)); conn.createStatement().execute(insert into tb values (1),(2),(3),(4),(5),(6),(7),(8),(9)); System.out.println(select count(*)); rs = conn.createStatement().executeQuery(select count(*) from tb); System.out.println(rs.next()); if (rs.next() != false) { System.out.println(rs.getInt()); System.out.println(rs.getInt(1)); } conn.close(); } catch (SQLException se) { System.out.println(ERROR: SQLException); se.printStackTrace(); System.out.println(se.getMessage()); System.exit(1); } catch (Exception e) { System.out.println(ERROR: Exception); e.printStackTrace(); System.out.println(e.getMessage()); System.exit(1); } } } Here is the output of the program: -bash-4.1$ myrun.sh select count(*) rs.next() ERROR: SQLException *** ERROR[8813] Trying to fetch from a statement that is in the closed state. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (TRAFODION-169) LP Bug: 1272095 - [any N] does not have any effect on a particular query
[ https://issues.apache.org/jira/browse/TRAFODION-169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-169. -- Resolution: Fixed Assignee: (was: Anoop Sharma) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1272095 - [any N] does not have any effect on a particular query Key: TRAFODION-169 URL: https://issues.apache.org/jira/browse/TRAFODION-169 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Weishiun Tsai Priority: Critical Fix For: 1.0 (pre-incubation) With this particular query, [any N] does not have any effect. As shown in the following output, [any 1] still returns 4 rows, while [first 1] works fine and returns only 1 row. This is seen on the beta build trafodion-ci-release-trafodion_beta-20140117-v36857_release.tar installed on a workstation. create table t (large_int LARGEINT not null not droppable, pic_252 PIC X(246) not null not droppable, pic_1 PIC X not null not droppable, PRIMARY KEY (large_int DESC) not droppable); --- SQL operation complete. insert into t values (3000,'george','D'),(100,'carltons','E'),(1000,'harveys','B'),(300,'Q','X'),(2000,'alexander','B'),(400,'joseph','X'),(200,'squaw','X'),(4000,'valley','D'); --- 8 row(s) inserted. select pic_1,large_int from t where large_int >= 100 group by pic_1,large_int having large_int in (100,200,1000,2000) order by pic_1,large_int;+ PIC_1 LARGE_INT - B 1000 B 2000 E 100 X 200 --- 4 row(s) selected. select [any 1] pic_1,large_int from t where large_int >= 100 group by pic_1,large_int having large_int in (100,200,1000,2000) order by pic_1,large_int;+ PIC_1 LARGE_INT - B 1000 B 2000 E 100 X 200 --- 4 row(s) selected. select [first 1] pic_1,large_int from t where large_int >= 100 group by pic_1,large_int having large_int in (100,200,1000,2000) order by pic_1,large_int;+ PIC_1 LARGE_INT - B 1000 --- 1 row(s) selected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-177) LP Bug: 1274750 - INSERT into salted (multi-region) table via SQLCI causes SQLCI core.
[ https://issues.apache.org/jira/browse/TRAFODION-177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696255#comment-14696255 ] Atanu Mishra commented on TRAFODION-177: Hans Zeller (hans-zeller) wrote on 2014-02-20: #3 Download full text (4.1 KiB) This might have to do with the fact that salted tables are pre-split into multiple regions. I checked the query plan and it seems to handle the salting column correctly. Could someone from the TM team have a look? The call stack looks like this: Core was generated by `sqlci'. Program terminated with signal 6, Aborted. #0 0x0033de4328a5 in raise () from /lib64/libc.so.6 (gdb) bt #0 0x0033de4328a5 in raise () from /lib64/libc.so.6 #1 0x0033de434085 in abort () from /lib64/libc.so.6 #2 0x76c93455 in os::abort(bool) () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #3 0x76df3717 in VMError::report_and_die() () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #4 0x76df3cee in crash_handler(int, siginfo*, void*) () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #5 signal handler called #6 0x76c4a635 in methodOopDesc::name_and_sig_as_C_string(char*, int) const () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #7 0x769d1714 in frame::print_on_error(outputStream*, char*, int, bool) const () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #8 0x76df17f0 in VMError::print_stack_trace(outputStream*, JavaThread*, char*, int, bool) () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #9 0x76df1f95 in VMError::report(outputStream*) () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #10 0x76df331a in VMError::report_and_die() () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #11 0x76c96f60 in JVM_handle_linux_signal () from /opt/home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #12 signal handler called #13 0x0033de48997b in memcpy () from 
/lib64/libc.so.6 #14 0x70f9fc9f in Java_org_apache_hadoop_hbase_client_transactional_RMInterface_registerRegion ( pp_env=0x32be1d8, pv_object=0x7fff34d8, pv_string=0x7fff34d0, pv_dos=0x7fff34c8) at tmregisterregion.cpp:30 #15 0x785c3030785c3030 in ?? () #16 0x785c3030785c3030 in ?? () #17 0x785c3030785c3030 in ?? () #18 0x785c3030785c3030 in ?? () #19 0x785c3030785c3030 in ?? () #20 0x785c3030785c3030 in ?? () #21 0x785c3030785c3030 in ?? () #22 0x785c3030785c3030 in ?? () #23 0x785c3030785c3030 in ?? () #24 0x785c3030785c3030 in ?? () #25 0x785c3030785c3030 in ?? () #26 0x785c3030785c3030 in ?? () #27 0x785c3030785c3030 in ?? () #28 0x785c3030785c3030 in ?? () #29 0x785c3030785c3030 in ?? () #30 0x434e45202c273030 in ?? () #31 0x203e3d204445444f in ?? () #32 0x3162393231346231 in ?? () #33 0x6439633639643466 in ?? () #34 0x6663633730363634 in ?? () #35 0x6638653439626264 in ?? () #36 0x4e54534f48207d2c in ?? () #37 0x7434677c3a454d41 in ?? () #38 0x756f682e33343033 in ?? () #39 0x2e70682e6e6f7473 in ?? () #40 0x54524f507c6d6f63 in ?? () #41 0x38343735357c203a in ?? () #42 0x7fff357c in ?? () #43 0x7fffc8d14333 in ?? () #44 0x00060d71ef10 in ?? () #45 0x00060d71e918 in ?? () #46 0x00060d71e8d0 in ?? () #47 0x00060d71de80 in ?? () #48 0x00060d6dd560 in ?? () #49 0x00060d6dc470 ... Read more... Changed in trafodion: status: New → Confirmed summary:- INSERT into salted table via SQLCI causes SQLCI core. + INSERT into salted (multi-region) table via SQLCI causes SQLCI core. Hans Zeller (hans-zeller) wrote on 2014-02-20: #4 Two more observations: If I suppress pre-splitting the salted table (set numSplits in CmpSeabaseDDL::createSeabaseTable() in file sql/sqlcomp/CmpSeaBaseDDLtable.cpp to 0), then the statement succeeds. If I run this in auto-commit mode (no begin work) it succeeds as well. 
Oliver Bucaojit (oliver-bucaojit) on 2014-02-21 Changed in trafodion: assignee: nobody → Oliver (oliver-bucaojit) Oliver Bucaojit (oliver-bucaojit) wrote on 2014-02-21: #5 Download full text (19.4 KiB) I have a fix for this issue, it is being caused by salted tables having a longer regionInfo length than unsalted tables due to the startkey and endkey being present in the region name as opposed to them being blank, which is what we encountered in the past. The core was being caused by a memcpy() copying values into a buffer that wasn't large enough. My fix is a simple increase of MAX expected size of the region name from 1024 to 2048. --- log of sqlci test --- sqlci Hewlett-Packard NonStop(TM) SQL/MX Conversational Interface 2.5 (c) Copyright 2003-2010 Hewlett-Packard Development Company, LP. SET SCHEMA TRAFODION.TESTSCH; --- SQL operation complete. SET WARNINGS OFF; -- Insert CPU Rows Prepare CMD from
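Oliver's analysis above is a classic fixed-buffer overrun: a salted table's region name, which now embeds the start and end keys, exceeded the 1024-byte buffer that memcpy() filled in tmregisterregion.cpp, and the fix raised the expected maximum to 2048. The defensive shape of that fix can be sketched in Java with hypothetical names (the constant mirrors the fix's enlarged capacity; nothing here is Trafodion's actual code): check the length against the capacity before copying.

```java
public class RegionNameBufferSketch {
    // Hypothetical mirror of the enlarged maximum region-name size (1024 -> 2048).
    static final int MAX_REGION_NAME = 2048;

    // Copy a region name into a fixed-capacity buffer, rejecting oversized
    // input up front instead of overrunning the destination as memcpy() did.
    static byte[] copyRegionName(byte[] name) {
        if (name.length > MAX_REGION_NAME) {
            throw new IllegalArgumentException("region name too long: " + name.length);
        }
        byte[] buf = new byte[MAX_REGION_NAME];
        System.arraycopy(name, 0, buf, 0, name.length);
        return buf;
    }

    public static void main(String[] args) {
        byte[] saltedName = new byte[1500]; // longer than 1024, fits after the bump
        System.out.println(copyRegionName(saltedName).length); // 2048
        try {
            copyRegionName(new byte[4096]); // would have overrun a fixed buffer
        } catch (IllegalArgumentException e) {
            System.out.println("rejected oversized name");
        }
    }
}
```

A length check like this fails loudly if region names ever outgrow the new limit, rather than corrupting the stack as in the cores above.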
[jira] [Closed] (TRAFODION-177) LP Bug: 1274750 - INSERT into salted (multi-region) table via SQLCI causes SQLCI core.
[ https://issues.apache.org/jira/browse/TRAFODION-177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-177. -- Resolution: Fixed Assignee: (was: Oliver Bucaojit) Fix Version/s: 1.0 (pre-incubation) LP Bug: 1274750 - INSERT into salted (multi-region) table via SQLCI causes SQLCI core. -- Key: TRAFODION-177 URL: https://issues.apache.org/jira/browse/TRAFODION-177 Project: Apache Trafodion Issue Type: Bug Components: sql-exe Reporter: Guy Groulx Priority: Critical Fix For: 1.0 (pre-incubation) Prepare CMD from +INSERT INTO CPU (SYSTEMNAME, LOADID, INITDATETIME, LOADDATETIME, NODEID, SAMPLEDATETIME, COLLECTMODE, CPUNUM, USERBUSY, NICEBUSY, SYSTEMBUSY, IDLETIME, TOTTIME, WAITTIME, IRQTIME, SOFTIRQTIME, STEALTIME) VALUES ('ZIRCON2','SB1401301516','20140130151707','20140130220941','n001',?, 'D', ?, ?, ?, ?, ?, ?, ?, ?, ?, ?); --- SQL command prepared. BEGIN WORK; --- SQL operation complete. EXECUTE CMD USING +'20140130151800', 0, 0.24500, 0.00600, 0.19000, 0.46800, 0.49900, 0.0, 0.0, 0.05800, 0.0; # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x76949a48, pid=36759, tid=140737353866944 # # JRE version: 7.0_09-b05 # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.5-b02 mixed mode linux-amd64 compressed oops) # Problematic frame: # V [libjvm.so+0x566a48] jni_ReleaseByteArrayElements+0x98 # # Core dump written. Default location: /home/squser2/guy/core or core.36759 # # An error report file with more information is saved as: # /home/squser2/guy/hs_err_pid36759.log # # If you would like to submit a bug report, please visit: # http://bugreport.sun.com/bugreport/crash.jsp # Aborted (core dumped) I've attached input file to feed SQLCI to try out. The generated log file is available on ZIRCON at the location above. Core file is on ZIRCON on n001 2014-01-30 22:53:43 /local/cores/1003/core.1391122421.n001.36759.sqlci Core was generated by `sqlci'. Program terminated with signal 6, Aborted. 
#0 0x7585c8a5 in raise () from /lib64/libc.so.6 #1 0x7585e085 in abort () from /lib64/libc.so.6 #2 0x76b28455 in os::abort(bool) () from /home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #3 0x76c88717 in VMError::report_and_die() () from /home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #4 0x76c88cee in crash_handler(int, siginfo*, void*) () from /home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #5 signal handler called #6 0x76adf635 in methodOopDesc::name_and_sig_as_C_string(char*, int) const () from /home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #7 0x76866714 in frame::print_on_error(outputStream*, char*, int, bool) const () from /home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #8 0x76c867f0 in VMError::print_stack_trace(outputStream*, JavaThread*, char*, int, bool) () from /home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #9 0x76c86f95 in VMError::report(outputStream*) () from /home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #10 0x76c8831a in VMError::report_and_die() () from /home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #11 0x76b2bf60 in JVM_handle_linux_signal () from /home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #12 signal handler called #13 0x76949a48 in jni_ReleaseByteArrayElements () from /home/tools/jdk1.7.0_09_64/jre/lib/amd64/server/libjvm.so #14 0x71894aa1 in ReleaseByteArrayElements (pp_env=0xbec9d8, pv_object=value optimized out, pv_string=value optimized out, pv_dos=0x7fff4f38) at /opt/home/tools/jdk1.7.0_09_64/include/jni.h:1697 #15 Java_org_apache_hadoop_hbase_client_transactional_RMInterface_registerRegion (pp_env=0xbec9d8, pv_object=value optimized out, pv_string=value optimized out, pv_dos=0x7fff4f38) at tmregisterregion.cpp:35 #16 0x785c3030785c3030 in ?? () ... #30 0x785c3030785c3030 in ?? () #31 0x434e45202c273030 in ?? () #32 0x203e3d204445444f in ?? () #33 0x3966373836643866 in ?? () #34 0x3035383864323564 in ?? () #35 0x3464613834653162 in ?? 
() #36 0x3531383263626662 in ?? () #37 0x4e54534f48207d2c in ?? () #38 0x31306e7c3a454d41 in ?? () #39 0x756c632e6d632e30 in ?? () #40 0x524f507c72657473 in ?? () #41 0x323030367c203a54 in ?? () ---Type return to continue, or q return to quit--- #42 0x7fff7c30 in ?? () #43 0x7fff5048 in ?? () #44 0x7fffe21c9333 in ?? () #45 0x00059cc39da8 in ?? () #46 0x00059cc397b0 in ?? () #47 0x00059cc39768 in ?? () #48 0x00059cc38d28 in ??
[jira] [Commented] (TRAFODION-188) LP Bug: 1286349 - Insert..select returns HBASE_ACCESS_ERROR(-705) and then TMF error 97 in org.apache.hadoop.hbase.regionserver.transactional
[ https://issues.apache.org/jira/browse/TRAFODION-188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696264#comment-14696264 ] Atanu Mishra commented on TRAFODION-188: Weishiun Tsai (wei-shiun-tsai) wrote on 2014-05-19: #1 Verified on the datalake v40646 build. This problem has been fixed: create table t0 (a int not null not droppable, b int, c int, primary key (a)) salt using 12 partitions; --- SQL operation complete. create table t1 (a int not null not droppable, b int, c int, primary key (a)) salt using 12 partitions; --- SQL operation complete. create table t2 (a int not null not droppable, b int, c int, primary key (a)) salt using 12 partitions; --- SQL operation complete. create table t6 (a int not null not droppable, b int, c int, primary key (a)) salt using 12 partitions; --- SQL operation complete. create table t8 (a int not null not droppable, b int, c int, primary key (a)) salt using 12 partitions; --- SQL operation complete. create table cube1 +(a int not null not droppable, +b int not null not droppable, +c int not null not droppable, +d int, e int, f int, txt char(100), +primary key (a,b,c)) +store by primary key salt using 12 partitions; --- SQL operation complete. insert into t0 values (0,0,0),(1,1,1),(2,2,2),(3,3,3),(4,4,4),(5,5,5),(6,6,6),(7,7,7),(8,8,8),(9,9,9); --- 10 row(s) inserted. upsert using load into t1 select * from t0; --- 10 row(s) inserted. upsert using load into t2 select * from t0; --- 10 row(s) inserted. upsert using load into t6 select t1.a+10*t2.a,t1.a,t2.a from t1,t2; --- 100 row(s) inserted. upsert using load into t8 select t6.a+100*t1.a,t6.a,t1.a from t1,t6; --- 1000 row(s) inserted. insert into cube1 select t1.a, t6.a, t8.a, t1.a, t6.a, t8.a, 'some text' from t1, t6, t8 where t8.a 100; --- 10 row(s) inserted. 
Changed in trafodion: status: New → Fix Released
Weishiun Tsai (wei-shiun-tsai) on 2014-05-19 summary:
- Insert..select returns HBASE_ACCSES_ERROR(-705) and then TMF error 97 in
+ Insert..select returns HBASE_ACCESS_ERROR(-705) and then TMF error 97 in
org.apache.hadoop.hbase.regionserver.transactional

LP Bug: 1286349 - Insert..select returns HBASE_ACCESS_ERROR(-705) and then TMF error 97 in org.apache.hadoop.hbase.regionserver.transactional
-
Key: TRAFODION-188
URL: https://issues.apache.org/jira/browse/TRAFODION-188
Project: Apache Trafodion
Issue Type: Bug
Components: dtm
Reporter: Weishiun Tsai
Priority: Critical

An insert..select statement returns HBASE_ACCESS_ERROR(-705) in org.apache.hadoop.hbase.regionserver.transactional and then TMF error 97 afterwards, as shown here. This problem is fairly reproducible now with the QA test to create and populate hcubedb through sqlci/JDBC. This may or may not be related to https://bugs.launchpad.net/trafodion/+bug/1274716 "Getting TM error 97 for INSERT SELECT hive table". In this case, all tables from the select list are Trafodion/hbase tables, but in the other case, the select table is a hive table. For now, we use this case to track the problem separately.

insert into cube1 select t1.a, t6.a, t8.a, t1.a, t6.a, t8.a, 'some text' from t1, t6, t8 where t8.a < 100;

*** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::rowExists returned error HBASE_ACCESS_ERROR(-705). Cause: org.apache.hadoop.hbase.client.transactional.UnknownTransactionException: org.apache.hadoop.hbase.client.transactional.UnknownTransactionException: transaction: [4294996353], region: [TRAFODION.G_HCUBEDB.CUBE1,,1393442223602.979ac8a950b9df29d1ab4832dd9acb37.]
at org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion.getTransactionState(TransactionalRegion.java:741)
at org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion.checkAndPut(TransactionalRegion.java:941)
at org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegionServer.checkAndPut(TransactionalRegionServer.java:449)
at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1428)
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
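The UnknownTransactionException above is raised when a region that has just split or been moved by the balancer receives a request for a transaction it has no local state for, which the TM then surfaces as error 97. One recovery pattern on the client side is to abort and re-run the whole transaction so the retry re-locates the regions. The sketch below is a language-agnostic illustration of that retry pattern, not Trafodion code; the names `UnknownTransactionException`, `run_with_retry`, and `txn_body` are hypothetical.

```python
class UnknownTransactionException(Exception):
    """Raised when a region no longer knows about an in-flight
    transaction, e.g. after a split or a balancer-driven region move."""

def run_with_retry(txn_body, max_attempts=3):
    """Run txn_body(); on UnknownTransactionException, abort and retry
    the whole transaction from the beginning."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_body()
        except UnknownTransactionException:
            if attempt == max_attempts:
                raise  # gives up: the caller sees the error 97 analogue

# Simulated workload: the first attempt lands on a freshly split region
# and fails; the retry (which would re-locate the region) succeeds.
attempts = {"n": 0}
def txn_body():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise UnknownTransactionException("region split mid-transaction")
    return "committed"
```

The later server-side "closing flag" fix mentioned in the comments attacks the same window from the region side, so clients no longer have to absorb the error.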
[jira] [Closed] (TRAFODION-274) LP Bug: 1320334 - Can't run datalake_v40605 because of protobuf.
[ https://issues.apache.org/jira/browse/TRAFODION-274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-274.
Resolution: Fixed

LP Bug: 1320334 - Can't run datalake_v40605 because of protobuf.
Key: TRAFODION-274
URL: https://issues.apache.org/jira/browse/TRAFODION-274
Project: Apache Trafodion
Issue Type: Bug
Components: dtm
Reporter: Guy Groulx
Assignee: Guy Groulx
Priority: Blocker
Labels: seapilot
Fix For: 0.7 (pre-incubation)

Installed datalake_v40605 onto spinel. Yes, I have the libprotobuf.so.6 libraries on the system. ldd tm shows that everything is present. The tms did start up, but when trying to enable txns, they all went down and created cores.

/opt/hp/squser2/dtlkV40605/sql/scripts gdb tm /local/cores/1008/core.1400253140.n008.36791.tm
…
Core was generated by `tm SQMON1.0 7 7 036791 $TM7 tag#0$port#37745$description#n008$ifname#17'.
Program terminated with signal 6, Aborted.
#0 0x7589e8a5 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install boost-filesystem-1.41.0-11.el6_1.2.x86_64 boost-program-options-1.41.0-11.el6_1.2.x86_64 boost-system-1.41.0-11.el6_1.2.x86_64 cyrus-sasl-lib-2.1.23-13.el6.x86_64 glibc-2.12-1.107.el6.x86_64 libgcc-4.4.6-4.el6.x86_64 libstdc++-4.4.6-4.el6.x86_64 libuuid-2.17.2-12.7.el6.x86_64 nss-softokn-freebl-3.12.9-11.el6.x86_64 qpid-cpp-client-0.14-16.el6.x86_64
(gdb) bt
#0 0x7589e8a5 in raise () from /lib64/libc.so.6
#1 0x758a0085 in abort () from /lib64/libc.so.6
#2 0x003bffe46f20 in google::protobuf::internal::LogMessage::Finish() () from /usr/lib64/libprotobuf.so.6
#3 0x003bffe579d3 in google::protobuf::MessageLite::AppendToString(std::basic_string<char, std::char_traits<char>, std::allocator<char> >*) const () from /usr/lib64/libprotobuf.so.6
#4 0x003bffe57a71 in google::protobuf::MessageLite::SerializeAsString() const () from /usr/lib64/libprotobuf.so.6
#5 0x00408649 in tm_log_event (event_id=<value optimized out>, severity=<value optimized out>,
temp_string=<value optimized out>, error_code=-1, rmid=-1, dtmid=<value optimized out>, seq_num=-1, msgid=-1, xa_error=-1, pool_size=-1, pool_elems=-1, msg_retries=-1, pool_high=-1, pool_low=-1, pool_max=-1, tx_state=101, data=110, data1=-1, data2=-1, string1=0x664160 "UP", node=-1, msgid2=-1, offset=-1, tm_event_msg=-1, data4=0) at tmlogging.cpp:225
#6 0x00418f0c in TM_Info::state (this=0x6cd620, pv_state=101) at tminfo.cpp:4945
#7 0x0041d5a5 in TM_Info::tm_up (this=0x6cd620) at tminfo.cpp:4140
#8 0x00420fe1 in TM_Info::init_and_recover_rms (this=0x6cd620) at tminfo.cpp:1970
#9 0x00436c3f in tmTimer_initializeRMs () at tmtimermain.cpp:166
#10 0x004376e9 in timerThread_main (arg=<value optimized out>) at tmtimermain.cpp:322
#11 0x7564e2df in SB_Thread::Thread::disp (this=0x12b51a0, pp_arg=0x12b51a0) at thread.cpp:199
#12 0x7564e737 in thread_fun (pp_arg=0x12b51a0) at thread.cpp:295
#13 0x756519ac in sb_thread_sthr_disp (pp_arg=0x12b5770) at threadl.cpp:241
#14 0x75c06851 in start_thread () from /lib64/libpthread.so.0
#15 0x7595490d in clone () from /lib64/libc.so.6
(gdb)

Narendra created a UTT that does not use protobuf.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
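An abort inside google::protobuf::internal::LogMessage::Finish() during serialization is the signature of a protobuf CHECK failure, typically a mismatch between the protobuf version the generated code was compiled against and the libprotobuf.so the binary actually resolves at run time (which is consistent with the eventual resolution: install the proper libraries). Protobuf's C++ runtime guards against this with GOOGLE_PROTOBUF_VERIFY_VERSION. The sketch below is a simplified Python illustration of that compatibility rule, not the real check; the exact policy (which also involves a minimum-runtime floor) is an assumption here, and both function names are hypothetical.

```python
def is_compatible(headers_version, runtime_version):
    """Simplified rule: the runtime library must be at least as new as
    the version the generated code was built against. Versions are
    (major, minor, patch) tuples, e.g. (2, 4, 1)."""
    return runtime_version >= headers_version

def check_protobuf_version(headers_version, runtime_version):
    """Mimic the spirit of GOOGLE_PROTOBUF_VERIFY_VERSION: fail fast
    with a clear message instead of aborting deep inside a serialize
    call, as happened in the tm core above."""
    if not is_compatible(headers_version, runtime_version):
        raise RuntimeError(
            "compiled against protobuf %r but runtime is %r; install a "
            "matching libprotobuf" % (headers_version, runtime_version))
```

Running the guard once at process startup (as generated protobuf code does) turns a mysterious SIGABRT under SerializeAsString() into an explicit version-mismatch error.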
[jira] [Commented] (TRAFODION-274) LP Bug: 1320334 - Can't run datalake_v40605 because of protobuf.
[ https://issues.apache.org/jira/browse/TRAFODION-274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696061#comment-14696061 ] Atanu Mishra commented on TRAFODION-274:

Atanu Mishra (atanu-mishra) wrote on 2014-05-17: #2
Fix checked into datalake branch --
Change owner: Narendra Goyal
Change reviewers: None
Summary of new features: Turn off seapilot in TM (set TM_USE_SEAPILOT=0 in TM's macros.gmk) as it was causing cores on spinel when using the google protobuf.
Launchpad bug # (if any): 1320334
Validation summary: EC build and UTT test on spinel
Changes needed in external documentation: None.
Tests added: None.
Changed in trafodion: status: New → Fix Committed
Atanu Mishra (atanu-mishra) on 2014-05-20 Changed in trafodion:
Guy Groulx (guy-groulx) wrote on 2014-07-22: #3
Installation and documentation now explain how to install proper libraries.
Changed in trafodion: status: Fix Committed → Fix Released

LP Bug: 1320334 - Can't run datalake_v40605 because of protobuf.
Key: TRAFODION-274
URL: https://issues.apache.org/jira/browse/TRAFODION-274
Project: Apache Trafodion
Issue Type: Bug
Components: dtm
Reporter: Guy Groulx
Assignee: Guy Groulx
Priority: Blocker
Labels: seapilot
Fix For: 0.7 (pre-incubation)

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (TRAFODION-220) LP Bug: 1305233 - update query's effect is delayed occasionally
[ https://issues.apache.org/jira/browse/TRAFODION-220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696096#comment-14696096 ] Atanu Mishra commented on TRAFODION-220:

Atanu Mishra (atanu-mishra) wrote on 2014-06-06: #4
Change submitted on 6/5 by disabling the early commit reply.
Changed in trafodion: status: New → Fix Committed
Stacey Johnson (sjohnson-w) on 2014-06-10 information type: Proprietary → Public
Weishiun Tsai (wei-shiun-tsai) wrote on 2014-08-26: #5
QA testing has not seen this problem ever since the early commit reply was disabled. Mark this case as problem fixed.
Changed in trafodion: status: Fix Committed → Fix Released

LP Bug: 1305233 - update query's effect is delayed occasionally
---
Key: TRAFODION-220
URL: https://issues.apache.org/jira/browse/TRAFODION-220
Project: Apache Trafodion
Issue Type: Bug
Components: dtm
Reporter: Suresh Subbiah
Assignee: John de Roo
Priority: Blocker
Fix For: 1.0 (pre-incubation)

The SQL dev regression test core/test018 occasionally fails with this difference:

49,50c49,50
< 2 10103 4
< 3 11004 5
---
> 2 103 4
> 3 1004 5

The top 2 lines are from the expected file, while the next 2 are from the log file. The update statement that caused this diff is:

update T018orig set b = b + 1;
--- 3 row(s) updated.
select * from T018orig;

A B C
1 10012 3
2 103 4
3 1004 5

Note that the row with A = 1 has the correct value and reflects that the update has already occurred. The next 2 rows (with A = 2 and A = 3) have the problem reported in this bug. The delay is only for a short interval, since two statements later in the test we have this update, which finds the row:

update T018orig set c = c + 100 where b = 10103;
--- 1 row(s) updated.

Also, a similar update later on in the test works as expected:

update T018orig set c = c + 1;
--- 3 row(s) updated.
select * from T018orig;

A B C
1 10012 10013
2 10103 10104
3 11004 11005

The table does have an index on column b.
create table T018orig (a int NOT NULL, b int, c int, primary key (a))
+#ifMX
+no partition
+#ifMX
+;
--- SQL operation complete.
create index T018 on T018orig(b);

The DIFF and log files are attached.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
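The "early commit reply" named in comment #4 is an optimization in which the TM acknowledges the commit to the client before every participating region has finished applying the transaction, so a read issued immediately after the commit can briefly see pre-update values through the index, exactly the stale rows test018 caught. The toy model below illustrates the race and why disabling the early reply fixes it; the class and method names are hypothetical, not Trafodion's.

```python
class Store:
    """Toy model of commit acknowledgment ordering. A committed write
    becomes visible only once applied; with early_reply=True, commit()
    acknowledges before applying, so a read issued right after the
    ack can still observe the old value."""

    def __init__(self, early_reply):
        self.early_reply = early_reply
        self.visible = {}   # what readers see
        self.pending = []   # committed but not yet applied

    def commit(self, key, value):
        self.pending.append((key, value))
        if not self.early_reply:
            self.apply_pending()  # the fix: apply before acknowledging
        return "ok"               # ack returned to the client here

    def apply_pending(self):
        for k, v in self.pending:
            self.visible[k] = v
        self.pending = []

    def read(self, key):
        return self.visible.get(key)
```

With early_reply=True a commit followed by an immediate read returns the stale value (None here), mirroring the delayed-update symptom; with early_reply=False the read always sees the committed value.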
[jira] [Closed] (TRAFODION-127) LP Bug: 1233425 - UPSHIFT keyword has no effect on a CHAR column
[ https://issues.apache.org/jira/browse/TRAFODION-127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atanu Mishra closed TRAFODION-127.
Resolution: Fixed
Fix Version/s: 1.0 (pre-incubation)

LP Bug: 1233425 - UPSHIFT keyword has no effect on a CHAR column
Key: TRAFODION-127
URL: https://issues.apache.org/jira/browse/TRAFODION-127
Project: Apache Trafodion
Issue Type: Bug
Components: sql-general
Reporter: Weishiun Tsai
Assignee: Anoop Sharma
Priority: Critical
Fix For: 1.0 (pre-incubation)

The UPSHIFT keyword has no effect on a char column, as shown in the following example:

cqd mode_seabase 'on';
--- SQL operation complete.
set schema seabase.phoenix;
--- SQL operation complete.
create table t (a int not null not droppable primary key, b char(10) upshift);
--- SQL operation complete.
insert into t values (1, 'Alpha'),(2, 'ALPHA');
--- 2 row(s) inserted.
select * from t;

A B
1 Alpha
2 ALPHA

--- 2 row(s) selected.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
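The UPSHIFT attribute on a CHAR column means values are upshifted as they are stored, so after the fix both rows above should read back as ALPHA. The insert-path behavior can be sketched as follows; `apply_char_attrs` is a hypothetical name for illustration, not a Trafodion function.

```python
def apply_char_attrs(value, upshift=False):
    """Apply CHAR column attributes on the insert path: an UPSHIFT
    column stores its value upshifted, so 'Alpha' and 'ALPHA' both
    land in the table as 'ALPHA'. Without upshift the value is
    stored as given."""
    return value.upper() if upshift else value

# Rows from the repro: insert into t values (1, 'Alpha'), (2, 'ALPHA');
# with column b declared CHAR(10) UPSHIFT.
rows = [(a, apply_char_attrs(b, upshift=True))
        for a, b in [(1, 'Alpha'), (2, 'ALPHA')]]
```

A select against the fixed table would therefore show ALPHA in column B for both rows, which is what distinguishes the fixed behavior from the output in the bug report.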