[jira] [Assigned] (TRAFODION-3185) Support splitting of Compressed Sequence file

2018-08-16 Thread Suresh Subbiah (JIRA)


 [ 
https://issues.apache.org/jira/browse/TRAFODION-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah reassigned TRAFODION-3185:
-

Assignee: Suresh Subbiah

> Support splitting of Compressed Sequence file 
> --
>
> Key: TRAFODION-3185
> URL: https://issues.apache.org/jira/browse/TRAFODION-3185
> Project: Apache Trafodion
>  Issue Type: Sub-task
>  Components: sql-cmp, sql-exe
>Affects Versions: 2.4
>Reporter: Selvaganesan Govindarajan
>Assignee: Suresh Subbiah
>Priority: Major
>
> One of the features of a sequence file is that it can be split for reading 
> even when it is compressed. Compression can be applied at the record or the 
> block level. Trafodion needs to support the ability to split such a 
> compressed sequence file.
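> Below is a minimal Java sketch (an illustration only, not Trafodion code; the
> file path and split offsets are assumptions) of how one split of a record- or
> block-compressed SequenceFile can be read by seeking to the next sync marker:
> 
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.fs.Path;
>   import org.apache.hadoop.io.SequenceFile;
>   import org.apache.hadoop.io.Writable;
>   import org.apache.hadoop.util.ReflectionUtils;
> 
>   // Read only the records belonging to the split [start, start + length).
>   // Sync markers keep the file splittable regardless of the compression type.
>   public class SequenceFileSplitReader {
>     public static void readSplit(Configuration conf, Path file,
>                                  long start, long length) throws Exception {
>       long end = start + length;
>       try (SequenceFile.Reader reader =
>                new SequenceFile.Reader(conf, SequenceFile.Reader.file(file))) {
>         Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
>         Writable val = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
>         if (start > 0) {
>           reader.sync(start);   // position at the first sync mark after 'start'
>         }
>         // Simplified stop condition; the exact split-boundary rule is glossed over.
>         while (reader.getPosition() < end && reader.next(key, val)) {
>           // process (key, val) here
>         }
>       }
>     }
>   }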



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1422) Delete column can be dramatically improved (ALTER statement)

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1422:
--
Fix Version/s: (was: 2.2.0)

> Delete column can be dramatically improved (ALTER statement)
> 
>
> Key: TRAFODION-1422
> URL: https://issues.apache.org/jira/browse/TRAFODION-1422
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-general
>Reporter: Eric Owhadi
>Assignee: Eric Owhadi
>Priority: Minor
>  Labels: performance
>
> The current code path for delete column has not been optimized and can be 
> greatly improved. See the comments below for several possible optimizations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-605) LP Bug: 1366227 - core file from shell during shutdown

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-605:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1366227 - core file from shell during shutdown
> --
>
> Key: TRAFODION-605
> URL: https://issues.apache.org/jira/browse/TRAFODION-605
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: foundation
>Reporter: Christopher Sheedy
>Assignee: Atanu Mishra
>Priority: Minor
>
> Change 332 had a failure in core-regress-seabase-cdh4.4 when doing shutdown. 
> http://logs.trafodion.org/32/332/2/check/core-regress-seabase-cdh4.4/6a54826/console.html
>  has:
> Shutting down (normal) the SQ environment!
> Fri Sep 5 18:52:26 UTC 2014
> Processing cluster.conf on local host slave01
> [$Z000BBN] Shell/shell Version 1.0.1 Release 0.8.4 (Build release 
> [0.8.3rc1-203-ga165839master_Bld177], date 20140905_175103)
> ps
> [$Z000BBN] %ps
> [$Z000BBN] NID,PID(os)  PRI TYPE STATES  NAMEPARENT  PROGRAM
> [$Z000BBN]  ---  --- --- --- 
> ---
> [$Z000BBN] 000,00031016 000 WDG  ES--A-- $WDT000 NONEsqwatchdog
> [$Z000BBN] 000,00031017 000 PSD  ES--A-- $PSD000 NONEpstartd
> [$Z000BBN] 000,00031061 001 DTM  ES--A-- $TM0NONEtm
> [$Z000BBN] 000,00013883 001 GEN  ES--A-- $Z000BBNNONEshell
> [$Z000BBN] 001,00031018 000 PSD  ES--A-- $PSD001 NONEpstartd
> [$Z000BBN] 001,00031015 000 WDG  ES--A-- $WDT001 NONEsqwatchdog
> [$Z000BBN] 001,00031139 001 DTM  ES--A-- $TM1NONEtm
> shutdown
> [$Z000BBN] %shutdown
> /home/jenkins/workspace/core-regress-seabase-cdh4.4/trafodion/core/sqf/sql/scripts/sqshell:
>  line 7: 13883 Aborted (core dumped) shell $1 $2 $3 $4 $5 $6 
> $7 $8 $9
> Issued a 'shutdown normal' request
> Shutdown in progress
> # of SQ processes: 0
> SQ Shutdown (normal) from 
> /home/jenkins/workspace/core-regress-seabase-cdh4.4/trafodion/core/sql/regress
>  Successful
> Fri Sep 5 18:52:34 UTC 2014
> + ret=0
> + [[ 0 == 124 ]]
> + echo 'Return code 0'
> Return code 0
> + sudo /usr/local/bin/hbase-sudo.sh stop
> Stopping hbase-master
> Stopping HBase master daemon (hbase-master):[  OK  ]
> stopping master.
> Return code 0
> + echo 'Return code 0'
> Return code 0
> + cd ../../sqf/rundir
> + set +x
> = seabase
> 09/05/14 18:21:07 (RELEASE build)
> 09/05/14 18:23:51  TEST010### PASS ###
> 09/05/14 18:25:38  TEST011### PASS ###
> 09/05/14 18:27:46  TEST012### PASS ###
> 09/05/14 18:29:36  TEST013### PASS ###
> 09/05/14 18:29:50  TEST014### PASS ###
> 09/05/14 18:32:05  TEST016### PASS ###
> 09/05/14 18:32:35  TEST018### PASS ###
> 09/05/14 18:50:28  TEST020### PASS ###
> 09/05/14 18:50:44  TEST022### PASS ###
> 09/05/14 18:52:26  TEST024### PASS ###
> 09/05/14 18:21:07 - 18:52:26  (RELEASE build)
> WARNING: Core files found in 
> /home/jenkins/workspace/core-regress-seabase-cdh4.4/trafodion/core :
> -rw---. 1 jenkins jenkins 44552192 Sep  5 18:52 
> sql/regress/core.slave01.13883.shell
> 
> Total Passed:   10
> Total Failures: 0
> Failure : Found 1 core files
> Build step 'Execute shell' marked build as failure
> The core file's back trace is:
> -bash-4.1$ core_bt -d sql/regress
> core file  : -rw---. 1 jenkins jenkins 44552192 Sep  5 18:52 
> sql/regress/core.slave01.13883.shell
> gdb command: gdb shell sql/regress/core.slave01.13883.shell --batch -n -x 
> /tmp/tmp.xEFWF2xufh 2>&1
> Missing separate debuginfo for
> Try: yum --disablerepo='*' --enablerepo='*-debug*' install 
> /usr/lib/debug/.build-id/1e/0a7d58f454926e2afb4797865d85801ed65ece
> [New Thread 13884]
> [New Thread 13883]
> [Thread debugging using libthread_db enabled]
> Core was generated by `shell -a'.
> Program terminated with signal 6, Aborted.
> #0  0x0030ada32635 in raise () from /lib64/libc.so.6
> #0  0x0030ada32635 in raise () from /lib64/libc.so.6
> #1  0x0030ada33e15 in abort () from /lib64/libc.so.6
> #2  0x00411982 in LIOTM_assert_fun (pp_exp=0x4d4f40 "0", 
> pp_file=0x4d175e "clio.cxx", pv_line=1022, pp_fun=0x4d2d60 "int 
> Local_IO_To_Monitor::process_notice(message_def*)") at clio.cxx:99
> #3  0x00413b26 in Local_IO_To_Monitor::process_notice (this=0x7c6e80, 
> pp_msg=) at clio.cxx:1022
> #4  0x00413e03 in Local_IO_To_Monitor::get_io (this=0x7c6e80, 
> pv_sig=, pp_siginfo=) at 
> clio.cxx:637
> #5  0x00414075 in local_monitor_reader (pp_arg=0x7916) at clio.cxx:154
> #6  0x0030ae2079d1 in start_thread () from /lib64/libpthread.so.0
> #7  0x0030adae886d in clone () from /lib64/libc.so.6



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1175) LP Bug: 1444228 - Trafodion should record the FQDN as client_name in Repository tables

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1175:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1444228 - Trafodion should record the FQDN as client_name in 
> Repository tables
> --
>
> Key: TRAFODION-1175
> URL: https://issues.apache.org/jira/browse/TRAFODION-1175
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-linux, connectivity-general
>Reporter: Chengxin Cai
>Assignee: Gao Jie
>Priority: Major
>
> select distinct application_name, rtrim(client_name) from 
> "_REPOS_".METRIC_QUERY_AGGR_TABLE;
> APPLICATION_NAME          (EXPR)
> ------------------------  -------------------------
> /usr/bin/python           sq1176
> TrafCI                    sq1176.houston.hp.com
> --- 2 row(s) selected.
> Actually, sq1176 and sq1176.houston.hp.com are the same client, but they show 
> different results depending on whether the ODBC or the JDBC client is used.
> The recorded client_name should always be the FQDN, whatever the client is.
> The same problem exists in METRIC_QUERY_TABLE and METRIC_SESSION_TABLE.
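> A minimal Java sketch (illustrative only, not the driver's actual code) of how
> a client can resolve its fully qualified host name before reporting it:
> 
>   import java.net.InetAddress;
>   import java.net.UnknownHostException;
> 
>   // Prefer the canonical (fully qualified) name over the short host name.
>   public final class ClientHostName {
>     public static String fqdn() {
>       try {
>         InetAddress local = InetAddress.getLocalHost();
>         String name = local.getCanonicalHostName();   // e.g. sq1176.houston.hp.com
>         return (name != null && !name.isEmpty()) ? name : local.getHostName();
>       } catch (UnknownHostException e) {
>         return "unknown";
>       }
>     }
>   }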



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1194) LP Bug: 1446917 - T2 tests don't include parallel plans

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1194:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1446917 - T2 tests don't include  parallel plans
> 
>
> Key: TRAFODION-1194
> URL: https://issues.apache.org/jira/browse/TRAFODION-1194
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-general
>Reporter: Sandhya Sundaresan
>Assignee: Anuradha Hegde
>Priority: Major
>
> The T2 test suite should include proper coverage of parallel plans involving 
> ESPs. 
> This is needed to ensure the IPC mechanism does not regress with any of the 
> current changes underway. 
> This coverage is especially important w.r.t. the new work being done for the 
> multi-threaded DCS server. 
> At least one test with ESPs should also be made a gating test in Jenkins.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1220) LP Bug: 1450515 - Compilation time for NOT IN predicate *much* longer than for IN predicate

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1220:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1450515 - Compilation time for NOT IN predicate *much* longer than 
> for IN predicate
> ---
>
> Key: TRAFODION-1220
> URL: https://issues.apache.org/jira/browse/TRAFODION-1220
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Apache Trafodion
>Assignee: Qifan Chen
>Priority: Major
>
> If you have a simple SELECT query with an IN predicate containing a list of, 
> say, 930 items, the query will compile (on one of the sqws*** machines) in 
> approximately 13.5 seconds. If you change the 'IN' to 'NOT IN', the 
> compilation time goes to over 50 seconds.
> Furthermore, once the changes get made to allow IN lists and NOT IN lists 
> that are, say, 3000 items long, if you then try the same SELECT query with an 
> IN list of 3000 unique items, the compilation time is about 18-20 
> seconds. However, if you then change the query to specify NOT IN, the 
> compilation time goes to 15-20 *minutes*. NOTE: Currently, IN lists are 
> limited to about 2230 unique items and NOT IN lists are limited to about 930 
> unique items (beyond those lengths you get a stack overflow). Once a fix to 
> LP 1323826 gets checked into the source base, you should be able to have IN 
> lists or NOT IN lists with as many as 3000 unique items.
> An example SELECT query (with 930 unique items) that exhibits this behavior 
> is as follows:
> --create table mytable97 (partnum decimal(4,0), partname char(18));
> prepare s1 from select * from mytable97 where partname in (
>   'A10','B10','C10','D10','E10','F10','G10','H10','I10','J10'
>  ,'K10','L10','M10','N10','O10','P10','Q10','R10','S10','T10'
>  ,'AA10','AB10','AC10','AD10','AE10','AF10','AG10','AH10','AI10','AJ10'
>  ,'BA10','BB10','BC10','BD10','BE10','BF10','BG10','BH10','BI10','BJ10'
>  ,'CA10','CB10','CC10','CD10','CE10','CF10','CG10','CH10','CI10','CJ10'
>  ,'DA10','DB10','DC10','DD10','DE10','DF10','DG10','DH10','DI10','DJ10'
>  ,'EA10','EB10','EC10','ED10','EE10','EF10','EG10','EH10','EI10','EJ10'
>  ,'FA10','FB10','FC10','FD10','FE10','FF10','FG10','FH10','FI10','FJ10'
>  ,'GA10','GB10','GC10','GD10','GE10','GF10','GG10','GH10','GI10','GJ10'
>  ,'HA10','HB10','HC10','HD10','HE10','HF10','HG10','HH10','HI10','HJ10'
>  ,'A11','B11','C11','D11','E11','F11','G11','H11','I11','J11'
>  ,'K11','L11','M11','N11','O11','P11','Q11','R11','S11','T11'
>  ,'AA11','AB11','AC11','AD11','AE11','AF11','AG11','AH11','AI11','AJ11'
>  ,'BA11','BB11','BC11','BD11','BE11','BF11','BG11','BH11','BI11','BJ11'
>  ,'CA11','CB11','CC11','CD11','CE11','CF11','CG11','CH11','CI11','CJ11'
>  ,'DA11','DB11','DC11','DD11','DE11','DF11','DG11','DH11','DI11','DJ11'
>  ,'EA11','EB11','EC11','ED11','EE11','EF11','EG11','EH11','EI11','EJ11'
>  ,'FA11','FB11','FC11','FD11','FE11','FF11','FG11','FH11','FI11','FJ11'
>  ,'GA11','GB11','GC11','GD11','GE11','GF11','GG11','GH11','GI11','GJ11'
>  ,'HA11','HB11','HC11','HD11','HE11','HF11','HG11','HH11','HI11','HJ11'
>  ,'A12','B12','C12','D12','E12','F12','G12','H12','I12','J12'
>  ,'K12','L12','M12','N12','O12','P12','Q12','R12','S12','T12'
>  ,'AA12','AB12','AC12','AD12','AE12','AF12','AG12','AH12','AI12','AJ12'
>  ,'BA12','BB12','BC12','BD12','BE12','BF12','BG12','BH12','BI12','BJ12'
>  ,'CA12','CB12','CC12','CD12','CE12','CF12','CG12','CH12','CI12','CJ12'
>  ,'DA12','DB12','DC12','DD12','DE12','DF12','DG12','DH12','DI12','DJ12'
>  ,'EA12','EB12','EC12','ED12','EE12','EF12','EG12','EH12','EI12','EJ12'
>  ,'FA12','FB12','FC12','FD12','FE12','FF12','FG12','FH12','FI12','FJ12'
>  ,'GA12','GB12','GC12','GD12','GE12','GF12','GG12','GH12','GI12','GJ12'
>  ,'HA12','HB12','HC12','HD12','HE12','HF12','HG12','HH12','HI12','HJ12'
>  ,'A13','B13','C13','D13','E13','F13','G13','H13','I13','J13'
>  ,'K13','L13','M13','N13','O13','P13','Q13','R13','S13','T13'
>  ,'AA13','AB13','AC13','AD13','AE13','AF13','AG13','AH13','AI13','AJ13'
>  ,'BA13','BB13','BC13','BD13','BE13','BF13','BG13','BH13','BI13','BJ13'
>  ,'CA13','CB13','CC13','CD13','CE13','CF13','CG13','CH13','CI13','CJ13'
>  ,'DA13','DB13','DC13','DD13','DE13','DF13','DG13','DH13','DI13','DJ13'
>  ,'EA13','EB13','EC13','ED13','EE13','EF13','EG13','EH13','EI13','EJ13'
>  ,'FA13','FB13','FC13','FD13','FE13','FF13','FG13','FH13','FI13','FJ13'
>  ,'GA13','GB13','GC13','GD13','GE13','GF13','GG13','GH13','GI13','GJ13'
>  ,'HA13','HB13','HC13','HD13','HE13','HF13','HG13','HH13','HI13','HJ13'
>  ,'A14','B14','C14','D14','E14','F14','G14','H14','I14','J14'
>  ,'K14','L14','M14','N14','O14','P14','Q14','R14','S14','T14'
>  

[jira] [Updated] (TRAFODION-464) LP Bug: 1344129 - add support for enabling/disabling constraints

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-464:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1344129 - add support for enabling/disabling constraints
> 
>
> Key: TRAFODION-464
> URL: https://issues.apache.org/jira/browse/TRAFODION-464
> Project: Apache Trafodion
>  Issue Type: Wish
>  Components: sql-cmu
>Reporter: Apache Trafodion
>Assignee: Anoop Sharma
>Priority: Minor
>
> Currently Trafodion does not support enabling and disabling constraints.
> We may need to add support for:
> - disabling one constraint
> - disabling all constraints
> - enabling one constraint, and applying the constraint as part of enabling 
> it
> - enabling all constraints, and applying them as part of enabling them



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1037) LP Bug: 1428857 - CMP: create index stmt sometimes generates core

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1037:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1428857 - CMP: create index stmt sometimes generates core
> -
>
> Key: TRAFODION-1037
> URL: https://issues.apache.org/jira/browse/TRAFODION-1037
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Ravisha Neelakanthappa
>Assignee: Qifan Chen
>Priority: Major
>
> Trying to create an index from a sqlci session sometimes results in a core, 
> though the index gets created. To reproduce this problem, try creating as 
> many indexes as possible from the same sqlci session.
> nravishag4t2966$ sqlci
> Trafodion Conversational Interface 1.1.0
> (c) Copyright 2014 Hewlett-Packard Development Company, LP.
> >>set schema trafodion.orderentry;
> --- SQL operation complete.
> >>CREATE INDEX CUSTOMER_DID ON CUSTOMER
>  (
> C_D_ID ASC
>   )
> ;+>+>+>+>+>
> --- SQL operation complete.
> >>CREATE INDEX CUSTOMER_ID ON CUSTOMER
>  (
> C_ID ASC
>   )
> ;+>+>+>+>+>
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x75c78058, pid=8025, tid=140737178194496
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_67-b01) (build 
> 1.7.0_67-b01)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libustat.so+0x102058]  NormValue::copy(NormValue const&)+0x10
> #
> nravishag4t2966$ ls
> core.g4t2966.houston.hp.com.8025.tdm_arkcmp
> The core file indicates that arkcmp crashes as follows:
> nravishag4t2966$ mygdb tdm_arkcmp core.g4t2966.houston.hp.com.8025.tdm_arkcmp
> Core was generated by `tdm_arkcmp SQMON1.1 0 0 008025 $Z0006JA 
> 16.235.163.24:33326 4 0'.
> Program terminated with signal 6, Aborted.
> #0  0x003980a328a5 in raise () from /lib64/libc.so.6
> (gdb) bt
> #0  0x003980a328a5 in raise () from /lib64/libc.so.6
> #1  0x003980a3400d in abort () from /lib64/libc.so.6
> #2  0x70d78a55 in os::abort(bool) ()
>from /opt/home/tools/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #3  0x70ef8f87 in VMError::report_and_die() ()
>from /opt/home/tools/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #4  0x70d7d96f in JVM_handle_linux_signal ()
>from /opt/home/tools/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #5  
> #6  0x75c78058 in NormValue::copy (this=0x7fffdbbb61a0, other=...)
> at ../optimizer/EncodedValue.h:70
> #7  0x75c780b7 in NormValue::operator= (this=0x7fffdbbb61a0, 
> other=...)
> at ../optimizer/EncodedValue.h:87
> #8  0x75c7857e in EncodedValue::operator= (this=0x7fffdbbb6190)
> at ../optimizer/EncodedValue.h:154
> #9  0x74b47bf9 in NACollection::copy 
> (this=0x7fffdbbafab8,
> other=...) at ../common/Collections.cpp:61
> #10 0x74b46c56 in NAList::operator= 
> (this=0x7fffdbbafab8,
> other=...) at ../common/Collections.cpp:808
> #11 0x74b4685f in MCboundaryValueList::operator= (this=0x7fffdbbafab8)
> at ../optimizer/Stats.h:453
> #12 0x74b1c3fe in HistInt::copy (this=0x7fffdbbafa80, other=...)
> at ../optimizer/Stats.cpp:112
> #13 0x75c7848f in HistInt::operator= (this=0x7fffdbbafa80, other=...)
> at ../optimizer/Stats.h:587
> #14 0x746d62bd in NACollection::copy (this=0x7fffdbb906a8, 
> other=...)
> at ../common/Collections.cpp:61
> #15 0x746cf77d in NACollection::NACollection 
> (this=0x7fffdbb906a8,
> #16 0x746ca547 in NAList::NAList (this=0x7fffdbb906a8, 
> other=...,
> heap=0x7fffdca27eb8) at ../common/Collections.h:1968
> #17 0x746c7854 in Histogram::Histogram (this=0x7fffdbb906a8, hist=...,
> h=0x7fffdca27eb8) at ../optimizer/Stats.h:695
> #18 0x74b2801d in ColStats::getHistogramToModify (this=0x7fffdbb8a3e0)
> at ../optimizer/Stats.cpp:3443
> #19 0x748bf07b in NATable::getStatistics (this=0x7fffdb981ba0)
> at ../optimizer/NATable.cpp:6163
> #20 0x74b6da18 in TableDesc::getTableColStats (this=0x7fffdb98a478)
> at ../optimizer/TableDesc.cpp:382
> #21 0x77ac8e2a in TableDesc::tableColStats (this=0x7fffdb98a478)
> at ../optimizer/TableDesc.h:130
> #22 0x749a1afa in Scan::synthLogProp (this=0x7fffdca3b168,
> normWAPtr=0x7fff54e0) at ../optimizer/OptLogRelExpr.cpp:5062
> #23 0x7498f8b9 in RelExpr::synthLogProp (this=0x7fffdca2f680,
> normWAPtr=0x7fff54e0) at ../optimizer/OptLogRelExpr.cpp:619
> #24 0x749a355b in GenericUpdate::synthLogProp (this=0x7fffdca2f680,
> normWAPtr=0x7fff54e0) at ../optimizer/OptLogRelExpr.cpp:5448
> #25 0x7498f8b9 in 

[jira] [Updated] (TRAFODION-1098) LP Bug: 1437314 - NULL value for severity and cpu in some monitor log files

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1098:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1437314 - NULL value for severity and cpu in some monitor log files
> ---
>
> Key: TRAFODION-1098
> URL: https://issues.apache.org/jira/browse/TRAFODION-1098
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: foundation
>Reporter: Gao, Rui-Xian
>Assignee: Atanu Mishra
>Priority: Major
>
> severity, component, node_number and cpu are NULL in some monitor log files, 
> and some entries also show '2181' for severity and component.
> System info – centos-mapr1.hpl.hp.com (port num: 37800)
> results --
> 1.   CPU is NULL in some monitor logs.
> SQL>select [first 6] * from udf(event_log_reader('f')) where cpu is NULL;
> LOG_TS SEVERITY   COMPONENTNODE_NUMBER 
> CPU PIN PROCESS_NAME SQL_CODEQUERY_ID 
>   
>   MESSAGE 
>  
> LOG_FILE_NODE LOG_FILE_NAME   
>  
> LOG_FILE_LINE PARSE_STATUS
> -- --  --- 
> --- ---  --- 
> 
>  
> 
>  - 
> 
>  - 
> 2015-03-24 17:36:00.606000 INFO   MON3
> NULL5775 $MONITORNULL NULL
>   
>TID: 5775, Message ID: 101020103, [CMonitor::main], 
> monitor Version 1.0.1 Release 1.1.0 (Build release [1.0.0-237-gf89a122_Bld9], 
> branch f89a122-master, date 20150324_083001), Started! CommType: Sockets  
>2 mon.20150324.17.35.59.centos-mapr3.5775.log  
> 1
> 2015-03-24 17:36:00.607000 INFO   MON3
> NULL5775 $MONITORNULL NULL
>   
>TID: 5775, Message ID: 101010401, [CCluster::CCluster] 
> Validation of node down is enabled
> 2 mon.20150324.17.35.59.centos-mapr3.5775.log 
>  2
> 2015-03-24 17:36:40.004000 NULL   NULLNULL
> NULLNULL NULLNULL NULL
>   
>:24095(0x7f9cc387fac0):ZOO_INFO@log_env@723: Client 
> environment:os.name=Linux 
>0 mon.20150324.17.35.59.centos-mapr5.23499.log 
> 8 
> E
> 2015-03-24 17:36:40.004000 NULL   NULLNULL
> NULLNULL NULLNULL NULL
>   
>:24095(0x7f9cc387fac0):ZOO_INFO@log_env@724: Client 
> environment:os.arch=2.6.32-358.23.2.el6.x86_64
>0 mon.20150324.17.35.59.centos-mapr5.23499.log 
> 9 
> E
> 2015-03-24 17:36:40.004000 NULL   NULLNULL
> NULLNULL NULLNULL NULL
>   
>:24095(0x7f9cc387fac0):ZOO_INFO@log_env@725: Client 
> environment:os.version=#1 SMP Wed Oct 16 18:37:12 UTC 2013
>0 mon.20150324.17.35.59.centos-mapr5.23499.log  

[jira] [Updated] (TRAFODION-986) LP Bug: 1419906 - pthread_mutex calls do not always check return code

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-986:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1419906 - pthread_mutex calls do not always check return code
> -
>
> Key: TRAFODION-986
> URL: https://issues.apache.org/jira/browse/TRAFODION-986
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: foundation
>Reporter: dave george
>Assignee: Prashanth Vasudev
>Priority: Major
>
> In quite a few places, the code shows:
>   pthread_mutex_lock() or pthread_mutex_unlock()
> The return code from these calls should be checked.
> A more generic summary would be something like:
> return codes from functions that return error codes should be checked.
> It may be possible to use Coverity or another such tool to help automate 
> detection of the more general issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1099) LP Bug: 1437384 - sqenvcom.sh. Our CLASSPATH is too big.

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1099:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1437384 - sqenvcom.sh. Our CLASSPATH is too big.
> ---
>
> Key: TRAFODION-1099
> URL: https://issues.apache.org/jira/browse/TRAFODION-1099
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-general
>Reporter: Guy Groulx
>Assignee: Sandhya Sundaresan
>Priority: Major
>
> sqenvcom.sh sets up the CLASSPATH for Trafodion.
> With HDP 2.2, this CLASSPATH is huge. On one of our systems, echo $CLASSPATH 
> | wc -c returns > 13000 bytes.
> I believe Java/Linux truncates these variables when they are too big.
> Since going to HDP 2.2, we've been hit with "class not found" errors 
> even though the jar is in the CLASSPATH.
> http://stackoverflow.com/questions/1237093/using-wildcard-for-classpath 
> explains that we can use wildcards in the CLASSPATH to reduce it.
> Rules:
> Use * and not *.jar. Java assumes that a * in the classpath stands for *.jar.
> When using export CLASSPATH, use quotes so that * is not expanded. E.g.:
> export CLASSPATH="/usr/hdp/current/hadoop-client/lib/*:${CLASSPATH}"
> We need to modify our sqenvcom.sh to use wildcards instead of listing 
> individual jars.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-909) LP Bug: 1412641 - log4cpp -- Node number in master*.log is always 0

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-909:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1412641 - log4cpp -- Node number in master*.log is always 0
> ---
>
> Key: TRAFODION-909
> URL: https://issues.apache.org/jira/browse/TRAFODION-909
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Gao, Rui-Xian
>Assignee: Sandhya Sundaresan
>Priority: Major
>
> Node number in master*.log is always 0; tm.log has the correct number.
> SQL>select [first 5] * from udf(event_log_reader('f')) where 
> log_file_name='master_exec_0_7476.log';
> LOG_TS SEVERITY   COMPONENTNODE_NUMBER 
> CPU PIN PROCESS_NAME SQL_CODEQUERY_ID 
>   
>   MESSAGE 
>  
> LOG_FILE_NODE LOG_FILE_NAME   
>  
> LOG_FILE_LINE PARSE_STATUS
> -- --  --- 
> --- ---  --- 
> 
>  
> 
>  - 
> 
>  - 
> 2015-01-19 04:29:53.454000 INFO   SQL.ESP0
>2   24361 $Z020JW1NULL NULL
>   
>An ESP process is launched.
>   
> 0 master_exec_0_7476.log  
>  1
> 2015-01-19 04:29:53.462000 INFO   SQL.ESP0
>2   24360 $Z020JW0NULL NULL
>   
>An ESP process is launched.
>   
> 0 master_exec_0_7476.log  
>  2
> 2015-01-19 04:29:53.452000 INFO   SQL.ESP0
>5   31881 $Z050R0WNULL NULL
>   
>An ESP process is launched.
>   
> 0 master_exec_0_7476.log  
>  1
> 2015-01-19 04:35:23.101000 INFO   SQL.ESP0
>51892 $Z0501J2NULL NULL
>   
>An ESP process is launched.
>   
> 0 master_exec_0_7476.log  
>  2
> 2015-01-19 04:29:53.454000 INFO   SQL.ESP0
>2   24361 $Z020JW1NULL NULL
>   
>An ESP process is launched.
>   
> 0 master_exec_0_7476.log  
>  1
> --- 5 row(s) selected.
> SQL>select [first 5] * from udf(event_log_reader('f')) where log_file_name 
> like 'tm%';
> LOG_TS SEVERITY   COMPONENT   

[jira] [Updated] (TRAFODION-912) LP Bug: 1412806 - log4cpp : incorrect timestamp in logs for SQL info

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-912:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1412806 - log4cpp : incorrect timestamp in logs for SQL info
> 
>
> Key: TRAFODION-912
> URL: https://issues.apache.org/jira/browse/TRAFODION-912
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Gao, Rui-Xian
>Assignee: Sandhya Sundaresan
>Priority: Major
>
> There are messages with a LOG_TS later than the current timestamp logged into 
> the log file.
> The current time is '2015-01-20 05:43:05', but there are messages with 
> '2015-01-20 13:16:16' in the log; this happens only for SQL INFO.
> [trafodion@centos-mapr1 logs]$ date
> Tue Jan 20 05:43:05 PST 2015
> SQL>select * from udf(event_log_reader('f')) where log_ts > 
> timestamp'2015-01-20 06:00:00.00' order by 1;
> LOG_TS SEVERITY   COMPONENTNODE_NUMBER 
> CPU PIN PROCESS_NAME SQL_CODEQUERY_ID 
>   
>   MESSAGE 
>  
> LOG_FILE_NODE LOG_FILE_NAME   
>  
> LOG_FILE_LINE PARSE_STATUS
> -- --  --- 
> --- ---  --- 
> 
>  
> 
>  - 
> 
>  - 
> 2015-01-20 06:57:16.974000 INFO   SQL.ESP0
>5   26257 $Z050LF7NULL NULL
>   
>An ESP process is launched.
>   
> 0 master_exec_1_3719.log  
>  1
> 2015-01-20 06:57:16.974000 INFO   SQL.ESP0
>5   26257 $Z050LF7NULL NULL
>   
>An ESP process is launched.
>   
> 0 master_exec_1_3719.log  
>  1
> 2015-01-20 06:57:17.011000 INFO   SQL.ESP0
>31982 $Z0301LMNULL NULL
>   
>An ESP process is launched.
>   
> 0 master_exec_1_3719.log  
>  1
> 2015-01-20 06:57:17.011000 INFO   SQL.ESP0
>31982 $Z0301LMNULL NULL
>   
>An ESP process is launched.
>   
> 0 master_exec_1_3719.log  
>  1
> 2015-01-20 06:57:17.011000 INFO   SQL.ESP0
>31982 $Z0301LMNULL NULL
>   
>An ESP process is launched.
>   
> 0 master_exec_1_3719.log  
> 

[jira] [Updated] (TRAFODION-785) LP Bug: 1395201 - MSG in sqenvcom.sh about Hadoop not found causing sftp to not work.

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-785:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1395201 - MSG in sqenvcom.sh about Hadoop not found causing sftp to 
> not work.
> 
>
> Key: TRAFODION-785
> URL: https://issues.apache.org/jira/browse/TRAFODION-785
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: foundation
>Reporter: Guy Groulx
>Assignee: Atanu Mishra
>Priority: Major
>
> A check is now in sqenvcom.sh which verifies whether Hadoop is available on 
> the node. If it is not, it displays the message "ERROR: Did not find supported 
> Hadoop distribution".
> Since sqenvcom.sh is sourced via .bashrc, i.e. on every ssh, it causes issues.
> E.g. you cannot connect via sftp to a system where this message is displayed, 
> because sftp does not recognize the message being returned.
> I understand that Trafodion software will probably be installed mostly on 
> nodes where Hadoop is installed, but in cases where it is not, the ability of 
> ssh or sftp to connect successfully should not be affected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-952) LP Bug: 1415156 - DELETE concurrent with index creation causes corruption

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-952:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1415156 - DELETE concurrent with index creation causes corruption
> -
>
> Key: TRAFODION-952
> URL: https://issues.apache.org/jira/browse/TRAFODION-952
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Apache Trafodion
>Assignee: Prashanth Vasudev
>Priority: Major
>
> If queries delete rows from a table during CREATE INDEX there is a risk that 
> the index will have more rows than the base table. See the example sqlci 
> session quoted below. Note that the delete happens in the background with no 
> output shown. The test script is attached.
> >>
> >>obey index_corrupter_traf(cr_table);
> >>create table t113b  (uniq int not null,
> +>   c100k int,   c10K int ,   c1K   int,   c100  int,   
> +>   c10   int,   c1int,   primary key (uniq)  );
> --- SQL operation complete.
> >>
> >>prepare s1 from upsert using load into t113b select
> +>0 + (100000 * x100000) + (10000 * x10000) + (1000 * x1000) + 
> +>  (100 * x100) + (10 * x10) + (1 * x1),
> +>0 + (10000 * x10000) + (1000 * x1000) + (100 * x100) + 
> +>  (10 * x10) + (1 * x1),
> +>0 + (1000 * x1000) + (100 * x100) + (10 * x10) + (1 * x1),
> +>0 + (100 * x100) + (10 * x10) + (1 * x1),
> +>0 + (10 * x10) + (1 * x1),
> +>0 + (1 * x1),
> +>0
> +>from (values(0)) t
> +>transpose 0,1,2,3,4,5,6,7,8,9 as x100000
> +>transpose 0,1,2,3,4,5,6,7,8,9 as x10000
> +>transpose 0,1,2,3,4,5,6,7,8,9 as x1000
> +>transpose 0,1,2,3,4,5,6,7,8,9 as x100
> +>transpose 0,1,2,3,4,5,6,7,8,9 as x10
> +>transpose 0,1,2,3,4,5,6,7,8,9 as x1;
> --- SQL command prepared.
> >>
> >>explain options 'f' s1;
> LC   RC   OP   OPERATOR              OPT       DESCRIPTION           CARD
> ---- ---- ---- --------------------  --------  --------------------  ---------
> 9    .    10   root                                                  1.00E+007
> 7    8    9    tuple_flow                                            1.00E+007
> .    .    8    trafodion_load                  T113B                 1.00E+000
> 6    .    7    transpose                                             1.00E+006
> 5    .    6    transpose                                             1.00E+005
> 4    .    5    transpose                                             1.00E+004
> 3    .    4    transpose                                             1.00E+003
> 2    .    3    transpose                                             1.00E+002
> 1    .    2    transpose                                             1.00E+001
> .    .    1    values                                                1.00E+000
> --- SQL operation complete.
> >>
> >>display qid for s1;
> QID is MXID11151972122891392598747010206U300_478_S1
> QID details: 
> 
>   Segment Num:  0
>   Segment Name: 
>   Cpu:  0
>   Pin:  15197
>   ExeStartTime: 212289139259874701= 2015/01/27 17:20:59.874701 LCT
>   SessionNum:   2
>   UserName: U3
>   SessionName:  NULL
>   QueryNum: 478
>   StmtName: S1
>   SessionId:MXID11151972122891392598747010206U300
> >>
> >>execute s1;
> --- 1000000 row(s) inserted.
> >>
> >>get statistics for qid current;
> Qid  
> MXID11151972122891392598747010206U300_478_S1
> Compile Start Time   2015/01/27 17:21:18.824433
> Compile End Time 2015/01/27 17:21:20.080504
> Compile Elapsed Time 0:00:01.256071
> Execute Start Time   2015/01/27 17:21:20.124949
> Execute End Time 2015/01/27 17:22:24.244243
> Execute Elapsed Time 0:01:04.119294
> State                    CLOSE
> Rows Affected            1,000,000
> SQL Error Code   0
> Stats Error Code 0
> Query Type   SQL_INSERT_NON_UNIQUE
> Sub Query Type   SQL_STMT_NA
> Estimated Accessed Rows  0
> Estimated Used Rows  0
> Parent Qid   NONE
> Parent Query System  NONE
> Child QidNONE
> Number of SQL Processes  1
> Number of Cpus   1
> Transaction Id   -1
> Source String            upsert using load into t113b select 0 + (100000 * 
> x100000) + (10000 * x10000) + (1000 * x1000) + (100 * x100) + (10 * x10) 
> + (1 * x1), 0 + (10000 * x10000) + (1000 * x1000) + (100 * x100) + (10 * 
> x10) + (1 * x1), 0 + (1000 * x1000) + (100 * x100) +
> SQL Source Length        613
> Rows Returned            0
> First Row Returned Time  -1
> Last Error before AQR    0
> Number of AQR retries    0
> Delay before AQR 0
> No. of times reclaimed   0
> Cancel Time  -1
> Last Suspend Time-1
> Stats 

[jira] [Updated] (TRAFODION-880) LP Bug: 1409928 - Number of transactions being recovered is always 0

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-880:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1409928 - Number of transactions being recovered is always 0
> 
>
> Key: TRAFODION-880
> URL: https://issues.apache.org/jira/browse/TRAFODION-880
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Reporter: Marvin Anderson
>Assignee: Atanu Mishra
>Priority: Major
>
> During sqstart, a message displays the number of transactions being recovered. 
> It is always zero, even when transactions are being recovered. It should 
> reflect a non-zero count. Whether that is actual transactions, regions, or 
> TMs recovering doesn't matter, but it should display some value.
> Assigned to LaunchPad User Adriana Fuentes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1278) LP Bug: 1465899 - Create table LIKE hive table fails silently

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1278:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1465899 - Create table LIKE hive table fails silently
> -
>
> Key: TRAFODION-1278
> URL: https://issues.apache.org/jira/browse/TRAFODION-1278
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-general
>Reporter: Barry Fritchman
>Assignee: Qifan Chen
>Priority: Critical
>
> When using the CREATE TABLE <target> LIKE <source> syntax with a Hive table as 
> <source>, the statement appears to execute successfully, but the table is in 
> fact not created:
> >>create table traf_orders like hive.hive.orders;
> --- SQL operation complete.
> >>invoke traf_orders;
> *** ERROR[4082] Object TRAFODION.SEABASE.TRAF_ORDERS does not exist or is 
> inaccessible.
> --- SQL operation failed with errors.
> >>
> The problem seems to occur only when a Hive table is the source.  This 
> problem causes an error when attempting to update statistics for a hive table 
> using sampling, because the sample table is not created.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-444) LP Bug: 1342180 - Memory leak in cmp context heap when cli context is deallocated

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-444:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1342180 - Memory leak in cmp context heap when cli context is 
> deallocated
> -
>
> Key: TRAFODION-444
> URL: https://issues.apache.org/jira/browse/TRAFODION-444
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Pavani Puppala
>Assignee: Qifan Chen
>Priority: Major
>
> When SQL_EXEC_DeleteContext is called, the second cmp context used by the 
> reentrant compiler for metadata queries is not deallocated. This causes a 
> memory leak, where the cmp context heap allocated for the metadata cmp 
> context is not deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-608) LP Bug: 1367413 - metadata VERSIONS table need to be updated with current version

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-608:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1367413 - metadata VERSIONS table need to be updated with current 
> version
> -
>
> Key: TRAFODION-608
> URL: https://issues.apache.org/jira/browse/TRAFODION-608
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Anoop Sharma
>Assignee: Anoop Sharma
>Priority: Major
>
> The metadata contains a VERSIONS table which holds the released software 
> version. This table can be accessed by users through the SQL interface.
> It should be updated with the latest software version values whenever the 
> software is installed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-284) LP Bug: 1321058 - TRAFDSN should be eliminated, Trafodion ODBC linux driver should use the standard config files, odbc.ini and odbcinst.ini

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-284:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1321058 - TRAFDSN should be eliminated, Trafodion ODBC linux driver 
> should use the standard config files, odbc.ini and odbcinst.ini
> ---
>
> Key: TRAFODION-284
> URL: https://issues.apache.org/jira/browse/TRAFODION-284
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-linux
>Reporter: Aruna Sadashiva
>Assignee: Anuradha Hegde
>Priority: Major
>
> The Trafodion ODBC driver should use the standard ODBC ini files for 
> configuration instead of the custom config file TRAFDSN. If we don't install 
> to the default location, the user has to have TRAFDSN in the application 
> directory; there is no way to specify the TRAFDSN location to the driver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1242) LP Bug: 1457207 - Create table and constraint using the same name returns error 1043

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1242:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1457207 - Create table and constraint using the same name returns 
> error 1043
> 
>
> Key: TRAFODION-1242
> URL: https://issues.apache.org/jira/browse/TRAFODION-1242
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Weishiun Tsai
>Assignee: Anoop Sharma
>Priority: Critical
>
> Creating a table with a constraint that has the same name as the table itself 
> now returns a perplexing 1043 error complaining that the constraint already 
> exists.  This is a regression introduced sometime between the v0513 build and 
> the v0519 build.  It had been working fine until the v0513 build, where SQL 
> tests were last run.
> This is seen on the v0519 build.
> --
> Here is the entire script to reproduce it:
> create schema mytest;
> set schema mytest;
> create table t1 (c1 int , c2 int constraint t1 check (c2 > 10));
> drop schema mytest cascade;
> --
> Here is the execution output:
> >>create schema mytest;
> --- SQL operation complete.
> >>set schema mytest;
> --- SQL operation complete.
> >>create table t1 (c1 int , c2 int constraint t1 check (c2 > 10));
> *** ERROR[1043] Constraint TRAFODION.MYTEST.T1 already exists.
> *** ERROR[1029] Object TRAFODION.MYTEST.T1 could not be created.
> --- SQL operation failed with errors.
> >>drop schema mytest cascade;
> --- SQL operation complete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1241) LP Bug: 1456304 - mtserver - spjs with resultsets failing - no results returned

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1241:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1456304 - mtserver - spjs with resultsets failing - no results 
> returned
> ---
>
> Key: TRAFODION-1241
> URL: https://issues.apache.org/jira/browse/TRAFODION-1241
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-jdbc-t2
>Reporter: Aruna Sadashiva
>Assignee: Anuradha Hegde
>Priority: Critical
>
> SPJ tests with result sets failed because there are no result sets returned 
> from the procedure. The same SPJ works from sqlci, but fails from trafci. 
> Steps:
> -
> SQL>create table t1 (a int not null primary key, b varchar(20));
> SQL>insert into t1 values(111, 'a');
> SQL>insert into t1 values(222, 'b');
> SQL>create library testrs file '/opt/home/trafodion/SPJ/testrs.jar';
> SQL>create procedure RS200()
>language java
>parameter style java
>external name 'Testrs.RS200'
>dynamic result sets 1
>library testrs;
> SQL>call rs200();
> --- SQL operation complete.
> -
> The expected result is:
> SQL >call rs200();
> AB
> ---  
> 111  a
> 222  b
> --- 2 row(s) selected.
> --- SQL operation complete.
> The jar file, testrs.jar, is on amber7 under /opt/home/trafodion/SPJ.  It has 
> the SPJ procedure:
>public static void RS200(ResultSet[] paramArrayOfResultSet)
>throws Exception
>{
>  String str1 = "jdbc:default:connection";
>  
>  String str2 = "select * from t1";
>  Connection localConnection = DriverManager.getConnection(str1);
>  Statement localStatement = localConnection.createStatement();
>  paramArrayOfResultSet[0] = localStatement.executeQuery(str2);
>}
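> For reference, here is a minimal JDBC sketch of how a client invokes the
> procedure and reads its dynamic result set (the connection URL, user, and
> password below are placeholders, not values from the test setup):
> 
>   import java.sql.CallableStatement;
>   import java.sql.Connection;
>   import java.sql.DriverManager;
>   import java.sql.ResultSet;
> 
>   public class CallRS200 {
>     public static void main(String[] args) throws Exception {
>       try (Connection conn = DriverManager.getConnection(
>                "jdbc:t4jdbc://myhost:23400/:", "user", "pw");  // placeholder URL
>            CallableStatement cs = conn.prepareCall("{call RS200()}")) {
>         if (cs.execute()) {                    // true when a result set is returned
>           try (ResultSet rs = cs.getResultSet()) {
>             while (rs.next()) {
>               System.out.println(rs.getInt("A") + "  " + rs.getString("B"));
>             }
>           }
>         }
>       }
>     }
>   }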



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1251) LP Bug: 1459804 - mtserver - ODBC catalog apis fail with comm link failure

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1251:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1459804 - mtserver - ODBC catalog apis fail with comm link failure
> --
>
> Key: TRAFODION-1251
> URL: https://issues.apache.org/jira/browse/TRAFODION-1251
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-dcs
>Reporter: Aruna Sadashiva
>Assignee: Rao Kakarlamudi
>Priority: Critical
>
> With the mtserver config, ODBC catalog APIs are failing with a communication 
> link failure. 
> SQLColumns:
>   In: StatementHandle 
> = 0x005ECF00, 
>   
> CatalogName = "TRAFODION", NameLength1 = 9, SchemaName = "T4QA", NameLength2 
> = 4, 
>   
> TableName = "TABALL", NameLength3 = 6, ColumnName = "%", NameLength4 = 1
>   Return: SQL_ERROR=-1
>   stmt:   szSqlState = "08S01", 
> *pfNativeError = 98, *pcbErrorMsg = 255, *ColumnNumber = -1, *RowNumber = -2
>   
> MessageText = "[TRAF][Trafodion ODBC Driver] Communication link failure. The 
> server timed out or disappeared Platform: PC, Transport: TCPIP, Api: 
> GETCATALOGS, Error type: DRIVER, Process: 
> TCP:g4q0014.houston.hp.com/37806:ODBC, Operation: DO_WRITE_READ, function: 
> RECV_GE"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1250) LP Bug: 1459763 - mtserver - explain plan fails with 'provided input stmt does not exist', works from sqlci

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1250:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1459763 - mtserver - explain plan fails with 'provided input stmt 
> does not exist', works from sqlci
> ---
>
> Key: TRAFODION-1250
> URL: https://issues.apache.org/jira/browse/TRAFODION-1250
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-dcs
>Reporter: Aruna Sadashiva
>Assignee: Rao Kakarlamudi
>Priority: Critical
>
> Explain is not working with mtserver through JDBC. It works OK from sqlci.
> SQL>explain options 'f' select * from t4qa.taball;
>  
> LC   RC   OP   OPERATOR              OPT       DESCRIPTION           CARD
> ---- ---- ---- --------------------  --------  --------------------  ---------
> 1    .    2    root                                                  1.00E+004
> .    .    1    trafodion_scan                  TABALL                1.00E+004
> --- SQL operation complete.
> SQL>prepare s02 from select * from t4qa.taball;
> --- SQL command prepared.
> SQL>explain s02;
> *** ERROR[8804] The provided input statement does not exist in the current 
> context.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1235) LP Bug: 1453969 - User CQDs should not affect internal query compilation and execution

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1235:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1453969 - User CQDs should not affect internal query compilation and 
> execution
> --
>
> Key: TRAFODION-1235
> URL: https://issues.apache.org/jira/browse/TRAFODION-1235
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Weishiun Tsai
>Assignee: Sandhya Sundaresan
>Priority: Critical
>
> Currently, user CQD settings impact internal query compilation and 
> execution.  This causes strange errors in unexpected places.  The 
> following scenario is one such example.  Setting the CQD 
> UPDATE_CLUSTERING_OR_UNIQUE_INDEX_KEY 'OFF' caused the next alter rename 
> statement to fail with a perplexing 4033 error.  The statement worked fine 
> after the CQD was reset.
> This case is not filed only about this particular CQD.  In general, we need 
> to avoid sending user CQDs to internal query compilation and execution.  
> Trafodion has hundreds of CQDs.  Some of them may be recommended to customers 
> in the future to improve performance or to work around certain problems.  If 
> we don't stop sending CQDs to internal query compilation, this problem is 
> bound to show up again and again in the future.
> This problem can be seen on the v0505 build.
> ---
> Here is the entire script to reproduce it:
> create schema testsch;
> set schema testsch;
> create table t1 (a int);
> control query default UPDATE_CLUSTERING_OR_UNIQUE_INDEX_KEY 'OFF';
> alter table t1 rename to t2;
> control query default UPDATE_CLUSTERING_OR_UNIQUE_INDEX_KEY reset;
> alter table t1 rename to t2;
> drop schema testsch cascade;
> ---
> Here is the execution output:
> >>create schema testsch;
> --- SQL operation complete.
> >>set schema testsch;
> --- SQL operation complete.
> >>create table t1 (a int);
> --- SQL operation complete.
> >>control query default UPDATE_CLUSTERING_OR_UNIQUE_INDEX_KEY 'OFF';
> --- SQL operation complete.
> >>alter table t1 rename to t2;
> *** ERROR[4033] Column CATALOG_NAME is a primary or clustering key column and 
> cannot be updated.
> *** ERROR[8822] The statement was not prepared.
> --- SQL operation failed with errors.
> >>control query default UPDATE_CLUSTERING_OR_UNIQUE_INDEX_KEY reset;
> --- SQL operation complete.
> >>alter table t1 rename to t2;
> --- SQL operation complete.
> >>drop schema testsch cascade;
> --- SQL operation complete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1127) LP Bug: 1439541 - mxosrvr core when zookeeper connection gets dropped due to timeout

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1127:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1439541 - mxosrvr core when zookeeper connection gets dropped due to 
> timeout
> 
>
> Key: TRAFODION-1127
> URL: https://issues.apache.org/jira/browse/TRAFODION-1127
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Reporter: Aruna Sadashiva
>Assignee: Rao Kakarlamudi
>Priority: Critical
>
> Mxosrvr cores were seen on Zircon during perf tests. The connection to 
> ZooKeeper is getting dropped (zh=0x0), maybe because of a timeout or other 
> errors.
> #0  0x74a318a5 in raise () from /lib64/libc.so.6
> #1  0x74a3300d in abort () from /lib64/libc.so.6
> #2  0x75d50a55 in os::abort(bool) ()
>from /usr/java/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #3  0x75ed0f87 in VMError::report_and_die() ()
>from /usr/java/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #4  0x75d5596f in JVM_handle_linux_signal ()
>from /usr/java/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #5  
> #6  0x765bcce9 in zoo_exists (zh=0x0, 
> path=0xeb2598 
> "/squser4/dcs/servers/registered/zircon-n018.usa.hp.com:8:88", watch=0, 
> stat=0x7fffe61bf080) at src/zookeeper.c:3503
> #7  0x004c59f5 in updateZKState (currState=CONNECTED, 
> newState=AVAILABLE) at SrvrConnect.cpp:9057
> #8  0x004ca966 in odbc_SQLSvc_TerminateDialogue_ame_ (
> objtag_=0xedb8b0, call_id_=0xedb908, dialogueId=313965727)
> at SrvrConnect.cpp:3885
> #9  0x00493dce in DISPATCH_TCPIPRequest (objtag_=0xedb8b0, 
> call_id_=0xedb908, operation_id=)
> at Interface/odbcs_srvr.cpp:1772
> #10 0x00433882 in BUILD_TCPIP_REQUEST (pnode=0xedb8b0)
> at ../Common/TCPIPSystemSrvr.cpp:603
> #11 0x0043421d in PROCESS_TCPIP_REQUEST (pnode=0xedb8b0)
> ---Type  to continue, or q  to quit---
> at ../Common/TCPIPSystemSrvr.cpp:581
> #12 0x00462406 in CNSKListenerSrvr::tcpip_listener (arg=0xda5510)
> at Interface/linux/Listener_srvr_ps.cpp:400
> #13 0x747e52e0 in sb_thread_sthr_disp (pp_arg=0xeb4c70)
> at threadl.cpp:253
> #14 0x745b1851 in start_thread () from /lib64/libpthread.so.0
> #15 0x74ae790d in clone () from /lib64/libc.so.6



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1162) LP Bug: 1443246 - WRONG QUERY_STATUS/QUERY_SUB_STATUS for canceled queries in METRIC_QUERY_TABLE

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1162:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1443246 - WRONG QUERY_STATUS/QUERY_SUB_STATUS for canceled queries in 
> METRIC_QUERY_TABLE
> 
>
> Key: TRAFODION-1162
> URL: https://issues.apache.org/jira/browse/TRAFODION-1162
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Reporter: FengQiang
>Assignee: Gao Jie
>Priority: Critical
>
> Two scenarios for this issue:
> 1. Cancel a running query with the SQL command control query cancel qid 
> "MXID11136022122946686468383500906U300_32053_XX".
> In "_REPOS_".METRIC_QUERY_TABLE, there is a value for EXEC_END_UTC_TS, but 
> QUERY_STATUS is COMPLETED and there is no value for QUERY_SUB_STATUS. 
> Shouldn't one of the statuses indicate that the query was canceled?
> 2. Kill the trafci client while a select query is still running.
> In "_REPOS_".METRIC_QUERY_TABLE, there is a value for EXEC_END_UTC_TS, but 
> QUERY_STATUS remains 'EXECUTING' and there is no value for QUERY_SUB_STATUS. 
> Shouldn't one of the statuses indicate that the query was canceled?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1208) LP Bug: 1449195 - mxosrvr failing to write to repos session and aggr metric table when we test connection from odbc administrator, errors in master log

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1208:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1449195 - mxosrvr failing to write to repos session and aggr metric 
> table when we test connection from odbc administrator,  errors in master log
> 
>
> Key: TRAFODION-1208
> URL: https://issues.apache.org/jira/browse/TRAFODION-1208
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Reporter: Aruna Sadashiva
>Assignee: Anuradha Hegde
>Priority: Critical
>
> After trying a test connection from the ODBC Administrator on Windows, the 
> following errors are logged in the master log file and there are no rows 
> inserted into the metric session and aggr tables. The test connection 
> succeeds. 
> 2015-04-27 17:54:28,169, ERROR, SQL, Node Number: 0, CPU: 3, PIN: 20885, 
> Process Name: $Z030H1Q, SQLCODE: 15001, QID: 
> MXID110030208852122969171678495210306U300_483_STMT_PUBLICATION, 
> *** ERROR[15001] A syntax error occurred at or before: 
> insert into Trafodion."_REPOS_".metric_query_aggr_table 
> values(0,0,0,20885,2088
> 5,3,0,0,'16.235.158.28',0,'$Z030H1Q','MXID11003020885212296917167849521
> 0406U300',CONVERTTIMESTAMP(212296917268168600),CONVERTTIMESTAMP(21229691726
> 8168603),6,3,'DB__ROOT','NONE','SARUNA2','saruna','Trafodion ODBC 
> Data 
> Source 'amethyst' 
> Configuration',0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
>^ (332 characters from start of SQL statement)
> 2015-04-27 17:54:28,169, ERROR, MXOSRVR, Node Number: 3, CPU: 3, PIN:20885, 
> Process Name:$Z030H1Q , , ,A NonStop Process Service error Failed to write 
> statistics: insert into Trafodion."_REPOS_".metric_query_aggr_table 
> values(0,0,0,20885,20885,3,0,0,'16.235.158.28',0,'$Z030H1Q','MXID110030208852122969171678495210406U300',CONVERTTIMESTAMP(212296917268168600),CONVERTTIMESTAMP(212296917268168603),6,3,'DB__ROOT','NONE','SARUNA2','saruna','Trafodion
>  ODBC Data Source 'amethyst' 
> Configuration',0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)Error
>  detail - *** ERROR[15001] A syntax error occurred at or before: 
> insert into Trafodion."_REPOS_".metric_query_aggr_table 
> values(0,0,0,20885,2088
> 5,3,0,0,'16.235.158.28',0,'$Z030H1Q','MXID11003020885212296917167849521
> 0406U300',CONVERTTIMESTAMP(212296917268168600),CONVERTTIMESTAMP(21229691726
> 8168603),6,3,'DB__ROOT','NONE','SARUNA2','saruna','Trafodion ODBC 
> Data 
> Source 'amethyst' 
> Configuration',0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
>^ (332 characters from start of SQL statement) [2015-04-27 
> 17:54:28] has occurred. 
> 2015-04-27 17:54:33,241, ERROR, SQL, Node Number: 0, CPU: 3, PIN: 20885, 
> Process Name: $Z030H1Q, SQLCODE: 15001, QID: 
> MXID110030208852122969171678495210306U300_487_STMT_PUBLICATION, 
> *** ERROR[15001] A syntax error occurred at or before: 
> insert into Trafodion."_REPOS_".metric_session_table 
> values(0,0,0,20885,20885,3
> ,0,0,'16.235.158.28',0,'$Z030H1Q','MXID11003020885212296917167849521040
> 6U300','END',CONVERTTIMESTAMP(212296917268168600),CONVERTTIMESTAMP(21229691
> 7273198784),3,'DB__ROOT','NONE','SARUNA2','saruna','Trafodion ODBC Data 
> Sou
> rce 'amethyst' Configuration',0,0,0,0,0,0,0,0,0,0,0,0,0,0,5827,29,0,0,0,0,0);
> ^ (329 characters from start of SQL statement)
> 2015-04-27 17:54:33,241, ERROR, MXOSRVR, Node Number: 3, CPU: 3, PIN:20885, 
> Process Name:$Z030H1Q , , ,A NonStop Process Service error Failed to write 
> statistics: insert into Trafodion."_REPOS_".metric_session_table 
> values(0,0,0,20885,20885,3,0,0,'16.235.158.28',0,'$Z030H1Q','MXID110030208852122969171678495210406U300','END',CONVERTTIMESTAMP(212296917268168600),CONVERTTIMESTAMP(212296917273198784),3,'DB__ROOT','NONE','SARUNA2','saruna','Trafodion
>  ODBC Data Source 'amethyst' 
> Configuration',0,0,0,0,0,0,0,0,0,0,0,0,0,0,5827,29,0,0,0,0,0)Error detail 
> - *** ERROR[15001] A syntax error occurred at or before: 
> insert into Trafodion."_REPOS_".metric_session_table 
> values(0,0,0,20885,20885,3
> ,0,0,'16.235.158.28',0,'$Z030H1Q','MXID11003020885212296917167849521040
> 6U300','END',CONVERTTIMESTAMP(212296917268168600),CONVERTTIMESTAMP(21229691
> 7273198784),3,'DB__ROOT','NONE','SARUNA2','saruna','Trafodion ODBC Data 
> Sou
> rce 'amethyst' Configuration',0,0,0,0,0,0,0,0,0,0,0,0,0,0,5827,29,0,0,0,0,0);
> ^ (329 characters from start of SQL statement) [2015-04-27 
> 17:54:33] has occurred.



--
This 

[jira] [Updated] (TRAFODION-1240) LP Bug: 1455679 - mtserver - spjs with output params fail with error 29019

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1240:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1455679 - mtserver - spjs with output params fail with error 29019
> --
>
> Key: TRAFODION-1240
> URL: https://issues.apache.org/jira/browse/TRAFODION-1240
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-dcs
>Reporter: Aruna Sadashiva
>Assignee: Anuradha Hegde
>Priority: Critical
>
> With multithreaded dcs, SPJs with output params are failing.
> SQL>create library spjcall file '/opt/home/trafodion/SPJ/call.jar';
> SQL>Create procedure N0210 (in in1 int, out out1 int)
> external name 'Procs.N0210'
> library spjcall
> language java
> parameter style java;
> SQL>Call N0210(64548478,?);
> *** ERROR[29019] Parameter 1 for 1 set of parameters is not set
> The SPJ jar file, call.jar, can be found on amber7 under 
> /opt/home/trafodion/SPJ. It has a very simple SPJ procedure:
>   public static void N0210(int paramInt, int[] paramArrayOfInt)
>   {
> paramArrayOfInt[0] = paramInt;
>   }
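> A minimal sketch (not part of the original report) of how a JDBC client could
> invoke this procedure and register the output parameter; the class name and the
> T4 connection URL below are placeholders:
>   import java.sql.*;
>   public class CallN0210 {
>     public static void main(String[] args) throws Exception {
>       // placeholder URL; substitute the real host and port
>       try (Connection conn = DriverManager.getConnection("jdbc:t4jdbc://host:23400/:")) {
>         CallableStatement cs = conn.prepareCall("{call N0210(?, ?)}");
>         cs.setInt(1, 64548478);
>         cs.registerOutParameter(2, Types.INTEGER);
>         cs.execute();                      // fails with ERROR[29019] through mtserver
>         System.out.println(cs.getInt(2));  // expected to echo the input value
>       }
>     }
>   }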



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1107) LP Bug: 1438466 - Multiple tdm_arkcmp child processes started after receipt of HBase error

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1107:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1438466 - Multiple tdm_arkcmp child processes started after receipt 
> of HBase error
> --
>
> Key: TRAFODION-1107
> URL: https://issues.apache.org/jira/browse/TRAFODION-1107
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Joanie Cooper
>Assignee: Qifan Chen
>Priority: Critical
>
> During a fresh test of running the compGeneral regression suite
> while artificially injecting an error return from the TrxRegionEndpoint
> coprocessor, numerous tdm_arkcmp child processes were started.
> Before the error hit, we seemed to have a normal number of compilers
> [$Z0005MG] 000,3170 001 GEN  ES--A-- $Z0002KK$Z0002IVtdm_arkcmp   
>   
> [$Z0005MG] 000,3292 001 GEN  ES--A-- $Z0002P2$Z0002KKtdm_arkcmp   
>   
> [$Z0005MG] 000,3816 001 GEN  ES--A-- $Z000341$Z0002P2tdm_arkcmp   
>   
> [$Z0005MG] 000,3886 001 GEN  ES--A-- $Z000361$Z000341tdm_arkcmp
> After forcing the error, it looks like we have new compilers being generated,
> all ultimately part of the original tdm_arkcmp parent off of the sqlci 
> session.
> This is a result of a drop statement.  From the sqlci window, the
> statement appears hung, as it never returns.  But, it appears the
> compilers keep generating new children and the query ultimately never returns.
> When I killed the query, it had 174 compilers running.
> I tried a pstack for one of the compilers, I’ve attached it below.
> g4t3037{joaniec}3: sqps
> Processing cluster.conf on local host g4t3037.houston.hp.com
> [$Z000AF9] Shell/shell Version 1.0.1 Release 1.1.0 (Build release [joaniec], 
> date 26Mar15)
> [$Z000AF9] %ps  
> [$Z000AF9] NID,PID(os)  PRI TYPE STATES  NAMEPARENT  PROGRAM
> [$Z000AF9]  ---  --- --- --- 
> ---
> [$Z000AF9] 000,00018562 000 WDG  ES--A-- $WDG000 NONEsqwatchdog   
>   
> [$Z000AF9] 000,00018563 000 PSD  ES--A-- $PSD000 NONEpstartd  
>   
> [$Z000AF9] 000,00018592 001 DTM  ES--A-- $TM0NONEtm   
>   
> [$Z000AF9] 000,00019243 001 GEN  ES--A-- $ZSC000 NONEmxsscp   
>   
> [$Z000AF9] 000,00019274 001 SSMP ES--A-- $ZSM000 NONEmxssmp   
>   
> [$Z000AF9] 000,00020982 001 GEN  ES--A-- $ZLOBSRV0   NONEmxlobsrvr
>   
> [$Z000AF9] 000,7356 001 GEN  ES--A-- $Z000606NONEsqlci
>   
> [$Z000AF9] 000,7416 001 GEN  ES--A-- $Z00061W$Z000606tdm_arkcmp   
>   
> [$Z000AF9] 000,7960 001 GEN  ES--A-- $Z0006HF$Z00061Wtdm_arkcmp   
>   
> [$Z000AF9] 000,8021 001 GEN  ES--A-- $Z0006J6$Z0006HFtdm_arkcmp   
>   
> [$Z000AF9] 000,8079 001 GEN  ES--A-- $Z0006KU$Z0006J6tdm_arkcmp   
>   
> [$Z000AF9] 000,8137 001 GEN  ES--A-- $Z0006MH$Z0006KUtdm_arkcmp   
>   
> [$Z000AF9] 000,8194 001 GEN  ES--A-- $Z0006P4$Z0006MHtdm_arkcmp   
>   
> [$Z000AF9] 000,8252 001 GEN  ES--A-- $Z0006QS$Z0006P4tdm_arkcmp   
>   
> [$Z000AF9] 000,8312 001 GEN  ES--A-- $Z0006SH$Z0006QStdm_arkcmp   
>   
> [$Z000AF9] 000,8369 001 GEN  ES--A-- $Z0006U4$Z0006SHtdm_arkcmp   
>   
> [$Z000AF9] 000,8427 001 GEN  ES--A-- $Z0006VS$Z0006U4tdm_arkcmp   
>   
> [$Z000AF9] 000,8491 001 GEN  ES--A-- $Z0006XL$Z0006VStdm_arkcmp   
>   
> [$Z000AF9] 000,9023 001 GEN  ES--A-- $Z0007CT$Z0006XLtdm_arkcmp   
>   
> [$Z000AF9] 000,9081 001 GEN  ES--A-- $Z0007EG$Z0007CTtdm_arkcmp   
>   
> [$Z000AF9] 000,9141 001 GEN  ES--A-- $Z0007G6$Z0007EGtdm_arkcmp   
>   
> [$Z000AF9] 000,9202 001 GEN  ES--A-- $Z0007HX$Z0007G6tdm_arkcmp   
>   
> [$Z000AF9] 000,9262 001 GEN  ES--A-- $Z0007JM$Z0007HXtdm_arkcmp   
>   
> [$Z000AF9] 000,9320 001 GEN  ES--A-- $Z0007LA$Z0007JMtdm_arkcmp   
>   
> [$Z000AF9] 000,9489 001 GEN  ES--A-- $Z0007R4$Z0007LAtdm_arkcmp   
>   
> [$Z000AF9] 000,9547 001 GEN  ES--A-- $Z0007SS$Z0007R4tdm_arkcmp   
>   
> [$Z000AF9] 000,9604 001 GEN  ES--A-- $Z0007UE$Z0007SStdm_arkcmp   
>   
> [$Z000AF9] 000,9661 001 GEN  ES--A-- $Z0007W1$Z0007UEtdm_arkcmp   
>   
> [$Z000AF9] 000,9728 001 GEN  ES--A-- $Z0007XY$Z0007W1tdm_arkcmp   
>   
> [$Z000AF9] 000,00010268 001 GEN  ES--A-- $Z0008DD$Z0007XYtdm_arkcmp   
>   
> [$Z000AF9] 000,00010364 001 GEN  ES--A-- $Z0008G4$Z0008DDtdm_arkcmp   
>   
> [$Z000AF9] 000,00010421 001 GEN  ES--A-- $Z0008HR$Z0008G4tdm_arkcmp   
>   
> 

[jira] [Updated] (TRAFODION-1062) LP Bug: 1432950 - When stats type is set to Query, every query should be logged in the metric query table

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1062:
--
Fix Version/s: (was: 2.2.0)

> LP Bug: 1432950 - When stats type is set to Query, every query should be 
> logged in the metric query table
> -
>
> Key: TRAFODION-1062
> URL: https://issues.apache.org/jira/browse/TRAFODION-1062
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-general
>Reporter: Aruna Sadashiva
>Assignee: Anuradha Hegde
>Priority: Critical
>  Labels: repos
>
> When stats type is set to "query", all queries should be found in 
> _REPOS_.METRIC_QUERY_TABLE. Currently only long-running queries that run 
> longer than the interval time get logged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-924) LP Bug: 1413241 - ENDTRANSACTION hang, transaction state FORGETTING

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-924:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1413241 - ENDTRANSACTION hang, transaction state FORGETTING
> ---
>
> Key: TRAFODION-924
> URL: https://issues.apache.org/jira/browse/TRAFODION-924
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Reporter: Apache Trafodion
>Assignee: Atanu Mishra
>Priority: Critical
>
> A loop to reexecute the seabase developer regression suite hung on the 14th 
> iteration in TEST016. The sqlci console looked like this:
> >>-- char type
> >>create table mcStatPart1
> +>(a int not null not droppable,
> +>b char(10) not null not droppable,
> +>f int, txt char(100),
> +>primary key (a,b))
> +>salt using 8 partitions ;
> --- SQL operation complete.
> >>
> >>insert into mcStatPart1 values 
> >>(1,'123',1,'xyz'),(1,'133',1,'xyz'),(1,'423',1,'xyz'),(2,'111',1,'xyz'),(2,'223',1,'xyz'),(2,'323',1,'xyz'),(2,'423',1,'xyz'),
> +>   
> (3,'123',1,'xyz'),(3,'133',1,'xyz'),(3,'423',1,'xyz'),(4,'111',1,'xyz'),(4,'223',1,'xyz'),(4,'323',1,'xyz'),(4,'423',1,'xyz');
> A pstack of the sqlci (0,13231) showed it blocking in a call to 
> ENDTRANSACTION.   And dtmci showed this for the transaction:
> DTMCI > list
> Transid Owner eventQ  pending Joiners TSEsState
> (0,13742)   0,13231   0   0   0   0   FORGETTING
> Here's a copy of Sean's analysis:
> From: Broeder, Sean 
> Sent: Wednesday, January 21, 2015 8:43 AM
> To: Hanlon, Mike; Cooper, Joanie
> Cc: DeRoo, John
> Subject: RE: ENDTRANSACTION hang, transaction state FORGETTING
> Hi Mike,
> It looks like we have a zookeeper problem right at the time of the commit.  A 
> table is offline:
> 2015-01-21 11:13:45,529 WARN zookeeper.ZKUtil: 
> hconnection-0x1646b7c-0x14aefd0ac4a5e18, quorum=localhost:47570, 
> baseZNode=/hbase Unable to get data of znode 
> /hbase/table/TRAFODION.HBASE.MCSTATPART1
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /hbase/table/TRAFODION.HBASE.MCSTATPART1
> Then we fail after 3 retries of sending the commit request
> 2015-01-21 11:14:04,405 ERROR transactional.TransactionManager: doCommitX, 
> result size: 0
> 2015-01-21 11:14:04,405 ERROR transactional.TransactionManager: doCommitX, 
> result size: 0
> Normally we would create a recovery entry for this transaction to redrive 
> commit, but it appears we are unable to do that due to the zookeeper errors 
> 2015-01-21 11:14:04,408 DEBUG 
> client.HConnectionManager$HConnectionImplementation: Removed all cached 
> region locations that map to g4t3005.houston.hp.com,4   2243,1421362639257
> 471340 2015-01-21 11:14:05,255 WARN zookeeper.RecoverableZooKeeper: Possibly 
> transient ZooKeeper, quorum=localhost:47570, 
> exception=org.apache.zookeeper.KeeperExc   
> eption$ConnectionLossException: KeeperErrorCode = ConnectionLoss for 
> /hbase/table/TRAFODION.HBASE.MCSTATPART1
> 471341 2015-01-21 11:14:05,256 WARN zookeeper.RecoverableZooKeeper: Possibly 
> transient ZooKeeper, quorum=localhost:47570, 
> exception=org.apache.zookeeper.KeeperExc   
> eption$ConnectionLossException: KeeperErrorCode = ConnectionLoss for 
> /hbase/table/TRAFODION.HBASE.MCSTATPART1
> 471342 2015-01-21 11:14:05,256 INFO util.RetryCounter: Sleeping 1000ms before 
> retry #0...
> 471343 2015-01-21 11:14:05,256 INFO util.RetryCounter: Sleeping 1000ms before 
> retry #0...
> Hbase looks like it’s having troubles as I can’t even do a list operation 
> from the hbase shell
> 2015-01-21 14:40:28,816 ERROR [main] 
> client.HConnectionManager$HConnectionImplementation: Can't get connection to 
> ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
> We need to think of how better to handle this in the TransactionManager, but 
> in reality I’m not sure what we can do if Zookeeper fails.  You can open an 
> LP bug so we have record of it and can discuss what to do.
> Thanks,
> Sean
> _
> From: Hanlon, Mike 
> Sent: Wednesday, January 21, 2015 6:17 AM
> To: Cooper, Joanie
> Cc: Broeder, Sean; DeRoo, John
> Subject: ENDTRANSACTION hang, transaction state FORGETTING
> Hi Joanie,
> Have we seen this before? A SQL regression test (in this case 
> seabase/TEST016) hangs in a call to ENDTRANSACTION. The transaction state is 
> shown in dtmci to be FORGETTING.  It probably is not easy to reproduce, since 
> the problem occurred on the 14th iteration of a loop to re-execute the 
> seabase suite. 
> There are a lot of messages in 
> /opt/home/mhanlon/trafodion/core/sqf/logs/trafodion.dtm.log on my 
> workstation, sqws112. The transid in question is 13742. Would somebody like 

[jira] [Updated] (TRAFODION-939) LP Bug: 1413831 - Phoenix tests run into several error 8810 when other tests are run in parallel with it

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-939:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1413831 - Phoenix tests run into several error 8810 when other tests 
> are run in parallel with it
> 
>
> Key: TRAFODION-939
> URL: https://issues.apache.org/jira/browse/TRAFODION-939
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Aruna Sadashiva
>Assignee: Prashanth Vasudev
>Priority: Critical
>
> Running phoenix and jdbc catalog api tests at the same time resulted in 16 
> failures in phoenix with error 8810, during ddl operations as shown below. 
> All the jdbc catalog api tests passed. 
> Phoenix runs fine when no other tests are running on the system. 
> test.java.com.hp.phoenix.end2end.ToNumberFunctionTest
> *** ERROR[8810] Executor ran into an internal failure and returned an 
> error without populating the diagnostics area. This error is being injected 
> to indicate that. [2015-01-22 08:32:31]
> E.E.
> Time: 701.811
> There were 2 failures:
> 1) 
> testKeyProjectionWithIntegerValue(test.java.com.hp.phoenix.end2end.ToNumberFunctionTest)
> java.lang.AssertionError: Failed to drop object: table TO_NUMBER_TABLE
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> test.java.com.hp.phoenix.end2end.BaseTest.dropTestObjects(BaseTest.java:180)
>   at 
> test.java.com.hp.phoenix.end2end.BaseTest.doBaseTestCleanup(BaseTest.java:112)
> On Justin's suggestion, added ABORT_ON_ERROR for 8810 and the core has this 
> stack:
> #0  0x74a458a5 in raise () from /lib64/libc.so.6
> #1  0x74a4700d in abort () from /lib64/libc.so.6
> #2  0x71248114 in ComCondition::setSQLCODE (
> this=, newSQLCODE=-8810)
> at ../export/ComDiags.cpp:1425
> #3  0x73f66e36 in operator<< (d=..., dgObj=...)
> at ../common/DgBaseType.cpp:138
> #4  0x7437e4e3 in CliStatement::fetch (this=, 
> cliGlobals=0xeeade0, output_desc=, diagsArea=..., 
> newOperation=1) at ../cli/Statement.cpp:5310
> #5  0x74324e0f in SQLCLI_PerformTasks(CliGlobals *, ULng32, 
> SQLSTMT_ID *, SQLDESC_ID *, SQLDESC_ID *, Lng32, Lng32, typedef __va_list_tag 
> __va_list_tag *, SQLCLI_PTR_PAIRS *, SQLCLI_PTR_PAIRS *) 
> (cliGlobals=0xeeade0, tasks=8063, 
> statement_id=0x1fbea88, input_descriptor=0x1fbeab8, 
> output_descriptor=0x0, 
> num_input_ptr_pairs=0, num_output_ptr_pairs=0, ap=0x7fffe41ef030, 
> input_ptr_pairs=0x0, output_ptr_pairs=0x0) at ../cli/Cli.cpp:3382
> #6  0x7438a40b in SQL_EXEC_ClearExecFetchClose (
> statement_id=0x1fbea88, input_descriptor=0x1fbeab8, 
> output_descriptor=0x0, 
> num_input_ptr_pairs=0, num_output_ptr_pairs=0, num_total_ptr_pairs=0)
> at ../cli/CliExtern.cpp:2627
> #7  0x768703bf in SRVR::WSQL_EXEC_ClearExecFetchClose (
> statement_id=0x1fbea88, input_descriptor=, 
> output_descriptor=, 
> num_input_ptr_pairs=, 
> num_output_ptr_pairs=, 
> num_total_ptr_pairs=) at SQLWrapper.cpp:459
> #8  0x76866cff in SRVR::EXECUTE2 (pSrvrStmt=0x1fbe470)
> at sqlinterface.cpp:5520
> #9  0x7689733e in odbc_SQLSvc_Execute2_sme_ (
> objtag_=, call_id_=, 
> dialogueId=, sqlAsyncEnable=, 
> ---Type  to continue, or q  to quit---
> queryTimeout=, inputRowCnt=, 
> sqlStmtType=128, stmtHandle=33285232, cursorLength=0, cursorName=0x0, 
> cursorCharset=1, holdableCursor=0, inValuesLength=0, inValues=0x0, 
> returnCode=0x7fffe41ef928, sqlWarningOrErrorLength=0x7fffe41ef924, 
> sqlWarningOrError=@0x7fffe41ef900, rowsAffected=0x7fffe41ef920, 
> outValuesLength=0x7fffe41ef914, outValues=@0x7fffe41ef8f8)
> at srvrothers.cpp:1517
> #10 0x004cbc42 in odbc_SQLSrvr_ExecDirect_ame_ (objtag_=0x24a84d0, 
> call_id_=0x24a8528, dialogueId=1492150530, 
> stmtLabel=, cursorName=0x0, 
> stmtExplainLabel=, stmtType=0, sqlStmtType=128, 
> sqlString=0x2d43ea4 "drop table PRODUCT_METRICS cascade", 
> sqlAsyncEnable=0, queryTimeout=0, inputRowCnt=0, txnID=0, 
> holdableCursor=0)
> at SrvrConnect.cpp:7636
> #11 0x00494086 in SQLEXECUTE_IOMessage (objtag_=0x24a84d0, 
> call_id_=0x24a8528, operation_id=3012) at Interface/odbcs_srvr.cpp:1734
> #12 0x00494134 in DISPATCH_TCPIPRequest (objtag_=0x24a84d0, 
> call_id_=0x24a8528, operation_id=)
> at Interface/odbcs_srvr.cpp:1799
> #13 0x00433822 in BUILD_TCPIP_REQUEST (pnode=0x24a84d0)
> at ../Common/TCPIPSystemSrvr.cpp:603
> #14 0x004341bd in PROCESS_TCPIP_REQUEST (pnode=0x24a84d0)
> at ../Common/TCPIPSystemSrvr.cpp:581
> #15 0x00462396 in 

[jira] [Updated] (TRAFODION-979) LP Bug: 1418142 - Parallel DDL operations sees error 8810

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-979:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1418142 - Parallel DDL operations sees error 8810
> -
>
> Key: TRAFODION-979
> URL: https://issues.apache.org/jira/browse/TRAFODION-979
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Weishiun Tsai
>Assignee: Prashanth Vasudev
>Priority: Critical
>
> To be able to run SQL regression tests in parallel on the same instance is 
> one of the goals that we would like to see happen for the post-1.0 release.  
> In order to do this, Trafodion needs to be able to handle parallel execution 
> with a workload that has a mixture of DDLs and DMLs together.  Right now, 
> Trafodion is not very robust when it comes to handling concurrent DDL 
> executions (or a DDL execution with another DML execution; it is hard to 
> tell if it has to be 2 DDLs to cause all the problems that we are seeing). 
> There are several problems in this area.  The first noticeable one is this 
> particular 8810 error.   QA did an experiment last night by splitting the 
> regression test suites into 2 parts and ran them together on a 4-node 
> cluster.   After both completed, we saw a total of 50 occurrences of this 
> 8810 error:
> -bash-4.1$ grep 8810 */*.log | grep ERROR | wc -l
> 33
> -bash-4.1$ grep 8810 */*.log | grep ERROR | wc -l
> 17
> A typical error looks like this:
> SQL>drop table t10a104;
> *** ERROR[8810] Executor ran into an internal failure and returned an error 
> without populating the diagnostics area. This error is being injected to 
> indicate that. [2015-02-04 07:12:04]
> There is a bug report https://bugs.launchpad.net/trafodion/+bug/1413831 
> ‘Phoenix tests run into several error 8810 when other tests are run in 
> parallel with it' that describes a similar problem.  But that one has a 
> narrower scope focusing only on phoenix tests and the severity of the bug 
> report is only High.  This one is intended to cover a broader issue of 
> running parallel DDL operations in general.  We will mark it as Critical as 
> we need to remove this obstacle first to see what other problems may lie 
> beneath for executing DDLs in parallel.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-900) LP Bug: 1411541 - [DCS] query with long query text is missing from METRIC_QUERY_TABLE

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-900:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1411541 - [DCS] query with long query text is missing from 
> METRIC_QUERY_TABLE
> -
>
> Key: TRAFODION-900
> URL: https://issues.apache.org/jira/browse/TRAFODION-900
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-general
>Reporter: FengQiang
>Assignee: Rao Kakarlamudi
>Priority: Critical
>
> QUERY_TEXT  is VARCHAR(5 CHARS)  in METRIC_QUERY_TABLE.
> Set dcs.server.user.program.statistics.type to 'query' to publish every query.
> When running a query with query text longer than 5, the query was not published 
> to METRIC_QUERY_TABLE.
> Should the query text be trimmed to fit the datatype and then published?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-911) LP Bug: 1412652 - _REPOS_.METRIC_QUERY_TABLE has rows with QUERY_ID and sometimes empty QUERY_TEXT

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-911:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1412652 - _REPOS_.METRIC_QUERY_TABLE has rows with  QUERY_ID 
> and sometimes empty QUERY_TEXT
> -
>
> Key: TRAFODION-911
> URL: https://issues.apache.org/jira/browse/TRAFODION-911
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Reporter: Aruna Sadashiva
>Assignee: Anuradha Hegde
>Priority: Critical
>
> Don't know how to recreate this yet, but _REPOS_.METRIC_QUERY_TABLE has 
> several rows with QUERY_ID set to . The session id looks ok. QUERY_TEXT 
> is sometimes empty. 
> Arvind found 2 places in code where query id is explicitly set to , but 
> not sure how it gets there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-138) LP Bug: 1246183 - volatile table is not dropped after hpdci session ends

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-138:
-
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1246183 - volatile table is not dropped after hpdci session ends
> 
>
> Key: TRAFODION-138
> URL: https://issues.apache.org/jira/browse/TRAFODION-138
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Weishiun Tsai
>Assignee: Sandhya Sundaresan
>Priority: Critical
> Fix For: 2.3
>
>
> A volatile table is not dropped after the hpdci session has ended.  In the 
> following example, the volatile table persists after several hpdci 
> disconnects and reconnects.  This problem is not seen from sqlci, so I am 
> assuming that the problem is in how mxosrvr handles volatile tables.
> -bash-4.1$ hpdci.seascape2-sqtopl7.sh
> Welcome to HP Database Command Interface 3.0
> (c) Copyright 2010-2012 Hewlett-Packard Development Company, LP.
> Connected to Data Source: TDM_Default_DataSource
> SQL>set schema seabase.mytest;
> --- SQL operation complete.
> SQL>create volatile table abc (a int not null not droppable primary key);
> --- SQL operation complete.
> SQL>showddl abc;
> CREATE VOLATILE TABLE ABC
>   (
> A INT NO DEFAULT NOT NULL NOT DROPPABLE
>   , PRIMARY KEY (A ASC)
>   )
> ;
> --- SQL operation complete.
> SQL>exit;
> -bash-4.1$ hpdci.seascape2-sqtopl7.sh
> Welcome to HP Database Command Interface 3.0
> (c) Copyright 2010-2012 Hewlett-Packard Development Company, LP.
> Connected to Data Source: TDM_Default_DataSource
> SQL>set schema seabase.mytest;
> --- SQL operation complete.
> SQL>showddl abc;
> CREATE VOLATILE TABLE ABC
>   (
> A INT NO DEFAULT NOT NULL NOT DROPPABLE
>   , PRIMARY KEY (A ASC)
>   )
> ;
> --- SQL operation complete.
> SQL>exit;
> -bash-4.1$ hpdci.seascape2-sqtopl7.sh
> Welcome to HP Database Command Interface 3.0
> (c) Copyright 2010-2012 Hewlett-Packard Development Company, LP.
> Connected to Data Source: TDM_Default_DataSource
> SQL>set schema seabase.mytest;
> --- SQL operation complete.
> SQL>showddl abc;
> CREATE VOLATILE TABLE ABC
>   (
> A INT NO DEFAULT NOT NULL NOT DROPPABLE
>   , PRIMARY KEY (A ASC)
>   )
> ;
> --- SQL operation complete.
> SQL>exit;
> -bash-4.1$ hpdci.seascape2-sqtopl7.sh
> Welcome to HP Database Command Interface 3.0
> (c) Copyright 2010-2012 Hewlett-Packard Development Company, LP.
> Connected to Data Source: TDM_Default_DataSource
> SQL>set schema seabase.mytest;
> --- SQL operation complete.
> SQL>showddl abc;
> CREATE VOLATILE TABLE ABC
>   (
> A INT NO DEFAULT NOT NULL NOT DROPPABLE
>   , PRIMARY KEY (A ASC)
>   )
> ;
> --- SQL operation complete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-533) LP Bug: 1355042 - SPJ w result set failed with ERROR[11220], SQLCODE of -29261, SQLSTATE of HY000

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-533:
-
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1355042 - SPJ w result set failed with ERROR[11220], SQLCODE of 
> -29261, SQLSTATE of HY000
> -
>
> Key: TRAFODION-533
> URL: https://issues.apache.org/jira/browse/TRAFODION-533
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Chong Hsu
>Assignee: Kevin Xu
>Priority: Critical
> Fix For: 2.3
>
>
> Tested with Trafodion build, 20140801-0830.
> Calling a SPJ that calls another SPJ with result set:
>public static void RS363()
>  throws Exception
>{
>  String str = "jdbc:default:connection";
>  
>  Connection localConnection = DriverManager.getConnection(str);
>  Statement localStatement = localConnection.createStatement();
>  
>  CallableStatement localCallableStatement = 
> localConnection.prepareCall("{call RS200()}");
>  localCallableStatement.execute();
>}
>public static void RS200(ResultSet[] paramArrayOfResultSet)
>throws Exception
>{
>  String str1 = "jdbc:default:connection";
>  
>  String str2 = "select * from t1";
>  Connection localConnection = DriverManager.getConnection(str1);
>  Statement localStatement = localConnection.createStatement();
>  paramArrayOfResultSet[0] = localStatement.executeQuery(str2);
>}
> it failed with ERROR:
> *** ERROR[11220] A Java method completed with an uncaught 
> java.sql.SQLException with invalid SQLSTATE. The uncaught exception had a 
> SQLCODE of -29261 and SQLSTATE of HY000. Details: java.sql.SQLException: No 
> error message in SQL/MX diagnostics area, but sqlcode is non-zero [2014-08-04 
> 22:57:28]
> The SPJ Jar file is attached. Here are the steps to produce the error:
>   
> set schema testspj;
> create library spjrs file '//Testrs.jar';
> create procedure RS363()
>language java 
>parameter style java  
>external name 'Testrs.RS363'
>dynamic result sets 0
>library spjrs;
> --- SQL operation complete.
> create procedure RS200()
>language java 
>parameter style java  
>external name 'Testrs.RS200' 
>dynamic result sets 1
>library spjrs;
> create table  T1
>   (
> A INT DEFAULT NULL
>   , B INT DEFAULT NULL
>   ) no partitions; 
> Call RS363();
> *** ERROR[11220] A Java method completed with an uncaught 
> java.sql.SQLException with invalid SQLSTATE. The uncaught exception had a 
> SQLCODE of -29261 and SQLSTATE of HY000. Details: java.sql.SQLException: No 
> error message in SQL/MX diagnostics area, but sqlcode is non-zero [2014-08-04 
> 22:57:28]
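> For reference, a sketch (not part of the original report) of how RS363 could
> consume the result set produced by RS200 once the nested call succeeds; the
> column names A and B match the T1 definition above:
>   CallableStatement cs = localConnection.prepareCall("{call RS200()}");
>   boolean hasResultSet = cs.execute();
>   if (hasResultSet) {
>     ResultSet rs = cs.getResultSet();
>     while (rs.next()) {
>       System.out.println(rs.getInt("A") + "," + rs.getInt("B"));
>     }
>   }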



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-179) LP Bug: 1274962 - EXECUTE.BATCH update creates core-file

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-179:
-
Fix Version/s: (was: 2.2.0)

> LP Bug: 1274962 - EXECUTE.BATCH update creates core-file
> 
>
> Key: TRAFODION-179
> URL: https://issues.apache.org/jira/browse/TRAFODION-179
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Guy Groulx
>Assignee: Sandhya Sundaresan
>Priority: Critical
>
> Using atomics suite.
> Test 10 is an update test using JDBC batch feature.
> Test 12 is a delete test using JDBC batch feature.
> Test is configured to batch 500 rows and then does execute batch.   See code 
> for update.  Similar code for delete.
> /*Prepare the insert statement.   */
> String sqlStmtCmd = "UPDATE " + dbTableName + " SET COL1=?, COL2=? WHERE 
> CNT = ?";
> try {
>   sqlStmt = dbConnection.prepareStatement(sqlStmtCmd);
> } catch (SQLException e) {
>   log_error("Exception in update_with_rowsets\n");
>   return(e.getErrorCode());
> }
> ...
> for (i = 0; i < (int)numbtmf; i++) {   <== numbtmf is a passed parameter, 
> set to 500 in my tests.
>   try {
> sqlStmt.setLong(1, up_col1);
> sqlStmt.setLong(2, up_col2);
> sqlStmt.setLong(3, hv_cnt);
> sqlStmt.addBatch();
> hv_cnt++;
> if (hv_cnt >= hv_cnt2) hv_cnt = hv_cnt1; /*  Need to restart. */
>   } catch (SQLException e) {
> log_error("Exception in update_with_rowsets\n");
> SQLException nextException;
> nextException = e;
> retcode = e.getErrorCode();
> do {
>   log_error(nextException.getMessage() + " State: " + 
> nextException.getSQLState() + "\n");
> } while ((nextException = nextException.getNextException()) != null);
>   }
> }
> beginTXN();
> try {
>   resultCountL = sqlStmt.executeBatch();<== This is where it hangs 
> for a very long time.
>   rowCnt+=resultCountL.length;
>   if (resultCountL.length != numbtmf) {
> log_error("Error UPDATING!! resultCount: " + resultCountL.length + 
> "\n");
> retcode = 1;
>   } else {
> for (i = 0; i < resultCountL.length; i++) {
>   if ((resultCountL[i] != 1) && (resultCountL[i] != 
> Statement.SUCCESS_NO_INFO)) {
> log_error("Error UPDATING!! resultCount: " + resultCountL[i] + 
> "\n");
> retcode = resultCountL[i];
> break;
>   }
> }
>   }
>   sqlStmt.clearBatch();
> } catch (SQLException e) {
>   log_error("Exception in update_with_rowsets\n");
>   SQLException nextException;
>   nextException = e;
>   retcode = e.getErrorCode();
>   do {
> log_error(nextException.getMessage() + " State: " + 
> nextException.getSQLState() + "\n");
>   } while ((nextException = nextException.getNextException()) != null);
> }
> commitTXN();
> return retcode;
> Eventually, get: 
> Exception in update_with_rowsets
> Exception in update_with_rowsets
> Batch Update Failed, See next exception for details State: HY000
> Batch Update Failed, See next exception for details State: HY000
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::insertRows returned error HBASE_ACCESS_ERROR(-705). Cause: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=10, exceptions:
> Fri Jan 31 14:38:42 UTC 2014, 
> org.apache.hadoop.hbase.client.transactional.TransactionalTable$7@1c36686e, 
> java.lang.NullPointerException
> Fri Jan 31 14:38:43 UTC 2014, 
> org.apache.hadoop.hbase.client.transactional.TransactionalTable$7@1c36686e, 
> java.lang.NullPointerException
> Fri Jan 31 14:38:44 UTC 2014, 
> org.apache.hadoop.hbase.client.transactional.TransactionalTable$7@1c36686e, 
> java.lang.NullPointerException
> Fri Jan 31 14:38:45 UTC 2014, 
> org.apache.hadoop.hbase.client.transactional.TransactionalTable$7@1c36686e, 
> java.lang.NullPointerException
> Fri Jan 31 14:38:47 UTC 2014, 
> org.apache.hadoop.hbase.client.transactional.TransactionalTable$7@1c36686e, 
> java.lang.NullPointerException
> Fri Jan 31 14:38:49 UTC 2014, 
> org.apache.hadoop.hbase.client.transactional.TransactionalTable$7@1c36686e, 
> java.lang.NullPointerException
> Fri Jan 31 14:38:53 UTC 2014, 
> org.apache.hadoop.hbase.client.transactional.TransactionalTable$7@1c36686e, 
> java.lang.NullPointerException
> Fri Jan 31 14:38:57 UTC 2014, 
> org.apache.hadoop.hbase.client.transactional.TransactionalTable$7@1c36686e, 
> java.lang.NullPointerException
> Fri Jan 31 14:39:05 UTC 2014, 
> org.apache.hadoop.hbase.client.transactional.TransactionalTable$7@1c36686e, 
> java.lang.NullPointerException
> Fri Jan 31 14:39:21 

[jira] [Updated] (TRAFODION-1575) Self-referencing update updates the column to a wrong value

2018-03-06 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1575:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Self-referencing update updates the column to a wrong value
> ---
>
> Key: TRAFODION-1575
> URL: https://issues.apache.org/jira/browse/TRAFODION-1575
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 1.3-incubating
> Environment: Can be reproduced on a workstation
>Reporter: David Wayne Birdsall
>Assignee: Selvaganesan Govindarajan
>Priority: Major
> Fix For: 2.3
>
>
> As shown in the following execution output, the update statement tries to 
> update c2 with count(distinct c2) from the same table. While the subquery 
> ‘select c from (select count(distinct c2) from mytable) dt(c)’ returns the 
> correct result 3 when it is run by itself, the update statement using the 
> same subquery updated the column c2 to 2, instead of 3. The updated value 
> always seems to be 1 less in this case.
> Here is the execution output:
> >>create schema mytest;
> --- SQL operation complete.
> >>
> >>create table mytable (c1 char(1), c2 integer);
> --- SQL operation complete.
> >>
> >>insert into mytable values ('A', 100), ('B', 200), ('C', 300);
> --- 3 row(s) inserted.
> >>select * from mytable order by 1;
> C1 C2
> -- ---
> A 100
> B 200
> C 300
> --- 3 row(s) selected.
> >>select c from (select count(distinct c2) from mytable) dt(c);
> C
> 
>3
> --- 1 row(s) selected.
> >>
> >>prepare xx from update mytable set c2 =
> +>(select c from (select count(distinct c2) from mytable) dt(c))
> +>where c2 = 100;
> --- SQL command prepared.
> >>explain options 'f' xx;
> LC RC OP OPERATOR OPT DESCRIPTION CARD
>       -
> 12 . 13 root x 1.00E+001
> 10 11 12 tuple_flow 1.00E+001
> . . 11 trafodion_insert MYTABLE 1.00E+000
> 9 . 10 sort 1.00E+001
> 8 4 9 hybrid_hash_join 1.00E+001
> 6 7 8 nested_join 1.00E+001
> . . 7 trafodion_delete MYTABLE 1.00E+000
> 5 . 6 sort 1.00E+001
> . . 5 trafodion_scan MYTABLE 1.00E+001
> 3 . 4 sort_scalar_aggr 1.00E+000
> 2 . 3 sort_scalar_aggr 1.00E+000
> 1 . 2 hash_groupby 2.00E+000
> . . 1 trafodion_scan MYTABLE 1.00E+002
> --- SQL operation complete.
> >>execute xx;
> --- 1 row(s) updated.
> >>
> >>select * from mytable order by 1;
> C1 C2
> -- ---
> A 2
> B 200
> C 300
> --- 3 row(s) selected.
> >>
> >>drop schema mytest cascade;
> --- SQL operation complete.
> >>
> The value of C2 in row A above should have been updated to 3.
> This problem was found by Wei-Shiun Tsai.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (TRAFODION-1173) LP Bug: 1444088 - Hybrid Query Cache: sqlci may err with JRE SIGSEGV.

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah reassigned TRAFODION-1173:
-

Assignee: Suresh Subbiah  (was: Howard Qin)

> LP Bug: 1444088 - Hybrid Query Cache: sqlci may err with JRE SIGSEGV.
> -
>
> Key: TRAFODION-1173
> URL: https://issues.apache.org/jira/browse/TRAFODION-1173
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Julie Thai
>Assignee: Suresh Subbiah
>Priority: Major
> Fix For: 2.3
>
>
> In sqlci, with HQC on and HQC_LOG specified, a prepared statement was 
> followed with:
> >>--interval 47, same selectivity as interval 51
> >>--interval 47 [jvFN3&789 - jyBT!]789)
> >>--expect = nothing in hqc log; SQC hit
> >>prepare XX from select * from f00 where colchar = 'jyBT!]789';
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x75d80595, pid=2708, tid=140737353866272
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_75-b13) (build 
> 1.7.0_75-b13)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libstdc++.so.6+0x91595]  
> std::ostream::sentry::sentry(std::ostream&)+0x25
> #
> # Core dump written. Default location: 
> /opt/home/trafodion/thaiju/HQC/equal_char/core or core.2708
> #
> # An error report file with more information is saved as:
> # /opt/home/trafodion/thaiju/HQC/equal_char/hs_err_pid2708.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.sun.com/bugreport/crash.jsp
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #
> Aborted
> No core file found under /opt/home/trafodion/thaiju/HQC/equal_char. But a 
> hs_err_pid2708.log file was generated (included in the attached to_repro.tar). 
> Problem does not reproduce if I explicitly turn off HQC.
> To reproduce:
> 1. download and untar attachment, to_repro.tar
> 2. in a sqlci session, obey setup_char.sql (from tar file)
> 3. in a new sqlci session, obey equal_char.sql (from tar file)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-157) LP Bug: 1252809 - DCS-ODBC-Getting 'Invalid server handle' after bound hstmt is used for a while.

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-157:
-
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1252809 - DCS-ODBC-Getting 'Invalid server handle' after bound hstmt 
> is used for a while.
> -
>
> Key: TRAFODION-157
> URL: https://issues.apache.org/jira/browse/TRAFODION-157
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-linux
>Reporter: Aruna Sadashiva
>Assignee: RuoYu Zuo
>Priority: Major
> Fix For: 2.3
>
>
> Using ODBC 64 bit Linux driver.
> 'Invalid server handle' is returned and insert fails when using 
> SQLBindParameter/Prepare/Execute. The SQLExecute is done in a loop. It works 
> for a while, but fails within 10 minutes. Changed the program to reconnect 
> every 5 mins, but still seeing this error. It works on SQ.
> Have attached simple test program to recreate this. To run on SQ remove the 
> SQLExecDirect calls to set CQDs, those are specific to Traf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1172) LP Bug: 1444084 - Hybrid Query Cache: display interval boundaries in virtual table.

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1172:
--
Fix Version/s: (was: 2.2.0)
   2.4

> LP Bug: 1444084 - Hybrid Query Cache:  display interval boundaries in virtual 
> table.
> 
>
> Key: TRAFODION-1172
> URL: https://issues.apache.org/jira/browse/TRAFODION-1172
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Julie Thai
>Assignee: Howard Qin
>Priority: Major
> Fix For: 2.4
>
>
> For the collapsing/merging intervals enhancement, displaying the interval 
> boundaries in the virtual table would aid in verifying the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-480) LP Bug: 1349644 - Status array returned by batch operations contains wrong return value for T2

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-480:
-
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1349644 - Status array returned by batch operations contains wrong 
> return value for T2
> --
>
> Key: TRAFODION-480
> URL: https://issues.apache.org/jira/browse/TRAFODION-480
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-jdbc-t2, client-jdbc-t4
>Reporter: Aruna Sadashiva
>Assignee: RuoYu Zuo
>Priority: Major
> Fix For: 2.3
>
>
> The status array returned from T2 contains a different value compared to T4. 
> T4 returns -2 and T2 returns 1. 
> The Oracle JDBC documentation states:
> 0 or greater — the command was processed successfully and the value is an 
> update count indicating the number of rows in the database that were affected 
> by the command's execution.
> Statement.SUCCESS_NO_INFO — the command was processed successfully, but the 
> number of rows affected is unknown.
> Statement.SUCCESS_NO_INFO is defined as being -2, so your result says 
> everything worked fine, but you won't get information on the number of 
> updated rows.
> For a prepared statement batch, it is not possible to know the number of rows 
> affected in the database by each individual statement in the batch. 
> Therefore, all array elements have a value of -2. According to the JDBC 2.0 
> specification, a value of -2 indicates that the operation was successful but 
> the number of rows affected is unknown.
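> A small sketch (not part of the original report) of checking the executeBatch()
> result array in a way that accepts either driver behaviour; ps is an assumed
> java.sql.PreparedStatement with a pending batch:
>   int[] counts = ps.executeBatch();
>   for (int c : counts) {
>     if (c == Statement.SUCCESS_NO_INFO) {
>       // -2: the command succeeded but the per-row count is unknown (T4 behaviour)
>     } else if (c >= 0) {
>       // an actual update count (T2 currently reports 1 here)
>     } else if (c == Statement.EXECUTE_FAILED) {
>       // -3: this element of the batch failed
>     }
>   }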



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-427) LP Bug: 1339541 - windows ODBC driver internal hp keyword cleanup

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-427:
-
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1339541 - windows ODBC driver internal hp keyword cleanup
> -
>
> Key: TRAFODION-427
> URL: https://issues.apache.org/jira/browse/TRAFODION-427
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-windows
>Reporter: Daniel Lu
>Assignee: Daniel Lu
>Priority: Major
> Fix For: 2.3
>
>
> The Windows ODBC driver still has some code that internally uses the hp keyword.
> for example:
>   E:\win-odbc64\odbcclient\drvr35\drvrglobal.h(99):#define ODBC_RESOURCE_DLL  
>"hp_ores0300.dll"
>   E:\win-odbc64\odbcclient\drvr35adm\drvr35adm.def(10):DESCRIPTION  'hp_oadm 
> Windows Dynamic Link Library'
>   E:\win-odbc64\odbcclient\Drvr35Res\Drvr35Res.def(3):LIBRARY  
> "hp_ores0300"
>   E:\win-odbc64\odbcclient\Drvr35Res\Drvr35Res.def(4):DESCRIPTION  
> 'hp_ores0300 Windows Dynamic Link Library'
>   E:\win-odbc64\odbcclient\drvr35\TCPIPV4\TCPIPV4.def(8):LIBRARY  
> "hp_tcpipv40300"
>   E:\win-odbc64\odbcclient\TranslationDll\TranslationDll.def(10):LIBRARY 
> hp_translation03
>   E:\win-odbc64\odbcclient\drvr35\TCPIPV6\TCPIPV6.def(8):LIBRARY  
> "hp_tcpipv60300"
>   E:\win-odbc64\Install\UpdateDSN\UpdateDSN\UpdateDSN.cpp(161):// 
> wcscat_s(NewDriver,L"\\hp_odbc0200.dll");



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1628) Implement T2 Driver's Rowsets ability to enhance the batch insert performance

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1628:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Implement T2 Driver's Rowsets ability to enhance the batch insert performance
> -
>
> Key: TRAFODION-1628
> URL: https://issues.apache.org/jira/browse/TRAFODION-1628
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: client-jdbc-t2
>Reporter: RuoYu Zuo
>Assignee: RuoYu Zuo
>Priority: Critical
>  Labels: features, performance
> Fix For: 2.3
>
>
> The JDBC T2 Driver currently has very poor batch-insert performance because it 
> does not have rowsets ability. Implementing rowsets functionality will allow the 
> T2 Driver to perform batch insert operations much faster.
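> As an illustration (not part of the original report), the JDBC batch-insert
> pattern whose throughput this work targets; table T and connection conn are
> assumed:
>   conn.setAutoCommit(false);
>   try (PreparedStatement ps = conn.prepareStatement("insert into T values (?, ?)")) {
>     for (int i = 0; i < 100000; i++) {
>       ps.setInt(1, i);
>       ps.setString(2, "row" + i);
>       ps.addBatch();
>       if (i % 500 == 499) {
>         ps.executeBatch();   // with rowsets each executeBatch() maps to one rowset insert
>       }
>     }
>     ps.executeBatch();
>     conn.commit();
>   }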



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1053) LP Bug: 1430938 - In full explain output, begin/end key for char/varchar key column should be min/max if there is no predicated defined on the key column.

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1053:
--
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1430938 - In full explain output, begin/end key for char/varchar key 
> column should be min/max if there is no predicated defined on the key column.
> --
>
> Key: TRAFODION-1053
> URL: https://issues.apache.org/jira/browse/TRAFODION-1053
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Julie Thai
>Assignee: Howard Qin
>Priority: Major
> Fix For: 2.3
>
>
> In full explain output, begin/end key for char/varchar key column should be 
> min/max 
> if there is no predicated defined on the key column.
> Snippet from TRAFODION_SCAN below:
> key_columns  _SALT_, COLTS, COLVCHRUCS2, COLINTS
> begin_key .. (_SALT_ = %(9)), (COLTS = ),
>  (COLVCHRUCS2 = '洼硡'), (COLINTS = 
> )
> end_key  (_SALT_ = %(9)), (COLTS = ),
>  (COLVCHRUCS2 = '洼湩'), (COLINTS = 
> )
> Expected  (COLVCHRUCS2 = '') and  (COLVCHRUCS2 = '').
> SQL>create table salttbl3 (
> +>colintu int unsigned not null, colints int signed not null,
> +>colsintu smallint unsigned not null, colsints smallint signed not null,
> +>collint largeint not null, colnum numeric(11,3) not null,
> +>colflt float not null, coldec decimal(11,2) not null,
> +>colreal real not null, coldbl double precision not null,
> +>coldate date not null, coltime time not null,
> +>colts timestamp not null,
> +>colchriso char(90) character set iso88591 not null,
> +>colchrucs2 char(111) character set ucs2 not null,
> +>colvchriso varchar(113) character set iso88591 not null,
> +>colvchrucs2 varchar(115) character set ucs2 not null,
> +>PRIMARY KEY (colts ASC, colvchrucs2 DESC, colints ASC))
> +>SALT USING 9 PARTITIONS ON (colints, colvchrucs2, colts);
> --- SQL operation complete.
> SQL>LOAD INTO salttbl3 SELECT
> +>c1+c2*10+c3*100+c4*1000+c5*1,
> +>(c1+c2*10+c3*100+c4*1000+c5*1) - 5,
> +>mod(c1+c2*10+c3*100+c4*1000+c5*1, 65535),
> +>mod(c1+c2*10+c3*100+c4*1000+c5*1, 32767),
> +>(c1+c2*10+c3*100+c4*1000+c5*1) + 549755813888,
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as numeric(11,3)),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as float),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as decimal(11,2)),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as real),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as double precision),
> +>cast(converttimestamp(2106142992 +
> +>(864 * (c1+c2*10+c3*100+c4*1000+c5*1))) as date),
> +>time'00:00:00' + cast(mod(c1+c2*10+c3*100+c4*1000+c5*1,3)
> +>as interval minute),
> +>converttimestamp(2106142992 + (864 *
> +>(c1+c2*10+c3*100+c4*1000+c5*1)) + (100 * (c1+c2*10+c3*100)) +
> +>(6000 * (c1+c2*10)) + (36 * (c1+c2*10))),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as char(90) character set iso88591),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as char(111) character set ucs2),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as varchar(113) character set 
> iso88591),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as varchar(115) character set ucs2)
> +>from (values(1)) t
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c1
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c2
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c3
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c4
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c5;
> UTIL_OUTPUT
> 
> Task: LOAD Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  CLEANUP Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  CLEANUP Status: Ended  Object: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  DISABLE INDEXE  Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  DISABLE INDEXE  Status: Ended  Object: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  PREPARATION Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
>Rows Processed: 10
> Task:  PREPARATION Status: Ended  ET: 00:00:10.332
>   
> Task:  COMPLETION  Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  COMPLETION  Status: Ended  ET: 00:00:02.941
>   
> Task:  POPULATE INDEX  Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  POPULATE INDEX  Status: Ended  ET: 00:00:05.357
>   
> --- SQL operation complete.
> SQL>update 

[jira] [Updated] (TRAFODION-1438) Windows ODBC Driver is not able to create certificate file with long name length (over 30 bytes).

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1438:
--
Fix Version/s: (was: 2.2.0)
   2.4

> Windows ODBC Driver is not able to create certificate file with long name 
> length (over 30 bytes).
> -
>
> Key: TRAFODION-1438
> URL: https://issues.apache.org/jira/browse/TRAFODION-1438
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-windows
>Affects Versions: 2.0-incubating
> Environment: Windows
>Reporter: RuoYu Zuo
>Assignee: RuoYu Zuo
>Priority: Critical
> Fix For: 2.4
>
>
> The Windows ODBC driver stores the certificate file with the server name in its 
> file name; when the server name is long, the driver is not able to handle it. 
> Currently the driver uses only a 30-byte char* buffer to build the file name, so 
> when it copies a long server name into the file name, it crashes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1442) Linux ODBC Driver is not able to create certificate file with long name length (over 30 bytes).

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1442:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Linux ODBC Driver is not able to create certificate file with long name 
> length (over 30 bytes).
> ---
>
> Key: TRAFODION-1442
> URL: https://issues.apache.org/jira/browse/TRAFODION-1442
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-linux
>Affects Versions: 2.0-incubating
> Environment: Linunx
>Reporter: RuoYu Zuo
>Assignee: RuoYu Zuo
>Priority: Critical
> Fix For: 2.3
>
>
> As the Windows driver does, the Linux driver also reserves only 30 bytes for the 
> certificate file name, so there is a potential crash.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (TRAFODION-1221) LP Bug: 1450853 - Hybrid Query Cache: query with equals predicate on INTERVAL datatype should not have a non-parameterized literal.

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah reassigned TRAFODION-1221:
-

Assignee: Suresh Subbiah  (was: Howard Qin)

> LP Bug: 1450853 - Hybrid Query Cache: query with equals predicate on INTERVAL 
> datatype should not have a non-parameterized literal.
> ---
>
> Key: TRAFODION-1221
> URL: https://issues.apache.org/jira/browse/TRAFODION-1221
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Julie Thai
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.3
>
>
> For a query with an equals predicate on an INTERVAL datatype, both parameterized 
> and non-parameterized literals appear in the HybridQueryCacheEntries virtual table. 
> The non-parameterized literal should be empty.
> SQL>prepare XX from select * from F00INTVL where colintvl = interval '39998' 
> day(6);
> *** WARNING[6008] Statistics for column (COLKEY) from table 
> TRAFODION.QUERYCACHE_HQC.F00INTVL were not available. As a result, the access 
> path chosen might not be the best possible. [2015-04-30 13:31:48]
> --- SQL command prepared.
> SQL>execute show_entries;
> HKEY  
>NUM_HITS   NUM_PLITERALS 
> (EXPR)
>NUM_NPLITERALS (EXPR)  
>   
> 
>  -- - 
> 
>  -- 
> 
> SELECT * FROM F00INTVL WHERE COLINTVL = INTERVAL #NP# DAY ( #NP# ) ;  
> 0 1 
> INTERVAL '39998' DAY(6)
> 1 '39998'
> --- 1 row(s) selected.
> To reproduce:
> create table F00INTVL(
> colkey int not null primary key,
> colintvl interval day(6));
> load into F00INTVL select
> c1+c2*10+c3*100+c4*1000+c5*1+c6*10, --colkey
> cast(cast(mod(c1+c2*10+c3*100+c4*1000+c5*1+c6*10,99)
> as integer) as interval day(6)) --colintvl
> from (values(1)) t
> transpose 0,1,2,3,4,5,6,7,8,9 as c1
> transpose 0,1,2,3,4,5,6,7,8,9 as c2
> transpose 0,1,2,3,4,5,6,7,8,9 as c3
> transpose 0,1,2,3,4,5,6,7,8,9 as c4
> transpose 0,1,2,3,4,5,6,7,8,9 as c5
> transpose 0,1,2,3,4,5,6,7,8,9 as c6;
> update statistics for table F00INTVL on colintvl;
> prepare show_entries from select left(hkey,50), num_pliterals, 
> left(pliterals,15), num_npliterals, left(npliterals,15) from 
> table(HybridQueryCacheEntries('USER', 'LOCAL'));
> prepare XX from select * from F00INTVL where colintvl = interval '39998' 
> day(6);
> execute show_entries;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1221) LP Bug: 1450853 - Hybrid Query Cache: query with equals predicate on INTERVAL datatype should not have a non-parameterized literal.

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1221:
--
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1450853 - Hybrid Query Cache: query with equals predicate on INTERVAL 
> datatype should not have a non-parameterized literal.
> ---
>
> Key: TRAFODION-1221
> URL: https://issues.apache.org/jira/browse/TRAFODION-1221
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Julie Thai
>Assignee: Howard Qin
>Priority: Critical
> Fix For: 2.3
>
>
> For a query with an equals predicate on an INTERVAL datatype, both parameterized 
> and non-parameterized literals appear in the HybridQueryCacheEntries virtual table. 
> The non-parameterized literal should be empty.
> SQL>prepare XX from select * from F00INTVL where colintvl = interval '39998' 
> day(6);
> *** WARNING[6008] Statistics for column (COLKEY) from table 
> TRAFODION.QUERYCACHE_HQC.F00INTVL were not available. As a result, the access 
> path chosen might not be the best possible. [2015-04-30 13:31:48]
> --- SQL command prepared.
> SQL>execute show_entries;
> HKEY  
>NUM_HITS   NUM_PLITERALS 
> (EXPR)
>NUM_NPLITERALS (EXPR)  
>   
> 
>  -- - 
> 
>  -- 
> 
> SELECT * FROM F00INTVL WHERE COLINTVL = INTERVAL #NP# DAY ( #NP# ) ;  
> 0 1 
> INTERVAL '39998' DAY(6)
> 1 '39998'
> --- 1 row(s) selected.
> To reproduce:
> create table F00INTVL(
> colkey int not null primary key,
> colintvl interval day(6));
> load into F00INTVL select
> c1+c2*10+c3*100+c4*1000+c5*1+c6*10, --colkey
> cast(cast(mod(c1+c2*10+c3*100+c4*1000+c5*1+c6*10,99)
> as integer) as interval day(6)) --colintvl
> from (values(1)) t
> transpose 0,1,2,3,4,5,6,7,8,9 as c1
> transpose 0,1,2,3,4,5,6,7,8,9 as c2
> transpose 0,1,2,3,4,5,6,7,8,9 as c3
> transpose 0,1,2,3,4,5,6,7,8,9 as c4
> transpose 0,1,2,3,4,5,6,7,8,9 as c5
> transpose 0,1,2,3,4,5,6,7,8,9 as c6;
> update statistics for table F00INTVL on colintvl;
> prepare show_entries from select left(hkey,50), num_pliterals, 
> left(pliterals,15), num_npliterals, left(npliterals,15) from 
> table(HybridQueryCacheEntries('USER', 'LOCAL'));
> prepare XX from select * from F00INTVL where colintvl = interval '39998' 
> day(6);
> execute show_entries;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1115) LP Bug: 1438934 - MXOSRVRs don't get released after interrupting execution of the client application (ODB)

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1115:
--
Fix Version/s: (was: 2.2.0)
   2.4

> LP Bug: 1438934 - MXOSRVRs don't get released after interrupting execution of 
> the client application (ODB)
> --
>
> Key: TRAFODION-1115
> URL: https://issues.apache.org/jira/browse/TRAFODION-1115
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Reporter: Chirag Bhalgami
>Assignee: Daniel Lu
>Priority: Critical
> Fix For: 2.4
>
>
> MXOSRVRs are not getting released when ODB application is interrupted during 
> execution.
> After restarting DCS, it still shows that odb app is occupying MXOSRVRs.
> Also, executing odb throws following error message:
> -
> odb [2015-03-31 21:19:11]: starting ODBC connection(s)... (1) 1 2 3 4
> Connected to HP Database
> [3] 5,000 records inserted [commit]
> [2] odb [Oloadbuff(9477)] - Error (State: 25000, Native -8606)
> [Trafodion ODBC Driver][Trafodion Database] SQL ERROR:*** ERROR[8606] 
> Transaction subsystem TMF returned error 97 on a commit transaction. 
> [2015-03-31 21:39:47]
> [2] 0 records inserted [commit]
> [3] odb [Oloadbuff(9477)] - Error (State: 25000, Native -8606)
> [Trafodion ODBC Driver][Trafodion Database] SQL ERROR:*** ERROR[8606] 
> Transaction subsystem TMF returned error 97 on a commit transaction. 
> [2015-03-31 21:39:47]
> [3] 5,000 records inserted [commit]
> [4] odb [Oloadbuff(9477)] - Error (State: 25000, Native -8606)
> [Trafodion ODBC Driver][Trafodion Database] SQL ERROR:*** ERROR[8606] 
> Transaction subsystem TMF returned error 97 on a commit transaction. 
> [2015-03-31 21:39:47]
> [4] 0 records inserted [commit]
> [1] odb [Oloadbuff(9477)] - Error (State: 25000, Native -8606)
> [Trafodion ODBC Driver][Trafodion Database] SQL ERROR:*** ERROR[8606] 
> Transaction subsystem TMF returned error 97 on a commit transaction. 
> [2015-03-31 21:39:47]
> [1] 0 records inserted [commit]
> odb [sigcatch(4125)] - Received SIGINT. Exiting
> -
> Trafodion Build: Release [1.0.0-304-ga977ee7_Bld14], branch a977ee7-master, 
> date 20150329_083001)
> Hadoop Distro: HDP 2.2
> HBase Version: 0.98.4.2.2.0.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-646) LP Bug: 1371442 - ODBC driver AppUnicodeType setting is not in DSN level

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-646:
-
Fix Version/s: (was: 2.2.0)
   2.4

> LP Bug: 1371442 - ODBC driver AppUnicodeType setting is not in DSN level
> 
>
> Key: TRAFODION-646
> URL: https://issues.apache.org/jira/browse/TRAFODION-646
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-linux
>Reporter: Daniel Lu
>Assignee: Daniel Lu
>Priority: Critical
> Fix For: 2.4
>
>
> Currently, the AppUnicodeType setting can only be set in the [ODBC] section of 
> TRAFDSN or odbc.ini, or by an environment variable. This makes it global, 
> affecting all applications that use the same driver. We need to make it settable 
> at the DSN level, so that each application using the same driver can 
> independently be Unicode or not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-598) LP Bug: 1365821 - select (insert) with prepared stmt fails with rowset

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-598:
-
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1365821 - select (insert) with prepared stmt fails with rowset
> --
>
> Key: TRAFODION-598
> URL: https://issues.apache.org/jira/browse/TRAFODION-598
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-dcs
>Reporter: Aruna Sadashiva
>Assignee: RuoYu Zuo
>Priority: Critical
> Fix For: 2.3
>
>
> This came out of : https://answers.launchpad.net/trafodion/+question/253796
> "select syskey from (insert into parts values(?,?,?)) x" does not work as 
> expected with an ODBC rowset. A rowset with a single row works, but with 
> multiple rows in the rowset, no rows get inserted. 
> The workaround is to execute the select after the insert rowset operation. 
> It also fails with a JDBC batch; the T4 driver throws a "select not supported in 
> batch" exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1246) LP Bug: 1458011 - Change core file names in Sandbox

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1246:
--
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1458011 - Change core file names in Sandbox
> ---
>
> Key: TRAFODION-1246
> URL: https://issues.apache.org/jira/browse/TRAFODION-1246
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: installer
>Reporter: Amanda Moran
>Priority: Minor
> Fix For: 2.3
>
>
> When creating a sandbox we should change the name of core files so that users 
> will not have to do it themselves.
> echo "/tmp/cores/core.%e.%p.%h.%t" > /proc/sys/kernel/core_pattern
> Reference: https://sigquit.wordpress.com/2009/03/13/the-core-pattern/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2903) The COLUMN_SIZE fetched from mxosrvr is wrong

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2903:
--
Fix Version/s: (was: 2.2.0)
   2.3

> The COLUMN_SIZE fetched from mxosrvr is wrong
> -
>
> Key: TRAFODION-2903
> URL: https://issues.apache.org/jira/browse/TRAFODION-2903
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Affects Versions: any
>Reporter: XuWeixin
>Assignee: XuWeixin
>Priority: Major
> Fix For: 2.3
>
>
> 1. DDL: create table TEST (C1 date, C2 time, C3 timestamp)
> 2. 
> SQLColumns(hstmt,(SQLTCHAR*)"TRAFODION",SQL_NTS,(SQLTCHAR*)"SEABASE",SQL_NTS,(SQLTCHAR*)"TEST",SQL_NTS,(SQLTCHAR*)"%",SQL_NTS);
> 3. SQLBindCol(hstmt,7,SQL_C_LONG,,0,)
> 4. SQLFetch(hstmt)
> return  DATE ColPrec expect: 10 and actual: 11
>TIME ColPrec expect: 8 and actual: 9
>TIMESTAMP ColPrec expect: 19 and actual: 20



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2899) Catalog API SQLColumns does not support ODBC2.x

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2899:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Catalog API SQLColumns does not support ODBC2.x
> ---
>
> Key: TRAFODION-2899
> URL: https://issues.apache.org/jira/browse/TRAFODION-2899
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Affects Versions: any
> Environment: Centos 6.7
>Reporter: XuWeixin
>Assignee: XuWeixin
>Priority: Major
> Fix For: 2.3
>
>
> When using ODBC 2.x to get the description of columns, the call fails but no 
> error is returned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2472) Alter table hbase options is not transaction enabled.

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2472:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Alter table hbase options is not transaction enabled.
> -
>
> Key: TRAFODION-2472
> URL: https://issues.apache.org/jira/browse/TRAFODION-2472
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Reporter: Prashanth Vasudev
>Assignee: Prashanth Vasudev
>Priority: Major
> Fix For: 2.3
>
>
> Transactional DDL for alter commands is currently disabled. 
> A few statements, such as alter hbase options, are not disabled, which results 
> in unpredictable errors. 
> The initial fix is to make the alter statement not use a DDL transaction. 
> Following this, DDL transactions would be enhanced to support the ALTER TABLE 
> statement.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2462) TRAFCI gui installer does not work

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2462:
--
Fix Version/s: (was: 2.2.0)
   2.3

> TRAFCI gui installer does not work
> --
>
> Key: TRAFODION-2462
> URL: https://issues.apache.org/jira/browse/TRAFODION-2462
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-ci
>Affects Versions: 2.1-incubating
>Reporter: Anuradha Hegde
>Assignee: Alex Peng
>Priority: Major
> Fix For: 2.3
>
>
> There are several issues with trafci 
> 1. GUI installer on Windows does not work. Basically the browse button to 
> upload the T4 jar file and to specify the location of the trafci install dir does 
> not function; hence installation does not proceed
> 2.  After a successful install of trafci on Windows or *nix we notice that the 
> lib folder contains the jdbcT4 and jline jar files.  There is no need to 
> pre-package these files with the product
> 3.  Running any sql statements from TRAF_HOME folder returns the following 
> error 
> SQL>get tables;
> *** ERROR[1394] *** ERROR[16001] No message found for SQLCODE -1394.  
> MXID11292972123518900177319330906U300_877_SQL_CUR_2 
> [2017-01-25 20:44:03]
> But executing the same statement when you are in $TRAF_HOME/sql/scripts 
> folder works.
> 4. Executing the wrapper script 'trafci' returns a message as below and 
> proceeds with a successful connection. You don't see this message when executing 
> trafci.sh
> /core/sqf/sql/local_hadoop/dcs-2.1.0/bin/dcs-config.sh: line 
> 90: .: sqenv.sh: file not found
> 5. Executing sql statements in multiple lines causes an additional SQL prompt 
> to be displayed
> Connected to Apache Trafodion
> SQL>get tables
> +>SQL>
> 6. On successful connect and disconnect, when new mxosrvrs are picked up, the 
> default schema is changed from 'SEABASE' to 'USR' (this might be a server 
> side issue too but we will need to debug and find out)
> 7. The FC command does not work. See the trafci manual for examples of how the 
> FC command should display the previous statement; here it comes back with the 
> SQL prompt  
> SQL>fc
> show remoteprocess;
> SQL>   i
> show re moteprocess;
> SQL>
> 8. Did the error message format change?  This should have been a syntax error
>   
> SQL>gett;
> *** ERROR[15001] *** ERROR[16001] No message found for SQLCODE -15001.
> gett;
>^ (4 characters from start of SQL statement) 
> MXID11086222123521382568755030206U300_493_SQL_CUR_4 
> [2017-01-25 21:14:18]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-206) LP Bug: 1297518 - DCS - SQLProcedures and SQLProcedureColumns need to be supported

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-206:
-
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1297518 - DCS - SQLProcedures and SQLProcedureColumns need to be 
> supported
> --
>
> Key: TRAFODION-206
> URL: https://issues.apache.org/jira/browse/TRAFODION-206
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-general
>Reporter: Aruna Sadashiva
>Assignee: Kevin Xu
>Priority: Critical
> Fix For: 2.3
>
>
> DCS needs to implement support for SQLProcedures and SQLProcedureColumns, 
> since traf sql supports SPJs now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2664) Instance will be down when the zookeeper on name node has been down

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2664:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Instance will be down when the zookeeper on name node has been down
> ---
>
> Key: TRAFODION-2664
> URL: https://issues.apache.org/jira/browse/TRAFODION-2664
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: foundation
>Affects Versions: 2.2.0
> Environment: Test Environment:
> CDH5.4.8: 10.10.23.19:7180, total 6 nodes.
> HDFS-HA and DCS-HA: enabled
> OS: CentOS 6.8, physical machine.
> SW Build: R2.2.3 (EsgynDB_Enterprise Release 2.2.3 (Build release [sbroeder], 
> branch 1ce8d39-xdc_nari, date 11Jun17)
>Reporter: Jarek
>Assignee: Gonzalo E Correa
>Priority: Critical
>  Labels: build
> Fix For: 2.3
>
>
> Description: Instance will be down when the zookeeper on name node has been 
> down
> Test Steps:
> Step 1. Start OE and 4 long queries with trafci on the first node 
> esggy-clu-n010
> Step 2. Wait several minutes and stop zookeeper on name node of node 
> esggy-clu-n010  in Cloudera Manager page.
> Step 3. With trafci, run a basic query and 4 long queries again.
> In Step 3 above, the whole instance goes down after a while. 
> I tried this test scenario several times and always found the instance 
> down.
> Timestamp:
> Test Start Time: 20170616132939
> Test End  Time: 20170616134350
> Stop zookeeper on name node of node esggy-clu-n010: 20170616133344
> Check logs:
> 1) Each node displays the following error:
> 2017-06-16 13:33:46,276, ERROR, MON, Node Number: 0,, PIN: 5017 , Process 
> Name: $MONITOR,,, TID: 5429, Message ID: 101371801, 
> [CZClient::IsZNodeExpired], zoo_exists() for 
> /trafodion/instance/cluster/esggy-clu-n010.esgyn.cn failed with error 
> ZCONNECTIONLOSS
> 2) Zookeeper displays:
> ls /trafodion/instance/cluster
> []
> So, It seems zclient has been lost on each node.
> Location of logs:
> esggy-clu-n010: 
> /data4/jarek/ha.interactive/trafodion_and_cluster_logs/cluster_logs.20170616134816.tar.gz
>  and trafodion_logs.20170616134816.tar.gz
> By the way, because the size of the logs exceeds the upload limit, I 
> cannot attach them to this JIRA.
> How many zookeeper quorum servers in the cluster? total 3.
>   
> dcs.zookeeper.quorum
> 
> esggy-clu-n010.esgyn.cn,esggy-clu-n011.esgyn.cn,esggy-clu-n012.esgyn.cn
>   
> How to access the cluster?
> 1. Login 10.10.10.8 from US machine: trafodion/traf123
> 2. Login 10.10.23.19 from 10.10.10.8: trafodion/traf123



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1923) executor/TEST106 hangs at drop table at times

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1923:
--
Fix Version/s: (was: 2.2.0)
   2.3

> executor/TEST106 hangs at drop table at times
> -
>
> Key: TRAFODION-1923
> URL: https://issues.apache.org/jira/browse/TRAFODION-1923
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.0-incubating
>Reporter: Selvaganesan Govindarajan
>Assignee: Prashanth Vasudev
>Priority: Critical
> Fix For: 2.3
>
>
> executor/TEST106 hangs at
> drop table t106a 
> Currently the executor/TEST106 test is not run as part of the daily regression build.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2307) Documentation update for REST and DCS

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2307:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Documentation update for REST and DCS
> -
>
> Key: TRAFODION-2307
> URL: https://issues.apache.org/jira/browse/TRAFODION-2307
> Project: Apache Trafodion
>  Issue Type: Improvement
>Affects Versions: any
>Reporter: Anuradha Hegde
>Assignee: Anuradha Hegde
>Priority: Major
> Fix For: 2.3
>
>
> As an improvement, the information documented for DCS and REST will be updated



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2305) After a region split the transactions to check against list is not fully populated

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2305:
--
Fix Version/s: (was: 2.2.0)
   2.3

> After a region split the transactions to check against list is not fully 
> populated
> --
>
> Key: TRAFODION-2305
> URL: https://issues.apache.org/jira/browse/TRAFODION-2305
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Affects Versions: any
>Reporter: Sean Broeder
>Assignee: Sean Broeder
>Priority: Major
> Fix For: 2.3
>
>
> As part of a region split all current transactions and their relationships to 
> one another are written out into a ZKNode entry and later read in by the 
> daughter regions.  However, the transactionsToCheck list is not correctly 
> populated



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (TRAFODION-1748) Error 97 received with large upsert and select statements

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah closed TRAFODION-1748.
-

> Error 97 received with large upsert and select statements
> -
>
> Key: TRAFODION-1748
> URL: https://issues.apache.org/jira/browse/TRAFODION-1748
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Affects Versions: 1.3-incubating
>Reporter: Sean Broeder
>Assignee: Sean Broeder
>Priority: Major
> Fix For: 2.0-incubating
>
>
> From Selva-
> The script has just upserted 10 rows and querying these 1 rows 
> repeatedly.  From the RS logs, it looks like memstore got flushed. Currently, 
> I have made the process to loop on getting the error 8606. 
>  
> This query involves ESPs. This error is coming from sqlci at the time of 
> commit.  I assume sqlci must be looping. The looping ends after 3 minutes to 
> proceed further. You can also put sqlci into debug and set loopError=0 to 
> come out of the loop to proceed further.  I also created a core file of sqlci 
> at ~/selva/core.44100.
>  
> If the query is finished, you can do the following to reproduce this issue
>  
> cd ~/selva/LSEG/master/stream
> sqlci
> log traf_stream_run.log ;
> obey traf_stream_run.sql ;
> log ;
> 
> Looking at dtm tracing I can see the regions are throwing an 
> UnknownTransactionException at prepare time, which causes the TM to refresh 
> the RegionLocations and redrive the prepare messages.  These again fail and 
> the transaction is aborted and this eventually percolates back to SQL as an 
> error 97.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (TRAFODION-1748) Error 97 received with large upsert and select statements

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah resolved TRAFODION-1748.
---
Resolution: Fixed

> Error 97 received with large upsert and select statements
> -
>
> Key: TRAFODION-1748
> URL: https://issues.apache.org/jira/browse/TRAFODION-1748
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Affects Versions: 1.3-incubating
>Reporter: Sean Broeder
>Assignee: Sean Broeder
>Priority: Major
> Fix For: 2.0-incubating
>
>
> From Selva-
> The script has just upserted 10 rows and querying these 1 rows 
> repeatedly.  From the RS logs, it looks like memstore got flushed. Currently, 
> I have made the process to loop on getting the error 8606. 
>  
> This query involves ESPs. This error is coming from sqlci at the time of 
> commit.  I assume sqlci must be looping. The looping ends after 3 minutes to 
> proceed further. You can also put sqlci into debug and set loopError=0 to 
> come out of the loop to proceed further.  I also created a core file of sqlci 
> at ~/selva/core.44100.
>  
> If the query is finished, you can do the following to reproduce this issue
>  
> cd ~/selva/LSEG/master/stream
> sqlci
> log traf_stream_run.log ;
> obey traf_stream_run.sql ;
> log ;
> 
> Looking at dtm tracing I can see the regions are throwing an 
> UnknownTransactionException at prepare time, which causes the TM to refresh 
> the RegionLocations and redrive the prepare messages.  These again fail and 
> the transaction is aborted and this eventually percolates back to SQL as an 
> error 97.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2597) ESP cores seen during daily builds after hive tests run

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2597:
--
Fix Version/s: (was: 2.2.0)
   2.3

> ESP cores seen during daily builds after hive tests run
> ---
>
> Key: TRAFODION-2597
> URL: https://issues.apache.org/jira/browse/TRAFODION-2597
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Sandhya Sundaresan
>Priority: Major
> Fix For: 2.3
>
>
> After the hive tests run and pass successfully, sometimes we see core files of 
> ESP with the following trace:
> Thread 6 (Thread 0x7fe4f36e7700 (LWP 46076)):
> #0 0x7fe55ecb168c in pthread_cond_wait@@GLIBC_2.3.2 () from 
> /lib64/libpthread.so.0
> 001 0x7fe565841f1d in ExLobLock::wait (this=0x2dc0580) at 
> ../exp/ExpLOBaccess.cpp:3367
> 002 0x7fe565842f4a in ExLobGlobals::getHdfsRequest (this=0x2dc0550) 
> at ../exp/ExpLOBaccess.cpp:3464
> 003 0x7fe565846a31 in ExLobGlobals::doWorkInThread (this=0x2dc0550) 
> at ../exp/ExpLOBaccess.cpp:3494
> 004 0x7fe565846a69 in workerThreadMain (arg=) at 
> ../exp/ExpLOBaccess.cpp:3300
> 005 0x7fe55ecadaa1 in start_thread () from /lib64/libpthread.so.0
> 006 0x7fe561f1caad in clone () from /lib64/libc.so.6
> Thread 5 (Thread 0x7fe5532cd700 (LWP 45641)):
> #0 0x7fe561f1d0a3 in epoll_wait () from /lib64/libc.so.6
> 001 0x7fe561be08e1 in SB_Trans::Sock_Controller::epoll_wait 
> (this=0x7fe561e32de0, pp_where=0x7fe561c043a8 "Sock_Comp_Thread::run", 
> pv_timeout=-1) at sock.cpp:366
> 002 0x7fe561bdfcf3 in SB_Trans::Sock_Comp_Thread::run 
> (this=0x19190b0) at sock.cpp:108
> 003 0x7fe561bdfb2d in sock_comp_thread_fun (pp_arg=0x19190b0) at 
> sock.cpp:78
> 004 0x7fe5605ce71f in SB_Thread::Thread::disp (this=0x19190b0, 
> pp_arg=0x19190b0) at thread.cpp:214
> 005 0x7fe5605ceb77 in thread_fun (pp_arg=0x19190b0) at thread.cpp:310
> 006 0x7fe5605d1f3e in sb_thread_sthr_disp (pp_arg=0x1922240) at 
> threadl.cpp:270
> 007 0x7fe55ecadaa1 in start_thread () from /lib64/libpthread.so.0
> 008 0x7fe561f1caad in clone () from /lib64/libc.so.6
> Thread 4 (Thread 0x7fe553ecf700 (LWP 45627)):
> #0 0x7fe561e676dd in sigtimedwait () from /lib64/libc.so.6
> 001 0x7fe561ba578f in local_monitor_reader (pp_arg=0x28fd) at 
> ../../../monitor/linux/clio.cxx:291
> 002 0x7fe55ecadaa1 in start_thread () from /lib64/libpthread.so.0
> 003 0x7fe561f1caad in clone () from /lib64/libc.so.6
> Thread 3 (Thread 0x7fe4f5fdf700 (LWP 45725)):
> #0 0x7fe55ecb3a00 in sem_wait () from /lib64/libpthread.so.0
> 001 0x7fe563c78c41 in ?? () from 
> /usr/lib/jvm/java-1.8.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> 002 0x7fe563c6fa4a in ?? () from 
> /usr/lib/jvm/java-1.8.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> 003 0x7fe563db7335 in ?? () from 
> /usr/lib/jvm/java-1.8.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> 004 0x7fe563db7590 in ?? () from 
> /usr/lib/jvm/java-1.8.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> 005 0x7fe563c7a8b2 in ?? () from 
> /usr/lib/jvm/java-1.8.0-openjdk.x86_64/jre/lib/amd64/server/libjvm.so
> 006 0x7fe55ecadaa1 in start_thread () from /lib64/libpthread.so.0
> 007 0x7fe561f1caad in clone () from /lib64/libc.so.6
> Thread 2 (Thread 0x7fe567e2e920 (LWP 45584)):
> #0 0x7fe55ecb1a5e in pthread_cond_timedwait@@GLIBC_2.3.2 () from 
> /lib64/libpthread.so.0
> 001 0x7fe5605d136c in SB_Thread::CV::wait (this=0x1902b38, pv_sec=0, 
> pv_us=39) at 
> /home/jenkins/workspace/build-rh6-AdvEnt2.3-release@2/trafodion/core/sqf/export/include/seabed/int/thread.inl:652
> 002 0x7fe5605d1431 in SB_Thread::CV::wait (this=0x1902b38, 
> pv_lock=true, pv_sec=0, pv_us=39) at 
> /home/jenkins/workspace/build-rh6-AdvEnt2.3-release@2/trafodion/core/sqf/export/include/seabed/int/thread.inl:704
> 003 0x7fe561bb7c6b in SB_Ms_Event_Mgr::wait (this=0x1902a40, 
> pv_us=39) at mseventmgr.inl:354
> 004 0x7fe561bd8c6e in XWAIT_com (pv_mask=1280, pv_time=40, 
> pv_residual=false) at pctl.cpp:982
> 005 0x7fe561bd8a6f in XWAITNO0 (pv_mask=1280, pv_time=40) at 
> pctl.cpp:905
> 006 0x7fe564e2b59a in IpcSetOfConnections::waitOnSet 
> (this=0x7fe5532ce288, timeout=-1, calledByESP=1, timedout=0x7ffd57ec2c88) at 
> ../common/Ipc.cpp:1607
> 007 0x0040718c in waitOnAll (argc=3, argv=0x7ffd57ec2de8, 
> guaReceiveFastStart=0x0) at ../common/Ipc.h:3094
> 008 runESP (argc=3, argv=0x7ffd57ec2de8, guaReceiveFastStart=0x0) at 
> ../bin/ex_esp_main.cpp:416
> 009 0x004075d3 in main (argc=3, argv=0x7ffd57ec2de8) at 
> 

[jira] [Updated] (TRAFODION-1112) LP Bug: 1438888 - Error message incorrect when describing non existing procedure

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1112:
--
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1438888 - Error message incorrect when describing non existing 
> procedure
> 
>
> Key: TRAFODION-1112
> URL: https://issues.apache.org/jira/browse/TRAFODION-1112
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-security
>Reporter: Paul Low
>Assignee: Suresh Subbiah
>Priority: Minor
> Fix For: 2.3
>
>
> Minor issue.
> Users may be confused by  the error message that returns when trying to 
> execute 'showddl procedure T1' when T1 is not a procedure.
> T1 does not exist as a procedure, but T1 does exist as a table object.
> The text in the error message is technically incorrect because object T1 does 
> exist, just not as a procedure.
> SQL>create schema schema1;
> --- SQL operation complete.
> SQL>set schema schema1;
> --- SQL operation complete.
> SQL>create table t1 (c1 int not null primary key, c2 int);
> --- SQL operation complete.
> SQL>grant select on table t1 to qauser_sqlqaa;
> --- SQL operation complete.
> SQL>showddl procedure t1;
> *** ERROR[1389] Object T1 does not exist in Trafodion. 
> *** ERROR[4082] Object TRAFODION.SCHEMA1.T1 does not exist or is inaccessible
> SQL>drop schema schema1 cascade;
> --- SQL operation complete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1803) Range delete on tables with nullable key columns deletes fewer rows

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1803:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Range delete on tables with nullable key columns deletes fewer rows 
> 
>
> Key: TRAFODION-1803
> URL: https://issues.apache.org/jira/browse/TRAFODION-1803
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 1.2-incubating
>Reporter: Suresh Subbiah
>Assignee: Suresh Subbiah
>Priority: Major
> Fix For: 2.3
>
>
> When a table has nullable columns in the primary/store by key and these 
> columns have null values, delete and update statements may affect fewer rows 
> than intended.
> For example
> >>cqd allow_nullable_unique_key_constraint 'on' ;
> --- SQL operation complete.
> CREATE TABLE TRAFODION.JIRA.T1
>   (
> AINT DEFAULT NULL SERIALIZED
>   , BINT DEFAULT NULL SERIALIZED
>   , PRIMARY KEY (A ASC, B ASC)
>   )
> ;
> --- SQL operation complete.
> >>insert into t1 values (1, null) ;
> --- 1 row(s) inserted.
> >>delete from t1 where a = 1 ;
> --- 0 row(s) deleted.
> >>delete from t1 ;
> --- 0 row(s) deleted.
> >>delete from t1 where a =1 and b is null ;
> --- 1 row(s) deleted.
> >>explain delete from t1 where a =1  ;
> TRAFODION_DELETE ==  SEQ_NO 2NO CHILDREN
> TABLE_NAME ... TRAFODION.JIRA.T1
> REQUESTS_IN . 10
> ROWS/REQUEST . 1
> EST_OPER_COST  0.17
> EST_TOTAL_COST ... 0.17
> DESCRIPTION
>   max_card_est .. 99
>   fragment_id  0
>   parent_frag  (none)
>   fragment_type .. master
>   iud_type ... trafodion_delete TRAFODION.JIRA.T1
>   predicate .. (A = %(1)) and (B = B)
>   begin_key .. (A = %(1)) and (B = B)
>   end_key  (A = %(1)) and (B = B)
>  A similar issue can be seen for update statements too.
>  
>  >>CREATE TABLE TRAFODION.JIRA.T2
>   (
> AINT DEFAULT NULL SERIALIZED
>   , BINT DEFAULT NULL SERIALIZED
>   , CINT DEFAULT NULL SERIALIZED
>   , PRIMARY KEY (A ASC, B ASC)
>   )
> ;+>+>+>+>+>+>+>
> --- SQL operation complete.
> >>
> >>
> >>insert into t2 values (1, null, 3) ;
> --- 1 row(s) inserted.
> >>update t2 set c = 30 where a = 1 ;
> --- 0 row(s) updated.
>  
> TRAFODION_UPDATE ==  SEQ_NO 2NO CHILDREN
> TABLE_NAME ... TRAFODION.JIRA.T2
> REQUESTS_IN .. 1
> ROWS_OUT . 1
> EST_OPER_COST  0
> EST_TOTAL_COST ... 0
> DESCRIPTION
>   max_card_est .. 99
>   fragment_id  0
>   parent_frag  (none)
>   fragment_type .. master
>   iud_type ... trafodion_update TRAFODION.JIRA.T2
>   new_rec_expr ... (C assign %(30))
>   predicate .. (A = %(1)) and (B = B)
>   begin_key .. (A = %(1)) and (B = B)
>   end_key  (A = %(1)) and (B = B)
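
By analogy with the delete example above, a hedged sketch of the corresponding 
workaround for the update case (not shown in the report, so treat it as an 
assumption): add an explicit IS NULL predicate on the nullable key column so the 
generated key predicate no longer reduces to (B = B):

update t2 set c = 30 where a = 1 and b is null;

With that predicate the begin/end key can be built from A = 1 and B IS NULL, and 
the row inserted as (1, null, 3) should be updated.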



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1801) Inserting NULL for all key columns in a table causes a failure

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1801:
--
Fix Version/s: (was: 2.2.0)
   2.3

> Inserting NULL for all key columns in a table causes a failure
> --
>
> Key: TRAFODION-1801
> URL: https://issues.apache.org/jira/browse/TRAFODION-1801
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.2-incubating
>Reporter: Suresh Subbiah
>Assignee: Suresh Subbiah
>Priority: Major
> Fix For: 2.3
>
>
> cqd allow_nullable_unique_key_constraint 'on' ;
> >>create table t1 (a int, b int, primary key (a,b)) ;
> --- SQL operation complete.
> >>showddl t1 ;
> CREATE TABLE TRAFODION.JIRA.T1
>   (
> AINT DEFAULT NULL SERIALIZED
>   , BINT DEFAULT NULL SERIALIZED
>   , PRIMARY KEY (A ASC, B ASC)
>   )
> ;
> --- SQL operation complete.
> >>insert into t1(a) values (1);
> --- 1 row(s) inserted.
> >>insert into t1(b) values (2) ;
> --- 1 row(s) inserted.
> >>select * from t1 ;
> AB  
> ---  ---
>   1?
>   ?2
> --- 2 row(s) selected.
> >>insert into t1(a) values(3) ;
> --- 1 row(s) inserted.
> >>select * from t1 ;
> AB  
> ---  ---
>   1?
>   3?
>   ?2
> --- 3 row(s) selected.
> -- fails
> >>insert into t1 values (null, null) ;
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::checkAndInsertRow returned error HBASE_ACCESS_ERROR(-706). 
> Cause: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=35, exceptions:
> Tue Feb 02 19:58:34 UTC 2016, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@4c2e0b96, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-777) LP Bug: 1394488 - Bulk load for volatile table gets FileNotFoundException

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-777:
-
Fix Version/s: (was: 2.2.0)
   2.4

> LP Bug: 1394488 - Bulk load for volatile table gets FileNotFoundException
> -
>
> Key: TRAFODION-777
> URL: https://issues.apache.org/jira/browse/TRAFODION-777
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Barry Fritchman
>Assignee: Suresh Subbiah
>Priority: Major
> Fix For: 2.4
>
>
> When attempting to perform a bulk load into a volatile table, like this:
> create volatile table vps primary key (ps_partkey, ps_suppkey) no load as 
> select * from partsupp;
> cqd comp_bool_226 'on';
> cqd TRAF_LOAD_PREP_TMP_LOCATION '/bulkload/';
> cqd TRAF_LOAD_TAKE_SNAPSHOT 'OFF';
> load into vps select * from partsupp;
> An error 8448 is raised due to a java.io.FileNotFoundException:
> Task: LOAD Status: StartedObject: TRAFODION.HBASE.VPS
> Task:  CLEANUP Status: StartedObject: TRAFODION.HBASE.VPS
> Task:  CLEANUP Status: Ended  Object: TRAFODION.HBASE.VPS
> Task:  DISABLE INDEXE  Status: StartedObject: TRAFODION.HBASE.VPS
> Task:  DISABLE INDEXE  Status: Ended  Object: TRAFODION.HBASE.VPS
> Task:  PREPARATION Status: StartedObject: TRAFODION.HBASE.VPS
>Rows Processed: 160 
> Task:  PREPARATION Status: Ended  ET: 00:01:20.660
> Task:  COMPLETION  Status: StartedObject: TRAFODION.HBASE.VPS
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::doBulkLoad returned error HBASE_DOBULK_LOAD_ERROR(-714). 
> Cause: 
> java.io.FileNotFoundException: File /bulkload/TRAFODION.HBASE.VPS does not 
> exist.
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:654)
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:102)
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708)
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:708)
> org.trafodion.sql.HBaseAccess.HBulkLoadClient.doBulkLoad(HBulkLoadClient.java:442)
> It appears that the presumed qualification of the volatile table name is 
> incorrect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1212) LP Bug: 1449732 - Drop schema cascade returns error 1069

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1212:
--
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1449732 - Drop schema cascade returns error 1069
> 
>
> Key: TRAFODION-1212
> URL: https://issues.apache.org/jira/browse/TRAFODION-1212
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmu
>Reporter: Weishiun Tsai
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.3
>
>
> The frequency of ‘drop schema cascade’ returning error 1069 is still pretty 
> high, even after several attempts to address this issue.  This is causing a 
> lot of headache for the QA regression testing.  After each regression testing 
> run, there are always several schemas that couldn’t be dropped and needed to 
> be manually cleaned up.
> Multiple issues may lead to this problem.  This just happens to be one 
> scenario that is quite reproducible now.  In this particular scenario, the 
> schema contains a TMUDF library qaTmudfLib and 2 TMUDF functions qa_tmudf1 
> and qa_tmudf2.  qa_tmudf1 is a valid function, while qa_tmudf2 has a bogus 
> external name and a call to it is expected to see an error.
> After invoking both, a drop schema cascade almost always returns error 1069.
> This is seen on the r1.1.0rc3 (v0427) build installed on a workstation and it 
> is fairly reproducible with this build.  To reproduce it:
> (1) Download the attached tar file and untar it to get the 3 files in there. 
> Put the files in any directory .
> (2) Make sure that you have run ./sqenv.sh of your Trafodion instance first 
> as building UDF needs $MY_SQROOT for the header files.
> (3) Run build.sh
> (4) Change the line “create library qaTmudfLib file 
> '/qaTMUdfTest.so';” in mytest.sql and fill in 
> (5) From sqlci, obey mytest.sql
> Here is the execution output:
> >>log mytest.log clear;
> >>drop schema mytest cascade;
> *** ERROR[1003] Schema TRAFODION.MYTEST does not exist.
> --- SQL operation failed with errors.
> >>create schema mytest;
> --- SQL operation complete.
> >>set schema mytest;
> --- SQL operation complete.
> >>
> >>create library qaTmudfLib file '/qaTMUdfTest.so';
> --- SQL operation complete.
> >>
> >>create table mytable (a int, b int);
> --- SQL operation complete.
> >>insert into mytable values (1,1),(2,2);
> --- 2 row(s) inserted.
> >>
> >>create table_mapping function qa_tmudf1()
> +>external name 'QA_TMUDF'
> +>language cpp
> +>library qaTmudfLib;
> --- SQL operation complete.
> >>
> >>select * from UDF(qa_tmudf1(TABLE(select * from mytable)));
> AB
> ---  ---
>   11
>   22
> --- 2 row(s) selected.
> >>
> >>create table_mapping function qa_tmudf2()
> +>external name 'DONTEXIST'
> +>language cpp
> +>library qaTmudfLib;
> --- SQL operation complete.
> >>
> >>select * from UDF(qa_tmudf2(TABLE(select * from mytable)));
> *** ERROR[11246] An error occurred locating function 'DONTEXIST' in library 
> 'qaTMUdfTest.so'.
> *** ERROR[8822] The statement was not prepared.
> >>
> >>drop schema mytest cascade;
> *** ERROR[1069] Schema TRAFODION.MYTEST could not be dropped.
> --- SQL operation failed with errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1145) LP Bug: 1441784 - UDF: Lack of checking for scalar UDF input/output values

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1145:
--
Fix Version/s: (was: 2.2.0)
   2.3

> LP Bug: 1441784 - UDF: Lack of checking for scalar UDF input/output values
> --
>
> Key: TRAFODION-1145
> URL: https://issues.apache.org/jira/browse/TRAFODION-1145
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Weishiun Tsai
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.3
>
> Attachments: udf_bug (1).tar
>
>
> Ideally, input/output values for a scalar UDF should be verified at the 
> create function time.  But this check is not in place right now.  As a 
> result, a lot of ill-constructed input/output values are left to be handled 
> at the run time.  And the behavior at the run time is haphazard at best.
> Here shows 3 examples of such behavior:
> (a) myudf1 defines 2 input values with the same name.  Create function does 
> not return an error.  But the invocation at the run time returns a perplexing 
> 4457 error indicating internal out-of-range index error.
> (b) myudf2 defines an input value and an output value with the same name.  
> Create function does not return an error.  But the invocation at the run time 
> returns a perplexing 4457 error complaining that there is no output value.
> (c) myudf3 defines 2 output values with the same name.  Create function does 
> not return an error.  The invocation at the run time simply ignores the 2nd 
> output value, as well as the fact that the C function only defines 1 output 
> value.  It returns one value as if the 2nd output value was never defined at 
> all.
> This is seen on the v0407 build installed on a workstation. To reproduce it:
> (1) Download the attached tar file and untar it to get the 3 files in there. 
> Put the files in any directory .
> (2) Make sure that you have run ./sqenv.sh of your Trafodion instance first 
> as building UDF needs $MY_SQROOT for the header files.
> (3) run build.sh
> (4) Change the line “create library qa_udf_lib file '/myudf.so';”; in 
> mytest.sql and fill in 
> (5) From sqlci, obey mytest.sql
> 
> Here is the execution output:
> >>create schema mytest;
> --- SQL operation complete.
> >>set schema mytest;
> --- SQL operation complete.
> >>
> >>create library qa_udf_lib file '/myudf.so';
> --- SQL operation complete.
> >>
> >>create table mytable (a int, b int);
> --- SQL operation complete.
> >>insert into mytable values (1,1),(2,2),(3,3);
> --- 3 row(s) inserted.
> >>
> >>create function myudf1
> +>(INVAL int, INVAL int)
> +>returns (OUTVAL int)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_int32'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>select myudf1(a, b) from mytable;
> *** ERROR[4457] An error was encountered processing metadata for user-defined 
> function TRAFODION.MYTEST.MYUDF1.  Details: Internal error in 
> setInOrOutParam(): index position out of range..
> *** ERROR[8822] The statement was not prepared.
> >>
> >>create function myudf2
> +>(INVAL int)
> +>returns (INVAL int)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_int32'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>select myudf2(a) from mytable;
> *** ERROR[4457] An error was encountered processing metadata for user-defined 
> function TRAFODION.MYTEST.MYUDF2.  Details: User-defined functions must have 
> at least one registered output value.
> *** ERROR[8822] The statement was not prepared.
> >>
> >>create function myudf3
> +>(INVAL int)
> +>returns (OUTVAL int, OUTVAL int)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_int32'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>select myudf3(a) from mytable;
> OUTVAL
> ---
>   1
>   2
>   3
> --- 3 row(s) selected.
> >>
> >>drop function myudf1 cascade;
> --- SQL operation complete.
> >>drop function myudf2 cascade;
> --- SQL operation complete.
> >>drop function myudf3 cascade;
> --- SQL operation complete.
> >>drop library qa_udf_lib cascade;
> --- SQL operation complete.
> >>drop schema mytest cascade;
> --- SQL operation complete.
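
For contrast, a sketch of a well-formed declaration, with distinct input and 
output names and a single output value matching what the C function returns (the 
function name myudf_ok is made up; the library and entry point are the ones used 
above):

create function myudf_ok
(INVAL int)
returns (OUTVAL int)
language c
parameter style sql
external name 'qa_func_int32'
library qa_udf_lib
deterministic
state area size 1024
allow any parallelism
no sql;

select myudf_ok(a) from mytable;

The create-time checks requested here would reject declarations that deviate from 
this shape, such as the duplicate names in myudf1, myudf2, and myudf3 above.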



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1141) LP Bug: 1441378 - UDF: Multi-valued scalar UDF with clob/blob cores sqlci with SIGSEGV

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1141:
--
Fix Version/s: (was: 2.2.0)
   2.4

> LP Bug: 1441378 - UDF: Multi-valued scalar UDF with clob/blob cores sqlci 
> with SIGSEGV
> --
>
> Key: TRAFODION-1141
> URL: https://issues.apache.org/jira/browse/TRAFODION-1141
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Weishiun Tsai
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.4
>
> Attachments: udf_bug.tar
>
>
> While a single-valued scalar UDF works fine with the clob or blob data type, 
> a multi-valued scalar UDF cores sqlci with SIGSEGV even with just 2 clob or 
> blob output values. 
> Since clob and blob data types require large buffers, I am assuming this type 
> of scalar UDF is stressing the heap used internally somewhere.  But a core is 
> always bad.  If there is a limit on how clob and blob can be handled in a 
> scalar UDF, a check should be put in place and an error should be returned 
> more gracefully.
> This is seen on the v0407 build installed on a workstation. To reproduce it:
> (1) Download the attached tar file and untar it to get the 3 files in there. 
> Put the files in any directory .
> (2) Make sure that you have run ./sqenv.sh of your Trafodion instance first 
> as building UDF needs $MY_SQROOT for the header files.
> (3) run build.sh
> (4) Change the line “create library qa_udf_lib file '/myudf.so';”; in 
> mytest.sql and fill in 
> (5) From sqlci, obey mytest.sql
> ---
> Here is the execution output:
> >>create schema mytest;
> --- SQL operation complete.
> >>set schema mytest;
> --- SQL operation complete.
> >>
> >>create library qa_udf_lib file '/myudf.so';
> --- SQL operation complete.
> >>
> >>create function qa_udf_clob
> +>(INVAL clob)
> +>returns (c_clob clob)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_vcstruct'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>create function qa_udf_blob
> +>(INVAL blob)
> +>returns (c_blob blob)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_vcstruct'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>create function qa_udf_clob_mvf
> +>(INVAL clob)
> +>returns (c_clob1 clob, c_clob2 clob)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_vcstruct_mvf'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>create function qa_udf_blob_mvf
> +>(INVAL blob)
> +>returns (c_blob1 blob, c_blob2 blob)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_vcstruct_mvf'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>create table mytable (c_clob clob, c_blob blob);
> --- SQL operation complete.
> >>insert into mytable values ('CLOB_1', 'BLOB_1');
> --- 1 row(s) inserted.
> >>
> >>select
> +>cast(qa_udf_clob(c_clob) as char(10)),
> +>cast(qa_udf_blob(c_blob) as char(10))
> +>from mytable;
> (EXPR)  (EXPR)
> --  --
> CLOB_1  BLOB_1
> --- 1 row(s) selected.
> >>
> >>select qa_udf_clob_mvf(c_clob) from mytable;
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x74c5b9a2, pid=18680, tid=140737187650592
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_67-b01) (build 
> 1.7.0_67-b01)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libexecutor.so+0x2489a2]  ExSimpleSQLBuffer::init(NAMemory*)+0x92
> #
> # Core dump written. Default location: /core or core.18680
> #
> # An error report file with more information is saved as:
> # /hs_err_pid18680.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.sun.com/bugreport/crash.jsp
> #
> Aborted (core dumped)
> ---
> Here is the stack trace of the core.
> (gdb) bt
> #0  0x0039e28328a5 in raise () from /lib64/libc.so.6
> #1  0x0039e283400d in abort () from /lib64/libc.so.6
> #2  0x77120a55 in os::abort(bool) ()
>from /opt/home/tools/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #3  0x772a0f87 in VMError::report_and_die() ()
>from /opt/home/tools/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #4  0x772a150e 

[jira] [Updated] (TRAFODION-1014) LP Bug: 1421747 - SQL Upsert using load periodically not saving all rows

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-1014:
--
Fix Version/s: (was: 2.2.0)
   2.4

> LP Bug: 1421747 - SQL Upsert using load periodically not saving all rows
> 
>
> Key: TRAFODION-1014
> URL: https://issues.apache.org/jira/browse/TRAFODION-1014
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Gary W Hall
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.4
>
>
> When running a script that initiates 32 parallel streams loading a table, we 
> have found that periodically there are gaps in the resulting saved data...for 
> example we will find that we are missing stock items #29485 thru #30847 
> inclusive for Warehouse #5.  The number of gaps found for a given load run 
> varies...normally none, but I've seen as many as eight gaps of missing data.
> The sql statement used in all streams is as follows:
> sql_statement = "upsert using load into " + stock_table_name
>   + " (S_I_ID, S_W_ID, S_QUANTITY, S_DIST_01, S_DIST_02, 
> S_DIST_03, S_DIST_04,"
>   + " S_DIST_05, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, 
> S_DIST_10,"
>   + " S_YTD, S_ORDER_CNT, S_REMOTE_CNT, S_DATA)"
>   + " values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?)";
> This is not easily repeatable…I’ve run the script to drop/create/load this 
> table 12 times today, resulting in some missing rows 4 of the 12 times.  
> Worst case we were missing 0.03% of the required rows in the table…obviously, 
> ANY missing data is not acceptable.
> Our test environment control parameters (in case any are of value to you)...
> OrderEntryLoader
>   Load Starting : 2015-02-13 04:58:13
>PropertyFile : trafodion.properties
>Datebase : trafodion
>  Schema : trafodion.javabench
> ScaleFactor : 512
> Streams : 32
>Maintian : true
>Load : true
>  AutoCommit : true
>   BatchSize : 1000
>  Upsert : true
>   UsingLoad : true
>  IntervalLength : 60



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-531) LP Bug: 1355034 - SPJ w result set failed with ERROR[8413]

2018-03-05 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-531:
-
Fix Version/s: (was: 2.2.0)
   2.4

> LP Bug: 1355034 - SPJ w result set failed with ERROR[8413]
> --
>
> Key: TRAFODION-531
> URL: https://issues.apache.org/jira/browse/TRAFODION-531
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Chong Hsu
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.4
>
>
> Tested with Trafodion build, 20140801-0830.
> Calling a SPJ with result set:
>public static void NS786(String paramString, ResultSet[] 
> paramArrayOfResultSet)
>  throws Exception
>{
>  String str1 = "jdbc:default:connection";
>  
>  Connection localConnection = DriverManager.getConnection(str1);
>  String str2 = "select * from " + paramString;
>  Statement localStatement = localConnection.createStatement();
>  paramArrayOfResultSet[0] = localStatement.executeQuery(str2);
>}
> it failed with ERROR[8413]:
> *** ERROR[8413] The string argument contains characters that cannot be 
> converted. [2014-08-11 04:06:32]
> *** ERROR[8402] A string overflow occurred during the evaluation of a 
> character expression. Conversion of Source Type:LARGEINT(REC_BIN64_SIGNED) 
> Source Value:79341348341248 to Target Type:CHAR(REC_BYTE_F_ASCII). 
> [2014-08-11 04:06:32]
> The SPJ Jar file is attached. Here are the steps to produce the error:
>   
> set schema testspj;
> create library spjrs file '//Testrs.jar';
> create procedure RS786(varchar(100))
>language java 
>parameter style java  
>external name 'Testrs.NS786'
>dynamic result sets 1
>library spjrs;
> create table datetime_interval (
> date_key    date not null,
> date_col    date default date '0001-01-01',
> time_col    time default time '00:00:00',
> timestamp_col   timestamp
>  default timestamp 
> '0001-01-01:00:00:00.00',
> interval_year   interval year default interval '00' year,
> yr2_to_mo   interval year to month
>  default interval '00-00' year to month,
> yr6_to_mo   interval year(6) to month
>  default interval '00-00' year(6) to 
> month,
> yr16_to_mo  interval year(16) to month default
>   interval '-00' year(16) to 
> month,
> year18  interval year(18) default
>  interval '00' year(18),
> day2interval day default interval '00' day,
> day18   interval day(18)
>  default interval '00' 
> day(18),
> day16_to_hr interval day(16) to hour
> default interval ':00' day(16) to 
> hour,
> day14_to_min    interval day(14) to minute default  
>   interval '00:00:00' day(14) to 
> minute,
> day5_to_second6 interval day(5) to second(6) default
>  interval '0:00:00:00.00' day(5) to second(6),
> hour2   interval hour default interval '00' hour,
> hour18  interval hour(18)
>  default interval '00' 
> hour(18),
> hour16_to_min   interval hour(16) to minute default
>   interval ':00' hour(16) to minute,
> hour14_to_ss0   interval hour(14) to second(0) default
>   interval '00:00:00' hour(14) to 
> second(0),
> hour10_to_second4    interval hour(10) to second(4) default
>  interval '00:00:00.' hour(10) to 
> second(4),
> min2interval minute default interval '00' minute,
> min18   interval minute(18) default
>  interval '00' minute(18),
> min13_s3    interval minute(13) to second(3) default
> interval '0:00.000' minute(13) to 
> second(3),
> min16_s0    interval minute(16) to second(0) default
> interval ':00' minute(16) to 
> second(0),
> seconds interval second default interval '00' second,
> seconds5interval second(5) default interval '0' second(5),
> seconds18   interval second(18,0) default
>  interval '00' second(18,0),
> seconds15   interval 

[jira] [Issue Comment Deleted] (TRAFODION-2943) shebang in file core/sqf/sql/scripts/cleanat is broken

2018-02-01 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2943:
--
Comment: was deleted

(was: Whoops, this is not AdvEnt2.3, but Trafodion2.3.  Let me check again.

 )

> shebang in file core/sqf/sql/scripts/cleanat is broken
> --
>
> Key: TRAFODION-2943
> URL: https://issues.apache.org/jira/browse/TRAFODION-2943
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dev-environment
>Affects Versions: any
>Reporter: Wenjun Zhu
>Priority: Trivial
> Fix For: 2.3
>
>
> The file core/sqf/sql/scripts/cleanat is broken: the shebang should be '#!', 
> but it lacks '#' at present.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (TRAFODION-2943) shebang in file core/sqf/sql/scripts/cleanat is broken

2018-02-01 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah updated TRAFODION-2943:
--
Comment: was deleted

(was: Can you please help confirm where you are seeing this?  I have checked out 
a new branch of AdvEnt2.3 and I see the !.  Git blame shows the source line was 
last modified in May 2016.

[sbroeder@edev05 Adv2.3]$ git blame core/sqf/sql/scripts/cleanat
3024ce7a (Narendra Goyal 2016-05-07 08:44:18 + 1) #!/bin/bash

 )

> shebang in file core/sqf/sql/scripts/cleanat is broken
> --
>
> Key: TRAFODION-2943
> URL: https://issues.apache.org/jira/browse/TRAFODION-2943
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dev-environment
>Affects Versions: any
>Reporter: Wenjun Zhu
>Priority: Trivial
> Fix For: 2.3
>
>
> The file core/sqf/sql/scripts/cleanat is broken: the shebang should be '#!', 
> but it lacks '#' at present.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2868) bug of hbase option TTL

2018-01-05 Thread Suresh Subbiah (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16313870#comment-16313870
 ] 

Suresh Subbiah commented on TRAFODION-2868:
---

This is a documentation error. HBase has changed the constant denoting FOREVER 
from -1 to Integer.MAX_VALUE. Please see 
https://hbase.apache.org/apidocs/src-html/org/apache/hadoop/hbase/HConstants.html#line.621
 (excerpt below)

 /**
   * Unlimited time-to-live.
   */
//  public static final int FOREVER = -1;
public static final int FOREVER = Integer.MAX_VALUE;

If MAX_VALUE is provided as the value for TTL, vanilla HBase changes it to 
FOREVER. For example, from HBase shell

hbase(main):016:0> create 'jira2868', {NAME => 'f1', TTL => 2147483647}
0 row(s) in 0.1750 seconds

=> Hbase::Table - jira2868
hbase(main):017:0> describe 'jira2868'
Table jira2868 is ENABLED   
jira2868
COLUMN FAMILIES DESCRIPTION 
{NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_
SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =
> 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => '
false', BLOCKCACHE => 'true'}  

It is not clear what the meaning is when TTL is -1 or negative in general. 
Trafodion does not accept negative values for TTL though HBase accepts it.

Ref Manual has been changed to reflect this.

> bug of hbase option TTL 
> 
>
> Key: TRAFODION-2868
> URL: https://issues.apache.org/jira/browse/TRAFODION-2868
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: any
>Reporter: liyingshuai
>Assignee: Suresh Subbiah
>
> create a table with TTL option, then an error occurred:
> CREATE TABLE t_hbaseoption22(c1 INT NOT NULL, c2 varchar(5)) SALT USING 2 
> PARTITIONS ON (c1) store BY (c1) HBASE_OPTIONS (TTL='-1');
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> CmpSeabaseDDL::generateHbaseOptionsArray() returned error 
> HBASE_CREATE_OPTIONS_ERROR(710). Cause: TTL. [2017-12-25 17:45:01]
> 
> As described in the SQL REFERENCE MANUAL, the accepted values of TTL are 
> '-1' (forever) and a positive integer.
>  
> I found the following source code: if the value is -1 an error is thrown, 
> even though in the HBase source code a TTL of -1 means forever:
>  
> CmpSeabaseDDLcommon.cpp
> // in the code the setting is not allowed to equal -1
> else if ((hbaseOption->key() == "TIME_TO_LIVE") || (hbaseOption->key() == 
> "TTL"))
>{
> if ((str_atoi(hbaseOption->val().data(), 
> hbaseOption->val().length()) == -1) || 
> (!hbaseCreateOptionsArray[HBASE_TTL].empty()))
> isError = TRUE;
> hbaseCreateOptionsArray[HBASE_TTL] = hbaseOption->val();



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (TRAFODION-2868) bug of hbase option TTL

2018-01-02 Thread Suresh Subbiah (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Subbiah reassigned TRAFODION-2868:
-

Assignee: Suresh Subbiah

> bug of hbase option TTL 
> 
>
> Key: TRAFODION-2868
> URL: https://issues.apache.org/jira/browse/TRAFODION-2868
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: any
>Reporter: liyingshuai
>Assignee: Suresh Subbiah
>
> create a table with TTL option, then an error occurred:
> CREATE TABLE t_hbaseoption22(c1 INT NOT NULL, c2 varchar(5)) SALT USING 2 
> PARTITIONS ON (c1) store BY (c1) HBASE_OPTIONS (TTL='-1');
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> CmpSeabaseDDL::generateHbaseOptionsArray() returned error 
> HBASE_CREATE_OPTIONS_ERROR(710). Cause: TTL. [2017-12-25 17:45:01]
> 
> As described in the SQL REFERENCE MANUAL, the accepted values of TTL are 
> '-1' (forever) and a positive integer.
>  
> I found the following source code: if the value is -1 an error is thrown, 
> even though in the HBase source code a TTL of -1 means forever:
>  
> CmpSeabaseDDLcommon.cpp
> // in the code the setting is not allowed to equal -1
> else if ((hbaseOption->key() == "TIME_TO_LIVE") || (hbaseOption->key() == 
> "TTL"))
>{
> if ((str_atoi(hbaseOption->val().data(), 
> hbaseOption->val().length()) == -1) || 
> (!hbaseCreateOptionsArray[HBASE_TTL].empty()))
> isError = TRUE;
> hbaseCreateOptionsArray[HBASE_TTL] = hbaseOption->val();



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)