[jira] [Commented] (TRAFODION-2974) Some predefined UDFs should be regular UDFs so we can revoke rights

2018-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16385412#comment-16385412
 ] 

ASF GitHub Bot commented on TRAFODION-2974:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1460#discussion_r172066807
  
--- Diff: core/sql/bin/SqlciErrors.txt ---
@@ -1331,6 +1331,7 @@ $1~String1 
 4320 Z 9 BEGINNER MAJOR DBADMIN Stream access is not allowed on 
multi-partitioned table or index, when flag ATTEMPT_ASYNCHRONOUS_ACCESS is set 
to OFF. Object in scope: $0~TableName.
 4321 Z 9 BEGINNER MAJOR DBADMIN An embedded update/delete is not 
allowed on a partitioned table, when flag ATTEMPT_ASYNCHRONOUS_ACCESS is set to 
OFF. Object in scope: $0~TableName.
 4322 0A000 9 BEGINNER MAJOR DBADMIN A column with BLOB datatype cannot 
be used in this clause or function.
+4323 Z 9 BEGINNER MAJOR DBADMIN Use of predefined UDF $0~String0 
is deprecated and this function will be removed in a future release. Please use 
the function with the same name in schema TRAFODION."_LIBMGR_" instead. You may 
need to issue this command first: INITIALIZE TRAFODION, UPGRADE LIBRARY 
MANAGEMENT.
--- End diff --

Consider adding this message to the Trafodion Messages Guide.


> Some predefined UDFs should be regular UDFs so we can revoke rights
> ---
>
> Key: TRAFODION-2974
> URL: https://issues.apache.org/jira/browse/TRAFODION-2974
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmu
>Affects Versions: 2.2.0
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>Priority: Major
> Fix For: 2.3
>
>
> Roberta pointed out that we have two predefined UDFs, EVENT_LOG_READER and 
> JDBC, where the system administrator should have the ability to control who 
> can execute these functions.
> To do this, these two UDFs cannot be "predefined" UDFs anymore, since those 
> don't have the metadata that's required for doing grant and revoke.
> Roberta also pointed out that the JDBC UDF should refuse to connect to the T2 
> driver, for security reasons.
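> A minimal illustrative sketch of the kind of control this would enable once the two
> functions are regular UDFs with metadata (grantee names are placeholders, and the exact
> GRANT/REVOKE syntax should be checked against the Trafodion SQL Reference):
> -- restrict who may run the event log reader UDF
> REVOKE EXECUTE ON FUNCTION TRAFODION."_LIBMGR_".EVENT_LOG_READER FROM "APP_USER";
> -- allow only a designated administrator to run the JDBC UDF
> GRANT EXECUTE ON FUNCTION TRAFODION."_LIBMGR_".JDBC TO "DB_ADMIN";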



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-924) LP Bug: 1413241 - ENDTRANSACTION hang, transaction state FORGETTING

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-924:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1413241 - ENDTRANSACTION hang, transaction state FORGETTING
> ---
>
> Key: TRAFODION-924
> URL: https://issues.apache.org/jira/browse/TRAFODION-924
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Reporter: Apache Trafodion
>Assignee: Atanu Mishra
>Priority: Critical
> Fix For: 2.2.0
>
>
> A loop to reexecute the seabase developer regression suite hung on the 14th 
> iteration in TEST016. The sqlci console looked like this:
> >>-- char type
> >>create table mcStatPart1
> +>(a int not null not droppable,
> +>b char(10) not null not droppable,
> +>f int, txt char(100),
> +>primary key (a,b))
> +>salt using 8 partitions ;
> --- SQL operation complete.
> >>
> >>insert into mcStatPart1 values 
> >>(1,'123',1,'xyz'),(1,'133',1,'xyz'),(1,'423',1,'xyz'),(2,'111',1,'xyz'),(2,'223',1,'xyz'),(2,'323',1,'xyz'),(2,'423',1,'xyz'),
> +>   
> (3,'123',1,'xyz'),(3,'133',1,'xyz'),(3,'423',1,'xyz'),(4,'111',1,'xyz'),(4,'223',1,'xyz'),(4,'323',1,'xyz'),(4,'423',1,'xyz');
> A pstack of the sqlci (0,13231) showed it blocking in a call to 
> ENDTRANSACTION.   And dtmci showed this for the transaction:
> DTMCI > list
> Transid     Owner     eventQ  pending  Joiners  TSEs  State
> (0,13742)   0,13231   0       0        0        0     FORGETTING
> Here's a copy of Sean's analysis:
> From: Broeder, Sean 
> Sent: Wednesday, January 21, 2015 8:43 AM
> To: Hanlon, Mike; Cooper, Joanie
> Cc: DeRoo, John
> Subject: RE: ENDTRANSACTION hang, transaction state FORGETTING
> Hi Mike,
> It looks like we have a zookeeper problem right at the time of the commit.  A 
> table is offline:
> 2015-01-21 11:13:45,529 WARN zookeeper.ZKUtil: 
> hconnection-0x1646b7c-0x14aefd0ac4a5e18, quorum=localhost:47570, 
> baseZNode=/hbase Unable to get data of znode 
> /hbase/table/TRAFODION.HBASE.MCSTATPART1
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /hbase/table/TRAFODION.HBASE.MCSTATPART1
> Then we fail after 3 retries of sending the commit request
> 2015-01-21 11:14:04,405 ERROR transactional.TransactionManager: doCommitX, 
> result size: 0
> 2015-01-21 11:14:04,405 ERROR transactional.TransactionManager: doCommitX, 
> result size: 0
> Normally we would create a recovery entry for this transaction to redrive 
> commit, but it appears we are unable to do that due to the zookeeper errors 
> 2015-01-21 11:14:04,408 DEBUG client.HConnectionManager$HConnectionImplementation: Removed all cached
> region locations that map to g4t3005.houston.hp.com,42243,1421362639257
> 471340 2015-01-21 11:14:05,255 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper,
> quorum=localhost:47570, exception=org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/table/TRAFODION.HBASE.MCSTATPART1
> 471341 2015-01-21 11:14:05,256 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper,
> quorum=localhost:47570, exception=org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/table/TRAFODION.HBASE.MCSTATPART1
> 471342 2015-01-21 11:14:05,256 INFO util.RetryCounter: Sleeping 1000ms before 
> retry #0...
> 471343 2015-01-21 11:14:05,256 INFO util.RetryCounter: Sleeping 1000ms before 
> retry #0...
> HBase looks like it's having trouble, as I can't even do a list operation
> from the hbase shell:
> 2015-01-21 14:40:28,816 ERROR [main] 
> client.HConnectionManager$HConnectionImplementation: Can't get connection to 
> ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
> We need to think about how to better handle this in the TransactionManager, but
> in reality I'm not sure what we can do if Zookeeper fails.  You can open an
> LP bug so we have a record of it and can discuss what to do.
> Thanks,
> Sean
> _
> From: Hanlon, Mike 
> Sent: Wednesday, January 21, 2015 6:17 AM
> To: Cooper, Joanie
> Cc: Broeder, Sean; DeRoo, John
> Subject: ENDTRANSACTION hang, transaction state FORGETTING
> Hi Joanie,
> Have we seen this before? A SQL regression test (in this case 
> seabase/TEST016) hangs in a call to ENDTRANSACTION. The transaction state is 
> shown in dtmci to be FORGETTING.  It probably is not easy to reproduce, since 
> the problem occurred on the 14th iteration of a loop to re-execute the 
> seabase suite. 
> There are a lot of messages in 
> /opt/home/mhanlon/trafodion/core/sqf/logs/trafodion.dtm.log on my 
> workstation, 

[jira] [Updated] (TRAFODION-1250) LP Bug: 1459763 - mtserver - explain plan fails with 'provided input stmt does not exist', works from sqlci

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1250:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1459763 - mtserver - explain plan fails with 'provided input stmt 
> does not exist', works from sqlci
> ---
>
> Key: TRAFODION-1250
> URL: https://issues.apache.org/jira/browse/TRAFODION-1250
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-dcs
>Reporter: Aruna Sadashiva
>Assignee: Rao Kakarlamudi
>Priority: Critical
> Fix For: 2.2.0
>
>
> Explain is not working with mtserver through JDBC. It works OK from sqlci.
> SQL>explain options 'f' select * from t4qa.taball;
>  
> LC   RC   OP   OPERATOR              OPT       DESCRIPTION           CARD
> ---- ---- ---- --------------------  --------  --------------------  ---------
> 1    .    2    root                                                  1.00E+004
> .    .    1    trafodion_scan                  TABALL                1.00E+004
> --- SQL operation complete.
> SQL>prepare s02 from select * from t4qa.taball;
> --- SQL command prepared.
> SQL>explain s02;
> *** ERROR[8804] The provided input statement does not exist in the current 
> context.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1127) LP Bug: 1439541 - mxosrvr core when zookeeper connection gets dropped due to timeout

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1127:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1439541 - mxosrvr core when zookeeper connection gets dropped due to 
> timeout
> 
>
> Key: TRAFODION-1127
> URL: https://issues.apache.org/jira/browse/TRAFODION-1127
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Reporter: Aruna Sadashiva
>Assignee: Rao Kakarlamudi
>Priority: Critical
> Fix For: 2.2.0
>
>
> Mxosrvr cores were seen on Zircon during perf tests. The connection to
> zookeeper is getting dropped (zh=0x0), maybe because of a timeout or other
> errors.
> #0  0x74a318a5 in raise () from /lib64/libc.so.6
> #1  0x74a3300d in abort () from /lib64/libc.so.6
> #2  0x75d50a55 in os::abort(bool) ()
>from /usr/java/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #3  0x75ed0f87 in VMError::report_and_die() ()
>from /usr/java/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #4  0x75d5596f in JVM_handle_linux_signal ()
>from /usr/java/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #5  
> #6  0x765bcce9 in zoo_exists (zh=0x0, 
> path=0xeb2598 
> "/squser4/dcs/servers/registered/zircon-n018.usa.hp.com:8:88", watch=0, 
> stat=0x7fffe61bf080) at src/zookeeper.c:3503
> #7  0x004c59f5 in updateZKState (currState=CONNECTED, 
> newState=AVAILABLE) at SrvrConnect.cpp:9057
> #8  0x004ca966 in odbc_SQLSvc_TerminateDialogue_ame_ (
> objtag_=0xedb8b0, call_id_=0xedb908, dialogueId=313965727)
> at SrvrConnect.cpp:3885
> #9  0x00493dce in DISPATCH_TCPIPRequest (objtag_=0xedb8b0, 
> call_id_=0xedb908, operation_id=)
> at Interface/odbcs_srvr.cpp:1772
> #10 0x00433882 in BUILD_TCPIP_REQUEST (pnode=0xedb8b0)
> at ../Common/TCPIPSystemSrvr.cpp:603
> #11 0x0043421d in PROCESS_TCPIP_REQUEST (pnode=0xedb8b0)
> at ../Common/TCPIPSystemSrvr.cpp:581
> #12 0x00462406 in CNSKListenerSrvr::tcpip_listener (arg=0xda5510)
> at Interface/linux/Listener_srvr_ps.cpp:400
> #13 0x747e52e0 in sb_thread_sthr_disp (pp_arg=0xeb4c70)
> at threadl.cpp:253
> #14 0x745b1851 in start_thread () from /lib64/libpthread.so.0
> #15 0x74ae790d in clone () from /lib64/libc.so.6



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1099) LP Bug: 1437384 - sqenvcom.sh. Our CLASSPATH is too big.

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1099:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1437384 - sqenvcom.sh. Our CLASSPATH is too big.
> ---
>
> Key: TRAFODION-1099
> URL: https://issues.apache.org/jira/browse/TRAFODION-1099
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-general
>Reporter: Guy Groulx
>Assignee: Sandhya Sundaresan
>Priority: Major
> Fix For: 2.2.0
>
>
> sqenvcom.sh sets up the CLASSPATH for Trafodion.
> With HDP 2.2, this CLASSPATH is huge. On one of our systems, echo $CLASSPATH
> | wc -c returns more than 13000 bytes.
> I believe Java/Linux truncates these variables when they get too big.
> Since going to HDP 2.2, we've been hit with "class not found" errors even
> though the jar is in the CLASSPATH.
> http://stackoverflow.com/questions/1237093/using-wildcard-for-classpath 
> explains that we can use wildcards in the CLASSPATH to reduce its size.
> Rules:
> Use * and not *.jar. Java assumes that a * classpath entry stands for all *.jar files in that directory.
> When using export CLASSPATH, use quotes so that * is not expanded by the shell. E.g.:
> export CLASSPATH="/usr/hdp/current/hadoop-client/lib/*:${CLASSPATH}"
> We need to modify our sqenvcom.sh to use wildcards instead of listing
> individual jars.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1194) LP Bug: 1446917 - T2 tests don't include parallel plans

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1194:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1446917 - T2 tests don't include  parallel plans
> 
>
> Key: TRAFODION-1194
> URL: https://issues.apache.org/jira/browse/TRAFODION-1194
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-general
>Reporter: Sandhya Sundaresan
>Assignee: Anuradha Hegde
>Priority: Major
> Fix For: 2.2.0
>
>
> The T2 test suite should include proper coverage of parallel plans involving 
> ESPs. 
> This is needed to ensure the IPC mechanism does not regress with any of the 
> changes currently underway. 
> This coverage is especially important w.r.t. the new work being done for the 
> multi-threaded DCS server. 
> At least one test with ESPs should also be made a gating test in Jenkins.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-206) LP Bug: 1297518 - DCS - SQLProcedures and SQLProcedureColumns need to be supported

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-206:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1297518 - DCS - SQLProcedures and SQLProcedureColumns need to be 
> supported
> --
>
> Key: TRAFODION-206
> URL: https://issues.apache.org/jira/browse/TRAFODION-206
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-general
>Reporter: Aruna Sadashiva
>Assignee: Kevin Xu
>Priority: Critical
> Fix For: 2.2.0
>
>
> DCS needs to implement support for SQLProcedures and SQLProcedureColumns, 
> since Trafodion SQL supports SPJs now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-911) LP Bug: 1412652 - _REPOS_.METRIC_QUERY_TABLE has rows with QUERY_ID and sometimes empty QUERY_TEXT

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-911:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1412652 - _REPOS_.METRIC_QUERY_TABLE has rows with  QUERY_ID 
> and sometimes empty QUERY_TEXT
> -
>
> Key: TRAFODION-911
> URL: https://issues.apache.org/jira/browse/TRAFODION-911
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Reporter: Aruna Sadashiva
>Assignee: Anuradha Hegde
>Priority: Critical
> Fix For: 2.2.0
>
>
> Don't know how to recreate this yet, but _REPOS_.METRIC_QUERY_TABLE has 
> several rows with QUERY_ID set to . The session id looks ok. QUERY_TEXT 
> is sometimes empty. 
> Arvind found 2 places in code where query id is explicitly set to , but 
> not sure how it gets there.
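> A minimal sketch of a query to count the affected rows, assuming the repository table
> named above (the predicates are illustrative, since the bad QUERY_ID value is not shown
> in this report):
> select count(*)
> from "_REPOS_".metric_query_table
> where query_id is null or trim(query_text) = '';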



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1141) LP Bug: 1441378 - UDF: Multi-valued scalar UDF with clob/blob cores sqlci with SIGSEGV

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1141:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1441378 - UDF: Multi-valued scalar UDF with clob/blob cores sqlci 
> with SIGSEGV
> --
>
> Key: TRAFODION-1141
> URL: https://issues.apache.org/jira/browse/TRAFODION-1141
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Weishiun Tsai
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.2.0
>
> Attachments: udf_bug.tar
>
>
> While a single-valued scalar UDF works fine with the clob or blob data type,
> a multi-valued scalar UDF cores sqlci with SIGSEGV even with just 2 clob or 
> blob output values. 
> Since clob and blob data types require large buffers, I am assuming this type 
> of scalar UDF is stressing the heap used internally somewhere.  But a core is 
> always bad.  If there is a limit on how clob and blob can be handled in a 
> scalar UDF, a check should be put in place and an error should be returned 
> more gracefully.
> This is seen on the v0407 build installed on a workstation. To reproduce it:
> (1) Download the attached tar file and untar it to get the 3 files in there. 
> Put the files in any directory .
> (2) Make sure that you have run ./sqenv.sh of your Trafodion instance first 
> as building UDF needs $MY_SQROOT for the header files.
> (3) run build.sh
> (4) Change the line “create library qa_udf_lib file '/myudf.so';”; in 
> mytest.sql and fill in 
> (5) From sqlci, obey mytest.sql
> ---
> Here is the execution output:
> >>create schema mytest;
> --- SQL operation complete.
> >>set schema mytest;
> --- SQL operation complete.
> >>
> >>create library qa_udf_lib file '/myudf.so';
> --- SQL operation complete.
> >>
> >>create function qa_udf_clob
> +>(INVAL clob)
> +>returns (c_clob clob)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_vcstruct'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>create function qa_udf_blob
> +>(INVAL blob)
> +>returns (c_blob blob)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_vcstruct'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>create function qa_udf_clob_mvf
> +>(INVAL clob)
> +>returns (c_clob1 clob, c_clob2 clob)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_vcstruct_mvf'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>create function qa_udf_blob_mvf
> +>(INVAL blob)
> +>returns (c_blob1 blob, c_blob2 blob)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_vcstruct_mvf'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>create table mytable (c_clob clob, c_blob blob);
> --- SQL operation complete.
> >>insert into mytable values ('CLOB_1', 'BLOB_1');
> --- 1 row(s) inserted.
> >>
> >>select
> +>cast(qa_udf_clob(c_clob) as char(10)),
> +>cast(qa_udf_blob(c_blob) as char(10))
> +>from mytable;
> (EXPR)  (EXPR)
> --  --
> CLOB_1  BLOB_1
> --- 1 row(s) selected.
> >>
> >>select qa_udf_clob_mvf(c_clob) from mytable;
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x74c5b9a2, pid=18680, tid=140737187650592
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_67-b01) (build 
> 1.7.0_67-b01)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libexecutor.so+0x2489a2]  ExSimpleSQLBuffer::init(NAMemory*)+0x92
> #
> # Core dump written. Default location: /core or core.18680
> #
> # An error report file with more information is saved as:
> # /hs_err_pid18680.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.sun.com/bugreport/crash.jsp
> #
> Aborted (core dumped)
> ---
> Here is the stack trace of the core.
> (gdb) bt
> #0  0x0039e28328a5 in raise () from /lib64/libc.so.6
> #1  0x0039e283400d in abort () from /lib64/libc.so.6
> #2  0x77120a55 in os::abort(bool) ()
>from /opt/home/tools/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #3  0x772a0f87 in VMError::report_and_die() ()
>from /opt/home/tools/jdk1.7.0_67/jre/lib/amd64/server/libjvm.so
> #4  

[jira] [Updated] (TRAFODION-912) LP Bug: 1412806 - log4cpp : incorrect timestamp in logs for SQL info

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-912:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1412806 - log4cpp : incorrect timestamp in logs for SQL info
> 
>
> Key: TRAFODION-912
> URL: https://issues.apache.org/jira/browse/TRAFODION-912
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Gao, Rui-Xian
>Assignee: Sandhya Sundaresan
>Priority: Major
> Fix For: 2.2.0
>
>
> There are messages logged into the log file with a LOG_TS later than the current 
> timestamp.
> The current time is '2015-01-20 05:43:05', but there are messages with '2015-01-20 
> 13:16:16' in the log, and only for SQL INFO.
> [trafodion@centos-mapr1 logs]$ date
> Tue Jan 20 05:43:05 PST 2015
> SQL>select * from udf(event_log_reader('f')) where log_ts > 
> timestamp'2015-01-20 06:00:00.00' order by 1;
> LOG_TS                      SEVERITY  COMPONENT  PROCESS_NAME  MESSAGE                      LOG_FILE_NAME
> --------------------------  --------  ---------  ------------  ---------------------------  ----------------------
> 2015-01-20 06:57:16.974000  INFO      SQL.ESP    $Z050LF7      An ESP process is launched.  master_exec_1_3719.log
> 2015-01-20 06:57:16.974000  INFO      SQL.ESP    $Z050LF7      An ESP process is launched.  master_exec_1_3719.log
> 2015-01-20 06:57:17.011000  INFO      SQL.ESP    $Z0301LM      An ESP process is launched.  master_exec_1_3719.log
> 2015-01-20 06:57:17.011000  INFO      SQL.ESP    $Z0301LM      An ESP process is launched.  master_exec_1_3719.log
> 2015-01-20 06:57:17.011000  INFO      SQL.ESP    $Z0301LM      An ESP process is launched.  master_exec_1_3719.log
> (the remaining columns, including NODE_NUMBER 0 and NULL SQL_CODE/QUERY_ID, are omitted here for readability)
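> A minimal sketch of a check for the clock skew described above, reusing the
> event_log_reader UDF from this report (the predicate is illustrative):
> select log_ts, current_timestamp
> from udf(event_log_reader('f'))
> where log_ts > current_timestamp
> order by 1;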

[jira] [Updated] (TRAFODION-1438) Windows ODBC Driver is not able to create certificate file with long name length (over 30 bytes).

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1438:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> Windows ODBC Driver is not able to create certificate file with long name 
> length (over 30 bytes).
> -
>
> Key: TRAFODION-1438
> URL: https://issues.apache.org/jira/browse/TRAFODION-1438
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-windows
>Affects Versions: 2.0-incubating
> Environment: Windows
>Reporter: RuoYu Zuo
>Assignee: RuoYu Zuo
>Priority: Critical
> Fix For: 2.2.0
>
>
> The Windows ODBC driver stores the certificate file with the server name in its 
> file name; when the server name is long, the driver is not able to handle it. 
> For now the driver just uses a 30-character buffer to build the file name, so when 
> it copies a long server name into that buffer, it crashes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-284) LP Bug: 1321058 - TRAFDSN should be eliminated, Trafodion ODBC linux driver should use the standard config files, odbc.ini and odbcinst.ini

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-284:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1321058 - TRAFDSN should be eliminated, Trafodion ODBC linux driver 
> should use the standard config files, odbc.ini and odbcinst.ini
> ---
>
> Key: TRAFODION-284
> URL: https://issues.apache.org/jira/browse/TRAFODION-284
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-linux
>Reporter: Aruna Sadashiva
>Assignee: Anuradha Hegde
>Priority: Major
> Fix For: 2.2.0
>
>
> The Trafodion ODBC driver should use the standard ODBC ini files for configuration 
> instead of the custom config file TRAFDSN. If we don't install to the default 
> location, the user has to have TRAFDSN in the application directory; there is no 
> way to specify the TRAFDSN location to the driver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1241) LP Bug: 1456304 - mtserver - spjs with resultsets failing - no results returned

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1241:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1456304 - mtserver - spjs with resultsets failing - no results 
> returned
> ---
>
> Key: TRAFODION-1241
> URL: https://issues.apache.org/jira/browse/TRAFODION-1241
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-jdbc-t2
>Reporter: Aruna Sadashiva
>Assignee: Anuradha Hegde
>Priority: Critical
> Fix For: 2.2.0
>
>
> SPJ tests with result sets failed because there are no result sets returned 
> from the procedure. The same SPJ works from sqlci, but fails from trafci. 
> -
> SQL>create table t1 (a int not null primary key, b varchar(20));
> SQL>insert into t1 values(111, 'a');
> SQL>insert into t1 values(222, 'b');
> SQL>create library testrs file '/opt/home/trafodion/SPJ/testrs.jar';
> SQL>create procedure RS200()
>language java
>parameter style java
>external name 'Testrs.RS200'
>dynamic result sets 1
>library testrs;
> SQL>call rs200();
> --- SQL operation complete.
> -
> The expected result is:
> SQL >call rs200();
> AB
> ---  
> 111  a
> 222  b
> --- 2 row(s) selected.
> --- SQL operation complete.
> The jar file, testrs.jar, is on amber7 under /opt/home/trafodion/SPJ.  It has 
> the SPJ procedure:
>public static void RS200(ResultSet[] paramArrayOfResultSet)
>throws Exception
>{
>  String str1 = "jdbc:default:connection";
>  
>  String str2 = "select * from t1";
>  Connection localConnection = DriverManager.getConnection(str1);
>  Statement localStatement = localConnection.createStatement();
>  paramArrayOfResultSet[0] = localStatement.executeQuery(str2);
>}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-444) LP Bug: 1342180 - Memory leak in cmp context heap when cli context is deallocated

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-444:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1342180 - Memory leak in cmp context heap when cli context is 
> deallocated
> -
>
> Key: TRAFODION-444
> URL: https://issues.apache.org/jira/browse/TRAFODION-444
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Pavani Puppala
>Assignee: Qifan Chen
>Priority: Major
> Fix For: 2.2.0
>
>
> When SQL_EXEC_DeleteContext is called, the second cmp context used by the 
> reentrant compiler for metadata queries is not deallocated. This causes a 
> memory leak: the cmp context heap allocated for the metadata cmp context is 
> not deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-598) LP Bug: 1365821 - select (insert) with prepared stmt fails with rowset

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-598:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1365821 - select (insert) with prepared stmt fails with rowset
> --
>
> Key: TRAFODION-598
> URL: https://issues.apache.org/jira/browse/TRAFODION-598
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-dcs
>Reporter: Aruna Sadashiva
>Assignee: RuoYu Zuo
>Priority: Critical
> Fix For: 2.2.0
>
>
> This came out of: https://answers.launchpad.net/trafodion/+question/253796
> "select syskey from (insert into parts values(?,?,?)) x" does not work as 
> expected with an ODBC rowset. A rowset with a single row works, but with 
> multiple rows in the rowset, no rows get inserted. 
> The workaround is to execute the select after the insert rowset operation, as sketched below. 
> It also fails with a JDBC batch; the T4 driver throws a "select not supported in 
> batch" exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1145) LP Bug: 1441784 - UDF: Lack of checking for scalar UDF input/output values

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1145:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1441784 - UDF: Lack of checking for scalar UDF input/output values
> --
>
> Key: TRAFODION-1145
> URL: https://issues.apache.org/jira/browse/TRAFODION-1145
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Weishiun Tsai
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.2.0
>
> Attachments: udf_bug (1).tar
>
>
> Ideally, input/output values for a scalar UDF should be verified at CREATE 
> FUNCTION time. But this check is not in place right now. As a result, a lot 
> of ill-constructed input/output values are left to be handled at run time, 
> and the behavior at run time is haphazard at best.
> Here are 3 examples of such behavior:
> (a) myudf1 defines 2 input values with the same name.  Create function does 
> not return an error.  But the invocation at the run time returns a perplexing 
> 4457 error indicating internal out-of-range index error.
> (b) myudf2 defines an input value and an output value with the same name.  
> Create function does not return an error.  But the invocation at the run time 
> returns a perplexing 4457 error complaining that there is no output value.
> (c) myudf3 defines 2 output values with the same name.  Create function does 
> not return an error.  The invocation at the run time simply ignores the 2nd 
> output value, as well as the fact that the C function only defines 1 output 
> value.  It returns one value as if the 2nd output value was never defined at 
> all.
> This is seen on the v0407 build installed on a workstation. To reproduce it:
> (1) Download the attached tar file and untar it to get the 3 files in there. 
> Put the files in any directory .
> (2) Make sure that you have run ./sqenv.sh of your Trafodion instance first 
> as building UDF needs $MY_SQROOT for the header files.
> (3) run build.sh
> (4) Change the line “create library qa_udf_lib file '/myudf.so';”; in 
> mytest.sql and fill in 
> (5) From sqlci, obey mytest.sql
> 
> Here is the execution output:
> >>create schema mytest;
> --- SQL operation complete.
> >>set schema mytest;
> --- SQL operation complete.
> >>
> >>create library qa_udf_lib file '/myudf.so';
> --- SQL operation complete.
> >>
> >>create table mytable (a int, b int);
> --- SQL operation complete.
> >>insert into mytable values (1,1),(2,2),(3,3);
> --- 3 row(s) inserted.
> >>
> >>create function myudf1
> +>(INVAL int, INVAL int)
> +>returns (OUTVAL int)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_int32'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>select myudf1(a, b) from mytable;
> *** ERROR[4457] An error was encountered processing metadata for user-defined 
> function TRAFODION.MYTEST.MYUDF1.  Details: Internal error in 
> setInOrOutParam(): index position out of range..
> *** ERROR[8822] The statement was not prepared.
> >>
> >>create function myudf2
> +>(INVAL int)
> +>returns (INVAL int)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_int32'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>select myudf2(a) from mytable;
> *** ERROR[4457] An error was encountered processing metadata for user-defined 
> function TRAFODION.MYTEST.MYUDF2.  Details: User-defined functions must have 
> at least one registered output value.
> *** ERROR[8822] The statement was not prepared.
> >>
> >>create function myudf3
> +>(INVAL int)
> +>returns (OUTVAL int, OUTVAL int)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_int32'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>select myudf3(a) from mytable;
> OUTVAL
> ---
>   1
>   2
>   3
> --- 3 row(s) selected.
> >>
> >>drop function myudf1 cascade;
> --- SQL operation complete.
> >>drop function myudf2 cascade;
> --- SQL operation complete.
> >>drop function myudf3 cascade;
> --- SQL operation complete.
> >>drop library qa_udf_lib cascade;
> --- SQL operation complete.
> >>drop schema mytest cascade;
> --- SQL operation complete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-979) LP Bug: 1418142 - Parallel DDL operations sees error 8810

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-979:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1418142 - Parallel DDL operations sees error 8810
> -
>
> Key: TRAFODION-979
> URL: https://issues.apache.org/jira/browse/TRAFODION-979
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Weishiun Tsai
>Assignee: Prashanth Vasudev
>Priority: Critical
> Fix For: 2.2.0
>
>
> Being able to run SQL regression tests in parallel on the same instance is 
> one of the goals that we would like to see happen for the post-1.0 release.  
> In order to do this, Trafodion needs to be able to handle parallel execution 
> of a workload that mixes DDLs and DMLs. Right now, Trafodion is not very 
> robust when it comes to handling concurrent DDL executions (or a DDL 
> execution concurrent with a DML execution; it's hard to tell whether it takes 
> 2 DDLs to cause all the problems that we are seeing).
> There are several problems in this area.  The first noticeable one is this 
> particular 8810 error.   QA did an experiment last night by splitting the 
> regression test suites into 2 parts and ran them together on a 4-node 
> cluster.   After both completed, we saw a total of 50 occurrences of this 
> 8810 error:
> -bash-4.1$ grep 8810 */*.log | grep ERROR | wc -l
> 33
> -bash-4.1$ grep 8810 */*.log | grep ERROR | wc -l
> 17
> A typical error looks like this:
> SQL>drop table t10a104;
> *** ERROR[8810] Executor ran into an internal failure and returned an error 
> without populating the diagnostics area. This error is being injected to 
> indicate that. [2015-02-04 07:12:04]
> There is a bug report, https://bugs.launchpad.net/trafodion/+bug/1413831 
> 'Phoenix tests run into several error 8810 when other tests are run in 
> parallel with it', that describes a similar problem.  But that one has a 
> narrower scope, focusing only on Phoenix tests, and the severity of that bug 
> report is only High.  This one is intended to cover the broader issue of 
> running parallel DDL operations in general.  We will mark it as Critical, as 
> we need to remove this obstacle first to see what other problems may lie 
> beneath for executing DDLs in parallel.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-531) LP Bug: 1355034 - SPJ w result set failed with ERROR[8413]

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-531:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1355034 - SPJ w result set failed with ERROR[8413]
> --
>
> Key: TRAFODION-531
> URL: https://issues.apache.org/jira/browse/TRAFODION-531
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Chong Hsu
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.2.0
>
>
> Tested with Trafodion build 20140801-0830.
> Calling an SPJ with a result set:
>public static void NS786(String paramString, ResultSet[] 
> paramArrayOfResultSet)
>  throws Exception
>{
>  String str1 = "jdbc:default:connection";
>  
>  Connection localConnection = DriverManager.getConnection(str1);
>  String str2 = "select * from " + paramString;
>  Statement localStatement = localConnection.createStatement();
>  paramArrayOfResultSet[0] = localStatement.executeQuery(str2);
>}
> it failed with ERROR[8413]:
> *** ERROR[8413] The string argument contains characters that cannot be 
> converted. [2014-08-11 04:06:32]
> *** ERROR[8402] A string overflow occurred during the evaluation of a 
> character expression. Conversion of Source Type:LARGEINT(REC_BIN64_SIGNED) 
> Source Value:79341348341248 to Target Type:CHAR(REC_BYTE_F_ASCII). 
> [2014-08-11 04:06:32]
> The SPJ Jar file is attached. Here are the steps to produce the error:
>   
> set schema testspj;
> create library spjrs file '//Testrs.jar';
> create procedure RS786(varchar(100))
>language java 
>parameter style java  
>external name 'Testrs.NS786'
>dynamic result sets 1
>library spjrs;
> create table datetime_interval (
> date_key          date not null,
> date_col          date default date '0001-01-01',
> time_col          time default time '00:00:00',
> timestamp_col     timestamp default timestamp '0001-01-01:00:00:00.00',
> interval_year     interval year default interval '00' year,
> yr2_to_mo         interval year to month default interval '00-00' year to month,
> yr6_to_mo         interval year(6) to month default interval '00-00' year(6) to month,
> yr16_to_mo        interval year(16) to month default interval '-00' year(16) to month,
> year18            interval year(18) default interval '00' year(18),
> day2              interval day default interval '00' day,
> day18             interval day(18) default interval '00' day(18),
> day16_to_hr       interval day(16) to hour default interval ':00' day(16) to hour,
> day14_to_min      interval day(14) to minute default interval '00:00:00' day(14) to minute,
> day5_to_second6   interval day(5) to second(6) default interval '0:00:00:00.00' day(5) to second(6),
> hour2             interval hour default interval '00' hour,
> hour18            interval hour(18) default interval '00' hour(18),
> hour16_to_min     interval hour(16) to minute default interval ':00' hour(16) to minute,
> hour14_to_ss0     interval hour(14) to second(0) default interval '00:00:00' hour(14) to second(0),
> hour10_to_second4 interval hour(10) to second(4) default interval '00:00:00.' hour(10) to second(4),
> min2              interval minute default interval '00' minute,
> min18             interval minute(18) default interval '00' minute(18),
> min13_s3          interval minute(13) to second(3) default interval '0:00.000' minute(13) to second(3),
> min16_s0          interval minute(16) to second(0) default interval ':00' minute(16) to second(0),
> seconds           interval second default interval '00' second,
> seconds5          interval second(5) default interval '0' second(5),
> seconds18         interval second(18,0) default interval '00' second(18,0),
> seconds15         interval 

[jira] [Updated] (TRAFODION-2427) trafodion 2.0.1 install occurs an error ERROR: unable to find hbase-trx-cdh5_5-*.jar5-*.jar

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-2427:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> trafodion 2.0.1 install occurs an error ERROR: unable to find 
> hbase-trx-cdh5_5-*.jar5-*.jar
> ---
>
> Key: TRAFODION-2427
> URL: https://issues.apache.org/jira/browse/TRAFODION-2427
> Project: Apache Trafodion
>  Issue Type: Question
>  Components: installer
>Reporter: jacklee
>Priority: Major
>  Labels: beginner
> Fix For: 2.2.0
>
>
> The trafodion 2.0.1 install fails with an error:
> ***INFO: Cloudera installed will run traf_cloudera_mods
> ***ERROR: unable to find 
> /usr/lib/trafodion/apache-trafodion_server-2.0.1-incubating/export/lib/hbase-trx-cdh5_5-*.jar
> ***ERROR: traf_cloudera_mods exited with error.
> ***ERROR: Please check log files.
> ***ERROR: Exiting
> Can somebody help me? Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-777) LP Bug: 1394488 - Bulk load for volatile table gets FileNotFoundException

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-777:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1394488 - Bulk load for volatile table gets FileNotFoundException
> -
>
> Key: TRAFODION-777
> URL: https://issues.apache.org/jira/browse/TRAFODION-777
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Barry Fritchman
>Assignee: Suresh Subbiah
>Priority: Major
> Fix For: 2.2.0
>
>
> When attempting to perform a bulk load into a volatile table, like this:
> create volatile table vps primary key (ps_partkey, ps_suppkey) no load as 
> select * from partsupp;
> cqd comp_bool_226 'on';
> cqd TRAF_LOAD_PREP_TMP_LOCATION '/bulkload/';
> cqd TRAF_LOAD_TAKE_SNAPSHOT 'OFF';
> load into vps select * from partsupp;
> An error 8448 is raised due to a java.io.FileNotFoundException:
> Task: LOAD Status: StartedObject: TRAFODION.HBASE.VPS
> Task:  CLEANUP Status: StartedObject: TRAFODION.HBASE.VPS
> Task:  CLEANUP Status: Ended  Object: TRAFODION.HBASE.VPS
> Task:  DISABLE INDEXE  Status: StartedObject: TRAFODION.HBASE.VPS
> Task:  DISABLE INDEXE  Status: Ended  Object: TRAFODION.HBASE.VPS
> Task:  PREPARATION Status: StartedObject: TRAFODION.HBASE.VPS
>Rows Processed: 160 
> Task:  PREPARATION Status: Ended  ET: 00:01:20.660
> Task:  COMPLETION  Status: StartedObject: TRAFODION.HBASE.VPS
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::doBulkLoad returned error HBASE_DOBULK_LOAD_ERROR(-714). 
> Cause: 
> java.io.FileNotFoundException: File /bulkload/TRAFODION.HBASE.VPS does not 
> exist.
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:654)
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:102)
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708)
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:708)
> org.trafodion.sql.HBaseAccess.HBulkLoadClient.doBulkLoad(HBulkLoadClient.java:442)
> It appears that the presumed qualification of the volatile table name is 
> incorrect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2305) After a region split the transactions to check against list is not fully populated

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-2305:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> After a region split the transactions to check against list is not fully 
> populated
> --
>
> Key: TRAFODION-2305
> URL: https://issues.apache.org/jira/browse/TRAFODION-2305
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Affects Versions: any
>Reporter: Sean Broeder
>Assignee: Sean Broeder
>Priority: Major
> Fix For: 2.2.0
>
>
> As part of a region split, all current transactions and their relationships to 
> one another are written out into a ZKNode entry and later read in by the 
> daughter regions.  However, the transactionsToCheck list is not correctly 
> populated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1173) LP Bug: 1444088 - Hybrid Query Cache: sqlci may err with JRE SIGSEGV.

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1173:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1444088 - Hybrid Query Cache: sqlci may err with JRE SIGSEGV.
> -
>
> Key: TRAFODION-1173
> URL: https://issues.apache.org/jira/browse/TRAFODION-1173
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Julie Thai
>Assignee: Howard Qin
>Priority: Major
> Fix For: 2.2.0
>
>
> In sqlci, with HQC on and HQC_LOG specified, a prepared statement was 
> followed with:
> >>--interval 47, same selectivity as interval 51
> >>--interval 47 [jvFN3&789 - jyBT!]789)
> >>--expect = nothing in hqc log; SQC hit
> >>prepare XX from select * from f00 where colchar = 'jyBT!]789';
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x75d80595, pid=2708, tid=140737353866272
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_75-b13) (build 
> 1.7.0_75-b13)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libstdc++.so.6+0x91595]  
> std::ostream::sentry::sentry(std::ostream&)+0x25
> #
> # Core dump written. Default location: 
> /opt/home/trafodion/thaiju/HQC/equal_char/core or core.2708
> #
> # An error report file with more information is saved as:
> # /opt/home/trafodion/thaiju/HQC/equal_char/hs_err_pid2708.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.sun.com/bugreport/crash.jsp
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #
> Aborted
> No core file found under /opt/home/trafodion/thaiju/HQC/equal_char. But an 
> hs_err_pid2708.log file was generated (included in the attached to_repro.tar). 
> Problem does not reproduce if I explicitly turn off HQC.
> To reproduce:
> 1. download and untar the attachment, to_repro.tar
> 2. in a sqlci session, obey setup_char.sql (from the tar file)
> 3. in a new sqlci session, obey equal_char.sql (from the tar file)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1053) LP Bug: 1430938 - In full explain output, begin/end key for char/varchar key column should be min/max if there is no predicated defined on the key column.

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1053:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1430938 - In full explain output, begin/end key for char/varchar key 
> column should be min/max if there is no predicated defined on the key column.
> --
>
> Key: TRAFODION-1053
> URL: https://issues.apache.org/jira/browse/TRAFODION-1053
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Julie Thai
>Assignee: Howard Qin
>Priority: Major
> Fix For: 2.2.0
>
>
> In full explain output, the begin/end key for a char/varchar key column should be 
> min/max 
> if there is no predicate defined on the key column.
> Snippet from TRAFODION_SCAN below:
> key_columns  _SALT_, COLTS, COLVCHRUCS2, COLINTS
> begin_key .. (_SALT_ = %(9)), (COLTS = ),
>  (COLVCHRUCS2 = '洼硡'), (COLINTS = 
> )
> end_key  (_SALT_ = %(9)), (COLTS = ),
>  (COLVCHRUCS2 = '洼湩'), (COLINTS = 
> )
> Expected  (COLVCHRUCS2 = '') and  (COLVCHRUCS2 = '').
> SQL>create table salttbl3 (
> +>colintu int unsigned not null, colints int signed not null,
> +>colsintu smallint unsigned not null, colsints smallint signed not null,
> +>collint largeint not null, colnum numeric(11,3) not null,
> +>colflt float not null, coldec decimal(11,2) not null,
> +>colreal real not null, coldbl double precision not null,
> +>coldate date not null, coltime time not null,
> +>colts timestamp not null,
> +>colchriso char(90) character set iso88591 not null,
> +>colchrucs2 char(111) character set ucs2 not null,
> +>colvchriso varchar(113) character set iso88591 not null,
> +>colvchrucs2 varchar(115) character set ucs2 not null,
> +>PRIMARY KEY (colts ASC, colvchrucs2 DESC, colints ASC))
> +>SALT USING 9 PARTITIONS ON (colints, colvchrucs2, colts);
> --- SQL operation complete.
> SQL>LOAD INTO salttbl3 SELECT
> +>c1+c2*10+c3*100+c4*1000+c5*1,
> +>(c1+c2*10+c3*100+c4*1000+c5*1) - 5,
> +>mod(c1+c2*10+c3*100+c4*1000+c5*1, 65535),
> +>mod(c1+c2*10+c3*100+c4*1000+c5*1, 32767),
> +>(c1+c2*10+c3*100+c4*1000+c5*1) + 549755813888,
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as numeric(11,3)),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as float),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as decimal(11,2)),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as real),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as double precision),
> +>cast(converttimestamp(2106142992 +
> +>(864 * (c1+c2*10+c3*100+c4*1000+c5*1))) as date),
> +>time'00:00:00' + cast(mod(c1+c2*10+c3*100+c4*1000+c5*1,3)
> +>as interval minute),
> +>converttimestamp(2106142992 + (864 *
> +>(c1+c2*10+c3*100+c4*1000+c5*1)) + (100 * (c1+c2*10+c3*100)) +
> +>(6000 * (c1+c2*10)) + (36 * (c1+c2*10))),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as char(90) character set iso88591),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as char(111) character set ucs2),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as varchar(113) character set 
> iso88591),
> +>cast(c1+c2*10+c3*100+c4*1000+c5*1 as varchar(115) character set ucs2)
> +>from (values(1)) t
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c1
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c2
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c3
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c4
> +>transpose 0,1,2,3,4,5,6,7,8,9 as c5;
> UTIL_OUTPUT
> 
> Task: LOAD Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  CLEANUP Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  CLEANUP Status: Ended  Object: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  DISABLE INDEXE  Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  DISABLE INDEXE  Status: Ended  Object: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  PREPARATION Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
>Rows Processed: 10
> Task:  PREPARATION Status: Ended  ET: 00:00:10.332
>   
> Task:  COMPLETION  Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  COMPLETION  Status: Ended  ET: 00:00:02.941
>   
> Task:  POPULATE INDEX  Status: StartedObject: TRAFODION.SEABASE.SALTTBL3  
>   
> Task:  POPULATE INDEX  Status: Ended  ET: 00:00:05.357
>   
> --- SQL operation complete.
> 

[jira] [Updated] (TRAFODION-1422) Delete column can be dramatically improved (ALTER statement)

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1422:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> Delete column can be dramatically improved (ALTER statement)
> 
>
> Key: TRAFODION-1422
> URL: https://issues.apache.org/jira/browse/TRAFODION-1422
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-general
>Reporter: Eric Owhadi
>Assignee: Eric Owhadi
>Priority: Minor
>  Labels: performance
> Fix For: 2.2.0
>
>
> The current code path for deleting a column has not been optimized and can be 
> greatly improved. See the comments below for many ways to implement the optimization.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2462) TRAFCI gui installer does not work

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-2462:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> TRAFCI gui installer does not work
> --
>
> Key: TRAFODION-2462
> URL: https://issues.apache.org/jira/browse/TRAFODION-2462
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-ci
>Affects Versions: 2.1-incubating
>Reporter: Anuradha Hegde
>Assignee: Alex Peng
>Priority: Major
> Fix For: 2.2.0
>
>
> There are several issues with trafci:
> 1. The GUI installer on Windows does not work. Basically, the browse button to 
> upload the T4 jar file and to specify the location of the trafci install dir does 
> not function, hence the installation does not proceed.
> 2. After a successful install of trafci on Windows or *nix, we notice that the 
> lib folder contains the jdbcT4 and jline jar files. There is no need to 
> pre-package these files with the product.
> 3. Running any sql statement from the TRAF_HOME folder returns the following 
> error:
> SQL>get tables;
> *** ERROR[1394] *** ERROR[16001] No message found for SQLCODE -1394.  
> MXID11292972123518900177319330906U300_877_SQL_CUR_2 
> [2017-01-25 20:44:03]
> But executing the same statement when you are in $TRAF_HOME/sql/scripts 
> folder works.
> 4. Executing the wrapper script 'trafci' returns the message below and then 
> proceeds with a successful connection. You don't see this message when executing 
> trafci.sh:
> /core/sqf/sql/local_hadoop/dcs-2.1.0/bin/dcs-config.sh: line 
> 90: .: sqenv.sh: file not found
> 5. Executing sql statements in multiple lines causes an additional SQL prompt 
> to be displayed:
> Connected to Apache Trafodion
> SQL>get tables
> +>SQL>
> 6. On a successful connect and disconnect, when new mxosrvrs are picked up, the 
> default schema is changed from 'SEABASE' to 'USR'. (This might be a server-side 
> issue too, but we will need to debug and find out.)
> 7. The FC command does not work. Look at the trafci manual for examples of how 
> the FC command is displayed back; it should be shown back with the SQL prompt:
> SQL>fc
> show remoteprocess;
> SQL>   i
> show re moteprocess;
> SQL>
> 8. Did the error message format change? This should have been a syntax error:
>   
> SQL>gett;
> *** ERROR[15001] *** ERROR[16001] No message found for SQLCODE -15001.
> gett;
>^ (4 characters from start of SQL statement) 
> MXID11086222123521382568755030206U300_493_SQL_CUR_4 
> [2017-01-25 21:14:18]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-185) LP Bug: 1282307 - DCS - schema setting seems to be retained from previous session

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-185:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1282307 - DCS - schema setting seems to be retained from previous 
> session
> -
>
> Key: TRAFODION-185
> URL: https://issues.apache.org/jira/browse/TRAFODION-185
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-general
>Reporter: Aruna Sadashiva
>Assignee: Anuradha Hegde
>Priority: Critical
> Fix For: 2.2.0
>
>
> Noticed this on sq151: the default schema for new connections is not SEABASE; 
> when connecting to 2 specific servers, it is PHOENIX. The schema setting is being 
> retained from the previous session. 
> Build Version:
> Trafodion Platform  : Release 0.7.0 
> Trafodion Connectivity Services : Version 1.0.0 Release 0.7.0 (Build debug 
> [37599], date 15Feb14)
> Trafodion JDBC Type 4 Driver: Traf_JDBC_Type4_Build_37599
> Trafodion Command Interface : TrafCI_Build_37599



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-916) LP Bug: 1412955 - Master Executor reporting error 8605

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-916:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1412955 - Master Executor reporting error 8605
> --
>
> Key: TRAFODION-916
> URL: https://issues.apache.org/jira/browse/TRAFODION-916
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Reporter: Buddy Wilbanks
>Assignee: Arvind Narain
>Priority: Critical
>  Labels: 8605, commit, transaction
> Fix For: 2.2.0
>
>
> We are seeing this error message streaming during the order entry benchmark.
> 2015-01-17 11:25:49,116, ERROR, SQL, Node Number: 0, CPU: 0, PIN: 63243, 
> Process Name: $Z001GLY, SQLCODE: 8605, QID: MXID11632432122882535634324540
> 206U300_24_STMT_COMMIT_1, *** ERROR[8605] Committing a 
> transaction which has not started.
> 2015-01-17 11:25:50,667, ERROR, SQL, Node Number: 0, CPU: 0, PIN: 63243, 
> Process Name: $Z001GLY, SQLCODE: 8605, QID: MXID11632432122882535634324540
> 206U300_24_STMT_COMMIT_1, *** ERROR[8605] Committing a 
> transaction which has not started.
> 2015-01-17 11:26:17,823, ERROR, SQL, Node Number: 0, CPU: 0, PIN: 63243, 
> Process Name: $Z001GLY, SQLCODE: 8605, QID: MXID11632432122882535634324540
> 206U300_24_STMT_COMMIT_1, *** ERROR[8605] Committing a 
> transaction which has not started.
> The counts are prolific.  The master above shows the error every couple of seconds.
> master_exec_0_13398.log:10241
> master_exec_0_13942.log:12065
> master_exec_0_17055.log:10110
> master_exec_0_19990.log:11068
> master_exec_0_20733.log:11818
> master_exec_0_21408.log:10560
> master_exec_0_22203.log:10633
> master_exec_0_22274.log:1223
> master_exec_0_24271.log:10096
> master_exec_0_24852.log:1477
> master_exec_0_24895.log:10791
> master_exec_0_24990.log:12213
> master_exec_0_26398.log:11618
> master_exec_0_26536.log:1298
> The only place where the error occurs is when commitTransaction is called and 
> there isn’t an xnInProgress.
> executor/ex_transaction.cpp: 
> EXE_COMMIT_TRANSACTION_ERROR, _);
> exp/ExpErrorEnums.h:  EXE_COMMIT_TRANSACTION_ERROR   = 8605,
> short ExTransaction::commitTransaction(NABoolean waited)
> {
>   dp2Xns_ = FALSE;
>   if (! xnInProgress())
> {
>   // Set the transaDiagsArea.
>   // This is the first error. So reset the diags area.
>   if (transDiagsArea_)
>   {
> transDiagsArea_->decrRefCount();
> transDiagsArea_ = NULL;
>   }
>   ExRaiseSqlError(heap_, _,
>   EXE_COMMIT_TRANSACTION_ERROR, _);
>   return -1;
> }
> It must have something to do with inherited transactions. inheritTransaction 
> is the only place where we set XnInProgress_ to false.   The order entry does 
> not use ESPs so there is no reason to have an inherited transaction.  
> Guy has shown the problem with a simple test case.
> First scenario:
> - I’m going to do a prepare/exec without using transactions. 
> /home/squser2> trafci.sh
> Welcome to Trafodion Command Interface 
> Copyright(C) 2013-2014 Hewlett-Packard Development Company, L.P.
> User Name: squser2
> Host Name/IP Address: n001:21000
> Password: 
> Connected to Trafodion 
> SQL>set schema mxoltp;
> --- SQL operation complete.
> SQL>prepare cmd from select [first 10]* from tbl500;
> *** WARNING[6008] Statistics for column (CNT) from table 
> TRAFODION.MXOLTP.TBL500 were not available. As a result, the access path 
> chosen might not be the best possible. [2015-01-20 19:26:52]
> --- SQL command prepared.
> SQL>execute cmd;
> CNTCOL1   COL2   COL3   COL4   COL5   COL6   
> COL7   COL10  
>  
> -- -- -- -- -- -- -- 
> -- 
> 
>  3  1  2  0  4  5  6  
> 7 
> AAAL
> …
>176  1  2  1  

[jira] [Updated] (TRAFODION-1208) LP Bug: 1449195 - mxosrvr failing to write to repos session and aggr metric table when we test connection from odbc administrator, errors in master log

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1208:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1449195 - mxosrvr failing to write to repos session and aggr metric 
> table when we test connection from odbc administrator,  errors in master log
> 
>
> Key: TRAFODION-1208
> URL: https://issues.apache.org/jira/browse/TRAFODION-1208
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Reporter: Aruna Sadashiva
>Assignee: Anuradha Hegde
>Priority: Critical
> Fix For: 2.2.0
>
>
> After trying a test connection from the ODBC Administrator on Windows, the 
> following errors are logged in the master log file and there are no rows 
> inserted into the metric session and aggr tables. The test connection 
> succeeds. 
> 2015-04-27 17:54:28,169, ERROR, SQL, Node Number: 0, CPU: 3, PIN: 20885, 
> Process Name: $Z030H1Q, SQLCODE: 15001, QID: 
> MXID110030208852122969171678495210306U300_483_STMT_PUBLICATION, 
> *** ERROR[15001] A syntax error occurred at or before: 
> insert into Trafodion."_REPOS_".metric_query_aggr_table 
> values(0,0,0,20885,2088
> 5,3,0,0,'16.235.158.28',0,'$Z030H1Q','MXID11003020885212296917167849521
> 0406U300',CONVERTTIMESTAMP(212296917268168600),CONVERTTIMESTAMP(21229691726
> 8168603),6,3,'DB__ROOT','NONE','SARUNA2','saruna','Trafodion ODBC 
> Data 
> Source 'amethyst' 
> Configuration',0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
>^ (332 characters from start of SQL statement)
> 2015-04-27 17:54:28,169, ERROR, MXOSRVR, Node Number: 3, CPU: 3, PIN:20885, 
> Process Name:$Z030H1Q , , ,A NonStop Process Service error Failed to write 
> statistics: insert into Trafodion."_REPOS_".metric_query_aggr_table 
> values(0,0,0,20885,20885,3,0,0,'16.235.158.28',0,'$Z030H1Q','MXID110030208852122969171678495210406U300',CONVERTTIMESTAMP(212296917268168600),CONVERTTIMESTAMP(212296917268168603),6,3,'DB__ROOT','NONE','SARUNA2','saruna','Trafodion
>  ODBC Data Source 'amethyst' 
> Configuration',0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)Error
>  detail - *** ERROR[15001] A syntax error occurred at or before: 
> insert into Trafodion."_REPOS_".metric_query_aggr_table 
> values(0,0,0,20885,2088
> 5,3,0,0,'16.235.158.28',0,'$Z030H1Q','MXID11003020885212296917167849521
> 0406U300',CONVERTTIMESTAMP(212296917268168600),CONVERTTIMESTAMP(21229691726
> 8168603),6,3,'DB__ROOT','NONE','SARUNA2','saruna','Trafodion ODBC 
> Data 
> Source 'amethyst' 
> Configuration',0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
>^ (332 characters from start of SQL statement) [2015-04-27 
> 17:54:28] has occurred. 
> 2015-04-27 17:54:33,241, ERROR, SQL, Node Number: 0, CPU: 3, PIN: 20885, 
> Process Name: $Z030H1Q, SQLCODE: 15001, QID: 
> MXID110030208852122969171678495210306U300_487_STMT_PUBLICATION, 
> *** ERROR[15001] A syntax error occurred at or before: 
> insert into Trafodion."_REPOS_".metric_session_table 
> values(0,0,0,20885,20885,3
> ,0,0,'16.235.158.28',0,'$Z030H1Q','MXID11003020885212296917167849521040
> 6U300','END',CONVERTTIMESTAMP(212296917268168600),CONVERTTIMESTAMP(21229691
> 7273198784),3,'DB__ROOT','NONE','SARUNA2','saruna','Trafodion ODBC Data 
> Sou
> rce 'amethyst' Configuration',0,0,0,0,0,0,0,0,0,0,0,0,0,0,5827,29,0,0,0,0,0);
> ^ (329 characters from start of SQL statement)
> 2015-04-27 17:54:33,241, ERROR, MXOSRVR, Node Number: 3, CPU: 3, PIN:20885, 
> Process Name:$Z030H1Q , , ,A NonStop Process Service error Failed to write 
> statistics: insert into Trafodion."_REPOS_".metric_session_table 
> values(0,0,0,20885,20885,3,0,0,'16.235.158.28',0,'$Z030H1Q','MXID110030208852122969171678495210406U300','END',CONVERTTIMESTAMP(212296917268168600),CONVERTTIMESTAMP(212296917273198784),3,'DB__ROOT','NONE','SARUNA2','saruna','Trafodion
>  ODBC Data Source 'amethyst' 
> Configuration',0,0,0,0,0,0,0,0,0,0,0,0,0,0,5827,29,0,0,0,0,0)Error detail 
> - *** ERROR[15001] A syntax error occurred at or before: 
> insert into Trafodion."_REPOS_".metric_session_table 
> values(0,0,0,20885,20885,3
> ,0,0,'16.235.158.28',0,'$Z030H1Q','MXID11003020885212296917167849521040
> 6U300','END',CONVERTTIMESTAMP(212296917268168600),CONVERTTIMESTAMP(21229691
> 7273198784),3,'DB__ROOT','NONE','SARUNA2','saruna','Trafodion ODBC Data 
> Sou
> rce 'amethyst' Configuration',0,0,0,0,0,0,0,0,0,0,0,0,0,0,5827,29,0,0,0,0,0);
> ^ (329 characters from start of SQL 

[jira] [Updated] (TRAFODION-605) LP Bug: 1366227 - core file from shell during shutdown

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-605:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1366227 - core file from shell during shutdown
> --
>
> Key: TRAFODION-605
> URL: https://issues.apache.org/jira/browse/TRAFODION-605
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: foundation
>Reporter: Christopher Sheedy
>Assignee: Atanu Mishra
>Priority: Minor
> Fix For: 2.2.0
>
>
> Change 332 had a failure in core-regress-seabase-cdh4.4 when doing shutdown. 
> http://logs.trafodion.org/32/332/2/check/core-regress-seabase-cdh4.4/6a54826/console.html
>  has:
> Shutting down (normal) the SQ environment!
> Fri Sep 5 18:52:26 UTC 2014
> Processing cluster.conf on local host slave01
> [$Z000BBN] Shell/shell Version 1.0.1 Release 0.8.4 (Build release 
> [0.8.3rc1-203-ga165839master_Bld177], date 20140905_175103)
> ps
> [$Z000BBN] %ps
> [$Z000BBN] NID,PID(os)  PRI TYPE STATES  NAMEPARENT  PROGRAM
> [$Z000BBN]  ---  --- --- --- 
> ---
> [$Z000BBN] 000,00031016 000 WDG  ES--A-- $WDT000 NONEsqwatchdog
> [$Z000BBN] 000,00031017 000 PSD  ES--A-- $PSD000 NONEpstartd
> [$Z000BBN] 000,00031061 001 DTM  ES--A-- $TM0NONEtm
> [$Z000BBN] 000,00013883 001 GEN  ES--A-- $Z000BBNNONEshell
> [$Z000BBN] 001,00031018 000 PSD  ES--A-- $PSD001 NONEpstartd
> [$Z000BBN] 001,00031015 000 WDG  ES--A-- $WDT001 NONEsqwatchdog
> [$Z000BBN] 001,00031139 001 DTM  ES--A-- $TM1NONEtm
> shutdown
> [$Z000BBN] %shutdown
> /home/jenkins/workspace/core-regress-seabase-cdh4.4/trafodion/core/sqf/sql/scripts/sqshell:
>  line 7: 13883 Aborted (core dumped) shell $1 $2 $3 $4 $5 $6 
> $7 $8 $9
> Issued a 'shutdown normal' request
> Shutdown in progress
> # of SQ processes: 0
> SQ Shutdown (normal) from 
> /home/jenkins/workspace/core-regress-seabase-cdh4.4/trafodion/core/sql/regress
>  Successful
> Fri Sep 5 18:52:34 UTC 2014
> + ret=0
> + [[ 0 == 124 ]]
> + echo 'Return code 0'
> Return code 0
> + sudo /usr/local/bin/hbase-sudo.sh stop
> Stopping hbase-master
> Stopping HBase master daemon (hbase-master):[  OK  ]
> stopping master.
> Return code 0
> + echo 'Return code 0'
> Return code 0
> + cd ../../sqf/rundir
> + set +x
> = seabase
> 09/05/14 18:21:07 (RELEASE build)
> 09/05/14 18:23:51  TEST010### PASS ###
> 09/05/14 18:25:38  TEST011### PASS ###
> 09/05/14 18:27:46  TEST012### PASS ###
> 09/05/14 18:29:36  TEST013### PASS ###
> 09/05/14 18:29:50  TEST014### PASS ###
> 09/05/14 18:32:05  TEST016### PASS ###
> 09/05/14 18:32:35  TEST018### PASS ###
> 09/05/14 18:50:28  TEST020### PASS ###
> 09/05/14 18:50:44  TEST022### PASS ###
> 09/05/14 18:52:26  TEST024### PASS ###
> 09/05/14 18:21:07 - 18:52:26  (RELEASE build)
> WARNING: Core files found in 
> /home/jenkins/workspace/core-regress-seabase-cdh4.4/trafodion/core :
> -rw---. 1 jenkins jenkins 44552192 Sep  5 18:52 
> sql/regress/core.slave01.13883.shell
> 
> Total Passed:   10
> Total Failures: 0
> Failure : Found 1 core files
> Build step 'Execute shell' marked build as failure
> The core file's back trace is:
> -bash-4.1$ core_bt -d sql/regress
> core file  : -rw---. 1 jenkins jenkins 44552192 Sep  5 18:52 
> sql/regress/core.slave01.13883.shell
> gdb command: gdb shell sql/regress/core.slave01.13883.shell --batch -n -x 
> /tmp/tmp.xEFWF2xufh 2>&1
> Missing separate debuginfo for
> Try: yum --disablerepo='*' --enablerepo='*-debug*' install 
> /usr/lib/debug/.build-id/1e/0a7d58f454926e2afb4797865d85801ed65ece
> [New Thread 13884]
> [New Thread 13883]
> [Thread debugging using libthread_db enabled]
> Core was generated by `shell -a'.
> Program terminated with signal 6, Aborted.
> #0  0x0030ada32635 in raise () from /lib64/libc.so.6
> #0  0x0030ada32635 in raise () from /lib64/libc.so.6
> #1  0x0030ada33e15 in abort () from /lib64/libc.so.6
> #2  0x00411982 in LIOTM_assert_fun (pp_exp=0x4d4f40 "0", 
> pp_file=0x4d175e "clio.cxx", pv_line=1022, pp_fun=0x4d2d60 "int 
> Local_IO_To_Monitor::process_notice(message_def*)") at clio.cxx:99
> #3  0x00413b26 in Local_IO_To_Monitor::process_notice (this=0x7c6e80, 
> pp_msg=) at clio.cxx:1022
> #4  0x00413e03 in Local_IO_To_Monitor::get_io (this=0x7c6e80, 
> pv_sig=, pp_siginfo=) at 
> clio.cxx:637
> #5  0x00414075 in local_monitor_reader (pp_arg=0x7916) at clio.cxx:154
> #6  0x0030ae2079d1 in start_thread () from /lib64/libpthread.so.0
> #7  0x0030adae886d in clone () from /lib64/libc.so.6



--
This 

[jira] [Updated] (TRAFODION-1175) LP Bug: 1444228 - Trafodion should record the FQDN as client_name in Repository tables

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1175:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1444228 - Trafodion should record the FQDN as client_name in 
> Repository tables
> --
>
> Key: TRAFODION-1175
> URL: https://issues.apache.org/jira/browse/TRAFODION-1175
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-linux, connectivity-general
>Reporter: Chengxin Cai
>Assignee: Gao Jie
>Priority: Major
> Fix For: 2.2.0
>
>
> select distinct application_name, rtrim(client_name) from 
> "_REPOS_".METRIC_QUERY_AGGR_TABLE;
> APPLICATION_NAME    (EXPR)
> ------------------  ---------------------
> /usr/bin/python     sq1176
> TrafCI              sq1176.houston.hp.com
> --- 2 row(s) selected.
> Actually, sq1176 and sq1176.houston.hp.com are the same client, but they show 
> different results when using the odbc or jdbc client.
> The value should always be the FQDN whatever the client is.
> And the same problem in METRIC_QUERY_TABLE and METRIC_SESSION_TABLE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1278) LP Bug: 1465899 - Create table LIKE hive table fails silently

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1278:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1465899 - Create table LIKE hive table fails silently
> -
>
> Key: TRAFODION-1278
> URL: https://issues.apache.org/jira/browse/TRAFODION-1278
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-general
>Reporter: Barry Fritchman
>Assignee: Qifan Chen
>Priority: Critical
> Fix For: 2.2.0
>
>
> When using the CREATE TABLE ... LIKE ... syntax with a hive table as 
> the source, the statement appears to execute successfully, but the table is in 
> fact not created:
> >>create table traf_orders like hive.hive.orders;
> --- SQL operation complete.
> >>invoke traf_orders;
> *** ERROR[4082] Object TRAFODION.SEABASE.TRAF_ORDERS does not exist or is 
> inaccessible.
> --- SQL operation failed with errors.
> >>
> The problem seems to occur only when a Hive table is the source.  This 
> problem causes an error when attempting to update statistics for a hive table 
> using sampling, because the sample table is not created.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2308) JDBC T4 support read LOB

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-2308:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> JDBC T4 support read LOB
> 
>
> Key: TRAFODION-2308
> URL: https://issues.apache.org/jira/browse/TRAFODION-2308
> Project: Apache Trafodion
>  Issue Type: Sub-task
>  Components: client-jdbc-t4, connectivity-mxosrvr
>Affects Versions: 2.1-incubating
>Reporter: Weiqing Xu
>Assignee: Weiqing Xu
>Priority: Major
> Fix For: 2.2.0
>
>
> JDBC T4 needs to implement some APIs to support CLOB and BLOB.
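
For context, a minimal sketch (not part of the JIRA) of the standard JDBC read
path that such CLOB/BLOB support has to serve; the connection URL, table, and
column names below are hypothetical.

    import java.io.InputStream;
    import java.io.Reader;
    import java.sql.Blob;
    import java.sql.Clob;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class LobReadSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:t4jdbc://myhost:23400/:");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT doc_text, doc_bytes FROM mylobtable")) {
                while (rs.next()) {
                    // Character LOB: read through a stream rather than one big String.
                    Clob clob = rs.getClob(1);
                    try (Reader chars = clob.getCharacterStream()) {
                        // ... consume chars ...
                    }
                    // Binary LOB: same idea with an InputStream.
                    Blob blob = rs.getBlob(2);
                    try (InputStream bytes = blob.getBinaryStream()) {
                        // ... consume bytes ...
                    }
                }
            }
        }
    }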



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1801) Inserting NULL for all key columns in a table causes a failure

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1801:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> Inserting NULL for all key columns in a table causes a failure
> --
>
> Key: TRAFODION-1801
> URL: https://issues.apache.org/jira/browse/TRAFODION-1801
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.2-incubating
>Reporter: Suresh Subbiah
>Assignee: Suresh Subbiah
>Priority: Major
> Fix For: 2.2.0
>
>
> cqd allow_nullable_unique_key_constraint 'on' ;
> >>create table t1 (a int, b int, primary key (a,b)) ;
> --- SQL operation complete.
> >>showddl t1 ;
> CREATE TABLE TRAFODION.JIRA.T1
>   (
> AINT DEFAULT NULL SERIALIZED
>   , BINT DEFAULT NULL SERIALIZED
>   , PRIMARY KEY (A ASC, B ASC)
>   )
> ;
> --- SQL operation complete.
> >>insert into t1(a) values (1);
> --- 1 row(s) inserted.
> >>insert into t1(b) values (2) ;
> --- 1 row(s) inserted.
> >>select * from t1 ;
> A    B
> ---  ---
>   1    ?
>   ?    2
> --- 2 row(s) selected.
> >>insert into t1(a) values(3) ;
> --- 1 row(s) inserted.
> >>select * from t1 ;
> A    B
> ---  ---
>   1    ?
>   3    ?
>   ?    2
> --- 3 row(s) selected.
> -- fails
> >>insert into t1 values (null, null) ;
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::checkAndInsertRow returned error HBASE_ACCESS_ERROR(-706). 
> Cause: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=35, exceptions:
> Tue Feb 02 19:58:34 UTC 2016, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@4c2e0b96, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1923) executor/TEST106 hangs at drop table at times

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1923:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> executor/TEST106 hangs at drop table at times
> -
>
> Key: TRAFODION-1923
> URL: https://issues.apache.org/jira/browse/TRAFODION-1923
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.0-incubating
>Reporter: Selvaganesan Govindarajan
>Assignee: Prashanth Vasudev
>Priority: Critical
> Fix For: 2.2.0
>
>
> executor/TEST106 hangs at
> drop table t106a 
> Currently executor/TEST106 test is not run as part of Daily regression build.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2472) Alter table hbase options is not transaction enabled.

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-2472:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> Alter table hbase options is not transaction enabled.
> -
>
> Key: TRAFODION-2472
> URL: https://issues.apache.org/jira/browse/TRAFODION-2472
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Reporter: Prashanth Vasudev
>Assignee: Prashanth Vasudev
>Priority: Major
> Fix For: 2.2.0
>
>
> Transactional DDL for alter commands is currently disabled. 
> There are a few statements, such as alter HBase options, that are not disabled, 
> which results in unpredictable errors. 
> The initial fix is to make the alter statement not use a DDL transaction. 
> Following this, DDL transactions would be enhanced to support the ALTER TABLE 
> statement.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-608) LP Bug: 1367413 - metadata VERSIONS table need to be updated with current version

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-608:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1367413 - metadata VERSIONS table need to be updated with current 
> version
> -
>
> Key: TRAFODION-608
> URL: https://issues.apache.org/jira/browse/TRAFODION-608
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Anoop Sharma
>Assignee: Anoop Sharma
>Priority: Major
> Fix For: 2.2.0
>
>
> The metadata contains a VERSIONS table which holds the released software version.
> This table can be accessed by users through the SQL interface.
> It should be updated with the latest software version values whenever the software 
> is installed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-986) LP Bug: 1419906 - pthread_mutex calls do not always check return code

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-986:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1419906 - pthread_mutex calls do not always check return code
> -
>
> Key: TRAFODION-986
> URL: https://issues.apache.org/jira/browse/TRAFODION-986
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: foundation
>Reporter: dave george
>Assignee: Prashanth Vasudev
>Priority: Major
> Fix For: 2.2.0
>
>
> In quite a few places, the code shows:
>   pthread_mutex_lock() or pthread_mutex_unlock()
> The return code from these calls should be checked.
> A more generic summary would be something like:
> return codes from functions that return error codes should be checked.
> It may be possible to use Coverity or another such tool to automate checking for 
> the more general issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-157) LP Bug: 1252809 - DCS-ODBC-Getting 'Invalid server handle' after bound hstmt is used for a while.

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-157:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1252809 - DCS-ODBC-Getting 'Invalid server handle' after bound hstmt 
> is used for a while.
> -
>
> Key: TRAFODION-157
> URL: https://issues.apache.org/jira/browse/TRAFODION-157
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-linux
>Reporter: Aruna Sadashiva
>Assignee: RuoYu Zuo
>Priority: Major
> Fix For: 2.2.0
>
>
> Using ODBC 64 bit Linux driver.
> 'Invalid server handle' is returned and insert fails when using 
> SQLBindParameter/Prepare/Execute. The SQLExecute is done in a loop. It works 
> for a while, but fails within 10 minutes. Changed the program to reconnect 
> every 5 mins, but still seeing this error. It works on SQ.
> Have attached a simple test program to recreate this. To run on SQ, remove the 
> SQLExecDirect calls that set CQDs; those are specific to Traf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-909) LP Bug: 1412641 - log4cpp -- Node number in master*.log is always 0

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-909:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1412641 - log4cpp -- Node number in master*.log is always 0
> ---
>
> Key: TRAFODION-909
> URL: https://issues.apache.org/jira/browse/TRAFODION-909
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Gao, Rui-Xian
>Assignee: Sandhya Sundaresan
>Priority: Major
> Fix For: 2.2.0
>
>
> Node number in master*.log is always 0; tm.log has the correct number.
> SQL>select [first 5] * from udf(event_log_reader('f')) where 
> log_file_name='master_exec_0_7476.log';
> LOG_TS                      SEVERITY  COMPONENT  NODE_NUMBER  CPU  PIN    PROCESS_NAME  SQL_CODE  QUERY_ID  MESSAGE                      LOG_FILE_NODE  LOG_FILE_NAME           LOG_FILE_LINE  PARSE_STATUS
> --------------------------  --------  ---------  -----------  ---  -----  ------------  --------  --------  ---------------------------  -------------  ----------------------  -------------  ------------
> 2015-01-19 04:29:53.454000  INFO      SQL.ESP    0            2    24361  $Z020JW1      NULL      NULL      An ESP process is launched.  0              master_exec_0_7476.log  1
> 2015-01-19 04:29:53.462000  INFO      SQL.ESP    0            2    24360  $Z020JW0      NULL      NULL      An ESP process is launched.  0              master_exec_0_7476.log  2
> 2015-01-19 04:29:53.452000  INFO      SQL.ESP    0            5    31881  $Z050R0W      NULL      NULL      An ESP process is launched.  0              master_exec_0_7476.log  1
> 2015-01-19 04:35:23.101000  INFO      SQL.ESP    0            5    1892   $Z0501J2      NULL      NULL      An ESP process is launched.  0              master_exec_0_7476.log  2
> 2015-01-19 04:29:53.454000  INFO      SQL.ESP    0            2    24361  $Z020JW1      NULL      NULL      An ESP process is launched.  0              master_exec_0_7476.log  1
> --- 5 row(s) selected.
> SQL>select [first 5] * from udf(event_log_reader('f')) where log_file_name 
> like 

[jira] [Updated] (TRAFODION-138) LP Bug: 1246183 - volatile table is not dropped after hpdci session ends

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-138:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1246183 - volatile table is not dropped after hpdci session ends
> 
>
> Key: TRAFODION-138
> URL: https://issues.apache.org/jira/browse/TRAFODION-138
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Weishiun Tsai
>Assignee: Sandhya Sundaresan
>Priority: Critical
> Fix For: 2.2.0
>
>
> A volatile table is not dropped after the hpdci session has ended.  In the 
> following example, the volatile table persists after several hpdci 
> disconnects and reconnects.  This problem is not seen from sqlci, so I am 
> assuming that the problem is in how mxosrvr handles volatile tables.
> -bash-4.1$ hpdci.seascape2-sqtopl7.sh
> Welcome to HP Database Command Interface 3.0
> (c) Copyright 2010-2012 Hewlett-Packard Development Company, LP.
> Connected to Data Source: TDM_Default_DataSource
> SQL>set schema seabase.mytest;
> --- SQL operation complete.
> SQL>create volatile table abc (a int not null not droppable primary key);
> --- SQL operation complete.
> SQL>showddl abc;
> CREATE VOLATILE TABLE ABC
>   (
> AINT NO DEFAULT NOT NULL NOT DROPPABLE
>   , PRIMARY KEY (A ASC)
>   )
> ;
> --- SQL operation complete.
> SQL>exit;
> -bash-4.1$ hpdci.seascape2-sqtopl7.sh
> Welcome to HP Database Command Interface 3.0
> (c) Copyright 2010-2012 Hewlett-Packard Development Company, LP.
> Connected to Data Source: TDM_Default_DataSource
> SQL>set schema seabase.mytest;
> --- SQL operation complete.
> SQL>showddl abc;
> CREATE VOLATILE TABLE ABC
>   (
> AINT NO DEFAULT NOT NULL NOT DROPPABLE
>   , PRIMARY KEY (A ASC)
>   )
> ;
> --- SQL operation complete.
> SQL>exit;
> -bash-4.1$ hpdci.seascape2-sqtopl7.sh
> Welcome to HP Database Command Interface 3.0
> (c) Copyright 2010-2012 Hewlett-Packard Development Company, LP.
> Connected to Data Source: TDM_Default_DataSource
> SQL>set schema seabase.mytest;
> --- SQL operation complete.
> SQL>showddl abc;
> CREATE VOLATILE TABLE ABC
>   (
> AINT NO DEFAULT NOT NULL NOT DROPPABLE
>   , PRIMARY KEY (A ASC)
>   )
> ;
> --- SQL operation complete.
> SQL>exit;
> -bash-4.1$ hpdci.seascape2-sqtopl7.sh
> Welcome to HP Database Command Interface 3.0
> (c) Copyright 2010-2012 Hewlett-Packard Development Company, LP.
> Connected to Data Source: TDM_Default_DataSource
> SQL>set schema seabase.mytest;
> --- SQL operation complete.
> SQL>showddl abc;
> CREATE VOLATILE TABLE ABC
>   (
> AINT NO DEFAULT NOT NULL NOT DROPPABLE
>   , PRIMARY KEY (A ASC)
>   )
> ;
> --- SQL operation complete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1260) LP Bug: 1461629 - T2 driver not returning proper error msg for several unsupported catalog apis

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1260:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1461629 - T2 driver not returning proper error msg for several 
> unsupported catalog apis
> ---
>
> Key: TRAFODION-1260
> URL: https://issues.apache.org/jira/browse/TRAFODION-1260
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-jdbc-t2
>Reporter: Aruna Sadashiva
>Assignee: Kevin Xu
>Priority: Critical
> Fix For: 2.2.0
>
>
> For the following unsupported JDBC catalog APIs, the T4 driver returns a proper 
> error msg, but T2 returns a syntax error with some junk characters (or crashes 
> with mtserver):
> TestCat23(TestCatNew): Exception in test JDBC TestCat23 - Get Table 
> Privileges..*** ERROR[15001] A syntax error occurred at or before: 
> èÕ9÷ÿ;
> ^ (3 characters from start of SQL statem
> TestCat24(TestCatNew): Exception in test JDBC TestCat24 - Get Column 
> Privileges..*** ERROR[15001] A syntax error occurred at or before: 
> èÕ9÷ÿ;
> ^ (3 characters from start of SQL statem
> TestCat25(TestCatNew): Exception in test JDBC TestCat25 - Get Best Row 
> Identifier..*** ERROR[15001] A syntax error occurred at or before: 
> øÕ9÷ÿ;
> ^ (3 characters from start of SQL statem
> TestCat26(TestCatNew): Exception in test JDBC TestCat26 - Get Version 
> Columns..*** ERROR[15001] A syntax error occurred at or before: 
> øÕ9÷ÿ;
> ^ (3 characters from start of SQL statem
> TestCat27(TestCatNew): Exception in test JDBC TestCat27 - Get Procedures..*** 
> ERROR[15001] A syntax error occurred at or before: 
> Ö9÷ÿ;
> ^ (1 characters from start of SQL stateme
> TestCat28(TestCatNew): Exception in test JDBC TestCat28 - Get Procedure 
> Columns..*** ERROR[15001] A syntax error occurred at or before: 
> Ö9÷ÿ;
> ^ (1 characters from start of SQL stateme
> TestCat29(TestCatNew): Exception in test JDBC TestCat29 - Get Exported 
> Keys..*** ERROR[15001] A syntax error occurred at or before: 
> øÕ9÷ÿ;
> ^ (3 characters from start of SQL statem
> TestCat30(TestCatNew): Exception in test JDBC TestCat30 - Get Imported 
> Keys..*** ERROR[15001] A syntax error occurred at or before: 
> øÕ9÷ÿ;
> ^ (3 characters from start of SQL statem
> TestCat31(TestCatNew): Exception in test JDBC TestCat31 - Get Index Info..*** 
> ERROR[15001] A syntax error occurred at or before: 
> Ö9÷ÿ;
> ^ (1 characters from start of SQL stateme
> false
> [trafodion@n007 basic]$
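
For reference, a hedged sketch (not from the JIRA) of how these catalog calls are
issued through java.sql.DatabaseMetaData, and of the clean failure mode expected
when a driver does not support one of them; the connection URL and object names
are hypothetical.

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class CatalogApiSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:sqlmx:")) {
                DatabaseMetaData md = conn.getMetaData();
                // One of the calls listed above; the others follow the same pattern.
                try (ResultSet rs = md.getTablePrivileges(null, "SEABASE", "MYTABLE")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("TABLE_NAME") + " : "
                                + rs.getString("PRIVILEGE"));
                    }
                } catch (SQLException e) {
                    // An unsupported metadata call should surface as a clear
                    // SQLException (ideally SQLFeatureNotSupportedException),
                    // not as a -15001 syntax error built from garbage characters.
                    System.err.println("getTablePrivileges not supported: " + e.getMessage());
                }
            }
        }
    }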



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1246) LP Bug: 1458011 - Change core file names in Sandbox

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1246:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1458011 - Change core file names in Sandbox
> ---
>
> Key: TRAFODION-1246
> URL: https://issues.apache.org/jira/browse/TRAFODION-1246
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: installer
>Reporter: Amanda Moran
>Priority: Minor
> Fix For: 2.2.0
>
>
> When creating a sandbox we should change the name of core files so that users 
> will not have to do it themselves.
> echo "/tmp/cores/core.%e.%p.%h.%t" > /proc/sys/kernel/core_pattern
> Reference: https://sigquit.wordpress.com/2009/03/13/the-core-pattern/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1115) LP Bug: 1438934 - MXOSRVRs don't get released after interrupting execution of the client application (ODB)

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1115:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1438934 - MXOSRVRs don't get released after interrupting execution of 
> the client application (ODB)
> --
>
> Key: TRAFODION-1115
> URL: https://issues.apache.org/jira/browse/TRAFODION-1115
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Reporter: Chirag Bhalgami
>Assignee: Daniel Lu
>Priority: Critical
> Fix For: 2.2.0
>
>
> MXOSRVRs are not getting released when the ODB application is interrupted during 
> execution.
> After restarting DCS, it still shows that the odb app is occupying MXOSRVRs.
> Also, executing odb throws following error message:
> -
> odb [2015-03-31 21:19:11]: starting ODBC connection(s)... (1) 1 2 3 4
> Connected to HP Database
> [3] 5,000 records inserted [commit]
> [2] odb [Oloadbuff(9477)] - Error (State: 25000, Native -8606)
> [Trafodion ODBC Driver][Trafodion Database] SQL ERROR:*** ERROR[8606] 
> Transaction subsystem TMF returned error 97 on a commit transaction. 
> [2015-03-31 21:39:47]
> [2] 0 records inserted [commit]
> [3] odb [Oloadbuff(9477)] - Error (State: 25000, Native -8606)
> [Trafodion ODBC Driver][Trafodion Database] SQL ERROR:*** ERROR[8606] 
> Transaction subsystem TMF returned error 97 on a commit transaction. 
> [2015-03-31 21:39:47]
> [3] 5,000 records inserted [commit]
> [4] odb [Oloadbuff(9477)] - Error (State: 25000, Native -8606)
> [Trafodion ODBC Driver][Trafodion Database] SQL ERROR:*** ERROR[8606] 
> Transaction subsystem TMF returned error 97 on a commit transaction. 
> [2015-03-31 21:39:47]
> [4] 0 records inserted [commit]
> [1] odb [Oloadbuff(9477)] - Error (State: 25000, Native -8606)
> [Trafodion ODBC Driver][Trafodion Database] SQL ERROR:*** ERROR[8606] 
> Transaction subsystem TMF returned error 97 on a commit transaction. 
> [2015-03-31 21:39:47]
> [1] 0 records inserted [commit]
> odb [sigcatch(4125)] - Received SIGINT. Exiting
> -
> Trafodion Build: Release [1.0.0-304-ga977ee7_Bld14], branch a977ee7-master, 
> date 20150329_083001)
> Hadoop Distro: HDP 2.2
> HBase Version: 0.98.4.2.2.0.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1151) LP Bug: 1442483 - SQL queries hang when Region Server goes down

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1151:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1442483 - SQL queries hang when Region Server goes down
> ---
>
> Key: TRAFODION-1151
> URL: https://issues.apache.org/jira/browse/TRAFODION-1151
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Sandhya Sundaresan
>Assignee: Sandhya Sundaresan
>Priority: Major
> Fix For: 2.2.0
>
>
> When the RS goes down - for example, killed by ZooKeeper due to out-of-memory 
> conditions - the SQL query that was executing hangs and doesn't time out for 
> at least an hour. 
> But subsequent SQL queries immediately detect that the RS is down and return an 
> HBase -705 error. 
> The timeout values in the JNI calls need to be investigated to see why the hang 
> happens.
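
As background only, a generic illustration (not the Trafodion fix) of the HBase
client settings that normally bound how long a blocked call like this can hang;
the values shown are arbitrary.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ClientTimeoutSketch {
        public static Configuration configure() {
            Configuration conf = HBaseConfiguration.create();
            // Upper bound for a single RPC to a region server, in milliseconds.
            conf.setInt("hbase.rpc.timeout", 60000);
            // Upper bound for a whole client operation, including retries.
            conf.setInt("hbase.client.operation.timeout", 120000);
            // Fewer retries so a dead region server is detected sooner.
            conf.setInt("hbase.client.retries.number", 5);
            return conf;
        }
    }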



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1221) LP Bug: 1450853 - Hybrid Query Cache: query with equals predicate on INTERVAL datatype should not have a non-parameterized literal.

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1221:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1450853 - Hybrid Query Cache: query with equals predicate on INTERVAL 
> datatype should not have a non-parameterized literal.
> ---
>
> Key: TRAFODION-1221
> URL: https://issues.apache.org/jira/browse/TRAFODION-1221
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Julie Thai
>Assignee: Howard Qin
>Priority: Critical
> Fix For: 2.2.0
>
>
> For a query with an equals predicate on an INTERVAL datatype, both parameterized and 
> non-parameterized literals appear in the HybridQueryCacheEntries virtual table. 
> The non-parameterized literal should be empty.
> SQL>prepare XX from select * from F00INTVL where colintvl = interval '39998' 
> day(6);
> *** WARNING[6008] Statistics for column (COLKEY) from table 
> TRAFODION.QUERYCACHE_HQC.F00INTVL were not available. As a result, the access 
> path chosen might not be the best possible. [2015-04-30 13:31:48]
> --- SQL command prepared.
> SQL>execute show_entries;
> HKEY                                                                   NUM_HITS  NUM_PLITERALS  (EXPR)                   NUM_NPLITERALS  (EXPR)
> ---------------------------------------------------------------------  --------  -------------  -----------------------  --------------  -------
> SELECT * FROM F00INTVL WHERE COLINTVL = INTERVAL #NP# DAY ( #NP# ) ;   0         1              INTERVAL '39998' DAY(6)  1               '39998'
> --- 1 row(s) selected.
> To reproduce:
> create table F00INTVL(
> colkey int not null primary key,
> colintvl interval day(6));
> load into F00INTVL select
> c1+c2*10+c3*100+c4*1000+c5*1+c6*10, --colkey
> cast(cast(mod(c1+c2*10+c3*100+c4*1000+c5*1+c6*10,99)
> as integer) as interval day(6)) --colintvl
> from (values(1)) t
> transpose 0,1,2,3,4,5,6,7,8,9 as c1
> transpose 0,1,2,3,4,5,6,7,8,9 as c2
> transpose 0,1,2,3,4,5,6,7,8,9 as c3
> transpose 0,1,2,3,4,5,6,7,8,9 as c4
> transpose 0,1,2,3,4,5,6,7,8,9 as c5
> transpose 0,1,2,3,4,5,6,7,8,9 as c6;
> update statistics for table F00INTVL on colintvl;
> prepare show_entries from select left(hkey,50), num_pliterals, 
> left(pliterals,15), num_npliterals, left(npliterals,15) from 
> table(HybridQueryCacheEntries('USER', 'LOCAL'));
> prepare XX from select * from F00INTVL where colintvl = interval '39998' 
> day(6);
> execute show_entries;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1628) Implement T2 Driver's Rowsets ability to enhance the batch insert performance

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1628:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> Implement T2 Driver's Rowsets ability to enhance the batch insert performance
> -
>
> Key: TRAFODION-1628
> URL: https://issues.apache.org/jira/browse/TRAFODION-1628
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: client-jdbc-t2
>Reporter: RuoYu Zuo
>Assignee: RuoYu Zuo
>Priority: Critical
>  Labels: features, performance
> Fix For: 2.2.0
>
>
> The JDBC T2 Driver currently has very poor batch insert performance, because it does 
> not have the rowsets ability. Implementing rowsets functionality will allow the T2 
> Driver to perform batch insert operations much faster.
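
For reference, a minimal sketch (not from the JIRA) of the standard JDBC batch
insert pattern whose throughput the rowsets work targets; the connection URL and
table are hypothetical.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchInsertSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:sqlmx:");
                 PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO mytable (a, b) VALUES (?, ?)")) {
                conn.setAutoCommit(false);
                for (int i = 0; i < 10000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row-" + i);
                    ps.addBatch();
                    // Without rowset support each buffered row still reaches the
                    // engine as an individual insert, which is the slow path.
                    if (i % 1000 == 999) {
                        ps.executeBatch();
                    }
                }
                ps.executeBatch();
                conn.commit();
            }
        }
    }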



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-994) LP Bug: 1420523 - ODBC: Several values returned by SQLColumns are incorrect

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-994:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1420523 - ODBC: Several values returned by SQLColumns are incorrect
> ---
>
> Key: TRAFODION-994
> URL: https://issues.apache.org/jira/browse/TRAFODION-994
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-general
>Reporter: JiepingZhang
>Priority: Critical
> Fix For: 2.2.0
>
>
> Below are the failures in SQLColumns API testing:
> 1. In the resultset returned by the SQLColumns API, the value of column ColNullable is 
> 2 rather than 1, and column REMARK is empty.
> Test create table =>create table GTN2BSG5FQ (KXE2QSC7HC char(10) CHARACTER 
> SET ISO88591) no partition
> ===
> SQLColumns: compare results of columns fetched for following column
> The Column Name is KXE2QSC7HC and column type is char
> ***ERROR: ColNullable expect: 1 and actual: 2 are not matched
> ***ERROR: Remark expect: CHARACTER  CHARACTER SET ISO88591 and actual:  are 
> not matched
> Number of rows fetched: 1
> 2. Somehow, if the table has more than 3 columns, the 3rd column seems to get 
> lost, as nothing regarding the 3rd column is returned in the resultset. For the 
> test case below, the 3rd column E5IPGXAHNB has no info in the resultset.
> 19:18:38  Test create table =>create table GTN2BSG5FQ (KXE2QSC7HC char(10) 
> CHARACTER SET ISO88591,RMSYLIFAR4 varchar(10) CHARACTER SET 
> ISO88591,E5IPGXAHNB long varchar CHARACTER SET ISO88591,ZQW9LNYDG3 
> decimal(10,5)) no partition
> ===
> 19:18:40 SQLColumns: Test #3
> SQLColumns: SQLColumns function call executed correctly.
> SQLColumns: compare results of columns fetched for following column
> The Column Name is KXE2QSC7HC and column type is char
> ***ERROR: ColNullable expect: 1 and actual: 2 are not matched
> ***ERROR: Remark expect: CHARACTER CHARACTER SET ISO88591 and actual: are not 
> matched
> SQLColumns: compare results of columns fetched for following column
> The Column Name is RMSYLIFAR4 and column type is varchar
> ***ERROR: ColNullable expect: 1 and actual: 2 are not matched
> ***ERROR: Remark expect: VARCHAR CHARACTER SET ISO88591 and actual: are not 
> matched
> SQLColumns: compare results of columns fetched for following column
> The Column Name is E5IPGXAHNB and column type is long varchar
> ***ERROR: ColName expect: E5IPGXAHNB and actual: ZQW9LNYDG3 are not matched
> ***ERROR: ColDataType expect: 12 and actual: 3 are not matched
> ***ERROR: ColTypeName expect: VARCHAR and actual: DECIMAL are not matched
> ***ERROR: ColPrec expect: 2000 and actual: 10 are not matched
> ***ERROR: ColLen expect: 2000 and actual: 12 are not matched
> ***ERROR: ColScale expect: 0 and actual: 5 are not matched
> ***ERROR: ColRadix expect: 0 and actual: 10 are not matched
> ***ERROR: ColNullable expect: 1 and actual: 2 are not matched
> ***ERROR: Remark expect: VARCHAR CHARACTER SET ISO88591 and actual: are not 
> matched
> Number of rows fetched: 3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1162) LP Bug: 1443246 - WRONG QUERY_STATUS/QUERY_SUB_STATUS for canceled queries in METRIC_QUERY_TABLE

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1162:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1443246 - WRONG QUERY_STATUS/QUERY_SUB_STATUS for canceled queries in 
> METRIC_QUERY_TABLE
> 
>
> Key: TRAFODION-1162
> URL: https://issues.apache.org/jira/browse/TRAFODION-1162
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Reporter: FengQiang
>Assignee: Gao Jie
>Priority: Critical
> Fix For: 2.2.0
>
>
> Two scenarios for this issue:
> 1. Cancel a running query with the sql command control query cancel qid 
> "MXID11136022122946686468383500906U300_32053_XX".
> In "_REPOS_".METRIC_QUERY_TABLE, it has a value for EXEC_END_UTC_TS. But 
> QUERY_STATUS is COMPLETED and there is no value for QUERY_SUB_STATUS. Shouldn't one 
> of the statuses tell that the query was canceled?
> 2. Kill the trafci client while a select query is still running.
> In "_REPOS_".METRIC_QUERY_TABLE, it has a value for EXEC_END_UTC_TS. But 
> QUERY_STATUS remains 'EXECUTING' and there is no value for QUERY_SUB_STATUS. Shouldn't 
> one of the statuses tell that the query was canceled?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-480) LP Bug: 1349644 - Status array returned by batch operations contains wrong return value for T2

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-480:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1349644 - Status array returned by batch operations contains wrong 
> return value for T2
> --
>
> Key: TRAFODION-480
> URL: https://issues.apache.org/jira/browse/TRAFODION-480
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-jdbc-t2, client-jdbc-t4
>Reporter: Aruna Sadashiva
>Assignee: RuoYu Zuo
>Priority: Major
> Fix For: 2.2.0
>
>
> The status array returned from T2 contains a different value compared to T4. 
> T4 returns -2 and T2 returns 1. 
> The oracle JDBC documentation states:
> 0 or greater — the command was processed successfully and the value is an 
> update count indicating the number of rows in the database that were affected 
> by the command’s execution
> Statement.SUCCESS_NO_INFO — the command was processed successfully, but the 
> number of rows affected is unknown
> Statement.SUCCESS_NO_INFO is defined as being -2, so your result says 
> everything worked fine, but you won't get information on the number of 
> updated columns.
> For a prepared statement batch, it is not possible to know the number of rows 
> affected in the database by each individual statement in the batch. 
> Therefore, all array elements have a value of -2. According to the JDBC 2.0 
> specification, a value of -2 indicates that the operation was successful but 
> the number of rows affected is unknown.
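
A short sketch (not from the JIRA) of how a client typically interprets the array
returned by executeBatch(); it shows why returning 1 instead of
Statement.SUCCESS_NO_INFO (-2) misleads callers when the per-row count is unknown.

    import java.sql.Statement;

    public class BatchStatusSketch {
        // Interprets the update-count array returned by Statement.executeBatch().
        static void report(int[] counts) {
            for (int i = 0; i < counts.length; i++) {
                if (counts[i] == Statement.SUCCESS_NO_INFO) {        // -2
                    System.out.println("command " + i + ": succeeded, row count unknown");
                } else if (counts[i] == Statement.EXECUTE_FAILED) {  // -3
                    System.out.println("command " + i + ": failed");
                } else {
                    // Any value >= 0 is taken literally as the number of affected
                    // rows, so a driver that reports 1 when the count is unknown
                    // makes the application believe exactly one row was changed.
                    System.out.println("command " + i + ": " + counts[i] + " row(s) affected");
                }
            }
        }
    }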



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1212) LP Bug: 1449732 - Drop schema cascade returns error 1069

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1212:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1449732 - Drop schema cascade returns error 1069
> 
>
> Key: TRAFODION-1212
> URL: https://issues.apache.org/jira/browse/TRAFODION-1212
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmu
>Reporter: Weishiun Tsai
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.2.0
>
>
> The frequency of ‘drop schema cascade’ returning error 1069 is still pretty 
> high, even after several attempts to address this issue.  This is causing a 
> lot of headache for the QA regression testing.  After each regression testing 
> run, there are always several schemas that couldn’t be dropped and needed to 
> be manually cleaned up.
> Multiple issues may lead to this problem.  This just happens to be one 
> scenario that is quite reproducible now.  In this particular scenario, the 
> schema contains a TMUDF library qaTmudfLib and 2 TMUDF functions qa_tmudf1 
> and qa_tmudf2.  qa_tmudf1 is a valid function, while qa_tmudf2 has a bogus 
> external name and a call to it is expected to see an error.
> After invoking both, a drop schema cascade almost always returns error 1069.
> This is seen on the r1.1.0rc3 (v0427) build installed on a workstation and it 
> is fairly reproducible with this build.  To reproduce it:
> (1) Download the attached tar file and untar it to get the 3 files in there. 
> Put the files in any directory .
> (2) Make sure that you have run ./sqenv.sh of your Trafodion instance first 
> as building UDF needs $MY_SQROOT for the header files.
> (3) Run build.sh
> (4) Change the line “create library qaTmudfLib file 
> '/qaTMUdfTest.so';” in mytest.sql and fill in 
> (5) From sqlci, obey mytest.sql
> Here is the execution output:
> >>log mytest.log clear;
> >>drop schema mytest cascade;
> *** ERROR[1003] Schema TRAFODION.MYTEST does not exist.
> --- SQL operation failed with errors.
> >>create schema mytest;
> --- SQL operation complete.
> >>set schema mytest;
> --- SQL operation complete.
> >>
> >>create library qaTmudfLib file '/qaTMUdfTest.so';
> --- SQL operation complete.
> >>
> >>create table mytable (a int, b int);
> --- SQL operation complete.
> >>insert into mytable values (1,1),(2,2);
> --- 2 row(s) inserted.
> >>
> >>create table_mapping function qa_tmudf1()
> +>external name 'QA_TMUDF'
> +>language cpp
> +>library qaTmudfLib;
> --- SQL operation complete.
> >>
> >>select * from UDF(qa_tmudf1(TABLE(select * from mytable)));
> A    B
> ---  ---
>   1    1
>   2    2
> --- 2 row(s) selected.
> >>
> >>create table_mapping function qa_tmudf2()
> +>external name 'DONTEXIST'
> +>language cpp
> +>library qaTmudfLib;
> --- SQL operation complete.
> >>
> >>select * from UDF(qa_tmudf2(TABLE(select * from mytable)));
> *** ERROR[11246] An error occurred locating function 'DONTEXIST' in library 
> 'qaTMUdfTest.so'.
> *** ERROR[8822] The statement was not prepared.
> >>
> >>drop schema mytest cascade;
> *** ERROR[1069] Schema TRAFODION.MYTEST could not be dropped.
> --- SQL operation failed with errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1107) LP Bug: 1438466 - Multiple tdm_arkcmp child processes started after receipt of HBase error

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1107:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1438466 - Multiple tdm_arkcmp child processes started after receipt 
> of HBase error
> --
>
> Key: TRAFODION-1107
> URL: https://issues.apache.org/jira/browse/TRAFODION-1107
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Reporter: Joanie Cooper
>Assignee: Qifan Chen
>Priority: Critical
> Fix For: 2.2.0
>
>
> During a fresh test of running the compGeneral regression suite
> while artificially injecting an error return from the TrxRegionEndpoint
> coprocessor, numerous tdm_arkcmp child processes were started.
> Before the error hit, we seemed to have a normal number of compilers
> [$Z0005MG] 000,3170 001 GEN  ES--A-- $Z0002KK$Z0002IVtdm_arkcmp   
>   
> [$Z0005MG] 000,3292 001 GEN  ES--A-- $Z0002P2$Z0002KKtdm_arkcmp   
>   
> [$Z0005MG] 000,3816 001 GEN  ES--A-- $Z000341$Z0002P2tdm_arkcmp   
>   
> [$Z0005MG] 000,3886 001 GEN  ES--A-- $Z000361$Z000341tdm_arkcmp
> After forcing the error, it looks like we have new compilers being generated,
> all ultimately part of the original tdm_arkcmp parent off of the sqlci 
> session.
> This is a result of a drop statement.  From the sqlci window, the
> statement appears hung, as it never returns.  But, it appears the
> compilers keep generating new children and the query ultimately never returns.
> When I killed the query, it had 174 compilers running.
> I tried a pstack for one of the compilers; I’ve attached it below.
> g4t3037{joaniec}3: sqps
> Processing cluster.conf on local host g4t3037.houston.hp.com
> [$Z000AF9] Shell/shell Version 1.0.1 Release 1.1.0 (Build release [joaniec], 
> date 26Mar15)
> [$Z000AF9] %ps  
> [$Z000AF9] NID,PID(os)  PRI TYPE STATES  NAMEPARENT  PROGRAM
> [$Z000AF9]  ---  --- --- --- 
> ---
> [$Z000AF9] 000,00018562 000 WDG  ES--A-- $WDG000 NONEsqwatchdog   
>   
> [$Z000AF9] 000,00018563 000 PSD  ES--A-- $PSD000 NONEpstartd  
>   
> [$Z000AF9] 000,00018592 001 DTM  ES--A-- $TM0NONEtm   
>   
> [$Z000AF9] 000,00019243 001 GEN  ES--A-- $ZSC000 NONEmxsscp   
>   
> [$Z000AF9] 000,00019274 001 SSMP ES--A-- $ZSM000 NONEmxssmp   
>   
> [$Z000AF9] 000,00020982 001 GEN  ES--A-- $ZLOBSRV0   NONEmxlobsrvr
>   
> [$Z000AF9] 000,7356 001 GEN  ES--A-- $Z000606NONEsqlci
>   
> [$Z000AF9] 000,7416 001 GEN  ES--A-- $Z00061W$Z000606tdm_arkcmp   
>   
> [$Z000AF9] 000,7960 001 GEN  ES--A-- $Z0006HF$Z00061Wtdm_arkcmp   
>   
> [$Z000AF9] 000,8021 001 GEN  ES--A-- $Z0006J6$Z0006HFtdm_arkcmp   
>   
> [$Z000AF9] 000,8079 001 GEN  ES--A-- $Z0006KU$Z0006J6tdm_arkcmp   
>   
> [$Z000AF9] 000,8137 001 GEN  ES--A-- $Z0006MH$Z0006KUtdm_arkcmp   
>   
> [$Z000AF9] 000,8194 001 GEN  ES--A-- $Z0006P4$Z0006MHtdm_arkcmp   
>   
> [$Z000AF9] 000,8252 001 GEN  ES--A-- $Z0006QS$Z0006P4tdm_arkcmp   
>   
> [$Z000AF9] 000,8312 001 GEN  ES--A-- $Z0006SH$Z0006QStdm_arkcmp   
>   
> [$Z000AF9] 000,8369 001 GEN  ES--A-- $Z0006U4$Z0006SHtdm_arkcmp   
>   
> [$Z000AF9] 000,8427 001 GEN  ES--A-- $Z0006VS$Z0006U4tdm_arkcmp   
>   
> [$Z000AF9] 000,8491 001 GEN  ES--A-- $Z0006XL$Z0006VStdm_arkcmp   
>   
> [$Z000AF9] 000,9023 001 GEN  ES--A-- $Z0007CT$Z0006XLtdm_arkcmp   
>   
> [$Z000AF9] 000,9081 001 GEN  ES--A-- $Z0007EG$Z0007CTtdm_arkcmp   
>   
> [$Z000AF9] 000,9141 001 GEN  ES--A-- $Z0007G6$Z0007EGtdm_arkcmp   
>   
> [$Z000AF9] 000,9202 001 GEN  ES--A-- $Z0007HX$Z0007G6tdm_arkcmp   
>   
> [$Z000AF9] 000,9262 001 GEN  ES--A-- $Z0007JM$Z0007HXtdm_arkcmp   
>   
> [$Z000AF9] 000,9320 001 GEN  ES--A-- $Z0007LA$Z0007JMtdm_arkcmp   
>   
> [$Z000AF9] 000,9489 001 GEN  ES--A-- $Z0007R4$Z0007LAtdm_arkcmp   
>   
> [$Z000AF9] 000,9547 001 GEN  ES--A-- $Z0007SS$Z0007R4tdm_arkcmp   
>   
> [$Z000AF9] 000,9604 001 GEN  ES--A-- $Z0007UE$Z0007SStdm_arkcmp   
>   
> [$Z000AF9] 000,9661 001 GEN  ES--A-- $Z0007W1$Z0007UEtdm_arkcmp   
>   
> [$Z000AF9] 000,9728 001 GEN  ES--A-- $Z0007XY$Z0007W1tdm_arkcmp   
>   
> [$Z000AF9] 000,00010268 001 GEN  ES--A-- $Z0008DD$Z0007XYtdm_arkcmp   
>   
> [$Z000AF9] 000,00010364 001 GEN  ES--A-- $Z0008G4$Z0008DDtdm_arkcmp   
>   
> [$Z000AF9] 000,00010421 

[jira] [Updated] (TRAFODION-1575) Self-referencing update updates the column to a wrong value

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1575:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> Self-referencing update updates the column to a wrong value
> ---
>
> Key: TRAFODION-1575
> URL: https://issues.apache.org/jira/browse/TRAFODION-1575
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 1.3-incubating
> Environment: Can be reproduced on a workstation
>Reporter: David Wayne Birdsall
>Assignee: Selvaganesan Govindarajan
>Priority: Major
> Fix For: 2.2.0
>
>
> As shown in the following execution output, the update statement tries to 
> update c2 with count(distinct c2) from the same table. While the subquery 
> ‘select c from (select count(distinct c2) from mytable) dt(c)’ returns the 
> correct result 3 when it is run by itself, the update statement using the 
> same subquery updated the column c2 to 2, instead of 3. The updated value 
> always seems to be 1 less in this case.
> Here is the execution output:
> >>create schema mytest;
> --- SQL operation complete.
> >>
> >>create table mytable (c1 char(1), c2 integer);
> --- SQL operation complete.
> >>
> >>insert into mytable values ('A', 100), ('B', 200), ('C', 300);
> --- 3 row(s) inserted.
> >>select * from mytable order by 1;
> C1 C2
> -- ---
> A 100
> B 200
> C 300
> --- 3 row(s) selected.
> >>select c from (select count(distinct c2) from mytable) dt(c);
> C
> 
>3
> --- 1 row(s) selected.
> >>
> >>prepare xx from update mytable set c2 =
> +>(select c from (select count(distinct c2) from mytable) dt(c))
> +>where c2 = 100;
> --- SQL command prepared.
> >>explain options 'f' xx;
> LC RC OP OPERATOR OPT DESCRIPTION CARD
>       -
> 12 . 13 root x 1.00E+001
> 10 11 12 tuple_flow 1.00E+001
> . . 11 trafodion_insert MYTABLE 1.00E+000
> 9 . 10 sort 1.00E+001
> 8 4 9 hybrid_hash_join 1.00E+001
> 6 7 8 nested_join 1.00E+001
> . . 7 trafodion_delete MYTABLE 1.00E+000
> 5 . 6 sort 1.00E+001
> . . 5 trafodion_scan MYTABLE 1.00E+001
> 3 . 4 sort_scalar_aggr 1.00E+000
> 2 . 3 sort_scalar_aggr 1.00E+000
> 1 . 2 hash_groupby 2.00E+000
> . . 1 trafodion_scan MYTABLE 1.00E+002
> --- SQL operation complete.
> >>execute xx;
> --- 1 row(s) updated.
> >>
> >>select * from mytable order by 1;
> C1 C2
> -- ---
> A 2
> B 200
> C 300
> --- 3 row(s) selected.
> >>
> >>drop schema mytest cascade;
> --- SQL operation complete.
> >>
> The value of C2 in row A above should have been updated to 3.
> This problem was found by Wei-Shiun Tsai.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2348) TransactionState.hasConflict returns true if it gets a null pointer exception

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-2348:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> TransactionState.hasConflict returns true if it gets a null pointer exception
> -
>
> Key: TRAFODION-2348
> URL: https://issues.apache.org/jira/browse/TRAFODION-2348
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Affects Versions: 2.1-incubating, any
>Reporter: Sean Broeder
>Assignee: Sean Broeder
>Priority: Major
> Fix For: 2.2.0
>
>
> In the middle of hasConflict, the TransactionState object compares its 
> writeOrder list to various other transactions.  In this case, we get a null 
> pointer exception in the transaction to check against, so hasConflict returns true 
> and the transaction aborts.
> 2016-11-07 20:00:28,673 WARN 
> org.apache.hadoop.hbase.regionserver.transactional.TransactionState: 
> TrxTransactionState hasConflict: 
> Unable to get row - this Transaction [[transactionId: 12919375954 regionTX: 
> false status: PENDING neverReadOnly: false scan Size: 28 write Size: 14 
> startSQ: 34310]] 
> checkAgainst Transaction [[transactionId: 17214542234 regionTX: false status: 
> ABORTED neverReadOnly: false scan Size: 0 write Size: 0 startSQ: 34296 
> commitedSQ:34314]]  Exception:
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.transactional.TrxTransactionState.hasConflict(TrxTransactionState.java:469)
> at 
> org.apache.hadoop.hbase.regionserver.transactional.TrxTransactionState.hasConflict(TrxTransactionState.java:438)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.hasConflict(TrxRegionEndpoint.java:6389)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:6138)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:6077)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:894)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.generated.TrxRegionProtos$TrxRegionService.callMethod(TrxRegionProtos.java:49510)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7054)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1746)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1728)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31447)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
> 2016-11-07 20:00:28,674 ERROR 
> org.apache.hadoop.hbase.regionserver.transactional.TransactionState: 
> TrxTransactionState hasConflict: 
> Returning true. This transaction [transactionId: 12919375954 regionTX: false 
> status: PENDING neverReadOnly: false scan Size: 28 write Size: 14 startSQ: 
> 34310] Caught exception from transaction [transactionId: 17214542234 
> regionTX: false status: ABORTED neverReadOnly: false scan Size: 0 write Size: 
> 0 startSQ: 34296 commitedSQ:34314], regionInfo is 
> [TRAFODION.JAVABENCH.OE_ORDERLINE_192,\x00\x00\x00\x1D\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1478575978122.228b0109fcab4c57c25d7f1326f40f4e.],
>  exception
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.transactional.TrxTransactionState.hasConflict(TrxTransactionState.java:469)
> at 
> org.apache.hadoop.hbase.regionserver.transactional.TrxTransactionState.hasConflict(TrxTransactionState.java:438)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.hasConflict(TrxRegionEndpoint.java:6389)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:6138)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:6077)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:894)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.generated.TrxRegionProtos$TrxRegionService.callMethod(TrxRegionProtos.java:49510)
>
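
Below is a minimal, self-contained Java sketch of the kind of defensive check the description above implies. The class, field, and parameter names (SimpleTransactionState, writeOrdering, readSet) are hypothetical stand-ins, not Trafodion's actual TrxTransactionState API; the only point is that a missing or already-aborted peer transaction is handled explicitly instead of letting a NullPointerException surface and be reported as a conflict.

import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified model used only to illustrate the guard described
// above; it is NOT Trafodion's TrxTransactionState.
class SimpleTransactionState {

    enum Status { PENDING, COMMITTED, ABORTED }

    final long transactionId;
    Status status = Status.PENDING;
    // Row keys written by this transaction, in write order.
    final List<String> writeOrdering = new ArrayList<String>();

    SimpleTransactionState(long id) { this.transactionId = id; }

    // Returns true only for a genuine read/write overlap. A null peer, an
    // already-aborted peer, or a null entry in its write list is skipped
    // instead of surfacing as a NullPointerException that the caller would
    // then report as a conflict.
    boolean hasConflict(SimpleTransactionState checkAgainst, List<String> readSet) {
        if (checkAgainst == null || checkAgainst.status == Status.ABORTED) {
            return false;                 // nothing live to conflict with
        }
        for (String writtenRow : checkAgainst.writeOrdering) {
            if (writtenRow == null) {
                continue;                 // ignore incomplete entries
            }
            if (readSet.contains(writtenRow)) {
                return true;              // real overlap: abort this transaction
            }
        }
        return false;
    }

    public static void main(String[] args) {
        SimpleTransactionState mine = new SimpleTransactionState(12919375954L);
        SimpleTransactionState peer = new SimpleTransactionState(17214542234L);
        peer.writeOrdering.add("row-17");

        List<String> myReadSet = new ArrayList<String>();
        myReadSet.add("row-17");

        System.out.println(mine.hasConflict(peer, myReadSet));  // true: real overlap
        peer.status = Status.ABORTED;
        System.out.println(mine.hasConflict(peer, myReadSet));  // false: aborted peer skipped
    }
}

Whether skipping such a peer (rather than conservatively aborting) is the right policy is exactly the judgment call this JIRA is about; the sketch only shows where the guard would sit.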

[jira] [Updated] (TRAFODION-646) LP Bug: 1371442 - ODBC driver AppUnicodeType setting is not in DSN level

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-646:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1371442 - ODBC driver AppUnicodeType setting is not in DSN level
> 
>
> Key: TRAFODION-646
> URL: https://issues.apache.org/jira/browse/TRAFODION-646
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-linux
>Reporter: Daniel Lu
>Assignee: Daniel Lu
>Priority: Critical
> Fix For: 2.2.0
>
>
> Currently, the AppUnicodeType setting can only be set in the [ODBC] section of 
> TRAFDSN or odbc.ini, or by an environment variable. That makes it global and 
> affects all applications that use the same driver. We need to support it at the 
> DSN level, so that each application using the same driver can independently be 
> Unicode or not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-533) LP Bug: 1355042 - SPJ w result set failed with ERROR[11220], SQLCODE of -29261, SQLSTATE of HY000

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-533:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1355042 - SPJ w result set failed with ERROR[11220], SQLCODE of 
> -29261, SQLSTATE of HY000
> -
>
> Key: TRAFODION-533
> URL: https://issues.apache.org/jira/browse/TRAFODION-533
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Chong Hsu
>Assignee: Kevin Xu
>Priority: Critical
> Fix For: 2.2.0
>
>
> Tested with Trafodion build, 20140801-0830.
> Calling an SPJ that calls another SPJ with a result set:
>public static void RS363()
>  throws Exception
>{
>  String str = "jdbc:default:connection";
>  
>  Connection localConnection = DriverManager.getConnection(str);
>  Statement localStatement = localConnection.createStatement();
>  
>  CallableStatement localCallableStatement = 
> localConnection.prepareCall("{call RS200()}");
>  localCallableStatement.execute();
>}
>public static void RS200(ResultSet[] paramArrayOfResultSet)
>throws Exception
>{
>  String str1 = "jdbc:default:connection";
>  
>  String str2 = "select * from t1";
>  Connection localConnection = DriverManager.getConnection(str1);
>  Statement localStatement = localConnection.createStatement();
>  paramArrayOfResultSet[0] = localStatement.executeQuery(str2);
>}
> it failed with ERROR:
> *** ERROR[11220] A Java method completed with an uncaught 
> java.sql.SQLException with invalid SQLSTATE. The uncaught exception had a 
> SQLCODE of -29261 and SQLSTATE of HY000. Details: java.sql.SQLException: No 
> error message in SQL/MX diagnostics area, but sqlcode is non-zero [2014-08-04 
> 22:57:28]
> The SPJ Jar file is attached. Here are the steps to produce the error:
>   
> set schema testspj;
> create library spjrs file '//Testrs.jar';
> create procedure RS363()
>language java 
>parameter style java  
>external name 'Testrs.RS363'
>dynamic result sets 0
>library spjrs;
> --- SQL operation complete.
> create procedure RS200()
>language java 
>parameter style java  
>external name 'Testrs.RS200' 
>dynamic result sets 1
>library spjrs;
> create table  T1
>   (
> A INT DEFAULT NULL
>   , B INT DEFAULT NULL
>   ) no partitions; 
> Call RS363();
> *** ERROR[11220] A Java method completed with an uncaught 
> java.sql.SQLException with invalid SQLSTATE. The uncaught exception had a 
> SQLCODE of -29261 and SQLSTATE of HY000. Details: java.sql.SQLException: No 
> error message in SQL/MX diagnostics area, but sqlcode is non-zero [2014-08-04 
> 22:57:28]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-785) LP Bug: 1395201 - MSG in sqenvcom.sh about Hadoop not find causing sftp to not work.

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-785:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1395201 - MSG in sqenvcom.sh about Hadoop not find causing sftp to 
> not work.
> 
>
> Key: TRAFODION-785
> URL: https://issues.apache.org/jira/browse/TRAFODION-785
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: foundation
>Reporter: Guy Groulx
>Assignee: Atanu Mishra
>Priority: Major
> Fix For: 2.2.0
>
>
> A check is now in sqenvcom.sh which verifies whether Hadoop is available on the 
> node. If it is not, it displays the message "ERROR: Did not find supported 
> Hadoop distribution".
> Since sqenvcom.sh is sourced via .bashrc, i.e. on every ssh connection, this 
> causes issues.
> For example, you cannot connect via sftp to a system where this message is 
> displayed, because sftp does not recognize the output being returned.
> I understand that Trafodion software will mostly be installed on nodes where 
> Hadoop is installed, but where it is not, the ability of ssh or sftp to connect 
> successfully should not be affected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-939) LP Bug: 1413831 - Phoenix tests run into several error 8810 when other tests are run in parallel with it

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-939:
---
Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1413831 - Phoenix tests run into several error 8810 when other tests 
> are run in parallel with it
> 
>
> Key: TRAFODION-939
> URL: https://issues.apache.org/jira/browse/TRAFODION-939
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Aruna Sadashiva
>Assignee: Prashanth Vasudev
>Priority: Critical
> Fix For: 2.2.0
>
>
> Running the Phoenix and JDBC catalog API tests at the same time resulted in 16 
> failures in Phoenix with error 8810 during DDL operations, as shown below. 
> All the JDBC catalog API tests passed. 
> Phoenix runs fine when no other tests are running on the system. 
> test.java.com.hp.phoenix.end2end.ToNumberFunctionTest
> *** ERROR[8810] Executor ran into an internal failure and returned an 
> error without populating the diagnostics area. This error is being injected 
> to indicate that. [2015-01-22 08:32:31]
> E.E.
> Time: 701.811
> There were 2 failures:
> 1) 
> testKeyProjectionWithIntegerValue(test.java.com.hp.phoenix.end2end.ToNumberFunctionTest)
> java.lang.AssertionError: Failed to drop object: table TO_NUMBER_TABLE
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> test.java.com.hp.phoenix.end2end.BaseTest.dropTestObjects(BaseTest.java:180)
>   at 
> test.java.com.hp.phoenix.end2end.BaseTest.doBaseTestCleanup(BaseTest.java:112)
> On Justin's suggestion, ABORT_ON_ERROR was added for 8810, and the core has 
> this stack:
> #0  0x74a458a5 in raise () from /lib64/libc.so.6
> #1  0x74a4700d in abort () from /lib64/libc.so.6
> #2  0x71248114 in ComCondition::setSQLCODE (
> this=, newSQLCODE=-8810)
> at ../export/ComDiags.cpp:1425
> #3  0x73f66e36 in operator<< (d=..., dgObj=...)
> at ../common/DgBaseType.cpp:138
> #4  0x7437e4e3 in CliStatement::fetch (this=, 
> cliGlobals=0xeeade0, output_desc=, diagsArea=..., 
> newOperation=1) at ../cli/Statement.cpp:5310
> #5  0x74324e0f in SQLCLI_PerformTasks(CliGlobals *, ULng32, 
> SQLSTMT_ID *, SQLDESC_ID *, SQLDESC_ID *, Lng32, Lng32, typedef __va_list_tag 
> __va_list_tag *, SQLCLI_PTR_PAIRS *, SQLCLI_PTR_PAIRS *) 
> (cliGlobals=0xeeade0, tasks=8063, 
> statement_id=0x1fbea88, input_descriptor=0x1fbeab8, 
> output_descriptor=0x0, 
> num_input_ptr_pairs=0, num_output_ptr_pairs=0, ap=0x7fffe41ef030, 
> input_ptr_pairs=0x0, output_ptr_pairs=0x0) at ../cli/Cli.cpp:3382
> #6  0x7438a40b in SQL_EXEC_ClearExecFetchClose (
> statement_id=0x1fbea88, input_descriptor=0x1fbeab8, 
> output_descriptor=0x0, 
> num_input_ptr_pairs=0, num_output_ptr_pairs=0, num_total_ptr_pairs=0)
> at ../cli/CliExtern.cpp:2627
> #7  0x768703bf in SRVR::WSQL_EXEC_ClearExecFetchClose (
> statement_id=0x1fbea88, input_descriptor=, 
> output_descriptor=, 
> num_input_ptr_pairs=, 
> num_output_ptr_pairs=, 
> num_total_ptr_pairs=) at SQLWrapper.cpp:459
> #8  0x76866cff in SRVR::EXECUTE2 (pSrvrStmt=0x1fbe470)
> at sqlinterface.cpp:5520
> #9  0x7689733e in odbc_SQLSvc_Execute2_sme_ (
> objtag_=, call_id_=, 
> dialogueId=, sqlAsyncEnable=, 
> queryTimeout=, inputRowCnt=, 
> sqlStmtType=128, stmtHandle=33285232, cursorLength=0, cursorName=0x0, 
> cursorCharset=1, holdableCursor=0, inValuesLength=0, inValues=0x0, 
> returnCode=0x7fffe41ef928, sqlWarningOrErrorLength=0x7fffe41ef924, 
> sqlWarningOrError=@0x7fffe41ef900, rowsAffected=0x7fffe41ef920, 
> outValuesLength=0x7fffe41ef914, outValues=@0x7fffe41ef8f8)
> at srvrothers.cpp:1517
> #10 0x004cbc42 in odbc_SQLSrvr_ExecDirect_ame_ (objtag_=0x24a84d0, 
> call_id_=0x24a8528, dialogueId=1492150530, 
> stmtLabel=, cursorName=0x0, 
> stmtExplainLabel=, stmtType=0, sqlStmtType=128, 
> sqlString=0x2d43ea4 "drop table PRODUCT_METRICS cascade", 
> sqlAsyncEnable=0, queryTimeout=0, inputRowCnt=0, txnID=0, 
> holdableCursor=0)
> at SrvrConnect.cpp:7636
> #11 0x00494086 in SQLEXECUTE_IOMessage (objtag_=0x24a84d0, 
> call_id_=0x24a8528, operation_id=3012) at Interface/odbcs_srvr.cpp:1734
> #12 0x00494134 in DISPATCH_TCPIPRequest (objtag_=0x24a84d0, 
> call_id_=0x24a8528, operation_id=)
> at Interface/odbcs_srvr.cpp:1799
> #13 0x00433822 in BUILD_TCPIP_REQUEST (pnode=0x24a84d0)
> at ../Common/TCPIPSystemSrvr.cpp:603
> #14 0x004341bd in PROCESS_TCPIP_REQUEST (pnode=0x24a84d0)
> at 

[jira] [Updated] (TRAFODION-1122) LP Bug: 1439376 - UDF: Scalar UDF returns strange warnings when handling SQLUDR_DOUBLE

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1122:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1439376 - UDF: Scalar UDF returns strange warnings when handling 
> SQLUDR_DOUBLE
> --
>
> Key: TRAFODION-1122
> URL: https://issues.apache.org/jira/browse/TRAFODION-1122
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Weishiun Tsai
>Assignee: Suresh Subbiah
>Priority: Critical
> Fix For: 2.2.0
>
> Attachments: udf_bug (2).tar, udf_bug (3).tar
>
>
> In the following example, the UDF is implemented to take one SQLUDR_DOUBLE 
> input and return the same value as the SQLUDR_DOUBLE output:
> SQLUDR_LIBFUNC SQLUDR_DOUBLE qa_func_double (
>   SQLUDR_DOUBLE *in,
>   SQLUDR_DOUBLE *out,
>   SQLUDR_INT16 *inInd,
>   SQLUDR_INT16 *outInd,
>   SQLUDR_TRAIL_ARGS)
> {
>   if (calltype == SQLUDR_CALLTYPE_FINAL)
> return SQLUDR_SUCCESS;
>   if (SQLUDR_GETNULLIND(inInd) == SQLUDR_NULL)
> SQLUDR_SETNULLIND(outInd);
>   else
> *out = *in;
>   return SQLUDR_SUCCESS;
> }
> At execution time, two functions are defined to map to this UDF 
> implementation with two different data types: FLOAT and DOUBLE PRECISION, 
> respectively. The results were returned properly, but a strange 11250 
> warning was returned as well:
> *** WARNING[11250] User-defined function TRAFODION.MYTEST.QA_UDF_FLOAT 
> completed with a warning with SQLSTATE 0. Details: No SQL message text 
> was provided by user-defined function TRAFODION.MYTEST.QA_UDF_FLOAT.
> *** WARNING[11250] User-defined function 
> TRAFODION.MYTEST.QA_UDF_DOUBLE_PRECISION completed with a warning with 
> SQLSTATE 0. Details: No SQL message text was provided by user-defined 
> function TRAFODION.MYTEST.QA_UDF_DOUBLE_PRECISION.
> This is seen on the v0331 build installed on a workstation.  To reproduce it:
> (1) Download the attached tar file and untar it to get the 3 files in there. 
> Put the files in any directory .
> (2) Make sure that you have run ./sqenv.sh of your Trafodion instance first 
> as building UDF needs $MY_SQROOT for the header files.
> (3) run build.sh
> (4) Change the line “create library qa_udf_lib file '/myudf.so';”; in 
> mytest.sql and fill in 
> (5) From sqlci, obey mytest.sql
> ---
> Here is the execution output:
> >>create schema mytest;
> --- SQL operation complete.
> >>set schema mytest;
> --- SQL operation complete.
> >>
> >>create library qa_udf_lib file '/myudf.so';
> --- SQL operation complete.
> >>
> >>create function qa_udf_float
> +>(INVAL float(10))
> +>returns (OUTVAL float(10))
> +>language c
> +>parameter style sql
> +>external name 'qa_func_double'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>create function qa_udf_double_precision
> +>(INVAL double precision)
> +>returns (OUTVAL double precision)
> +>language c
> +>parameter style sql
> +>external name 'qa_func_double'
> +>library qa_udf_lib
> +>deterministic
> +>state area size 1024
> +>allow any parallelism
> +>no sql;
> --- SQL operation complete.
> >>
> >>create table mytable1 (c float);
> --- SQL operation complete.
> >>insert into mytable1 values (1.1);
> --- 1 row(s) inserted.
> >>select qa_udf_float(c) from mytable1;
> *** WARNING[11250] User-defined function TRAFODION.MYTEST.QA_UDF_FLOAT 
> completed with a warning with SQLSTATE 0. Details: No SQL message text 
> was provided by user-defined function TRAFODION.MYTEST.QA_UDF_FLOAT.
> OUTVAL
> -
>  1.10016E+000
> *** WARNING[11250] User-defined function TRAFODION.MYTEST.QA_UDF_FLOAT 
> completed with a warning with SQLSTATE 0. Details: No SQL message text 
> was provided by user-defined function TRAFODION.MYTEST.QA_UDF_FLOAT.
> --- 1 row(s) selected.
> >>
> >>create table mytable2 (c double precision);
> --- SQL operation complete.
> >>insert into mytable2 values (2.2);
> --- 1 row(s) inserted.
> >>select qa_udf_double_precision(c) from mytable2;
> *** WARNING[11250] User-defined function 
> TRAFODION.MYTEST.QA_UDF_DOUBLE_PRECISION completed with a warning with 
> SQLSTATE 0. Details: No SQL message text was provided by user-defined 
> function TRAFODION.MYTEST.QA_UDF_DOUBLE_PRECISION.
> OUTVAL
> -
>  2.20032E+000
> *** WARNING[11250] User-defined function 
> TRAFODION.MYTEST.QA_UDF_DOUBLE_PRECISION completed with a warning with 
> SQLSTATE 0. Details: No SQL message text was provided by user-defined 
> function TRAFODION.MYTEST.QA_UDF_DOUBLE_PRECISION.
> 

[jira] [Updated] (TRAFODION-1242) LP Bug: 1457207 - Create table and constraint using the same name returns error 1043

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1242:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> LP Bug: 1457207 - Create table and constraint using the same name returns 
> error 1043
> 
>
> Key: TRAFODION-1242
> URL: https://issues.apache.org/jira/browse/TRAFODION-1242
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Reporter: Weishiun Tsai
>Assignee: Anoop Sharma
>Priority: Critical
> Fix For: 2.2.0
>
>
> Creating a table with a constraint that has the same name as the table itself 
> now returns a perplexing 1043 error complaining that the constraint already 
> exists. This is a regression introduced sometime between the v0513 build and 
> the v0519 build; it had been working fine up to the v0513 build, where SQL 
> tests were last run.
> This is seen on the v0513 build.
> --
> Here is the entire script to reproduce it:
> create schema mytest;
> set schema mytest;
> create table t1 (c1 int , c2 int constraint t1 check (c2 > 10));
> drop schema mytest cascade;
> --
> Here is the execution output:
> >>create schema mytest;
> --- SQL operation complete.
> >>set schema mytest;
> --- SQL operation complete.
> >>create table t1 (c1 int , c2 int constraint t1 check (c2 > 10));
> *** ERROR[1043] Constraint TRAFODION.MYTEST.T1 already exists.
> *** ERROR[1029] Object TRAFODION.MYTEST.T1 could not be created.
> --- SQL operation failed with errors.
> >>drop schema mytest cascade;
> --- SQL operation complete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1442) Linux ODBC Driver is not able to create certificate file with long name length (over 30 bytes).

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1442:

Fix Version/s: (was: 2.1-incubating)
   2.2.0

> Linux ODBC Driver is not able to create certificate file with long name 
> length (over 30 bytes).
> ---
>
> Key: TRAFODION-1442
> URL: https://issues.apache.org/jira/browse/TRAFODION-1442
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-linux
>Affects Versions: 2.0-incubating
> Environment: Linux
>Reporter: RuoYu Zuo
>Assignee: RuoYu Zuo
>Priority: Critical
> Fix For: 2.2.0
>
>
> Like the Windows driver, the Linux driver also reserves only 30 bytes for the 
> certificate file name, so there is a potential of running into a crash.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2348) TransactionState.hasConflict returns true if it gets a null pointer exception

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-2348:

Fix Version/s: (was: 1.3-incubating)
   2.0-incubating

> TransactionState.hasConflict returns true if it gets a null pointer exception
> -
>
> Key: TRAFODION-2348
> URL: https://issues.apache.org/jira/browse/TRAFODION-2348
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Affects Versions: 2.1-incubating, any
>Reporter: Sean Broeder
>Assignee: Sean Broeder
>Priority: Major
> Fix For: 2.1-incubating
>
>
> In the middle of hasConflict, the TransactionState object compares its 
> writeOrder list against various other transactions. In this case, we get a 
> NullPointerException from the transaction being checked against, so hasConflict 
> returns true and the transaction aborts.
> 2016-11-07 20:00:28,673 WARN 
> org.apache.hadoop.hbase.regionserver.transactional.TransactionState: 
> TrxTransactionState hasConflict: 
> Unable to get row - this Transaction [[transactionId: 12919375954 regionTX: 
> false status: PENDING neverReadOnly: false scan Size: 28 write Size: 14 
> startSQ: 34310]] 
> checkAgainst Transaction [[transactionId: 17214542234 regionTX: false status: 
> ABORTED neverReadOnly: false scan Size: 0 write Size: 0 startSQ: 34296 
> commitedSQ:34314]]  Exception:
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.transactional.TrxTransactionState.hasConflict(TrxTransactionState.java:469)
> at 
> org.apache.hadoop.hbase.regionserver.transactional.TrxTransactionState.hasConflict(TrxTransactionState.java:438)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.hasConflict(TrxRegionEndpoint.java:6389)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:6138)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:6077)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:894)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.generated.TrxRegionProtos$TrxRegionService.callMethod(TrxRegionProtos.java:49510)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7054)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1746)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1728)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31447)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
> 2016-11-07 20:00:28,674 ERROR 
> org.apache.hadoop.hbase.regionserver.transactional.TransactionState: 
> TrxTransactionState hasConflict: 
> Returning true. This transaction [transactionId: 12919375954 regionTX: false 
> status: PENDING neverReadOnly: false scan Size: 28 write Size: 14 startSQ: 
> 34310] Caught exception from transaction [transactionId: 17214542234 
> regionTX: false status: ABORTED neverReadOnly: false scan Size: 0 write Size: 
> 0 startSQ: 34296 commitedSQ:34314], regionInfo is 
> [TRAFODION.JAVABENCH.OE_ORDERLINE_192,\x00\x00\x00\x1D\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1478575978122.228b0109fcab4c57c25d7f1326f40f4e.],
>  exception
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.transactional.TrxTransactionState.hasConflict(TrxTransactionState.java:469)
> at 
> org.apache.hadoop.hbase.regionserver.transactional.TrxTransactionState.hasConflict(TrxTransactionState.java:438)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.hasConflict(TrxRegionEndpoint.java:6389)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:6138)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:6077)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:894)
> at 
> 

[jira] [Updated] (TRAFODION-1923) executor/TEST106 hangs at drop table at times

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1923:

Fix Version/s: (was: 2.0-incubating)
   2.1-incubating

> executor/TEST106 hangs at drop table at times
> -
>
> Key: TRAFODION-1923
> URL: https://issues.apache.org/jira/browse/TRAFODION-1923
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.0-incubating
>Reporter: Selvaganesan Govindarajan
>Assignee: Prashanth Vasudev
>Priority: Critical
> Fix For: 2.1-incubating
>
>
> executor/TEST106 hangs at
> drop table t106a 
> Currently, the executor/TEST106 test is not run as part of the daily regression build.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2305) After a region split the transactions to check against list is not fully populated

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-2305:

Fix Version/s: (was: 2.0-incubating)
   2.1-incubating

> After a region split the transactions to check against list is not fully 
> populated
> --
>
> Key: TRAFODION-2305
> URL: https://issues.apache.org/jira/browse/TRAFODION-2305
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Affects Versions: any
>Reporter: Sean Broeder
>Assignee: Sean Broeder
>Priority: Major
> Fix For: 2.1-incubating
>
>
> As part of a region split, all current transactions and their relationships to 
> one another are written out into a ZKNode entry and later read in by the 
> daughter regions. However, the transactionsToCheck list is not correctly 
> populated. (A simplified sketch of the intended rebuild follows below.)
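
A small, self-contained Java sketch of the idea, using hypothetical names (RegionSplitTxnState, encode, decode); this is not the Trafodion coprocessor code. It only illustrates that when the parent's transaction list is flattened into a single payload (the ZKNode entry mentioned above) and read back by a daughter region, the transactions-to-check list has to be rebuilt from that payload as well, not just the active-transaction map.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration only: flatten the parent region's transactions into
// one payload (standing in for the ZKNode entry) and, on the daughter side,
// rebuild BOTH the active-transaction map and the transactions-to-check list.
class RegionSplitTxnState {

    static final class Txn {
        final long id;
        final String status;   // e.g. PENDING / COMMITTING / ABORTED
        Txn(long id, String status) { this.id = id; this.status = status; }
    }

    // Parent side: one "id,status" line per transaction.
    static String encode(List<Txn> txns) {
        StringBuilder sb = new StringBuilder();
        for (Txn t : txns) {
            sb.append(t.id).append(',').append(t.status).append('\n');
        }
        return sb.toString();
    }

    // Daughter side: decode the payload and repopulate both structures.
    static void decode(String payload, Map<Long, Txn> activeTransactions,
                       List<Txn> transactionsToCheck) {
        for (String line : payload.split("\n")) {
            if (line.isEmpty()) {
                continue;
            }
            String[] parts = line.split(",");
            Txn t = new Txn(Long.parseLong(parts[0]), parts[1]);
            activeTransactions.put(t.id, t);
            if (!"ABORTED".equals(t.status)) {
                transactionsToCheck.add(t);   // the step the report says is missed
            }
        }
    }

    public static void main(String[] args) {
        List<Txn> parentTxns = Arrays.asList(
                new Txn(101, "PENDING"), new Txn(102, "COMMITTING"));
        String zkPayload = encode(parentTxns);

        Map<Long, Txn> active = new HashMap<Long, Txn>();
        List<Txn> toCheck = new ArrayList<Txn>();
        decode(zkPayload, active, toCheck);
        System.out.println("active=" + active.size() + ", toCheck=" + toCheck.size());
    }
}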



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-1748) Error 97 received with large upsert and select statements

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1748:

Fix Version/s: (was: 2.0-incubating)
   2.1-incubating

> Error 97 received with large upsert and select statements
> -
>
> Key: TRAFODION-1748
> URL: https://issues.apache.org/jira/browse/TRAFODION-1748
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Affects Versions: 1.3-incubating
>Reporter: Sean Broeder
>Assignee: Sean Broeder
>Priority: Major
> Fix For: 2.1-incubating
>
>
> From Selva-
> The script has just upserted 10 rows and is querying these 1 rows 
> repeatedly. From the RS logs, it looks like the memstore got flushed. Currently, 
> I have made the process loop on getting the error 8606. 
>  
> This query involves ESPs. This error is coming from sqlci at the time of 
> commit.  I assume sqlci must be looping. The looping ends after 3 minutes to 
> proceed further. You can also put sqlci into debug and set loopError=0 to 
> come out of the loop to proceed further.  I also created a core file of sqlci 
> at ~/selva/core.44100.
>  
> If the query is finished, you can do the following to reproduce this issue
>  
> cd ~/selva/LSEG/master/stream
> sqlci
> log traf_stream_run.log ;
> obey traf_stream_run.sql ;
> log ;
> 
> Looking at dtm tracing I can see the regions are throwing an 
> UnknownTransactionException at prepare time, which causes the TM to refresh 
> the RegionLocations and redrive the prepare messages.  These again fail and 
> the transaction is aborted and this eventually percolates back to SQL as an 
> error 97.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2348) TransactionState.hasConflict returns true if it gets a null pointer exception

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-2348:

Fix Version/s: (was: 2.0-incubating)
   2.1-incubating

> TransactionState.hasConflict returns true if it gets a null pointer exception
> -
>
> Key: TRAFODION-2348
> URL: https://issues.apache.org/jira/browse/TRAFODION-2348
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Affects Versions: 2.1-incubating, any
>Reporter: Sean Broeder
>Assignee: Sean Broeder
>Priority: Major
> Fix For: 2.1-incubating
>
>
> In the middle of hasConflict, the TransactionState object compares its 
> writeOrder list against various other transactions. In this case, we get a 
> NullPointerException from the transaction being checked against, so hasConflict 
> returns true and the transaction aborts.
> 2016-11-07 20:00:28,673 WARN 
> org.apache.hadoop.hbase.regionserver.transactional.TransactionState: 
> TrxTransactionState hasConflict: 
> Unable to get row - this Transaction [[transactionId: 12919375954 regionTX: 
> false status: PENDING neverReadOnly: false scan Size: 28 write Size: 14 
> startSQ: 34310]] 
> checkAgainst Transaction [[transactionId: 17214542234 regionTX: false status: 
> ABORTED neverReadOnly: false scan Size: 0 write Size: 0 startSQ: 34296 
> commitedSQ:34314]]  Exception:
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.transactional.TrxTransactionState.hasConflict(TrxTransactionState.java:469)
> at 
> org.apache.hadoop.hbase.regionserver.transactional.TrxTransactionState.hasConflict(TrxTransactionState.java:438)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.hasConflict(TrxRegionEndpoint.java:6389)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:6138)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:6077)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:894)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.generated.TrxRegionProtos$TrxRegionService.callMethod(TrxRegionProtos.java:49510)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7054)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1746)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1728)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31447)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
> 2016-11-07 20:00:28,674 ERROR 
> org.apache.hadoop.hbase.regionserver.transactional.TransactionState: 
> TrxTransactionState hasConflict: 
> Returning true. This transaction [transactionId: 12919375954 regionTX: false 
> status: PENDING neverReadOnly: false scan Size: 28 write Size: 14 startSQ: 
> 34310] Caught exception from transaction [transactionId: 17214542234 
> regionTX: false status: ABORTED neverReadOnly: false scan Size: 0 write Size: 
> 0 startSQ: 34296 commitedSQ:34314], regionInfo is 
> [TRAFODION.JAVABENCH.OE_ORDERLINE_192,\x00\x00\x00\x1D\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1478575978122.228b0109fcab4c57c25d7f1326f40f4e.],
>  exception
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.transactional.TrxTransactionState.hasConflict(TrxTransactionState.java:469)
> at 
> org.apache.hadoop.hbase.regionserver.transactional.TrxTransactionState.hasConflict(TrxTransactionState.java:438)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.hasConflict(TrxRegionEndpoint.java:6389)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:6138)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:6077)
> at 
> org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint.commitRequest(TrxRegionEndpoint.java:894)
> at 
> 

[jira] [Updated] (TRAFODION-1748) Error 97 received with large upsert and select statements

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1748:

Fix Version/s: (was: 1.3-incubating)
   2.0-incubating

> Error 97 received with large upsert and select statements
> -
>
> Key: TRAFODION-1748
> URL: https://issues.apache.org/jira/browse/TRAFODION-1748
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dtm
>Affects Versions: 1.3-incubating
>Reporter: Sean Broeder
>Assignee: Sean Broeder
>Priority: Major
> Fix For: 2.1-incubating
>
>
> From Selva-
> The script has just upserted 10 rows and is querying these 1 rows 
> repeatedly. From the RS logs, it looks like the memstore got flushed. Currently, 
> I have made the process loop on getting the error 8606. 
>  
> This query involves ESPs. This error is coming from sqlci at the time of 
> commit.  I assume sqlci must be looping. The looping ends after 3 minutes to 
> proceed further. You can also put sqlci into debug and set loopError=0 to 
> come out of the loop to proceed further.  I also created a core file of sqlci 
> at ~/selva/core.44100.
>  
> If the query is finished, you can do the following to reproduce this issue
>  
> cd ~/selva/LSEG/master/stream
> sqlci
> log traf_stream_run.log ;
> obey traf_stream_run.sql ;
> log ;
> 
> Looking at dtm tracing I can see the regions are throwing an 
> UnknownTransactionException at prepare time, which causes the TM to refresh 
> the RegionLocations and redrive the prepare messages.  These again fail and 
> the transaction is aborted and this eventually percolates back to SQL as an 
> error 97.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TRAFODION-2427) trafodion 2.0.1 install occurs an error ERROR: unable to find hbase-trx-cdh5_5-*.jar5-*.jar

2018-03-04 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-2427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-2427:

Fix Version/s: (was: 2.0-incubating)
   2.1-incubating

> trafodion 2.0.1 install occurs an error ERROR: unable to find 
> hbase-trx-cdh5_5-*.jar5-*.jar
> ---
>
> Key: TRAFODION-2427
> URL: https://issues.apache.org/jira/browse/TRAFODION-2427
> Project: Apache Trafodion
>  Issue Type: Question
>  Components: installer
>Reporter: jacklee
>Priority: Major
>  Labels: beginner
> Fix For: 2.1-incubating
>
>
> The Trafodion 2.0.1 install runs into an error:
> ***INFO: Cloudera installed will run traf_cloudera_mods
> ***ERROR: unable to find 
> /usr/lib/trafodion/apache-trafodion_server-2.0.1-incubating/export/lib/hbase-trx-cdh5_5-*.jar
> ***ERROR: traf_cloudera_mods exited with error.
> ***ERROR: Please check log files.
> ***ERROR: Exiting
> Can somebody help me? Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)